Dissociable Frontal Controls during Visible and Memory-guided Eye-Tracking of Moving Targets
Ding, Jinhong; Powell, David; Jiang, Yang
2009-01-01
When tracking visible or occluded moving targets, several frontal regions including the frontal eye fields (FEF), dorsolateral prefrontal cortex (DLPFC), and anterior cingulate cortex (ACC) are involved in smooth pursuit eye movements (SPEM). To investigate how these areas play different roles in predicting future locations of moving targets, twelve healthy college students participated in a smooth pursuit task with visible and occluded targets. Their eye movements and brain responses measured by event-related functional MRI were recorded simultaneously. Our results show that different visual cues resulted in time discrepancies between physical and estimated pursuit time only when the moving dot was occluded. Velocity gain was higher during the visible phase than during the occlusion phase. We found bilateral FEF activation associated with eye movements whether moving targets were visible or occluded. However, the DLPFC and ACC showed increased activity when tracking and predicting locations of occluded moving targets, and were suppressed during smooth pursuit of visible targets. When visual cues were increasingly available, less activation in the DLPFC and the ACC was observed. Additionally, there was a significant hemisphere effect in the DLPFC, where the right DLPFC showed significantly increased responses over the left when pursuing occluded moving targets. Correlation results revealed that the DLPFC, the right DLPFC in particular, communicates more with the FEF during tracking of occluded moving targets (from memory), whereas the ACC modulates the FEF more during tracking of visible targets (likely related to visual attention). Our results suggest that the DLPFC and ACC modulate the FEF and cortical networks differentially during visible and memory-guided eye tracking of moving targets. PMID:19434603
Insect Detection of Small Targets Moving in Visual Clutter
Barnett, Paul D; O'Carroll, David C
2006-01-01
Detection of targets that move within visual clutter is a common task for animals searching for prey or conspecifics, a task made even more difficult when a moving pursuer needs to analyze targets against the motion of background texture (clutter). Despite the limited optical acuity of the compound eye of insects, this challenging task seems to have been solved by their tiny visual system. Here we describe neurons found in the male hoverfly, Eristalis tenax, that respond selectively to small moving targets. Although many of these target neurons are inhibited by the motion of a background pattern, others respond to target motion within the receptive field under a surprisingly large range of background motion stimuli. Some neurons respond whether or not there is a speed differential between target and background. Analysis of responses to very small targets (smaller than the size of the visual field of single photoreceptors) or those targets with reduced contrast shows that these neurons have extraordinarily high contrast sensitivity. Our data suggest that rejection of background motion may result from extreme selectivity for small targets contrasting against local patches of the background, combined with this high sensitivity, such that background patterns rarely contain features that satisfactorily drive the neuron. PMID:16448249
Aurally aided visual search performance in a dynamic environment
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.
2008-04-01
Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
Perceptual integration of motion and form information: evidence of parallel-continuous processing.
von Mühlenen, A; Müller, H J
2000-04-01
In three visual search experiments, the processes involved in the efficient detection of motion-form conjunction targets were investigated. Experiment 1 was designed to estimate the relative contributions of stationary and moving nontargets to the search rate. Search rates were primarily determined by the number of moving nontargets; stationary nontargets sharing the target form also exerted a significant effect, but this was only about half as strong as that of moving nontargets; stationary nontargets not sharing the target form had little influence. In Experiments 2 and 3, the effects of display factors influencing the visual (form) quality of moving items (movement speed and item size) were examined. Increasing the speed of the moving items (> 1.5 degrees/sec) facilitated target detection when the task required segregation of the moving from the stationary items. When no segregation was necessary, increasing the movement speed impaired performance: With large display items, motion speed had little effect on target detection, but with small items, search efficiency declined when items moved faster than 1.5 degrees/sec. This pattern indicates that moving nontargets exert a strong effect on the search rate (Experiment 1) because of the loss of visual quality for moving items above a certain movement speed. A parallel-continuous processing account of motion-form conjunction search is proposed, which combines aspects of Guided Search (Wolfe, 1994) and attentional engagement theory (Duncan & Humphreys, 1989).
Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar
2018-01-01
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation in aerial videos, and among these methods, the spatiotemporal saliency detection approach has reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation. Temporal saliency is used to represent moving target regions and is extracted from the frame difference with the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent the target appearance details in candidate moving regions; SLIC superpixel segmentation together with color and moment features is used to compute the feature uniqueness and spatial compactness saliency measurements. Because this is time consuming, a parallel algorithm was developed to distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on the spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation. Experiments conducted on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods. PMID:29438421
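The temporal-saliency stage described above reduces to two operations that can be sketched compactly: frame differencing followed by Sauvola local adaptive thresholding. The minimal illustration below uses scikit-image and assumes grayscale frames; the window size and k value are illustrative guesses, not the paper's reported settings.

```python
import numpy as np
from skimage.filters import threshold_sauvola

def temporal_saliency(prev_frame, curr_frame, window_size=25, k=0.2):
    """Binary mask of candidate moving-target regions from two grayscale frames."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    # Sauvola derives a per-pixel threshold from the local mean and standard
    # deviation, which tolerates the uneven illumination of aerial footage.
    thresh = threshold_sauvola(diff, window_size=window_size, k=k)
    return diff > thresh
```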
Hybrid foraging search: Searching for multiple instances of multiple types of target.
Wolfe, Jeremy M; Aizenman, Avigael M; Boettcher, Sage E P; Cain, Matthew S
2016-02-01
This paper introduces the "hybrid foraging" paradigm. In typical visual search tasks, observers search for one instance of one target among distractors. In hybrid search, observers search through visual displays for one instance of any of several types of target held in memory. In foraging search, observers collect multiple instances of a single target type from visual displays. Combining these paradigms, in hybrid foraging tasks observers search visual displays for multiple instances of any of several types of target (as might be the case in searching the kitchen for dinner ingredients or an X-ray for different pathologies). In the present experiment, observers held 8-64 target objects in memory. They viewed displays of 60-105 randomly moving photographs of objects and used the computer mouse to collect multiple targets before choosing to move to the next display. Rather than selecting at random among available targets, observers tended to collect items in runs of one target type. Reaction time (RT) data indicate that searching again for the same item is more efficient than searching for any of the other targets held in memory. Observers were trying to maximize collection rate. As a result, and consistent with optimal foraging theory, they tended to leave 25-33% of targets uncollected when moving to the next screen/patch. The pattern of RTs shows that while observers were collecting a target item, they had already begun searching memory and the visual display for additional targets, making the hybrid foraging task a useful way to investigate the interaction of visual and memory search. PMID:26731644
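The "leave targets uncollected" result follows the logic of the marginal value theorem from optimal foraging theory: a rate-maximizing forager quits a depleting patch before exhausting it. The toy calculation below makes that concrete; the gain function and travel time are arbitrary illustrative choices, not values fitted to these data.

```python
import numpy as np

G, r, travel = 100.0, 0.5, 5.0      # patch yield, depletion rate, travel time (s)
t = np.linspace(0.01, 30.0, 3000)   # candidate residence times
gain = G * (1.0 - np.exp(-r * t))   # diminishing returns within a patch
rate = gain / (t + travel)          # long-run intake rate, travel included
i = np.argmax(rate)                 # rate-maximizing departure time
print(f"leave after {t[i]:.1f} s, with {100.0 * (1.0 - gain[i] / G):.0f}% of items uncollected")
```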
fMRI evidence for sensorimotor transformations in human cortex during smooth pursuit eye movements.
Kimmig, H; Ohlendorf, S; Speck, O; Sprenger, A; Rutschmann, R M; Haller, S; Greenlee, M W
2008-01-01
Smooth pursuit eye movements (SP) are driven by moving objects. The pursuit system processes the visual input signals and transforms this information into an oculomotor output signal. Despite the object's movement on the retina and the eyes' movement in the head, we are able to locate the object in space implying coordinate transformations from retinal to head and space coordinates. To test for the visual and oculomotor components of SP and the possible transformation sites, we investigated three experimental conditions: (I) fixation of a stationary target with a second target moving across the retina (visual), (II) pursuit of the moving target with the second target moving in phase (oculomotor), (III) pursuit of the moving target with the second target remaining stationary (visuo-oculomotor). Precise eye movement data were simultaneously measured with the fMRI data. Visual components of activation during SP were located in the motion-sensitive, temporo-parieto-occipital region MT+ and the right posterior parietal cortex (PPC). Motor components comprised more widespread activation in these regions and additional activations in the frontal and supplementary eye fields (FEF, SEF), the cingulate gyrus and precuneus. The combined visuo-oculomotor stimulus revealed additional activation in the putamen. Possible transformation sites were found in MT+ and PPC. The MT+ activation evoked by the motion of a single visual dot was very localized, while the activation of the same single dot motion driving the eye was rather extended across MT+. The eye movement information appeared to be dispersed across the visual map of MT+. This could be interpreted as a transfer of the one-dimensional eye movement information into the two-dimensional visual map. Potentially, the dispersed information could be used to remap MT+ to space coordinates rather than retinal coordinates and to provide the basis for a motor output control. A similar interpretation holds for our results in the PPC region.
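The coordinate transformations at issue can be stated compactly. As a rough sketch, for small horizontal angles the target's position in space is the sum of its retinal eccentricity, the eye-in-head position, and the head-in-space position; the snippet below (variable names are illustrative, not taken from the study) makes that bookkeeping explicit.

```python
def target_in_space(retinal_error_deg, eye_in_head_deg, head_in_space_deg):
    # target on retina + eye position in head + head position in space
    return retinal_error_deg + eye_in_head_deg + head_in_space_deg
```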
Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K
2013-03-20
Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.
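The SSVEP readout reduces to measuring spectral amplitude at each stimulus's flicker (tagging) frequency. A minimal sketch follows; the function name and single-epoch usage are assumptions of mine, and the actual tagging frequencies used in the study are not reproduced here.

```python
import numpy as np

def ssvep_amplitude(eeg, fs, tag_hz):
    """Amplitude at the tagging frequency tag_hz for a 1-D EEG epoch."""
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - tag_hz))]

# Attentional enhancement appears as a larger amplitude at the frequency
# tagging the targets than at the frequency tagging the nontargets.
```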
Saccadic interception of a moving visual target after a spatiotemporal perturbation.
Fleuriet, Jérome; Goffart, Laurent
2012-01-11
Animals can make saccadic eye movements to intercept a moving object at the right place and time. Such interceptive saccades indicate that, despite variable sensorimotor delays, the brain is able to estimate the current spatiotemporal (hic et nunc) coordinates of a target at saccade end. The present work further tests the robustness of this estimate in the monkey when a change in eye position and a delay are experimentally added before the onset of the saccade and in the absence of visual feedback. These perturbations are induced by brief microstimulation in the deep superior colliculus (dSC). When the microstimulation moves the eyes in the direction opposite to the target motion, a correction saccade brings gaze back onto the target path or very near it. When it moves the eyes in the same direction, the performance is more variable and depends on the stimulated sites. Saccades fall ahead of the target with an error that increases when the stimulation is applied more caudally in the dSC. The numerous cases of compensation indicate that the brain is able to maintain an accurate and robust estimate of the location of the moving target. The inaccuracies observed when stimulating the dSC that encodes the visual field traversed by the target indicate that dSC microstimulation can interfere with signals encoding the target motion path. The results are discussed within the framework of the dual-drive and the remapping hypotheses.
Eye movements in interception with delayed visual feedback.
Cámara, Clara; de la Malla, Cristina; López-Moliner, Joan; Brenner, Eli
2018-07-01
The increased reliance on electronic devices such as smartphones in our everyday life exposes us to various delays between our actions and their consequences. Whereas it is known that people can adapt to such delays, the mechanisms underlying such adaptation remain unclear. To better understand these mechanisms, the current study explored the role of eye movements in interception with delayed visual feedback. In two experiments, eye movements were recorded as participants tried to intercept a moving target with their unseen finger while receiving delayed visual feedback about their own movement. In Experiment 1, the target randomly moved in one of two different directions at one of two different velocities. The delay between the participant's finger movement and movement of the cursor that provided feedback about the finger movements was gradually increased. Despite the delay, participants followed the target with their gaze. They were quite successful at hitting the target with the cursor. Thus, they moved their finger to a position that was ahead of where they were looking. Removing the feedback showed that participants had adapted to the delay. In Experiment 2, the target always moved in the same direction and at the same velocity, while the cursor's delay varied across trials. Participants still always directed their gaze at the target. They adjusted their movement to the delay on each trial, often succeeding in intercepting the target with the cursor. Since their gaze was always directed at the target, and they could not know the delay until the cursor started moving, participants must have been using peripheral vision of the delayed cursor to guide it to the target. Thus, people deal with delays by directing their gaze at the target and using both experience from previous trials (Experiment 1) and peripheral visual information (Experiment 2) to guide their finger in a way that will make the cursor hit the target.
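A back-of-envelope sketch of the spatial lead this task demands: for a cursor delayed by some interval to land on a target moving at constant velocity, the unseen finger must run ahead of the visible cursor by velocity times delay. The numbers below are illustrative, not the experiment's parameters.

```python
def required_lead(target_velocity_cm_s, feedback_delay_s):
    # distance the finger must stay ahead of the currently seen cursor
    return target_velocity_cm_s * feedback_delay_s

print(required_lead(10.0, 0.2))  # 10 cm/s target, 200 ms delay -> 2.0 cm lead
```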
Sheridan, Heather; Reingold, Eyal M
2017-03-01
To explore the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during a chess-related visual search task that tested anecdotal reports that a key differentiator of chess skill is the ability to visualize the complex moves of the knight piece. Specifically, chess players viewed an array of four minimized chessboards, and they rapidly searched for the target board that allowed a knight piece to reach a target square in three moves. On each trial, there was only one target board (i.e., the "Yes" board), and for the remaining "lure" boards, the knight's path was blocked on either the first move (the "Easy No" board) or the second move (i.e., the "Difficult No" board). As evidence that chess experts can rapidly differentiate complex chess-related visual patterns, the experts (but not the novices) showed longer first-fixation durations on the "Yes" board relative to the "Difficult No" board. Moreover, as hypothesized, the task strongly differentiated chess skill: Reaction times were more than four times faster for the experts than for the novices, and reaction times were correlated with within-group measures of expertise (i.e., official chess ratings, number of hours of practice). These results indicate that a key component of chess expertise is the ability to rapidly recognize complex visual patterns.
Intercepting a moving target: On-line or model-based control?
Zhao, Huaiyong; Warren, William H
2017-05-01
When walking to intercept a moving target, people take an interception path that appears to anticipate the target's trajectory. According to the constant bearing strategy, the observer holds the bearing direction of the target constant based on current visual information, consistent with on-line control. Alternatively, the interception path might be based on an internal model of the target's motion, known as model-based control. To investigate these two accounts, participants walked to intercept a moving target in a virtual environment. We degraded the target's visibility by blurring the target to varying degrees in the midst of a trial, in order to influence its perceived speed and position. Reduced levels of visibility progressively impaired interception accuracy and precision; total occlusion impaired performance most and yielded nonadaptive heading adjustments. Thus, performance strongly depended on current visual information and deteriorated qualitatively when it was withdrawn. The results imply that locomotor interception is normally guided by current information rather than an internal model of target motion, consistent with on-line control.
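One common way to realize a constant bearing law is proportional-navigation-style steering that nulls bearing drift: if the target's bearing stays constant while range closes, the paths intersect. The toy simulation below illustrates this with arbitrary gains, speeds, and geometry (assumptions of mine, not the study's task parameters).

```python
import numpy as np

dt, N = 0.01, 4.0                            # time step (s), steering gain
pos, heading, speed = np.array([0.0, 0.0]), 0.0, 1.2    # walker (m, rad, m/s)
target, t_vel = np.array([5.0, 3.0]), np.array([-0.8, 0.0])

beta_prev = np.arctan2(target[1] - pos[1], target[0] - pos[0])
for step in range(5000):
    target = target + t_vel * dt
    beta = np.arctan2(target[1] - pos[1], target[0] - pos[0])
    heading += N * (beta - beta_prev)        # steer to null the bearing drift
    beta_prev = beta
    pos = pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])
    if np.linalg.norm(target - pos) < 0.05:
        print(f"intercepted at t = {step * dt:.2f} s")
        break
```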
Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary
2013-01-16
Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time. PMID:23325347
Pop-out in visual search of moving targets in the archer fish.
Ben-Tov, Mor; Donchin, Opher; Ben-Shahar, Ohad; Segev, Ronen
2015-03-10
Pop-out in visual search reflects the capacity of observers to rapidly detect visual targets independent of the number of distracting objects in the background. Although it may be beneficial to most animals, pop-out behaviour has been observed only in mammals, where neural correlates are found in primary visual cortex as contextually modulated neurons that encode aspects of saliency. Here we show that archer fish can also utilize this important search mechanism by exhibiting pop-out of moving targets. We explore neural correlates of this behaviour and report the presence of contextually modulated neurons in the optic tectum that may constitute the neural substrate for a saliency map. Furthermore, we find that both behaving fish and neural responses exhibit additive responses to multiple visual features. These findings suggest that similar neural computations underlie pop-out behaviour in mammals and fish, and that pop-out may be a universal search mechanism across all vertebrates.
Effects of sport expertise on representational momentum during timing control.
Nakamoto, Hiroki; Mori, Shiro; Ikudome, Sachi; Unenaka, Satoshi; Imanaka, Kuniyasu
2015-04-01
Sports involving fast visual perception require players to compensate for delays in neural processing of visual information. Memory for the final position of a moving object is distorted forward along its path of motion (i.e., "representational momentum," RM). This cognitive extrapolation of visual perception might compensate for the neural delay in interacting appropriately with a moving object. The present study examined whether experienced batters cognitively extrapolate the location of a fast-moving object and whether this extrapolation is associated with coincident timing control. Nine expert and nine novice baseball players performed a prediction motion task in which a target moved from one end of a straight 400-cm track at a constant velocity. In half of the trials, vision was suddenly occluded when the target reached the 200-cm point (occlusion condition). Participants had to press a button concurrently with the target arrival at the end of the track and verbally report their subjective assessment of the first target-occluded position. Experts showed larger RM magnitude (cognitive extrapolation) than did novices in the occlusion condition. RM magnitude and timing errors were strongly correlated in the fast velocity condition in both experts and novices, whereas in the slow velocity condition, a significant correlation appeared only in experts. This suggests that experts can cognitively extrapolate the location of a moving object according to their anticipation and, as a result, potentially circumvent neural processing delays. This process might be used to control response timing when interacting with moving objects.
Target detection in insects: optical, neural and behavioral optimizations.
Gonzalez-Bellido, Paloma T; Fabian, Samuel T; Nordström, Karin
2016-12-01
Motion vision provides important cues for many tasks. Flying insects, for example, may pursue small, fast moving targets for mating or feeding purposes, even when these are detected against self-generated optic flow. Since insects are small, with size-constrained eyes and brains, they have evolved to optimize their optical, neural and behavioral target visualization solutions. Indeed, even if evolutionarily distant insects display different pursuit strategies, target neuron physiology is strikingly similar. Furthermore, the coarse spatial resolution of the insect compound eye might actually be beneficial when it comes to detection of moving targets. In conclusion, tiny insects show higher than expected performance in target visualization tasks.
Acoustic facilitation of object movement detection during self-motion
Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.
2011-01-01
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050
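The flow-parsing account amounts to estimating the self-motion component of the optic-flow field and subtracting it to expose independent object motion. The toy sketch below uses a global median as the self-motion estimate, a deliberate oversimplification of mine: real translational flow is radial, so a full model must fit a flow template rather than a constant.

```python
import numpy as np

def parse_object_motion(flow_vectors):
    """flow_vectors: (N, 2) retinal flow sampled at N scene locations."""
    self_motion = np.median(flow_vectors, axis=0)  # crude global estimate
    return flow_vectors - self_motion              # residual = object motion
```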
Visual cortex activation in kinesthetic guidance of reaching.
Darling, W G; Seitz, R J; Peltier, S; Tellmann, L; Butler, A J
2007-06-01
The purpose of this research was to determine the cortical circuit involved in encoding and controlling kinesthetically guided reaching movements. We used (15)O-butanol positron emission tomography in ten blindfolded able-bodied volunteers in a factorial experiment in which the arm (left/right) used to encode the target location and to reach back to the remembered location, and the hemispace of the target location (left/right side of the midsagittal plane), varied systematically. During encoding of a target the experimenter guided the hand to touch the index fingertip to an external target and then returned the hand to the start location. After a short delay the subject voluntarily moved the same hand back to the remembered target location. SPM99 analysis of the PET data contrasting left versus right hand reaching showed increased (P < 0.05, corrected) neural activity in the sensorimotor cortex, premotor cortex, and posterior parietal lobule (PPL) contralateral to the moving hand. Additional neural activation was observed in prefrontal cortex and visual association areas of the occipital and parietal lobes contralateral and ipsilateral to the reaching hand. There was no statistically significant effect of target location in left versus right hemispace, nor was there an interaction of hand and hemispace effects. Structural equation modeling showed that parietal lobe visual association areas contributed to kinesthetic processing by both hands, but occipital lobe visual areas contributed only during dominant hand kinesthetic processing. This visual processing may involve visualizing the kinesthetically guided target location, with reaches to kinesthetic targets recruiting the same network employed to guide reaches to visual targets. The present work clearly demonstrates a network for kinesthetic processing that includes higher visual processing areas in the PPL for both upper limbs and processing in occipital lobe visual areas for the dominant limb.
Thaler, Lore; Goodale, Melvyn A.
2011-01-01
Neuropsychological evidence suggests that different brain areas may be involved in movements that are directed at visual targets (e.g., pointing or reaching), and movements that are based on allocentric visual information (e.g., drawing or copying). Here we used fMRI to investigate the neural correlates of these two types of movements in healthy volunteers. Subjects (n = 14) performed right hand movements in either a target-directed task (moving a cursor to a target dot) or an allocentric task (moving a cursor to reproduce the distance and direction between two distal target dots) with or without visual feedback about their hand movement. Movements were monitored with an MR compatible touch panel. A whole brain analysis revealed that movements in allocentric conditions led to an increase in activity in the fundus of the left intra-parietal sulcus (IPS), in posterior IPS, in bilateral dorsal premotor cortex (PMd), and in the lateral occipital complex (LOC). Visual feedback in both target-directed and allocentric conditions led to an increase in activity in area MT+, superior parietal–occipital cortex (SPOC), and posterior IPS (all bilateral). In addition, we found that visual feedback affected brain activity differently in target-directed as compared to allocentric conditions, particularly in the pre-supplementary motor area, PMd, IPS, and parieto-occipital cortex. Our results, in combination with previous findings, suggest that the LOC is essential for allocentric visual coding and that SPOC is involved in visual feedback control. The differences in brain activity between target-directed and allocentric visual feedback conditions may be related to behavioral differences in visual feedback control. Our results advance the understanding of the visual coordinate frame used by the LOC. In addition, because of the nature of the allocentric task, our results have relevance for the understanding of neural substrates of magnitude estimation and vector coding of movements. PMID:21941474
Normal aging delays and compromises early multifocal visual attention during object tracking.
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2013-02-01
Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.
Visual search for motion-form conjunctions: is form discriminated within the motion system?
von Mühlenen, A; Müller, H J
2001-06-01
Motion-form conjunction search can be more efficient when the target is moving (a moving 45 degrees tilted line among moving vertical and stationary 45 degrees tilted lines) rather than stationary. This asymmetry may be due to aspects of form being discriminated within a motion system representing only moving items, whereas discrimination of stationary items relies on a static form system (J. Driver & P. McLeod, 1992). Alternatively, it may be due to search exploiting differential motion velocity and direction signals generated by the moving-target and distractor lines. To decide between these alternatives, 4 experiments systematically varied the motion-signal information conveyed by the moving target and distractors while keeping their form difference salient. Moving-target search was found to be facilitated only when differential motion-signal information was available. Thus, there is no need to assume that form is discriminated within the motion system.
Saccadic eye movements as an index of perceptual decision-making.
McSorley, Eugene; McCloy, Rachel
2009-10-01
One of the most common decisions we make is the one about where to move our eyes next. Here we examine the impact that processing the evidence supporting competing options has on saccade programming. Participants were asked to saccade to one of two possible visual targets indicated by a cloud of moving dots. We varied the evidence which supported saccade target choice by manipulating the proportion of dots moving towards one target or the other. The task was found to become easier as the evidence supporting target choice increased. This was reflected in an increase in percent correct and a decrease in saccade latency. The trajectory and landing position of saccades were found to deviate away from the non-selected target reflecting the choice of the target and the inhibition of the non-target. The extent of the deviation was found to increase with amount of sensory evidence supporting target choice. This shows that decision-making processes involved in saccade target choice have an impact on the spatial control of a saccade. This would seem to extend the notion of the processes involved in the control of saccade metrics beyond a competition between visual stimuli to one also reflecting a competition between options.
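The evidence manipulation maps naturally onto an accumulator race: each target gathers support at a rate set by the proportion of dots moving toward it, and the first accumulator to threshold determines the saccade. The sketch below is purely illustrative, with arbitrary parameters rather than a fitted model, but it reproduces the qualitative pattern reported above: more evidence yields faster, more accurate choices.

```python
import numpy as np

def race_trial(coherence, gain=0.2, noise=1.0, threshold=30.0, rng=None):
    """One trial of a race between the two saccade targets.

    coherence: proportion of dots supporting the correct target (0.5-1.0).
    Returns (correct_choice, latency_in_steps).
    """
    if rng is None:
        rng = np.random.default_rng()
    a = b = 0.0
    for t in range(1, 10000):
        a += gain * coherence + noise * rng.standard_normal()
        b += gain * (1.0 - coherence) + noise * rng.standard_normal()
        if a >= threshold or b >= threshold:
            return a >= threshold, t
    return False, 10000
```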
Role of the posterior parietal cortex in updating reaching movements to a visual target.
Desmurget, M; Epstein, C M; Turner, R S; Prablanc, C; Alexander, G E; Grafton, S T
1999-06-01
The exact role of posterior parietal cortex (PPC) in visually directed reaching is unknown. We propose that, by building an internal representation of instantaneous hand location, PPC computes a dynamic motor error used by motor centers to correct the ongoing trajectory. With unseen right hands, five subjects pointed to visual targets that either remained stationary or moved during saccadic eye movements. Transcranial magnetic stimulation (TMS) was applied over the left PPC during target presentation. Stimulation disrupted path corrections that normally occur in response to target jumps, but had no effect on those directed at stationary targets. Furthermore, left-hand movement corrections were not blocked, ruling out visual or oculomotor effects of stimulation.
Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.
Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel
2015-08-15
When touching and viewing a moving surface our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration, and they may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs consisted of gratings that moved either in the same or in different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21 Hz), the power of GBA (50-80 Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role in the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band.
Attentional enhancement during multiple-object tracking.
Drew, Trafton; McCollough, Andrew W; Horowitz, Todd S; Vogel, Edward K
2009-04-01
What is the role of attention in multiple-object tracking? Does attention enhance target representations, suppress distractor representations, or both? It is difficult to ask this question in a purely behavioral paradigm without altering the very attentional allocation one is trying to measure. In the present study, we used event-related potentials to examine the early visual evoked responses to task-irrelevant probes without requiring an additional detection task. Subjects tracked two targets among four moving distractors and four stationary distractors. Brief probes were flashed on targets, moving distractors, stationary distractors, or empty space. We obtained a significant enhancement of the visually evoked P1 and N1 components (approximately 100-150 msec) for probes on targets, relative to distractors. Furthermore, good trackers showed larger differences between target and distractor probes than did poor trackers. These results provide evidence of early attentional enhancement of tracked target items and also provide a novel approach to measuring attentional allocation during tracking.
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Bai, Shengjian; Xu, Wanying
2014-07-01
Infrared moving target detection is an important part of infrared technology. We introduce a novel infrared small moving target detection method based on tracking interest points under complicated backgrounds. First, Difference-of-Gaussians (DoG) filters are used to detect a group of interest points (including the moving targets). Second, a small-target tracking method inspired by the human visual system (HVS) is used to track these interest points for several frames, yielding the correlations between interest points in the first frame and the last frame. Finally, a new clustering method, named R-means, is proposed to divide these interest points into two groups according to the correlations: target points and background points. In the experiments, the target-to-clutter ratio (TCR) and receiver operating characteristic (ROC) curves are computed to compare the performance of the proposed method with that of five other sophisticated methods. The results show that the proposed method discriminates targets from clutter better and has a lower false alarm rate than existing moving target detection methods.
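The DoG detection stage can be approximated with an off-the-shelf blob detector. A hedged sketch using scikit-image's blob_dog follows; the sigma range and threshold are illustrative guesses tuned for targets spanning only a few pixels, not the paper's parameters.

```python
from skimage.feature import blob_dog

def detect_interest_points(ir_frame, threshold=0.05):
    """ir_frame: 2-D array scaled to [0, 1]. Returns (row, col, sigma) blobs."""
    # Small sigma range, since small infrared targets span only a few pixels.
    return blob_dog(ir_frame, min_sigma=1, max_sigma=4, threshold=threshold)
```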
Dong, Guangheng; Yang, Lizhu; Shen, Yue
2009-08-21
The present study investigated the course of visual search to a target in a fixed location, using an emotional flanker task. Event-related potentials (ERPs) were recorded while participants performed the task. Emotional facial expressions were used as emotion-eliciting triggers, and the course of visual search was analyzed through the emotional effects arising from these stimuli. The flanker stimuli showed effects at about 150-250 ms after stimulus onset, while the target stimuli showed effects at about 300-400 ms. The visual search sequence in an emotional flanker task thus moved from an overview of the whole display to the specific target, even though the target always appeared at a known location. The processing sequence was "parallel" in this task. The results supported the feature integration theory of visual search.
Zhou, Zhe Charles; Yu, Chunxiu; Sellers, Kristin K.; Fröhlich, Flavio
2016-01-01
Visual discrimination requires sensory processing followed by a perceptual decision. Despite a growing understanding of visual areas in this behavior, it is unclear what role top-down signals from prefrontal cortex play, in particular as a function of perceptual difficulty. To address this gap, we investigated how neurons in dorso-lateral frontal cortex (dl-FC) of freely-moving ferrets encode task variables in a two-alternative forced choice visual discrimination task with high- and low-contrast visual input. About two-thirds of all recorded neurons in dl-FC were modulated by at least one of the two task variables, task difficulty and target location. More neurons in dl-FC preferred the hard trials; no such preference bias was found for target location. In individual neurons, this preference for specific task types was limited to brief epochs. Finally, optogenetic stimulation confirmed the functional role of the activity in dl-FC before target touch; suppression of activity in pyramidal neurons with the ArchT silencing opsin resulted in a decrease in reaction time to touch the target but not to retrieve reward. In conclusion, dl-FC activity is differentially recruited for high perceptual difficulty in the freely-moving ferret and the resulting signal may provide top-down behavioral inhibition. PMID:27025995
Zago, Myrka; Bosco, Gianfranco; Maffei, Vincenzo; Iosa, Marco; Ivanenko, Yuri P; Lacquaniti, Francesco
2004-04-01
Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. Here we present evidence in favor of a different view: the brain makes the best estimate about target motion based on measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from expected dynamics (kinetics). We projected a virtual target moving vertically downward on a wide screen with different randomized laws of motion. In the first series of experiments, subjects were asked to intercept this target by punching a real ball that fell hidden behind the screen and arrived in synchrony with the visual target. Subjects systematically timed their motor responses consistent with the assumption of gravity effects on an object's mass, even when the visual target did not accelerate. With training, the gravity model was not switched off but adapted to nonaccelerating targets by shifting the time of motor activation. In the second series of experiments, there was no real ball falling behind the screen. Instead the subjects were required to intercept the visual target by clicking a mouse button. In this case, subjects timed their responses consistent with the assumption of uniform motion in the absence of forces, even when the target actually accelerated. Overall, the results are in accord with the theory that motor responses evoked by visual kinematics are modulated by a prior on the target dynamics. The prior appears surprisingly resistant to modifications based on performance errors.
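The two competing timing models admit a worked example. Under the assumption of uniform motion, arrival time is distance over speed; under a gravity prior, it is the root of the falling-body equation. The numbers below are illustrative, not the experiment's stimulus parameters.

```python
import math

h, v0, g = 1.0, 2.0, 9.81                       # remaining drop (m), seen speed (m/s), gravity
t_const = h / v0                                # uniform-motion estimate
t_grav = (-v0 + math.sqrt(v0**2 + 2*g*h)) / g   # solves h = v0*t + 0.5*g*t**2
print(f"constant-velocity estimate: {1000*t_const:.0f} ms")
print(f"gravity-prior estimate:     {1000*t_grav:.0f} ms")
# A subject applying the gravity prior to a constant-speed target would
# respond roughly 200 ms early here, the signature the experiments exploit.
```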
Zago, Myrka; Lacquaniti, Francesco
2005-08-01
An internal model is a neural mechanism that mimics the dynamics of an object for sensorimotor or cognitive functions. Recent research focuses on whether multiple internal models are learned and switched to cope with a variety of conditions, or whether a single general model is adapted by tuning its parameters. Here we addressed this issue by investigating how the manual interception of a moving target changes with changes of the visual environment. In our paradigm, a virtual target moves vertically downward on a screen with different laws of motion. Subjects are asked to punch a hidden ball that arrives in synchrony with the visual target. By using several different protocols, we systematically found that subjects do not develop a new internal model appropriate for constant speed targets, but they use the default gravity model and reduce the central processing time. The results imply that adaptation to zero-gravity targets involves a compression of temporal processing through the cortical and subcortical regions interconnected with the vestibular cortex, which has previously been shown to be the site of storage of the internal model of gravity.
Visual Search for Motion-Form Conjunctions: Selective Attention to Movement Direction.
Von Mühlenen, Adrian; Müller, Hermann J
1999-07-01
In 2 experiments requiring visual search for conjunctions of motion and form, the authors reinvestigated whether motion-based filtering (e.g., P. McLeod, J. Driver, Z. Dienes, & J. Crisp, 1991) is direction selective and whether cuing of the target direction promotes efficient search performance. In both experiments, the authors varied the number of movement directions in the display and the predictability of the target direction. Search was less efficient when items moved in multiple (2, 3, and 4) directions as compared with just 1 direction. Furthermore, precuing of the target direction facilitated the search, even with "wrap-around" displays, relatively more when items moved in multiple directions. The authors proposed 2 principles to explain that pattern of effects: (a) interference on direction computation between items moving in different directions (e.g., N. Qian & R. A. Andersen, 1994) and (b) selective direction tuning of motion detectors involving a receptive-field contraction (cf. J. Moran & R. Desimone, 1985; S. Treue & J. H. R. Maunsell, 1996).
NASA Technical Reports Server (NTRS)
Huebner, W. P.; Leigh, R. J.; Seidman, S. H.; Thomas, C. W.; Billian, C.; DiScenna, A. O.; Dell'Osso, L. F.
1992-01-01
1. We used a modeling approach to test the hypothesis that, in humans, the smooth pursuit (SP) system provides the primary signal for cancelling the vestibuloocular reflex (VOR) during combined eye-head tracking (CEHT) of a target moving smoothly in the horizontal plane. Separate models for SP and the VOR were developed. The optimal values of parameters of the two models were calculated using measured responses of four subjects to trials of SP and the visually enhanced VOR. After optimal parameter values were specified, each model generated waveforms that accurately reflected the subjects' responses to SP and vestibular stimuli. The models were then combined into a CEHT model wherein the final eye movement command signal was generated as the linear summation of the signals from the SP and VOR pathways. 2. The SP-VOR superposition hypothesis was tested using two types of CEHT stimuli, both of which involved passive rotation of subjects in a vestibular chair. The first stimulus consisted of a "chair brake" or sudden stop of the subject's head during CEHT; the visual target continued to move. The second stimulus consisted of a sudden change from the visually enhanced VOR to CEHT ("delayed target onset" paradigm); as the vestibular chair rotated past the angular position of the stationary visual stimulus, the latter started to move in synchrony with the chair. Data collected during experiments that employed these stimuli were compared quantitatively with predictions made by the CEHT model. 3. During CEHT, when the chair was suddenly and unexpectedly stopped, the eye promptly began to move in the orbit to track the moving target. Initially, gaze velocity did not completely match target velocity, however; this finally occurred approximately 100 ms after the brake onset. The model did predict the prompt onset of eye-in-orbit motion after the brake, but it did not predict that gaze velocity would initially be only approximately 70% of target velocity. One possible explanation for this discrepancy is that VOR gain can be dynamically modulated and, during sustained CEHT, it may assume a lower value. Consequently, during CEHT, a smaller-amplitude SP signal would be needed to cancel the lower-gain VOR. This reduction of the SP signal could account for the attenuated tracking response observed immediately after the brake. We found evidence for the dynamic modulation of VOR gain by noting differences in responses to the onset and offset of head rotation in trials of the visually enhanced VOR.
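The superposition account and the ~70% post-brake gaze velocity can be tied together with a few lines of arithmetic, sketched below under the dynamic-gain interpretation offered in the abstract (a VOR gain of 0.7 during sustained CEHT, read off the reported ~70%); the remaining values are illustrative.

```python
g_vor = 0.7          # assumed VOR gain during sustained CEHT
target_vel = 20.0    # deg/s, illustrative

# Steady combined eye-head tracking: the head follows the target, and the
# SP command must cancel the VOR so the eye stays still in the orbit.
head_vel = target_vel
sp_drive = g_vor * head_vel                  # a lower VOR gain needs less SP
eye_in_head = sp_drive - g_vor * head_vel    # = 0 during CEHT

# Just after the chair brake: the head stops, the SP command is unchanged.
head_vel = 0.0
eye_in_head = sp_drive - g_vor * head_vel    # = sp_drive
gaze_vel = eye_in_head + head_vel
print(gaze_vel / target_vel)                 # 0.7 -> ~70% of target velocity
```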
Dynamic and predictive links between touch and vision.
Gray, Rob; Tan, Hong Z
2002-07-01
We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.
Contextual effects on smooth-pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2007-02-01
Segregating a moving object from its visual context is particularly relevant for the control of smooth-pursuit eye movements. We examined the interaction between a moving object and a stationary or moving visual context to determine the role of the context motion signal in driving pursuit. Eye movements were recorded from human observers to a medium-contrast Gaussian dot that moved horizontally at constant velocity. A peripheral context consisted of two vertically oriented sinusoidal gratings, one above and one below the stimulus trajectory, that were either stationary or drifted in the same or the opposite direction as the target at different velocities. We found that a stationary context impaired pursuit acceleration and velocity and prolonged pursuit latency. A drifting context enhanced pursuit performance, irrespective of its motion direction. This effect was modulated by context contrast and orientation. When a context was briefly perturbed to move faster or slower, eye velocity changed accordingly, but only when the context was drifting along with the target. Perturbing a context in the direction orthogonal to target motion evoked a deviation of the eye opposite to the perturbation direction. We therefore provide evidence for the use of absolute and relative motion cues, or motion assimilation and motion contrast, for the control of smooth-pursuit eye movements.
Visual focus stimulator aids in study of the eye's focusing action
NASA Technical Reports Server (NTRS)
Cornsweet, T. N.; Crane, H. D.
1970-01-01
Optical apparatus varies apparent distance of a target image from the eye by means of reflectors that are moved orthogonally to the optical axis between fixed lenses. Apparatus can be pointed at any object, test pattern, or other visual display.
Virtual reality method to analyze visual recognition in mice.
Young, Brent Kevin; Brennan, Jayden Nicole; Wang, Ping; Tian, Ning
2018-01-01
Behavioral tests have been extensively used to measure the visual function of mice. To determine how precisely mice perceive certain visual cues, it is necessary to have a quantifiable measurement of their behavioral responses. Recently, virtual reality tests have been utilized for a variety of purposes, from analyzing hippocampal cell functionality to identifying visual acuity. Despite the widespread use of these tests, the training required to recognize a variety of different visual targets, and the resulting performance on the behavioral tests, have not been thoroughly characterized. We have developed a virtual reality behavior testing approach that can assay a variety of aspects of visual perception, including color/luminance and motion detection. When tested for the ability to detect a color/luminance target or a moving target, mice were able to discern the designated target after 9 days of continuous training. However, the quality of their performance was significantly affected by the complexity of the visual target and by their ability to navigate on a spherical treadmill. Importantly, mice retained memory of their visual recognition for at least three weeks after the end of their behavioral training.
A Model for the Detection of Moving Targets in Visual Clutter Inspired by Insect Physiology
2008-07-01
Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features
NASA Astrophysics Data System (ADS)
Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique
2011-12-01
We propose a new multi-target tracking approach, which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model, and visual reliability measures of its attributes. These reliability measures make it possible to properly weight the contribution of noisy, erroneous, or false data in order to better maintain the integrity of the object dynamics model. A new multi-target tracking algorithm then uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated on publicly accessible video-surveillance benchmarks. It runs in real time, and its results are competitive with other tracking algorithms while requiring minimal (or no) reconfiguration between different videos.
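As a rough illustration of how reliability measures can weight noisy data in a dynamics model (the function name and update rule below are our own sketch, not the authors' formulation):

    # Illustrative reliability-weighted update for one tracked attribute
    # (e.g. the 3D width of a person model). A measurement's influence is
    # scaled by its visual reliability, so poorly segmented or occluded
    # frames are largely discounted.
    def update_attribute(estimate, est_rel, measurement, meas_rel):
        w = meas_rel / (meas_rel + est_rel + 1e-9)        # relative reliability
        new_estimate = (1.0 - w) * estimate + w * measurement
        new_rel = (1.0 - w) * est_rel + w * meas_rel      # carry reliability along
        return new_estimate, new_rel

    est, rel = 0.60, 0.9          # well-established width estimate (m)
    est, rel = update_attribute(est, rel, 1.40, 0.1)      # noisy frame
    print(round(est, 2))          # 0.68: the unreliable frame is mostly ignored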
Keep your eyes on the ball: smooth pursuit eye movements enhance prediction of visual motion.
Spering, Miriam; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R
2011-04-01
Success of motor behavior often depends on the ability to predict the path of moving objects. Here we asked whether tracking a visual object with smooth pursuit eye movements helps to predict its motion direction. We developed a paradigm, "eye soccer," in which observers had to either track or fixate a visual target (ball) and judge whether it would have hit or missed a stationary vertical line segment (goal). Ball and goal were presented briefly for 100-500 ms and disappeared from the screen together before the perceptual judgment was prompted. In pursuit conditions, the ball moved towards the goal; in fixation conditions, the goal moved towards the stationary ball, resulting in similar retinal stimulation during pursuit and fixation. We also tested the condition in which the goal was fixated and the ball moved. Motion direction prediction was significantly better in pursuit than in fixation trials, regardless of whether ball or goal served as fixation target. In both fixation and pursuit trials, prediction performance was better when eye movements were accurate. Performance also increased with shorter ball-goal distance and longer presentation duration. A longer trajectory did not affect performance. During pursuit, an efference copy signal might provide additional motion information, leading to the advantage in motion prediction.
Reaching a Moveable Visual Target: Dissociations in Brain Tumour Patients
ERIC Educational Resources Information Center
Buiatti, Tania; Skrap, Miran; Shallice, Tim
2013-01-01
Damage to the posterior parietal cortex (PPC) can lead to Optic Ataxia (OA), in which patients misreach to peripheral targets. Recent research suggested that the PPC might be involved not only in simple reaching tasks toward peripheral targets, but also in changing the hand movement trajectory in real time if the target moves. The present study…
Pilots' Attention Distributions Between Chasing a Moving Target and a Stationary Target.
Li, Wen-Chin; Yu, Chung-San; Braithwaite, Graham; Greaves, Matthew
2016-12-01
Attention plays a central role in cognitive processing; ineffective attention may induce accidents in flight operations. The objective of the current research was to examine military pilots' attention distributions between chasing a moving target and a stationary target. In the current research, 37 mission-ready F-16 pilots participated. Subjects' eye movements were collected by a portable head-mounted eye-tracker during tactical training in a flight simulator. The scenarios of chasing a moving target (air-to-air) and a stationary target (air-to-surface) consisted of three operational phases: searching, aiming, and lock-on to the targets. The findings demonstrated significant differences in pilots' percentage of fixation during the searching phase between air-to-air (M = 37.57, SD = 5.72) and air-to-surface (M = 33.54, SD = 4.68). Fixation duration can indicate pilots' sustained attention to the trajectory of a dynamic target during air combat maneuvers. Aiming at the stationary target resulted in larger pupil size (M = 27,105, SD = 6565), reflecting higher cognitive loading than aiming at the dynamic target (M = 23,864, SD = 8762). Pilots' visual behavior is not only closely related to attention distribution, but also significantly associated with task characteristics. Military pilots demonstrated various visual scan patterns for searching and aiming at different types of targets based on the research settings of a flight simulator. The findings will facilitate system designers' understanding of military pilots' cognitive processes during tactical operations. They will assist human-centered interface design to improve pilots' situational awareness. The application of an eye-tracking device integrated with a flight simulator is a feasible and cost-effective intervention to improve the efficiency and safety of tactical training. Li W-C, Yu C-S, Braithwaite G, Greaves M. Pilots' attention distributions between chasing a moving target and a stationary target. Aerosp Med Hum Perform. 2016; 87(12):989-995.
Context effects on smooth pursuit and manual interception of a disappearing target.
Kreyenmeier, Philipp; Fooken, Jolande; Spering, Miriam
2017-07-01
In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object ("ball") with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated "hit zone." In two experiments (n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points. NEW & NOTEWORTHY In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared. Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points. Copyright © 2017 the American Physiological Society.
Schema generation in recurrent neural nets for intercepting a moving target.
Fleischer, Andreas G
2010-06-01
The grasping of a moving object requires the development of a motor strategy to anticipate the trajectory of the target and to compute an optimal course of interception. During the performance of perception-action cycles, a preprogrammed prototypical movement trajectory, a motor schema, may highly reduce the control load. Subjects were asked to hit a target that was moving along a circular path by means of a cursor. Randomized initial target positions and velocities were detected in the periphery of the eyes, resulting in a saccade toward the target. Even when the target disappeared, the eyes followed the target's anticipated course. The Gestalt of the trajectories was dependent on target velocity. The prediction capability of the motor schema was investigated by varying the visibility range of cursor and target. Motor schemata were determined to be of limited precision, and therefore visual feedback was continuously required to intercept the moving target. To intercept a target, the motor schema caused the hand to aim ahead and to adapt to the target trajectory. The control of cursor velocity determined the point of interception. From a modeling point of view, a neural network was developed that allowed the implementation of a motor schema interacting with feedback control in an iterative manner. The neural net of the Wilson type consists of an excitation-diffusion layer allowing the generation of a moving bubble. This activation bubble runs down an eye-centered motor schema and causes a planar arm model to move toward the target. A bubble provides local integration and straightening of the trajectory during repetitive moves. The schema adapts to task demands by learning and serves as forward controller. On the basis of these model considerations the principal problem of embedding motor schemata in generalized control strategies is discussed.
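A toy one-dimensional sketch of such an excitation-diffusion layer may help; the kernel, rates, and drifting input below are our illustrative choices rather than the paper's implementation:

    import numpy as np

    # Toy Wilson-type layer: local excitation minus broader inhibition
    # (difference of Gaussians) keeps activity in one localized 'bubble',
    # and a drifting external input drags the bubble along the layer, much
    # as a motor schema would carry the command toward the target.
    n = 120
    x = np.arange(n)
    u = np.exp(-0.5 * ((x - 20.0) / 3.0) ** 2)          # initial bubble at 20
    off = np.arange(-15, 16)
    kernel = (1.0 * np.exp(-0.5 * (off / 2.0) ** 2)
              - 0.6 * np.exp(-0.5 * (off / 6.0) ** 2))  # excitation - inhibition

    for step in range(200):
        drive = np.convolve(u, kernel, mode="same")
        target_input = np.exp(-0.5 * ((x - (20.0 + 0.3 * step)) / 3.0) ** 2)
        u = np.clip(u + 0.2 * (-u + drive + target_input), 0.0, 1.0)

    print(int(np.argmax(u)))     # bubble peak has drifted to near 80

The local integration performed by the bubble is what, in the paper's account, smooths and straightens the commanded trajectory across repeated moves.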
Saccadic foveation of a moving visual target in the rhesus monkey.
Fleuriet, Jérome; Hugues, Sandrine; Perrinet, Laurent; Goffart, Laurent
2011-02-01
When generating a saccade toward a moving target, the target displacement that occurs during the period spanning from its detection to the saccade end must be taken into account to accurately foveate the target and to initiate its pursuit. Previous studies have shown that these saccades are characterized by a lower peak velocity and a prolonged deceleration phase. In some cases, a second peak eye velocity appears during the deceleration phase, presumably reflecting the late influence of a mechanism that compensates for the target displacement occurring before saccade end. The goal of this work was to further determine, in the head-restrained monkey, the dynamics of this putative compensatory mechanism. A step-ramp paradigm, where the target motion was orthogonal to a target step occurring along the primary axes, was used to estimate from the generated saccades a component induced by the target step and another induced by the target motion. Resulting oblique saccades were compared with saccades to a static target with matched horizontal and vertical amplitudes. This study permitted an estimate of the time taken for visual motion-related signals to update the programming and execution of saccades. The amplitude of the motion-related component was slightly hypometric, with an undershoot that increased with target speed. Moreover, it matched the eccentricity that the target had 40-60 ms before saccade end. The lack of a significant difference in the delay between the onsets of the horizontal and vertical components, between saccades directed toward a static target and those aimed at a moving target, calls into question the late influence of the compensatory mechanism. The results are discussed within the framework of the "dual drive" and "remapping" hypotheses.
Arcaro, Michael J; Thaler, Lore; Quinlan, Derek J; Monaco, Simona; Khan, Sarah; Valyear, Kenneth F; Goebel, Rainer; Dutton, Gordon N; Goodale, Melvyn A; Kastner, Sabine; Culham, Jody C
2018-05-09
Patients with injury to early visual cortex or its inputs can display the Riddoch phenomenon: preserved awareness for moving but not stationary stimuli. We provide a detailed case report of a patient with the Riddoch phenomenon, MC. MC has extensive bilateral lesions to occipitotemporal cortex that include most of early visual cortex, and she shows complete blindness in visual field perimetry testing with static targets. Nevertheless, she shows a remarkably robust preserved ability to perceive motion, enabling her to navigate through cluttered environments and perform actions like catching moving balls. Comparisons of MC's structural magnetic resonance imaging (MRI) data to a probabilistic atlas based on controls reveal that MC's lesions encompass the posterior, lateral, and ventral early visual cortex bilaterally (V1, V2, V3A/B, LO1/2, TO1/2, hV4 and VO1 in both hemispheres) as well as more extensive damage to right parietal (inferior parietal lobule) and left ventral occipitotemporal cortex (VO1, PHC1/2). She shows some sparing of anterior occipital cortex, which may account for her ability to see moving targets beyond ~15 degrees eccentricity during perimetry. Most strikingly, functional and structural MRI revealed robust and reliable spared functionality of the middle temporal motion complex (MT+) bilaterally. Moreover, consistent with her preserved ability to discriminate motion direction in psychophysical testing, MC also shows direction-selective adaptation in MT+. A variety of tests did not enable us to discern whether input to MT+ was driven by her spared anterior occipital cortex or subcortical inputs. Nevertheless, MC shows rich motion perception despite profoundly impaired static and form vision, combined with clear preservation of activation in MT+, thus supporting the role of MT+ in the Riddoch phenomenon. Copyright © 2018 Elsevier Ltd. All rights reserved.
Systematic distortions of perceptual stability investigated using immersive virtual reality
Tcheang, Lili; Gilson, Stuart J.; Glennerster, Andrew
2010-01-01
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers under-estimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an under-estimation of distance walked. We discuss implications for theories of a task-independent representation of visual space. PMID:15845248
Gregori Grgič, Regina; Calore, Enrico; de'Sperati, Claudio
2016-01-01
Whereas overt visuospatial attention is customarily measured with eye tracking, covert attention is assessed by various methods. Here we exploited Steady-State Visual Evoked Potentials (SSVEPs) - the oscillatory responses of the visual cortex to incoming flickering stimuli - to record the movements of covert visuospatial attention in a way operatively similar to eye tracking (attention tracking), which allowed us to compare motion observation and motion extrapolation with and without eye movements. Observers fixated a central dot and covertly tracked a target oscillating horizontally and sinusoidally. In the background, the left and the right halves of the screen flickered at two different frequencies, generating two SSVEPs in occipital regions whose size varied reciprocally as observers attended to the moving target. The two signals were combined into a single quantity that was modulated at the target frequency in a quasi-sinusoidal way, often clearly visible in single trials. The modulation continued almost unchanged when the target was switched off and observers mentally extrapolated its motion in imagery, and also when observers pointed their finger at the moving target during covert tracking, or imagined doing so. The amplitude of modulation during covert tracking was ∼25-30% of that measured when observers followed the target with their eyes. We used 4 electrodes in parieto-occipital areas, but similar results were achieved with a single electrode in Oz. In a second experiment we tested ramp and step motion. During overt tracking, SSVEPs were remarkably accurate, showing both saccadic-like and smooth pursuit-like modulations of cortical responsiveness, whereas during covert tracking the modulation deteriorated. Covert tracking was better with sinusoidal motion than ramp motion, and better with moving targets than stationary ones. The clear modulation of cortical responsiveness recorded during both overt and covert tracking, identical for motion observation and motion extrapolation, suggests that covert attention movements should be incorporated into enactive theories of mental imagery. Copyright © 2015 Elsevier Ltd. All rights reserved.
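The core analysis, combining the amplitudes of the two flicker-frequency responses into one left/right index, can be sketched as follows; the frequencies, window length, and contrast index are our illustrative assumptions:

    import numpy as np

    # Sketch of 'attention tracking' from SSVEPs: estimate the spectral
    # amplitude at each background flicker frequency in sliding windows and
    # combine them into one signed left/right quantity.
    fs, f_left, f_right = 512.0, 12.0, 15.0          # Hz, illustrative

    def ssvep_attention_index(eeg, fs, f1, f2):
        """(A1 - A2)/(A1 + A2) per 1-s window; positive = attention to f1 side."""
        win = int(fs)
        idx = []
        for start in range(0, len(eeg) - win, win // 4):     # 75% overlap
            seg = eeg[start:start + win] * np.hanning(win)
            spec = np.abs(np.fft.rfft(seg))
            freqs = np.fft.rfftfreq(win, 1.0 / fs)
            a1 = spec[np.argmin(np.abs(freqs - f1))]
            a2 = spec[np.argmin(np.abs(freqs - f2))]
            idx.append((a1 - a2) / (a1 + a2))
        return np.array(idx)

    # synthetic check: attention waxes and wanes between the two sides at 0.25 Hz
    t = np.arange(0.0, 20.0, 1.0 / fs)
    gain = 0.5 * (1.0 + np.sin(2 * np.pi * 0.25 * t))
    eeg = gain * np.sin(2 * np.pi * f_left * t) + (1 - gain) * np.sin(2 * np.pi * f_right * t)
    print(ssvep_attention_index(eeg, fs, f_left, f_right)[:8].round(2))

On the synthetic trace the index oscillates at the simulated attention frequency, which is the quasi-sinusoidal single-trial modulation the abstract describes.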
Selective attention in an insect visual neuron.
Wiederman, Steven D; O'Carroll, David C
2013-01-21
Animals need attention to focus on one target amid alternative distracters. Dragonflies, for example, capture flies in swarms comprising prey and conspecifics, a feat that requires neurons to select one moving target from competing alternatives. Diverse evidence, from functional imaging and physiology to psychophysics, highlights the importance of such "competitive selection" in attention for vertebrates. Analogous mechanisms have been proposed in artificial intelligence and even in invertebrates, yet direct neural correlates of attention are scarce from all animal groups. Here, we demonstrate responses from an identified dragonfly visual neuron that perfectly match a model for competitive selection within limits of neuronal variability (r² = 0.83). Responses to individual targets moving at different locations within the receptive field differ in both magnitude and time course. However, responses to two simultaneous targets exclusively track those for one target alone rather than any combination of the pair. Irrespective of target size, contrast, or separation, this neuron selects one target from the pair and perfectly preserves the response, regardless of whether the "winner" is the stronger stimulus if presented alone. This neuron is amenable to electrophysiological recordings, providing neuroscientists with a new model system for studying selective attention. Copyright © 2013 Elsevier Ltd. All rights reserved.
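The underlying model comparison can be illustrated with synthetic data: does the pair response track one solo response (selection) or the mean of the two (averaging)? The r² computation below is a generic sketch, not the authors' analysis code:

    import numpy as np

    # Synthetic test of competitive selection vs. averaging: build two solo
    # responses, make the 'pair' response follow one of them plus noise, and
    # score each candidate model by r-squared.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)
    r_a = 40.0 * np.exp(-((t - 0.3) / 0.15) ** 2)      # response to A alone
    r_b = 25.0 * np.exp(-((t - 0.6) / 0.20) ** 2)      # response to B alone
    r_pair = r_a + rng.normal(0.0, 2.0, t.size)        # pair response ~ A alone

    def r_squared(obs, pred):
        ss_res = np.sum((obs - pred) ** 2)
        ss_tot = np.sum((obs - obs.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    print("selection:", round(r_squared(r_pair, r_a), 2))             # high
    print("averaging:", round(r_squared(r_pair, (r_a + r_b) / 2), 2)) # lower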
Execution of saccadic eye movements affects speed perception
Goettker, Alexander; Braun, Doris I.; Schütz, Alexander C.; Gegenfurtner, Karl R.
2018-01-01
Due to the foveal organization of our visual system we have to constantly move our eyes to gain precise information about our environment. Doing so massively alters the retinal input. This is problematic for the perception of moving objects, because physical motion and retinal motion become decoupled and the brain has to discount the eye movements to recover the speed of moving objects. Two different types of eye movements, pursuit and saccades, are combined for tracking. We investigated how the way we track moving targets can affect the perceived target speed. We found that the execution of corrective saccades during pursuit initiation modifies how fast the target is perceived compared with pure pursuit. When participants executed a forward (catch-up) saccade they perceived the target to be moving faster. When they executed a backward saccade they perceived the target to be moving more slowly. Variations in pursuit velocity without corrective saccades did not affect perceptual judgments. We present a model for these effects, assuming that the eye velocity signal for small corrective saccades gets integrated with the retinal velocity signal during pursuit. In our model, the execution of corrective saccades modulates the integration of these two signals by giving less weight to the retinal information around the time of corrective saccades. PMID:29440494
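A cartoon of the proposed integration scheme, with all constants our own assumptions: perceived velocity is a weighted sum of retinal slip and an efference copy of eye velocity, and the retinal weight dips around a corrective saccade, breaking the cancellation that would otherwise recover the true target speed:

    import numpy as np

    # With full weighting, retinal slip + eye velocity = target velocity
    # exactly. Transiently down-weighting the retinal term around a forward
    # catch-up saccade leaves part of the saccadic eye-velocity burst
    # uncompensated, so the target seems faster. Constants are illustrative.
    t = np.arange(0.0, 0.6, 0.001)
    target = np.full_like(t, 10.0)                        # deg/s
    eye = np.full_like(t, 9.0)                            # slightly slow pursuit
    eye += 80.0 * np.exp(-0.5 * ((t - 0.3) / 0.01) ** 2)  # forward saccade burst
    retinal = target - eye                                # retinal slip

    w_ret = 1.0 - 0.7 * np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)  # dip at saccade
    perceived = w_ret * retinal + eye
    print(round(perceived.mean(), 1))   # > 10.0: target judged faster than it is

A backward saccade (a negative burst in the eye-velocity term) lowers the same mean, matching the reported slower-speed judgments.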
NASA Astrophysics Data System (ADS)
Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji
A prediction mechanism is necessary in human visuomotor control to compensate for delays in the sensory-motor system. In a previous study, "proactive control" was discussed as one example of the predictive function of human beings, in which the motion of the hand preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit was segmented into target-visible and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli was shortened by more than 10%. This shortening of the period of the rhythm in the brain accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.
Saccades to remembered targets: the effects of smooth pursuit and illusory stimulus motion
NASA Technical Reports Server (NTRS)
Zivotofsky, A. Z.; Rottach, K. G.; Averbuch-Heller, L.; Kori, A. A.; Thomas, C. W.; Dell'Osso, L. F.; Leigh, R. J.
1996-01-01
1. Measurements were made in four normal human subjects of the accuracy of saccades to remembered locations of targets that were flashed on a 20 x 30 deg random dot display that was either stationary or moving horizontally and sinusoidally at +/-9 deg at 0.3 Hz. During the interval between the target flash and the memory-guided saccade, the "memory period" (1.4 s), subjects either fixated a stationary spot or pursued a spot moving vertically sinusoidally at +/-9 deg at 0.3 Hz. 2. When saccades were made toward the location of targets previously flashed on a stationary background as subjects fixated the stationary spot, median saccadic error was 0.93 deg horizontally and 1.1 deg vertically. These errors were greater than for saccades to visible targets, which had median values of 0.59 deg horizontally and 0.60 deg vertically. 3. When targets were flashed as subjects smoothly pursued a spot that moved vertically across the stationary background, median saccadic error was 1.1 deg horizontally and 1.2 deg vertically, thus being of similar accuracy to when targets were flashed during fixation. In addition, the vertical component of the memory-guided saccade was much more closely correlated with the "spatial error" than with the "retinal error"; this indicated that, when programming the saccade, the brain had taken into account eye movements that occurred during the memory period. 4. When saccades were made to targets flashed during attempted fixation of a stationary spot on a horizontally moving background, a condition that produces a weak Duncker-type illusion of horizontal movement of the primary target, median saccadic error increased horizontally to 3.2 deg but was 1.1 deg vertically. 5. When targets were flashed as subjects smoothly pursued a spot that moved vertically on the horizontally moving background, a condition that induces a strong illusion of diagonal target motion, median saccadic error was 4.0 deg horizontally and 1.5 deg vertically; thus the horizontal error was greater than under any other experimental condition. 6. In most trials, the initial saccade to the remembered target was followed by additional saccades while the subject was still in darkness. These secondary saccades, which were executed in the absence of visual feedback, brought the eye closer to the target location. During paradigms involving horizontal background movement, these corrections were more prominent horizontally than vertically. 7. Further measurements were made in two subjects to determine whether inaccuracy of memory-guided saccades, in the horizontal plane, was due to mislocalization at the time that the target flashed, misrepresentation of the trajectory of the pursuit eye movement during the memory period, or both. 8. The saccadic error, both with and without corrections made in darkness, reflected mislocalization by approximately 30% of the displacement of the background at the time that the target flashed. The magnitude of the saccadic error also was influenced by net movement of the background during the memory period, corresponding to approximately 25% of net background movement for the initial saccade and approximately 13% for the final eye position achieved in darkness. 9. We formulated simple linear models to test specific hypotheses about which combinations of signals best describe the observed saccadic amplitudes. We tested the possibilities that the brain made an accurate memory of target location and a reliable representation of the eye movement during the memory period, or that one or both of these was corrupted by the illusory visual stimulus. Our data were best accounted for by a model in which both the working memory of target location and the internal representation of the horizontal eye movements were corrupted by the illusory visual stimulus. We conclude that extraretinal signals played only a minor role, in comparison with visual estimates of the direction of gaze, in planning eye movements to remembered targets.
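The kind of simple linear-model comparison described in point 9 can be sketched with synthetic data (regressor names and coefficients below are our own; the authors' actual signals differ):

    import numpy as np

    # Compare a 'veridical signals' model against one that lets the illusory
    # background corrupt both the memorized target location and the estimate
    # of the eye movement; lower residual sum of squares (RSS) wins.
    rng = np.random.default_rng(1)
    n = 200
    retinal = rng.uniform(-10, 10, n)     # target re: fovea at flash (deg)
    eye_shift = rng.uniform(-5, 5, n)     # pursuit during the memory period (deg)
    background = rng.uniform(-9, 9, n)    # background displacement (deg)

    # simulated behavior: both internal signals partially follow the background
    amplitude = ((retinal + 0.30 * background)
                 + (eye_shift + 0.25 * background)
                 + rng.normal(0.0, 0.5, n))

    def rss(X, y):
        _, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        return float(res[0])

    X_veridical = np.column_stack([retinal, eye_shift])
    X_corrupted = np.column_stack([retinal, eye_shift, background])
    print("veridical signals, RSS:", round(rss(X_veridical, amplitude)))
    print("+ background term, RSS:", round(rss(X_corrupted, amplitude)))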
ERIC Educational Resources Information Center
Ferrara, Katrina; Hoffman, James E.; O'Hearn, Kirsten; Landau, Barbara
2016-01-01
The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical…
Exhausting Attentional Tracking Resources with a Single Fast-Moving Object
ERIC Educational Resources Information Center
Holcombe, Alex O.; Chen, Wei-Ying
2012-01-01
Driving on a busy road, eluding a group of predators, or playing a team sport involves keeping track of multiple moving objects. In typical laboratory tasks, the number of visual targets that humans can track is about four. Three types of theories have been advanced to explain this limit. The fixed-limit theory posits a set number of attentional…
Analysis of EEG Related Saccadic Eye Movement
NASA Astrophysics Data System (ADS)
Funase, Arao; Kuno, Yoshiaki; Okuma, Shigeru; Yagi, Tohru
Our final goal is to establish a model of saccadic eye movement that connects the saccade and the electroencephalogram (EEG). As a first step toward this goal, we recorded and analyzed saccade-related EEG. In the study reported in this paper, we tried to detect an EEG signature that is peculiar to eye movement. In these experiments, each subject was instructed to point their eyes toward visual targets (LEDs) or the direction of sound sources (buzzers). In the control cases, the EEG was recorded with no eye movements. As a result, in the visual experiments we found that the EEG potential changed sharply over the occipital lobe just before eye movement. In the auditory experiments, similar results were observed. In the visual and auditory experiments without eye movement, no such sharp EEG change was observed. Moreover, when the subject moved his/her eyes toward a right-side target, a change in EEG potential was found over the right occipital lobe; conversely, when the subject moved his/her eyes toward a left-side target, a sharp change in EEG potential was found over the left occipital lobe.
NASA Astrophysics Data System (ADS)
Bagheri, Zahra M.; Cazzolato, Benjamin S.; Grainger, Steven; O'Carroll, David C.; Wiederman, Steven D.
2017-08-01
Objective. Many computer vision and robotic applications require the implementation of robust and efficient target-tracking algorithms on a moving platform. However, deployment of a real-time system is challenging, even with the computational power of modern hardware. Lightweight and low-powered flying insects, such as dragonflies, track prey or conspecifics within cluttered natural environments, illustrating an efficient biological solution to the target-tracking problem. Approach. We used our recent recordings from ‘small target motion detector’ neurons in the dragonfly brain to inspire the development of a closed-loop target detection and tracking algorithm. This model exploits facilitation, a slow build-up of response to targets which move along long, continuous trajectories, as seen in our electrophysiological data. To test performance in real-world conditions, we implemented this model on a robotic platform that uses active pursuit strategies based on insect behaviour. Main results. Our robot performs robustly in closed-loop pursuit of targets, despite a range of challenging conditions used in our experiments; low contrast targets, heavily cluttered environments and the presence of distracters. We show that the facilitation stage boosts responses to targets moving along continuous trajectories, improving contrast sensitivity and detection of small moving targets against textured backgrounds. Moreover, the temporal properties of facilitation play a useful role in handling vibration of the robotic platform. We also show that the adoption of feed-forward models which predict the sensory consequences of self-movement can significantly improve target detection during saccadic movements. Significance. Our results provide insight into the neuronal mechanisms that underlie biological target detection and selection (from a moving platform), as well as highlight the effectiveness of our bio-inspired algorithm in an artificial visual system.
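A toy version of the facilitation stage, assuming a gain map that is boosted around each detection and relaxes toward baseline (constants and the class name are our choices, not the published model):

    import numpy as np

    # Facilitation sketch: gain builds up along long, continuous target
    # trajectories but not for scattered clutter flashes, boosting contrast
    # sensitivity for small moving targets against textured backgrounds.
    class FacilitationMap:
        def __init__(self, shape, decay=0.9, boost=0.5, sigma=2.0):
            self.gain = np.ones(shape)
            self.decay, self.boost, self.sigma = decay, boost, sigma
            self.yy, self.xx = np.mgrid[0:shape[0], 0:shape[1]]

        def update(self, detections):
            self.gain = 1.0 + self.decay * (self.gain - 1.0)   # relax toward 1
            for y, x in detections:
                d2 = (self.yy - y) ** 2 + (self.xx - x) ** 2
                self.gain += self.boost * np.exp(-d2 / (2 * self.sigma ** 2))

    rng = np.random.default_rng(0)
    fmap = FacilitationMap((40, 40))
    for step in range(15):               # smooth trajectory vs. random clutter
        fmap.update([(20, 5 + 2 * step), (rng.integers(40), rng.integers(40))])
    print(round(float(fmap.gain[20, 33]), 2),    # ~2x gain along the path
          round(float(fmap.gain.mean()), 2))     # near-baseline elsewhere

The slow decay is also what, on the robot, helps bridge brief target losses caused by platform vibration.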
Kinesthetic information facilitates saccades towards proprioceptive-tactile targets.
Voudouris, Dimitris; Goettker, Alexander; Mueller, Stefanie; Fiehler, Katja
2016-05-01
Saccades to somatosensory targets have longer latencies and are less accurate and precise than saccades to visual targets. Here we examined how different somatosensory information influences the planning and control of saccadic eye movements. Participants fixated a central cross and initiated a saccade as fast as possible in response to a tactile stimulus that was presented to either the index or the middle fingertip of their unseen left hand. In a static condition, the hand remained at a target location for the entire block of trials and the stimulus was presented at a fixed time after an auditory tone. Therefore, the target location was derived only from proprioceptive and tactile information. In a moving condition, the hand was first actively moved to the same target location and the stimulus was then presented immediately. Thus, in the moving condition additional kinesthetic information about the target location was available. We found shorter saccade latencies in the moving compared to the static condition, but no differences in accuracy or precision of saccadic endpoints. In a second experiment, we introduced variable delays after the auditory tone (static condition) or after the end of the hand movement (moving condition) in order to reduce the predictability of the moment of the stimulation and to allow more time to process the kinesthetic information. Again, we found shorter latencies in the moving compared to the static condition but no improvement in saccade accuracy or precision. In a third experiment, we showed that the shorter saccade latencies in the moving condition cannot be explained by the temporal proximity between the relevant event (auditory tone or end of hand movement) and the moment of the stimulation. Our findings suggest that kinesthetic information facilitates planning, but not control, of saccadic eye movements to proprioceptive-tactile targets. Copyright © 2016 Elsevier Ltd. All rights reserved.
Synchronizing the tracking eye movements with the motion of a visual target: Basic neural processes.
Goffart, Laurent; Bourrelly, Clara; Quinet, Julie
2017-01-01
In primates, the appearance of an object moving in the peripheral visual field elicits an interceptive saccade that brings the target image onto the foveae. This foveation is then maintained more or less efficiently by slow pursuit eye movements and subsequent catch-up saccades. Sometimes, the tracking is such that the gaze direction looks spatiotemporally locked onto the moving object. Such a spatial synchronism is quite spectacular when one considers that the target-related signals are transmitted to the motor neurons through multiple parallel channels connecting separate neural populations with different conduction speeds and delays. Because of the delays between the changes of retinal activity and the changes of extraocular muscle tension, the maintenance of the target image onto the fovea cannot be driven by the current retinal signals as they correspond to past positions of the target. Yet, the spatiotemporal coincidence observed during pursuit suggests that the oculomotor system is driven by a command estimating continuously the current location of the target, i.e., where it is here and now. This inference is also supported by experimental perturbation studies: when the trajectory of an interceptive saccade is experimentally perturbed, a correction saccade is produced in flight or after a short delay, and brings the gaze next to the location where unperturbed saccades would have landed at about the same time, in the absence of visual feedback. In this chapter, we explain how such correction can be supported by previous visual signals without assuming "predictive" signals encoding future target locations. We also describe the basic neural processes which gradually yield the synchronization of eye movements with the target motion. When the process fails, the gaze is driven by signals related to past locations of the target, not by estimates to its upcoming locations, and a catch-up is made to reinitiate the synchronization. © 2017 Elsevier B.V. All rights reserved.
Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.
Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H
2013-07-01
Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.
Alphonsa, Sushma; Dai, Boyi; Benham-Deal, Tami; Zhu, Qin
2016-01-01
The speed-accuracy trade-off is a fundamental movement problem that has been extensively investigated. It has been established that the speed at which one can move to tap targets depends on how large the targets are and how far they are apart. These spatial properties of the targets can be quantified by the index of difficulty (ID). Two visual illusions are known to affect the perception of target size and movement amplitude: the Ebbinghaus illusion and the Müller-Lyer illusion. We created visual images that combined these two visual illusions to manipulate the perceived ID, and then examined people's visual perception of the targets in illusory context as well as their performance in tapping those targets in both discrete and continuous manners. The findings revealed that the combined visual illusions affected the perceived ID similarly in both discrete and continuous judgment conditions. However, the movement outcomes were affected by the combined visual illusions according to the tapping mode. In discrete tapping, the combined visual illusions affected both movement accuracy and movement amplitude, such that the effective ID resembled the perceived ID. In continuous tapping, none of the movement outcomes were affected by the combined visual illusions: participants tapped the targets with high speed and accuracy in all visual conditions. Based on these findings, we concluded that distinct visual-motor control mechanisms are responsible for the execution of discrete and continuous Fitts' tapping. Whereas discrete tapping relies on allocentric (object-centered) information to plan the action, continuous tapping relies on egocentric (self-centered) information to control the action. The planning-control model for rapid aiming movements is supported.
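For reference, the standard Fitts formulation behind the ID is ID = log2(2A/W), with A the movement amplitude and W the target width, and movement time is commonly modeled as MT = a + b·ID. A quick sketch (the 20% illusory shrink of W and the a, b constants are illustrative numbers of ours):

    import math

    # Index of difficulty and Fitts-law movement time. An Ebbinghaus or
    # Muller-Lyer context that makes the target look smaller (or the gap
    # larger) raises the *perceived* ID without changing the physical one.
    def index_of_difficulty(amplitude, width):
        return math.log2(2.0 * amplitude / width)

    def movement_time(amplitude, width, a=0.05, b=0.12):  # a, b illustrative
        return a + b * index_of_difficulty(amplitude, width)

    print(round(index_of_difficulty(200, 40), 2))  # physical ID: ~3.32 bits
    print(round(index_of_difficulty(200, 32), 2))  # perceived ID if W looks 20% smaller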
Does the Brain Extrapolate the Position of a Transient Moving Target?
Quinet, Julie; Goffart, Laurent
2015-08-26
When an object moves in the visual field, its motion evokes a streak of activity on the retina, and the incoming retinal signals lead to robust oculomotor commands: corrections are observed if the trajectory of the interceptive saccade is perturbed by a microstimulation in the superior colliculus. The present study complements a previous perturbation study by investigating, in the head-restrained monkey, the generation of saccades toward a transient moving target (100-200 ms). We tested whether the saccades land on the average of antecedent target positions or beyond the location where the target disappeared. Using target motions with different speed profiles, we also examined the sensitivity of the process that converts time-varying retinal signals into saccadic oculomotor commands. The results show that, for identical overall target displacements on the visual display, saccades toward a faster target land beyond the endpoint of saccades toward a target moving slower. The rate of change in speed matters in the visuomotor transformation. Indeed, in response to identical overall target displacements and durations, the saccades have smaller amplitude when they are made in response to an accelerating target than to a decelerating one. Moreover, the motion-related signals have different weights depending upon their timing relative to the target onset: early signals are more influential in the specification of saccade amplitude than later signals. We discuss the "predictive" properties of the visuo-saccadic system and the nature of the location where the saccades land, after providing some critical comments on the "hic-et-nunc" hypothesis (Fleuriet and Goffart, 2012). Complementing the work of Fleuriet and Goffart (2012), this study is a contribution to the more general scientific research aimed at understanding how ongoing action is dynamically and adaptively adjusted to the current spatiotemporal aspects of its goal. Using the saccadic eye movement as a probe, we provide results that are critical for investigating and understanding the neural basis of motion extrapolation and prediction. Copyright © 2015 the authors 0270-6474/15/3511780-11$15.00/0.
Etchells, Peter J; Benton, Christopher P; Ludwig, Casimir J H; Gilchrist, Iain D
2011-01-01
A growing number of studies in vision research employ analyses of how perturbations in visual stimuli influence behavior on single trials. Recently, we have developed a method along such lines to assess the time course over which object velocity information is extracted on a trial-by-trial basis in order to produce an accurate intercepting saccade to a moving target. Here, we present a simplified version of this methodology, and use it to investigate how changes in stimulus contrast affect the temporal velocity integration window used when generating saccades to moving targets. Observers generated saccades to one of two moving targets, which were presented at high (80%) or low (7.5%) contrast. In 50% of trials, target velocity stepped up or down after a variable interval after the saccadic go signal. The extent to which the saccade endpoint can be accounted for as a weighted combination of the pre- or post-step velocities allows for identification of the temporal velocity integration window. Our results show that the temporal integration window takes longer to peak in the low-contrast than in the high-contrast condition. By enabling the assessment of how information such as changes in velocity can be used in the programming of a saccadic eye movement on single trials, this study describes and tests a novel methodology with which to look at the internal processing mechanisms that transform sensory visual inputs into oculomotor outputs.
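The endpoint-decomposition logic can be written down directly; the single synthetic trial below is our illustration, whereas the authors' window estimate aggregates many trials across step times:

    # Solve endpoint = start_pos + travel_time * (w*v_pre + (1-w)*v_post)
    # for the weight w given to the pre-step velocity; sweeping the step time
    # and pooling w across trials traces out the temporal integration window.
    def velocity_weight(endpoint, start_pos, travel_time, v_pre, v_post):
        v_eff = (endpoint - start_pos) / travel_time
        return (v_eff - v_post) / (v_pre - v_post)

    # synthetic trial: target at 10 deg moving 15 deg/s steps down to 5 deg/s
    endpoint = 10.0 + 0.2 * (0.6 * 15.0 + 0.4 * 5.0)      # built with w = 0.6
    print(round(velocity_weight(endpoint, 10.0, 0.2, 15.0, 5.0), 2))  # -> 0.6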
Visual cues that are effective for contextual saccade adaptation
Azadi, Reza
2014-01-01
The accuracy of saccades, as maintained by saccade adaptation, has been shown to be context dependent: the same retinal displacement can evoke movements of different amplitudes depending on motor context, such as orbital starting location. There is conflicting evidence as to whether purely visual cues also effect contextual saccade adaptation and, if so, what function this might serve. We tested what visual cues might evoke contextual adaptation. Over 5 experiments, 78 naive subjects made saccades to circularly moving targets, which stepped outward or inward during the saccade depending on target movement direction, speed, or color and shape. To test if the movement or context postsaccade were critical, we stopped the postsaccade target motion (experiment 4) or neutralized the contexts by equating postsaccade target speed to an intermediate value (experiment 5). We found contextual adaptation in all conditions except those defined by color and shape. We conclude that some, but not all, visual cues before the saccade are sufficient for contextual adaptation. We conjecture that this visual contextuality functions to allow for different motor states for different coordinated movement patterns, such as coordinated saccade and pursuit motor planning. PMID:24647429
Prete, Frederick R; Komito, Justin L; Dominguez, Salina; Svenson, Gavin; López, LeoLin Y; Guillen, Alex; Bogdanivich, Nicole
2011-09-01
We assessed the differences in appetitive responses to visual stimuli by three species of praying mantis (Insecta: Mantodea), Tenodera aridifolia sinensis, Mantis religiosa, and Cilnia humeralis. Tethered, adult females watched computer-generated stimuli (erratically moving disks or linearly moving rectangles) that varied along predetermined parameters. Three responses were scored: tracking, approaching, and striking. Threshold stimulus size (diameter) for tracking and striking at disks ranged from 3.5 deg (C. humeralis) to 7.8 deg (M. religiosa), and from 3.3 deg (C. humeralis) to 11.7 deg (M. religiosa), respectively. Unlike the other species, which struck at disks as large as 44 deg, T. a. sinensis displayed a preference for 14 deg disks. Disks moving at 143 deg/s were preferred by all species. M. religiosa exhibited the most approaching behavior and, together with T. a. sinensis, distinguished between rectangular stimuli moving parallel versus perpendicular to their long axes. C. humeralis did not make this distinction. Stimulus sizes that elicited the target behaviors were not related to mantis size. However, differences in compound eye morphology may be related to species differences: C. humeralis' eyes are farthest apart, and it has an apparently narrower binocular visual field, which may affect retinal inputs to movement-sensitive visual interneurons.
Störmer, Viola S; Alvarez, George A; Cavanagh, Patrick
2014-08-27
It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. Copyright © 2014 the authors 0270-6474/14/3311526-08$15.00/0.
Target Selection by the Frontal Cortex during Coordinated Saccadic and Smooth Pursuit Eye Movements
ERIC Educational Resources Information Center
Srihasam, Krishna; Bullock, Daniel; Grossberg, Stephen
2009-01-01
Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth-pursuit eye movements. In particular, the saccadic and smooth-pursuit systems interact to often choose the same target, and to maximize its visibility through time. How do…
Compatibility of motion facilitates visuomotor synchronization.
Hove, Michael J; Spivey, Michael J; Krumhansl, Carol L
2010-12-01
Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1, synchronization success rates increased dramatically for spatiotemporal sequences of both geometric and biological forms over flashing sequences. In Experiment 2, synchronization performance was best when target sequences and movements were directionally compatible (i.e., simultaneously down), followed by orthogonal stimuli, and was poorest for incompatible moving stimuli and flashing stimuli. In Experiment 3, synchronization performance was best with auditory sequences, followed by compatible moving stimuli, and was worst for flashing and fading stimuli. Results indicate that visuomotor synchronization improves dramatically with compatible spatial information. However, an auditory advantage in sensorimotor synchronization persists.
Holmes, Nicholas P; Dakwar, Azar R
2015-12-01
Movements aimed towards objects occasionally have to be adjusted when the object moves. These online adjustments can be very rapid, occurring in as little as 100 ms. More is known about the latency and neural basis of online control of movements to visual than to auditory target objects. We examined the latency of online corrections in reaching-to-point movements to visual and auditory targets that could change side and/or modality at movement onset. Visual or auditory targets were presented on the left or right sides, and participants were instructed to reach and point to them as quickly and as accurately as possible. On half of the trials, the targets changed side at movement onset, and participants had to correct their movements to point to the new target location as quickly as possible. Given different published approaches to measuring the latency for initiating movement corrections, we examined several different methods systematically. What we describe here as the optimal methods involved fitting a straight-line model to the velocity of the correction movement, rather than using a statistical criterion to determine correction onset. In the multimodal experiment, these model-fitting methods produced significantly lower latencies for correcting movements away from the auditory targets than away from the visual targets. Our results confirm that rapid online correction is possible for auditory targets, but further work is required to determine whether the underlying control system for reaching and pointing movements is the same for auditory and visual targets. Copyright © 2015 Elsevier Ltd. All rights reserved.
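A sketch of that straight-line (hinge) fitting idea, with our own grid search over candidate onset times; this is an illustration of the approach, not the authors' exact procedure:

    import numpy as np

    # Fit a hinge to the correction-axis velocity: zero before onset t0, a
    # straight line after. The best-fitting breakpoint is the correction
    # latency, avoiding the bias of statistical threshold-crossing criteria.
    def correction_latency(t, v):
        best_t0, best_err = None, np.inf
        for t0 in t[5:-5]:                               # candidate onsets
            x = np.where(t > t0, t - t0, 0.0)
            slope = np.sum(x * v) / np.sum(x * x)        # least-squares slope
            err = np.sum((v - slope * x) ** 2)
            if err < best_err:
                best_t0, best_err = t0, err
        return best_t0

    rng = np.random.default_rng(0)
    t = np.arange(0.0, 0.4, 0.005)
    v = np.where(t > 0.16, 3.0 * (t - 0.16), 0.0) + rng.normal(0.0, 0.01, t.size)
    print(round(correction_latency(t, v), 3))            # recovers ~0.16 s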
Modulation of high-frequency vestibuloocular reflex during visual tracking in humans
NASA Technical Reports Server (NTRS)
Das, V. E.; Leigh, R. J.; Thomas, C. W.; Averbuch-Heller, L.; Zivotofsky, A. Z.; Discenna, A. O.; Dell'Osso, L. F.
1995-01-01
1. Humans may visually track a moving object either when they are stationary or in motion. To investigate visual-vestibular interaction during both conditions, we compared horizontal smooth pursuit (SP) and active combined eye-head tracking (CEHT) of a target moving sinusoidally at 0.4 Hz in four normal subjects while the subjects were either stationary or vibrated in yaw at 2.8 Hz. We also measured the visually enhanced vestibuloocular reflex (VVOR) during vibration in yaw at 2.8 Hz over a peak head velocity range of 5-40 degrees/s. 2. We found that the gain of the VVOR at 2.8 Hz increased in all four subjects as peak head velocity increased (P < 0.001), with minimal phase changes, such that mean retinal image slip was held below 5 degrees/s. However, no corresponding modulation in vestibuloocular reflex gain occurred with increasing peak head velocity during a control condition when subjects were rotated in darkness. 3. During both horizontal SP and CEHT, tracking gains were similar, and the mean slip speed of the target's image on the retina was held below 5.5 degrees/s whether subjects were stationary or being vibrated at 2.8 Hz. During both horizontal SP and CEHT of target motion at 0.4 Hz, while subjects were vibrated in yaw, VVOR gain for the 2.8-Hz head rotations was similar to or higher than that achieved during fixation of a stationary target. This is in contrast to the decrease of VVOR gain that is reported while stationary subjects perform CEHT. (ABSTRACT TRUNCATED AT 250 WORDS)
Ceux, Tanja; Montagne, Gilles; Buekers, Martinus J
2010-12-01
The present study examined whether the beneficial role of coherently grouped visual motion structures for performing complex (interlimb) coordination patterns can be generalized to synchronization behavior in a visuo-proprioceptive conflict situation. To achieve this goal, 17 participants had to synchronize a self-moved circle, representing the arm movement, with a visual target signal corresponding to five temporally shifted visual feedback conditions (0%, 25%, 50%, 75%, and 100% of the target cycle duration) in three synchronization modes (in-phase, anti-phase, and intermediate). The results showed that the perception of a newly generated perceptual Gestalt between the visual feedback of the arm and the target signal facilitated the synchronization performance in the preferred in-phase synchronization mode in contrast to the less stable anti-phase and intermediate mode. Our findings suggest that the complexity of the synchronization mode defines to what extent the visual and/or proprioceptive information source affects the synchronization performance in the present unimanual synchronization task. Copyright © 2010 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Watson, Derrick G.; Kunar, Melina A.
2010-01-01
Visual search efficiency improves by presenting (previewing) one set of distractors before the target and remaining distractor items (D. G. Watson & G. W. Humphreys, 1997). Previous work has shown that this preview benefit is abolished if the old items change their shape when the new items are added (e.g., D. G. Watson & G. W. Humphreys,…
Updating visual memory across eye movements for ocular and arm motor control.
Thompson, Aidan A; Henriques, Denise Y P
2008-11-01
Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.
Hesse, Constanze; Schenk, Thomas
2014-05-01
It has been suggested that while movements directed at visible targets are processed within the dorsal stream, movements executed after delay rely on the visual representations of the ventral stream (Milner & Goodale, 2006). This interpretation is supported by the observation that a patient with ventral stream damage (D.F.) has trouble performing accurate movements after a delay, but performs normally when the target is visible during movement programming. We tested D.F.'s visuomotor performance in a letter-posting task whilst varying the amount of visual feedback available. Additionally, we also varied whether D.F. received tactile feedback at the end of each trial (posting through a letter box vs posting on a screen) and whether environmental cues were available during the delay period (removing the target only vs suppressing vision completely with shutter glasses). We found that in the absence of environmental cues patient D.F. was unaffected by the introduction of delay and performed as accurately as healthy controls. However, when environmental cues and vision of the moving hand were available during and after the delay period, D.F.'s visuomotor performance was impaired. Thus, while healthy controls benefit from the availability of environmental landmarks and/or visual feedback of the moving hand, such cues seem less beneficial to D.F. Taken together our findings suggest that ventral stream damage does not always impact the ability to make delayed movements but compromises the ability to use environmental landmarks and visual feedback efficiently. Copyright © 2014 Elsevier Ltd. All rights reserved.
Holcombe, Alex O; Chen, Wei-Ying
2013-01-09
Overall performance when tracking moving targets is known to be poorer for larger numbers of targets, but the specific effect on tracking's temporal resolution has never been investigated. We document a broad range of display parameters for which visual tracking is limited by temporal frequency (the interval between when a target is at each location and a distracter moves in and replaces it) rather than by object speed. We tested tracking of one, two, and three moving targets while the eyes remained fixed. Variation of the number of distracters and their speed revealed both speed limits and temporal frequency limits on tracking. The temporal frequency limit fell from 7 Hz with one target to 4 Hz with two targets and 2.6 Hz with three targets. The large size of this performance decrease implies that in the two-target condition participants would have done better by tracking only one of the two targets and ignoring the other. These effects are predicted by serial models involving a single tracking focus that must switch among the targets, sampling the position of only one target at a time. If parallel processing theories are to explain why dividing the tracking resource reduces temporal resolution so markedly, supplemental assumptions will be required.
Real-time decoding of the direction of covert visuospatial attention
NASA Astrophysics Data System (ADS)
Andersson, Patrik; Ramsey, Nick F.; Raemaekers, Mathijs; Viergever, Max A.; Pluim, Josien P. W.
2012-08-01
Brain-computer interfaces (BCIs) make it possible to translate a person’s intentions into actions without depending on the muscular system. Brain activity is measured and classified into commands, thereby creating a direct link between the mind and the environment, enabling, e.g., cursor control or navigation of a wheelchair or robot. Most BCI research is conducted with scalp EEG but recent developments move toward intracranial electrodes for paralyzed people. The vast majority of BCI studies focus on the motor system as the appropriate target for recording and decoding movement intentions. However, properties of the visual system may make the visual system an attractive and intuitive alternative. We report on a study investigating feasibility of decoding covert visuospatial attention in real time, exploiting the full potential of a 7 T MRI scanner to obtain the necessary signal quality, capitalizing on earlier fMRI studies indicating that covert visuospatial attention changes activity in the visual areas that respond to stimuli presented in the attended area of the visual field. Healthy volunteers were instructed to shift their attention from the center of the screen to one of four static targets in the periphery, without moving their eyes from the center. During the first part of the fMRI-run, the relevant brain regions were located using incremental statistical analysis. During the second part, the activity in these regions was extracted and classified, and the subject was given visual feedback of the result. Performance was assessed as the number of trials where the real-time classifier correctly identified the direction of attention. On average, 80% of trials were correctly classified (chance level <25%) based on a single image volume, indicating very high decoding performance. While we restricted the experiment to five attention target regions (four peripheral and one central), the number of directions can be higher provided the brain activity patterns can be distinguished. In summary, the visual system promises to be an effective target for BCI control.
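As a minimal sketch of the real-time classification step described above, assuming one feature vector per image volume (mean activity of the localized ROIs) and a nearest-centroid decision rule; the rule, data shapes, and names are illustrative assumptions, not the authors' exact pipeline:

```python
# Illustrative sketch (not the authors' pipeline): nearest-centroid
# classification of the attended direction from ROI activity.
import numpy as np

def train_centroids(features, labels):
    """features: (n_trials, n_rois); labels: attended direction per trial."""
    return {d: features[labels == d].mean(axis=0) for d in np.unique(labels)}

def classify_volume(centroids, roi_activity):
    """Assign a single image volume to the nearest direction centroid."""
    return min(centroids, key=lambda d: np.linalg.norm(roi_activity - centroids[d]))

# Toy data: 5 classes (4 peripheral targets + center), 5 ROIs
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 5)), rng.integers(0, 5, size=50)
model = train_centroids(X, y)
print(classify_volume(model, X[0]))
```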
A unified dynamic neural field model of goal directed eye movements
NASA Astrophysics Data System (ADS)
Quinton, J. C.; Goffart, L.
2018-01-01
Primates heavily rely on their visual system, which exploits signals of graded precision based on the eccentricity of the target in the visual field. The interactions with the environment involve actively selecting and focusing on visual targets or regions of interest, instead of contemplating an omnidirectional visual flow. Eye movements specifically allow foveating targets and tracking their motion. Once a target is brought within the central visual field, eye movements are usually classified into catch-up saccades (jumping from one orientation or fixation to another) and smooth pursuit (continuously tracking a target with low velocity). Building on existing dynamic neural field equations, we introduce a novel model that incorporates internal projections to better estimate the current target location (associated with a peak of activity). This estimate is then used to trigger an eye movement, leading to qualitatively different behaviours depending on the dynamics of the whole oculomotor system: (1) fixational eye movements due to small variations in the weights of projections when the target is stationary, (2) interceptive and catch-up saccades when peaks build and relax on the neural field, (3) smooth pursuit when the peak stabilises near the centre of the field, the system reaching a fixed-point attractor. Learning is nevertheless required for tracking a rapidly moving target, and the proposed model thus replicates recent results in the monkey, in which repeated exercise permits the maintenance of the target within the central visual field at its current (here-and-now) location, despite the delays involved in transmitting retinal signals to the oculomotor neurons.
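For readers unfamiliar with the formalism, the canonical dynamic neural field (Amari-type) equation that such models build on can be written as below; the internal projections added by this particular model are not reproduced here:

```latex
\tau \,\frac{\partial u(x,t)}{\partial t} = -u(x,t)
  + \int w(x - x')\, f\!\left(u(x',t)\right) dx' + I(x,t) + h
```

where u(x,t) is the field activation over retinotopic position x, w a lateral excitation/inhibition kernel, f a sigmoidal firing-rate function, I(x,t) the visual input, and h the resting level.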
Strategies used to walk through a moving aperture.
Cinelli, Michael E; Patla, Aftab E; Allard, Fran
2008-05-01
The objectives of the study were to determine what strategy (pursuit or interception) individuals used to pass through an oscillating target and to determine whether individuals walked towards where they were looking. Kinematic and gaze behaviour data were collected from seven healthy female participants as they started at one of five different starting positions and walked 7 m towards an oscillating target. The target was a two-dimensional 70 cm aperture made by two 76-cm-wide doors and oscillated between two end posts that were 300 cm apart. To quantify the objectives, target-heading angles [Fajen BR, Warren WH. Behavioral dynamics of steering, obstacle avoidance, and route selection. J Exp Psychol Hum Percept Perform 2003;29(2):343-62; Fajen BR, Warren WH. Visual guidance of intercepting a moving target on foot. Perception 2004;33:689-715] were calculated. Results showed that the participants used neither an interception nor a pursuit strategy to successfully pass through the moving aperture. The participants steered towards the middle of the pathway prior to passing through the middle of the aperture. A cross-correlation between the horizontal gaze locations and the medial/lateral (M/L) location of the participants' center of mass (COM) was performed. The results from the cross-correlation show that during the final 2 s prior to crossing the aperture, the participants walked where they were looking. The findings from this study suggest that individuals simplify a task by decreasing the perceptual load until the final stages. In this way the final stages of this task were visually driven.
Drew, Trafton; Horowitz, Todd S.; Wolfe, Jeremy M.; Vogel, Edward K.
2015-01-01
In the attentive tracking task, observers track multiple objects as they move independently and unpredictably among visually identical distractors. Although a number of models of attentive tracking implicate visual working memory as the mechanism responsible for representing target locations, no study has ever directly compared the neural mechanisms of the two tasks. In the current set of experiments, we used electrophysiological recordings to delineate similarities and differences between the neural processing involved in working memory and attentive tracking. We found that the contralateral electrophysiological response to the two tasks was similarly sensitive to the number of items attended in both tasks but that there was also a unique contralateral negativity related to the process of monitoring target position during tracking. This signal was absent for periods of time during tracking tasks when objects briefly stopped moving. These results provide evidence that, during attentive tracking, the process of tracking target locations elicits an electrophysiological response that is distinct and dissociable from neural measures of the number of items being attended. PMID:21228175
Domkin, Dmitry; Forsman, Mikael; Richter, Hans O
2016-06-01
Previous studies have shown an association of visual demands during near work and increased activity of the trapezius muscle. Those studies were conducted under stationary postural conditions with fixed gaze and artificial visual load. The present study investigated the relationship between ciliary muscle contraction force and trapezius muscle activity across individuals during performance of a natural dynamic motor task under free gaze conditions. Participants (N=11) tracked a moving visual target with a digital pen on a computer screen. Tracking performance, eye refraction and trapezius muscle activity were continuously measured. Ciliary muscle contraction force was computed from eye accommodative response. There was a significant Pearson correlation between ciliary muscle contraction force and trapezius muscle activity on the tracking side (0.78, p<0.01) and passive side (0.64, p<0.05). The study supports the hypothesis that high visual demands, leading to an increased ciliary muscle contraction during continuous eye-hand coordination, may increase trapezius muscle tension and thus contribute to the development of musculoskeletal complaints in the neck-shoulder area. Further experimental studies are required to clarify whether the relationship is valid within each individual or may represent a general personal trait, when individuals with higher eye accommodative response tend to have higher trapezius muscle activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
Motion coherence and conjunction search: implications for guided search theory.
Driver, J; McLeod, P; Dienes, Z
1992-01-01
Feature integration theory has recently been revised with two proposals that visual conjunction search can be parallel under some circumstances--either because items with nontarget features are inhibited, or because items with target features are excited. We examined whether excitatory or inhibitory guidance controlled conjunction search for an X oscillating in one direction among Os oscillating in that direction and Xs oscillating in another. Search was affected by whether items oscillated in phase with each other, and it was exceptionally difficult when items with target motion moved out of phase with each other and items with nontarget motion moved out of phase. The results suggest that conjunction search can be guided both by excitation of target features and by inhibition of nontarget features.
Visual cues that are effective for contextual saccade adaptation.
Azadi, Reza; Harwood, Mark R
2014-06-01
The accuracy of saccades, as maintained by saccade adaptation, has been shown to be context dependent: saccades can have different amplitudes for the same retinal displacement depending on motor contexts such as orbital starting location. There is conflicting evidence as to whether purely visual cues also affect contextual saccade adaptation and, if so, what function this might serve. We tested what visual cues might evoke contextual adaptation. Over 5 experiments, 78 naive subjects made saccades to circularly moving targets, which stepped outward or inward during the saccade depending on target movement direction, speed, or color and shape. To test whether the movement or context postsaccade was critical, we stopped the postsaccade target motion (experiment 4) or neutralized the contexts by equating postsaccade target speed to an intermediate value (experiment 5). We found contextual adaptation in all conditions except those defined by color and shape. We conclude that some, but not all, visual cues before the saccade are sufficient for contextual adaptation. We conjecture that this visual contextuality functions to allow different motor states for different coordinated movement patterns, such as coordinated saccade and pursuit motor planning. Copyright © 2014 the American Physiological Society.
What triggers catch-up saccades during visual tracking?
de Brouwer, Sophie; Yuksel, Demet; Blohm, Gunnar; Missal, Marcus; Lefèvre, Philippe
2002-03-01
When tracking moving visual stimuli, primates orient their visual axis by combining two kinds of eye movements, smooth pursuit and saccades, that have very different dynamics. Yet, the mechanisms that govern the decision to switch from one type of eye movement to the other are still poorly understood, even though they could bring a significant contribution to the understanding of how the CNS combines different kinds of control strategies to achieve a common motor and sensory goal. In this study, we investigated the oculomotor responses to a large range of different combinations of position error and velocity error during visual tracking of moving stimuli in humans. We found that the oculomotor system uses a prediction of the time at which the eye trajectory will cross the target, defined as the "eye crossing time" (T(XE)). The eye crossing time, which depends on both position error and velocity error, is the criterion used to switch between smooth and saccadic pursuit, i.e., to trigger catch-up saccades. On average, for T(XE) between 40 and 180 ms, no saccade is triggered and target tracking remains purely smooth. Conversely, when T(XE) becomes smaller than 40 ms or larger than 180 ms, a saccade is triggered after a short latency (around 125 ms).
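As a hedged illustration of the trigger rule, reading the abstract as T_XE = PE / VE (position error divided by velocity error), which is consistent with, but not stated verbatim in, the text above:

```python
# Hedged sketch of the catch-up saccade trigger. The ratio form
# T_XE = PE / VE is an assumption consistent with the abstract.
def eye_crossing_time(pe_deg, ve_deg_per_s):
    """Predicted time (s) for the eye trajectory to cross the target."""
    if ve_deg_per_s == 0:
        return float("inf")  # no relative motion: the eye never crosses
    return pe_deg / ve_deg_per_s

def triggers_catch_up_saccade(t_xe, low=0.040, high=0.180):
    """Pursuit stays smooth only while 40 ms < T_XE < 180 ms."""
    return not (low < t_xe < high)

print(triggers_catch_up_saccade(eye_crossing_time(2.0, 20.0)))  # 0.1 s -> False
```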
Controlling the spotlight of attention: visual span size and flexibility in schizophrenia.
Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M
2011-10-01
The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
Effects of age and eccentricity on visual target detection.
Gruber, Nicole; Müri, René M; Mosimann, Urs P; Bieri, Rahel; Aeschimann, Andrea; Zito, Giuseppe A; Urwyler, Prabitha; Nyffeler, Thomas; Nef, Tobias
2013-01-01
The aim of this study was to examine the effects of aging and target eccentricity on a visual search task comprising 30 images of everyday life projected into a hemisphere, realizing a ±90° visual field. The task, performed binocularly, allowed participants to freely move their eyes to scan images for an appearing target or distractor stimulus (presented at 10°, 30°, and 50° eccentricity). The distractor stimulus required no response, while the target stimulus required acknowledgment by pressing the response button. One hundred and seventeen healthy subjects (mean age = 49.63 years, SD = 17.40 years, age range 20-78 years) were studied. The results show that target detection performance decreases with age as well as with increasing eccentricity, especially for older subjects. Reaction time also increases with age and eccentricity, but in contrast to target detection, there is no interaction between age and eccentricity. Eye movement analysis showed that younger subjects exhibited a passive search strategy while older subjects exhibited an active search strategy, probably as compensation for their reduced peripheral detection performance.
Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.
Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing
2017-08-23
Object tracking is a crucial research subfield in computer vision with wide applications in navigation, robotics, and military systems. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved; on the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation for outliers, View Feature Histogram (VFH) feature extraction, object model building, and searching for and matching a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results, tested and validated on three kinds of scenes under partial target occlusion and interference, different moving speeds, and different trajectories, show that the Kalman filter offers high efficiency while the adaptive particle filter offers high robustness and high precision. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields. PMID:28832520
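A minimal constant-velocity Kalman filter sketch for the position-estimation stage; the state layout, noise magnitudes, and frame interval are illustrative assumptions, not the paper's parameters:

```python
# Constant-velocity Kalman filter sketch; state is [x, y, z, vx, vy, vz].
import numpy as np

dt = 0.1  # assumed LiDAR frame interval (s)
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])      # state transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])       # observe position only
Q, R = 0.01 * np.eye(6), 0.05 * np.eye(3)          # process/measurement noise

def kalman_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)                        # update with centroid z
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = np.zeros(6), np.eye(6)
x, P = kalman_step(x, P, np.array([1.0, 0.5, 0.2]))
print(x[:3])  # estimated sphere position
```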
The Perception of the Higher Derivatives of Visual Motion.
1986-06-24
uniform velocity in one run with a target moving with either an accelerating or decelerating motion on another run, and had to decide on which of the two runs the motion was uniform. It was found that sensitivity to acceleration (as indicated by proportion of correct discriminations) decreased... 20 subjects had 8 tracking runs with each of the three types of moving target... In an experiment by Runeson (1975), one target (the standard)... The third...
Motion-Induced Blindness and Troxler Fading: Common and Different Mechanisms
Bonneh, Yoram S.; Donner, Tobias H.; Cooperman, Alexander; Heeger, David J.; Sagi, Dov
2014-01-01
Extended stabilization of gaze leads to disappearance of dim visual targets presented peripherally. This phenomenon, known as Troxler fading, is thought to result from neuronal adaptation. Intense targets also disappear intermittently when surrounded by a moving pattern (the "mask"), a phenomenon known as motion-induced blindness (MIB). The similar phenomenology and dynamics of these disappearances may suggest that MIB is, likewise, solely due to adaptation, which may be amplified by the presence of the mask. Here we directly compared the dependence of both phenomena on target contrast. Observers reported the disappearance and reappearance of a target of varying intensity (contrast levels: 8%–80%). MIB was induced by adding a mask that moved at one of several different speeds. The results revealed a lawful effect of contrast in both MIB and Troxler fading, but with opposite trends. Increasing target contrast doubled the rate of disappearance events for MIB but halved it for Troxler fading. The target mean invisible period decreased equally strongly with target contrast in MIB and in Troxler fading. The results suggest that both MIB and Troxler fading are equally affected by contrast adaptation, but that the rate of MIB is governed by an additional mechanism, possibly involving antagonistic processes between neuronal populations processing target and mask. Our results link MIB to other bi-stable visual phenomena that involve neuronal competition (such as binocular rivalry), which exhibit an analogous dependency on the strength of the competing stimulus components. PMID:24658600
Effects of visual motion consistent or inconsistent with gravity on postural sway.
Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo
2017-07-01
Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintain upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has been little investigated. To understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravitational acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static but also dynamic visual cues about the direction and magnitude of the gravitational field are relevant for balance control during upright stance.
A novel visual saliency detection method for infrared video sequences
NASA Astrophysics Data System (ADS)
Wang, Xin; Zhang, Yuzhen; Ning, Chen
2017-12-01
Infrared video applications such as target detection and recognition, moving target tracking, and so forth can benefit greatly from visual saliency detection, which is essentially a method to automatically localize the "important" content in videos. In this paper, a novel visual saliency detection method for infrared video sequences is proposed. Specifically, for infrared video saliency detection, both spatial saliency and temporal saliency are considered. For spatial saliency, we adopt a mutual consistency-guided spatial cues combination-based method to capture the regions with obvious luminance contrast and contour features. For temporal saliency, a multi-frame symmetric difference approach is proposed to discriminate salient moving regions of interest from background motions. Then, the spatial saliency and temporal saliency are combined to compute the spatiotemporal saliency using an adaptive fusion strategy. Besides, to highlight the spatiotemporal salient regions uniformly, a multi-scale fusion approach is embedded into the spatiotemporal saliency model. Finally, a Gestalt theory-inspired optimization algorithm is designed to further improve the reliability of the final saliency map. Experimental results demonstrate that our method outperforms many state-of-the-art saliency detection approaches for infrared videos under various backgrounds.
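A sketch of the multi-frame symmetric-difference idea for temporal saliency, assuming grayscale frames as float arrays; the paper's exact window size and adaptive fusion weights are not reproduced:

```python
# Symmetric-difference temporal saliency over three consecutive frames.
import numpy as np

def temporal_saliency(prev, curr, nxt):
    """Motion is salient where |curr-prev| and |nxt-curr| agree."""
    d1 = np.abs(curr - prev)
    d2 = np.abs(nxt - curr)
    return np.minimum(d1, d2)  # suppresses one-sided (background) change

frames = [np.random.rand(64, 64) for _ in range(3)]
saliency_map = temporal_saliency(*frames)
```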
NASA Technical Reports Server (NTRS)
Das, V. E.; Thomas, C. W.; Zivotofsky, A. Z.; Leigh, R. J.
1996-01-01
Video-based eye-tracking systems are especially suited to studying eye movements during naturally occurring activities such as locomotion, but eye velocity records suffer from broadband noise that is not amenable to conventional filtering methods. We evaluated the effectiveness of combined median and moving-average filters by comparing prefiltered and postfiltered records made synchronously with a video eye-tracker and the magnetic search coil technique, which is relatively noise free. Root-mean-square noise was reduced by half, without distorting the eye velocity signal. To illustrate the practical use of this technique, we studied normal subjects and patients with deficient labyrinthine function and compared their ability to hold gaze on a visual target that moved with their heads (cancellation of the vestibulo-ocular reflex). Patients and normal subjects performed similarly during active head rotation but, during locomotion, patients held their eyes more steadily on the visual target than did normal subjects.
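A sketch of the combined filter, assuming a uniformly sampled eye-velocity trace; the window lengths are illustrative, not the values used in the study:

```python
# Median filter removes impulsive spikes, then a moving average reduces
# the residual broadband noise.
import numpy as np
from scipy.signal import medfilt

def denoise_eye_velocity(vel, median_win=5, avg_win=5):
    despiked = medfilt(vel, kernel_size=median_win)
    kernel = np.ones(avg_win) / avg_win
    return np.convolve(despiked, kernel, mode="same")

t = np.linspace(0, 1, 500)
noisy = np.sin(2 * np.pi * 2 * t) + 0.5 * np.random.randn(500)
clean = denoise_eye_velocity(noisy)
```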
NASA Technical Reports Server (NTRS)
Stewart, E. C.; Cannaday, R. L.
1973-01-01
A comparison of the results from a fixed-base, six-degree-of-freedom simulator and a moving-base, three-degree-of-freedom simulator was made for a close-in, EVA-type maneuvering task in which visual cues of a target spacecraft were used for guidance. The maneuvering unit (the foot-controlled maneuvering unit of Skylab Experiment T020) employed an on-off acceleration command control system operated entirely by the feet. Maneuvers by two test subjects were made in six and three degrees of freedom for the fixed-base simulator, and under uncontrolled and controlled EVA-type visual cue conditions for the moving-base simulator. Comparisons of pilot ratings and 13 different quantitative parameters from the two simulators are made. Different results were obtained from the two simulators, and the effects of limited degrees of freedom and uncontrolled visual cues are discussed.
Causal Inference for Spatial Constancy across Saccades
Atsma, Jeroen; Maij, Femke; Koppen, Mathieu; Irwin, David E.; Medendorp, W. Pieter
2016-01-01
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish retinal image shifts caused by eye movements from shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using an SSD task, we tested how participants localize the presaccadic position of the fixation target, the saccade target, or a peripheral non-foveated target that was displaced parallel or orthogonal to the saccade direction during a horizontal saccade and subsequently viewed for three different durations. Results showed different localization errors for the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data through a Bayesian causal inference mechanism, in which at the trial level an optimal mixing of two possible strategies, integration vs. separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability. PMID:26967730
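A hedged sketch of the trial-level mixing idea: a fused (integration) estimate and a memory-only (separation) estimate are averaged by the posterior probability of a common cause. The priors, likelihoods, and choice of model averaging below are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

def localize(pre_mem, post_vis, sd_mem, sd_vis, p_common=0.5):
    var_sum = sd_mem**2 + sd_vis**2
    # Likelihood of the discrepancy under one common cause vs. a flat
    # alternative over an assumed 40-deg range
    like_c1 = np.exp(-(pre_mem - post_vis) ** 2 / (2 * var_sum)) \
              / np.sqrt(2 * np.pi * var_sum)
    like_c2 = 1.0 / 40.0
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    w = sd_vis**2 / var_sum                       # reliability weighting
    fused = w * pre_mem + (1 - w) * post_vis      # integration estimate
    return post_c1 * fused + (1 - post_c1) * pre_mem

print(localize(0.0, 1.0, sd_mem=1.0, sd_vis=0.5))
```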
Separate visual representations for perception and for visually guided behavior
NASA Technical Reports Server (NTRS)
Bridgeman, Bruce
1989-01-01
Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background, or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This experiment also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.
Guidance for Development of a Flight Simulator Specification
2007-05-01
the simulated line of sight to the moon is less than one degree, and that the moon appears to move smoothly across the visual scene. The phase of the... Agencies have adopted the definition used by Optics Companies (this definition has also been adopted in this revision of the Air Force Guide... simulators that require tracking the target as it slews across the displayed scene, such as with air-to-ground or air-to-air combat tasks. Visual systems
Predictive encoding of moving target trajectory by neurons in the parabigeminal nucleus
Ma, Rui; Cui, He; Lee, Sang-Hun; Anastasio, Thomas J.
2013-01-01
Intercepting momentarily invisible moving objects requires internally generated estimations of target trajectory. We demonstrate here that the parabigeminal nucleus (PBN) encodes such estimations, combining sensory representations of target location, extrapolated positions of briefly obscured targets, and eye position information. Cui and Malpeli (Cui H, Malpeli JG. J Neurophysiol 89: 3128–3142, 2003) reported that PBN activity for continuously visible tracked targets is determined by retinotopic target position. Here we show that when cats tracked moving, blinking targets the relationship between activity and target position was similar for ON and OFF phases (400 ms for each phase). The dynamic range of activity evoked by virtual targets was 94% of that of real targets for the first 200 ms after target offset and 64% for the next 200 ms. Activity peaked at about the same best target position for both real and virtual targets. PBN encoding of target position takes into account changes in eye position resulting from saccades, even without visual feedback. Since PBN response fields are retinotopically organized, our results suggest that activity foci associated with real and virtual targets at a given target position lie in the same physical location in the PBN, i.e., a retinotopic as well as a rate encoding of virtual-target position. We also confirm that PBN activity is specific to the intended target of a saccade and is predictive of which target will be chosen if two are offered. A Bayesian predictor-corrector model is presented that conceptually explains the differences in the dynamic ranges of PBN neuronal activity evoked during tracking of real and virtual targets. PMID:23365185
Johnson, Christine M; Sullivan, Jess; Buck, Cara L; Trexel, Julie; Scarpuzzi, Mike
2015-01-01
Anticipating the location of a temporarily obscured target-what Piaget (the construction of reality in the child. Basic Books, New York, 1954) called "object permanence"-is a critical skill, especially in hunters of mobile prey. Previous research with bottlenose dolphins found they could predict the location of a target that had been visibly displaced into an opaque container, but not one that was first placed in an opaque container and then invisibly displaced to another container. We tested whether, by altering the task to involve occlusion rather than containment, these animals could show more advanced object permanence skills. We projected dynamic visual displays at an underwater-viewing window and videotaped the animals' head moves while observing these displays. In Experiment 1, the animals observed a small black disk moving behind occluders that shifted in size, ultimately forming one large occluder. Nine out of ten subjects "tracked" the presumed movement of the disk behind this occluder on their first trial-and in a statistically significant number of subsequent trials-confirming their visible displacement abilities. In Experiment 2, we tested their invisible displacement abilities. The disk first disappeared behind a pair of moving occluders, which then moved behind a stationary occluder. The moving occluders then reappeared and separated, revealing that the disk was no longer behind them. The subjects subsequently looked to the correct stationary occluder on eight of their ten first trials, and in a statistically significant number of subsequent trials. Thus, by altering the stimuli to be more ecologically valid, we were able to show that the dolphins could indeed succeed at an invisible displacement task.
Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi
2017-10-01
Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and the target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
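A sketch of the cross-modal equalization step via Stevens' power law, psi = k * phi^a; the constants and exponents below are textbook-style placeholders, not the study's calibrated values:

```python
# Invert Stevens' power law to find the physical magnitude that yields a
# desired perceived magnitude in each modality. k and a are assumptions.
def stimulus_for_sensation(psi, k, a):
    """Invert psi = k * phi**a to get the physical magnitude phi."""
    return (psi / k) ** (1.0 / a)

psi = 2.0  # desired perceived magnitude of the error feedback
circle_size = stimulus_for_sensation(psi, k=1.0, a=0.7)    # visual, assumed exponent
sound_level = stimulus_for_sensation(psi, k=1.0, a=0.67)   # loudness, assumed exponent
```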
Learning the trajectory of a moving visual target and evolution of its tracking in the monkey
Bourrelly, Clara; Quinet, Julie; Cavanagh, Patrick
2016-01-01
An object moving in the visual field triggers a saccade that brings its image onto the fovea. It is followed by a combination of slow eye movements and catch-up saccades that try to keep the target image on the fovea as long as possible. The accuracy of this ability to track the “here-and-now” location of a visual target contrasts with the spatiotemporally distributed nature of its encoding in the brain. We show in six experimentally naive monkeys how this performance is acquired and gradually evolves during successive daily sessions. During the early exposure, the tracking is mostly saltatory, made of relatively large saccades separated by low eye velocity episodes, demonstrating that accurate (here and now) pursuit is not spontaneous and that gaze direction lags behind its location most of the time. Over the sessions, while the pursuit velocity is enhanced, the gaze is more frequently directed toward the current target location as a consequence of a 25% reduction in the number of catch-up saccades and a 37% reduction in size (for the first saccade). This smoothing is observed at several scales: during the course of single trials, across the set of trials within a session, and over successive sessions. We explain the neurophysiological processes responsible for this combined evolution of saccades and pursuit in the absence of stringent training constraints. More generally, our study shows that the oculomotor system can be used to discover the neural mechanisms underlying the ability to synchronize a motor effector with a dynamic external event. PMID:27683886
Ocular dynamics and visual tracking performance after Q-switched laser exposure
NASA Astrophysics Data System (ADS)
Zwick, Harry; Stuck, Bruce E.; Lund, David J.; Nawim, Maqsood
2001-05-01
In previous investigations of Q-switched laser retinal exposure in awake, task-oriented non-human primates (NHPs), the threshold for retinal damage occurred well below the threshold for permanent visual function loss. Visual function measures used in these studies involved visual acuity and contrast sensitivity. In the present study, we examine the same relationship for Q-switched laser exposure using a visual performance task whose dependency involves more parafoveal than foveal retina. NHPs were trained on a visual pursuit-motor tracking performance task that required maintaining a small HeNe laser spot (0.3 degrees) centered in a slowly moving (0.5 deg/s) annulus. When NHPs reliably produced visual target tracking efficiencies > 80%, single Q-switched laser exposures (7 ns) were made coaxially with the line of sight of the moving target. An infrared camera imaged the pupil during exposure to obtain the pupillary response to the laser flash. Retinal images were obtained with a scanning laser ophthalmoscope 3 days post exposure under ketamine and Nembutal anesthesia. Q-switched visible laser exposures at twice the damage threshold produced small (about 50 µm) retinal lesions temporal to the fovea; deficits in NHP visual pursuit tracking were transient, demonstrating full recovery to baseline within a single tracking session. Post-exposure analysis of the pupillary response demonstrated that the exposure flash entered the pupil, followed by a 90-ms refractory period and then a 12% pupillary contraction within 1.5 s of the onset of laser exposure. At 6 times the morphological damage threshold for 532-nm Q-switched exposure, longer-term losses in NHP pursuit tracking performance were observed. In summary, Q-switched laser exposure appears to have a higher threshold for permanent visual performance loss than for retinal injury. Mechanisms of neural plasticity within the retina and at higher visual brain centers may mediate this recovery.
Selective enhancement of orientation tuning before saccades.
Ohl, Sven; Kuper, Clara; Rolfs, Martin
2017-11-01
Saccadic eye movements cause a rapid sweep of the visual image across the retina and bring the saccade's target into high-acuity foveal vision. Even before saccade onset, visual processing is selectively prioritized at the saccade target. To determine how this presaccadic attention shift exerts its influence on visual selection, we compared the dynamics of perceptual tuning curves before movement onset at the saccade target and in the opposite hemifield. Participants monitored a 30-Hz sequence of randomly oriented gratings for a target orientation. Combining a reverse correlation technique previously used to study orientation tuning in neurons with generalized additive mixed modeling, we found that perceptual reports were tuned to the target orientation. The gain of orientation tuning increased markedly within the last 100 ms before saccade onset. In addition, we observed finer orientation tuning right before saccade onset. This increase in gain and tuning occurred at the saccade target location and was not observed at the incongruent location in the opposite hemifield. The present findings suggest, therefore, that presaccadic attention exerts its influence on vision in a spatially and feature-selective manner, enhancing performance and sharpening feature tuning at the future gaze location before the eyes start moving.
Inhibitory guidance in visual search: the case of movement-form conjunctions.
Dent, Kevin; Allen, Harriet A; Braithwaite, Jason J; Humphreys, Glyn W
2012-02-01
We used a probe-dot procedure to examine the roles of excitatory attentional guidance and distractor suppression in search for movement-form conjunctions. Participants in Experiment 1 completed a conjunction (moving X amongst moving Os and static Xs) and two single-feature (moving X amongst moving Os, and static X amongst static Os) conditions. "Active" participants searched for the target, whereas "passive" participants viewed the displays without responding. Subsequently, both groups located (left or right) a probe dot appearing in either an occupied or an unoccupied location. In the conjunction condition, the active group located probes presented on static distractors more slowly than probes presented on moving distractors, reversing the direction of the difference found within the passive group. This disadvantage for probes on static items was much stronger in conjunction than in single-feature search. The same pattern of results was replicated in Experiment 2, which used a go/no-go procedure. Experiment 3 extended the go/no-go procedure to the case of search for a static target and revealed increased probe localisation times as a consequence of active search, primarily for probes on moving distractor items. The results demonstrated attentional guidance by inhibition of distractors in conjunction search.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The Robotic Follow Algorithm allows any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as with other tracking methods such as radio frequency tags.
Asynchronous Visualization of Spatiotemporal Information for Multiple Moving Targets
ERIC Educational Resources Information Center
Wang, Huadong
2013-01-01
In the modern information age, the quantity and complexity of spatiotemporal data is increasing both rapidly and continuously. Sensor systems with multiple feeds that gather multidimensional spatiotemporal data will result in information clusters and overload, as well as a high cognitive load for users of these systems. To meet future…
Real-time visual target tracking: two implementations of velocity-based smooth pursuit
NASA Astrophysics Data System (ADS)
Etienne-Cummings, Ralph; Longo, Paul; Van der Spiegel, Jan; Mueller, Paul
1995-06-01
Two systems for velocity-based visual target tracking are presented. The first two computational layers of both implementations are composed of VLSI photoreceptor (logarithmic compression) and edge-detection (difference-of-Gaussians) arrays that mimic the outer plexiform layer of mammalian retinas. The subsequent processing layers, which measure the target velocity and realize smooth-pursuit tracking, are implemented in software in one version and at the focal plane in the other. The first implementation uses a hybrid of a PC and a silicon retina (39 × 38 pixels) operating at 333 frames/second. A software implementation of a real-time optical flow measurement algorithm determines the target velocity, and a closed-loop control system zeroes the relative velocity of the target and retina. The second implementation is a single VLSI chip, which contains a linear array of photoreceptors, edge detectors, and motion detectors at the focal plane; the closed-loop control system is also included on chip. This chip realizes all the computational properties of the hybrid system. The effects of background motion, target occlusion, and disappearance are studied as a function of retinal size and spatial distribution of the measured motion vectors (i.e., foveal/peripheral and diverging/converging measurement schemes). The hybrid system, which tested successfully, tracks targets moving as fast as 3 m/s at 1.3 m from the camera and can compensate for arbitrary external movements of its mounting platform. The single-chip version, whose circuits tested successfully, can handle targets moving at 10 m/s.
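A minimal sketch of the closed-loop velocity servo common to both implementations: estimate retinal slip (e.g., from optical flow) and drive the relative velocity of target and retina to zero. The gain and update logic are illustrative assumptions:

```python
# Proportional velocity servo: each step reduces the retinal slip.
def pursuit_step(target_velocity, camera_velocity, gain=0.8):
    retinal_slip = target_velocity - camera_velocity
    return camera_velocity + gain * retinal_slip  # new velocity command

v_cam = 0.0
for v_target in (1.0, 1.0, 1.2, 1.2):
    v_cam = pursuit_step(v_target, v_cam)
print(v_cam)  # converges toward the target velocity, zeroing the slip
```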
Automated 3D trajectory measuring of large numbers of moving particles.
Wu, Hai Shan; Zhao, Qi; Zou, Danping; Chen, Yan Qiu
2011-04-11
Complex dynamics of natural particle systems, such as insect swarms, bird flocks, and fish schools, have attracted the attention of scientists for years. Measuring the 3D trajectory of each individual in a group is vital for quantitative study of their dynamic properties, yet such empirical data are rare, mainly due to the challenges of maintaining the identities of large numbers of individuals with similar visual features and frequent occlusions. Here we present an automatic and efficient algorithm to track the 3D motion trajectories of large numbers of moving particles using two video cameras. Our method formulates the problem as three linear assignment problems (LAPs). For each video sequence, the first LAP obtains 2D tracks of moving targets and is able to maintain target identities in the presence of occlusions; the second matches visually similar targets across the two views via a novel technique named maximum epipolar co-motion length (MECL), which not only effectively reduces matching ambiguity but also further diminishes the influence of frequent occlusions; the last links 3D track segments into complete trajectories by computing a globally optimal assignment based on temporal and kinematic cues. Experimental results on simulated particle swarms with various particle densities validated the accuracy and robustness of the proposed method. As a real-world case, our method successfully acquired the 3D flight paths of a fruit fly (Drosophila melanogaster) group comprising hundreds of freely flying individuals. © 2011 Optical Society of America
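A sketch of the assignment machinery underlying each LAP, shown here for matching detections to existing 2D tracks with the Hungarian solver; plain Euclidean distance stands in for the paper's track-linking, MECL, and kinematic costs:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks_to_detections(track_positions, detections):
    # Pairwise distance matrix: rows are tracks, columns are detections
    cost = np.linalg.norm(
        track_positions[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # minimizes total distance
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

tracks = np.array([[0.0, 0.0], [5.0, 5.0]])
dets = np.array([[5.1, 4.9], [0.2, -0.1]])
print(match_tracks_to_detections(tracks, dets))  # [(0, 1), (1, 0)]
```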
Rotary acceleration of a subject inhibits choice reaction time to motion in peripheral vision
NASA Technical Reports Server (NTRS)
Borkenhagen, J. M.
1974-01-01
Twelve pilots were tested in a rotation device with visual simulation, alone and in combination with rotary stimulation, in experiments with variable levels of acceleration and variable viewing angles, in a study of the effect of the subject's rotary acceleration on the choice reaction time to an accelerating target in peripheral vision. The pilots responded to the direction of the visual motion by moving a hand controller to the right or left. Visual-plus-rotary stimulation required a longer choice reaction time, which was inversely related to the level of acceleration and directly proportional to the viewing angle.
Bongianni, Wayne L.
1984-01-01
A method and apparatus for electronically focusing and electronically scanning microscopic specimens are given. In the invention, visual images of even moving, living, opaque specimens can be acoustically obtained and viewed with virtually no time needed for processing (i.e., real time processing is used). And planar samples are not required. The specimens (if planar) need not be moved during scanning, although it will be desirable and possible to move or rotate nonplanar specimens (e.g., laser fusion targets) against the lens of the apparatus. No coupling fluid is needed, so specimens need not be wetted. A phase acoustic microscope is also made from the basic microscope components together with electronic mixers.
FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.
Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu
2017-07-18
Aiming at automatic, convenient, and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
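A generic Gauss-Newton sketch of the linearize-and-solve step mentioned above, on a toy least-squares problem; the paper's actual registration energy and Jacobians are not reproduced:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=10):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        x = x + np.linalg.solve(J.T @ J, -J.T @ r)  # normal equations
    return x

# Toy example: fit y = a * exp(b * t) to synthetic data
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
print(gauss_newton(res, jac, [1.0, 1.0]))  # ~[2.0, 1.5]
```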
Mijatović, Antonija; La Scaleia, Barbara; Mercuri, Nicola; Lacquaniti, Francesco; Zago, Myrka
2014-12-01
Familiarity with the visual environment affects our expectations about the objects in a scene, aiding in recognition and interaction. Here we tested whether the familiarity with the specific trajectory followed by a moving target facilitates the interpretation of the effects of underlying physical forces. Participants intercepted a target sliding down either an inclined plane or a tautochrone. Gravity accelerated the target by the same amount in both cases, but the inclined plane represented a familiar trajectory whereas the tautochrone was unfamiliar to the participants. In separate sessions, the gravity field was consistent with either natural gravity or artificial reversed gravity. Target motion was occluded from view over the last segment. We found that the responses in the session with unnatural forces were systematically delayed relative to those with natural forces, but only for the inclined plane. The time shift is consistent with a bias for natural gravity, in so far as it reflects an a priori expectation that a target not affected by natural forces will arrive later than one accelerated downwards by gravity. Instead, we did not find any significant time shift with unnatural forces in the case of the tautochrone. We argue that interception of a moving target relies on the integration of the high-level cue of trajectory familiarity with low-level cues related to target kinematics.
Brockmole, James R; Henderson, John M
2006-07-01
When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.
Enhanced compressed sensing for visual target tracking in wireless visual sensor networks
NASA Astrophysics Data System (ADS)
Qiang, Guo
2017-11-01
Moving object tracking in wireless sensor networks (WSNs) has been widely applied in various fields. Designing low-power WSNs under the limited resources of the sensor node, such as energy and bandwidth constraints, is a high priority. However, most existing works focus on only a single one of these conflicting optimization criteria. An efficient compressive sensing technique based on a customized memory gradient pursuit algorithm with early termination in WSNs is presented, which strikes compelling trade-offs among energy dissipation for wireless transmission, bandwidth usage, and storage. The proposed approach then adopts an unscented particle filter to predict the location of the target. The experimental results, together with a theoretical analysis, demonstrate the substantially superior effectiveness of the proposed model and framework with regard to energy and speed under the resource limitations of a visual sensor node.
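The paper's memory gradient pursuit variant is not spelled out in the abstract; the sketch below shows only the generic greedy-pursuit recovery loop with early termination that such algorithms build on (function and parameter names are illustrative, not the authors').

    import numpy as np

    def greedy_pursuit(A, y, max_iters=50, tol=1e-6):
        # Recover a sparse x from measurements y = A @ x by greedily
        # growing a support set; stop early once the residual is small.
        m, n = A.shape
        x = np.zeros(n)
        r = y.copy()
        support = []
        for _ in range(max_iters):
            k = int(np.argmax(np.abs(A.T @ r)))   # best-matching atom
            if k not in support:
                support.append(k)
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x[:] = 0.0
            x[support] = x_s                      # least-squares refit
            r = y - A @ x
            if np.linalg.norm(r) < tol:           # early termination
                break
        return x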
Crowding by Invisible Flankers
Ho, Cristy; Cheung, Sing-Hang
2011-01-01
Background: Human object recognition degrades sharply as the target object moves from central vision into peripheral vision. In particular, one's ability to recognize a peripheral target is severely impaired by the presence of flanking objects, a phenomenon known as visual crowding. Recent studies on how visual awareness of flanker existence influences crowding have shown mixed results. More importantly, it is not known whether conscious awareness of the existence of both the target and the flankers is necessary for crowding to occur. Methodology/Principal Findings: Here we show that crowding persists even when people are completely unaware of the flankers, which are rendered invisible through the continuous flash suppression technique. The contrast threshold for identifying the orientation of a grating pattern was elevated in the flanked condition, even when the subjects reported that they were unaware of the perceptually suppressed flankers. Moreover, we find that orientation-specific adaptation is attenuated by flankers even when both the target and the flankers are invisible. Conclusions: These findings complement the suggested correlation between crowding and visual awareness. Furthermore, our results demonstrate that conscious awareness and attention are not prerequisites for crowding. PMID:22194919
NASA Technical Reports Server (NTRS)
Lewis, Steven J.; Palacios, David M.
2013-01-01
This software can track multiple moving objects within a video stream simultaneously, use visual features to aid in the tracking, and initiate tracks based on object detection in a subregion. A simple programmatic interface allows plugging into larger image chain modeling suites. It extracts unique visual features for aid in tracking and later analysis, and includes sub-functionality for extracting visual features about an object identified within an image frame. Tracker Toolkit utilizes a feature extraction algorithm to tag each object with metadata features about its size, shape, color, and movement. Its functionality is independent of the scale of objects within a scene. The only assumption made on the tracked objects is that they move. There are no constraints on size within the scene, shape, or type of movement. The Tracker Toolkit is also capable of following an arbitrary number of objects in the same scene, identifying and propagating the track of each object from frame to frame. Target objects may be specified for tracking beforehand, or may be dynamically discovered within a tripwire region. Initialization of the Tracker Toolkit algorithm includes two steps: Initializing the data structures for tracked target objects, including targets preselected for tracking; and initializing the tripwire region. If no tripwire region is desired, this step is skipped. The tripwire region is an area within the frames that is always checked for new objects, and all new objects discovered within the region will be tracked until lost (by leaving the frame, stopping, or blending in to the background).
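A minimal sketch of the two initialization steps and the tripwire check described above; the class and method names here are hypothetical stand-ins, not the actual Tracker Toolkit interface.

    class Track:
        def __init__(self, object_id, bbox):
            self.object_id = object_id
            self.bbox = bbox        # (x, y, w, h) in frame coordinates
            self.features = {}      # size, shape, color, movement metadata

    class Tracker:
        def __init__(self, preselected=(), tripwire=None):
            # Step 1: data structures for tracked targets, including any
            # targets preselected for tracking.
            self.tracks = [Track(i, b) for i, b in enumerate(preselected)]
            # Step 2: optional tripwire region, always checked for new
            # objects; skipped when no tripwire is desired.
            self.tripwire = tripwire    # (x, y, w, h) or None

        def check_tripwire(self, detections):
            # Any new detection inside the tripwire starts a new track,
            # which is then propagated frame to frame until lost.
            if self.tripwire is None:
                return
            tx, ty, tw, th = self.tripwire
            for (x, y, w, h) in detections:
                if tx <= x <= tx + tw and ty <= y <= ty + th:
                    self.tracks.append(Track(len(self.tracks), (x, y, w, h)))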
Probe Scanning Support System by a Parallel Mechanism for Robotic Echography
NASA Astrophysics Data System (ADS)
Aoki, Yusuke; Kaneko, Kenta; Oyamada, Masami; Takachi, Yuuki; Masuda, Kohji
We propose a probe scanning support system based on force/visual servoing control for robotic echography. First, we designed the parallel mechanism and formulated its inverse kinematics. Next, we developed a scanning method for the ultrasound probe on the body surface, constructing a visual servo system based on the echogram acquired by the standalone medical robot, so as to move the ultrasound probe on the patient's abdomen in three dimensions. The visual servo system detects local changes of brightness in the time-series echogram, while the position of the probe is stabilized by the conventional force servo system in the robot, to compensate not only for periodic respiration motion but also for other body motion. We then integrated the visual servo with the force servo as a hybrid control of both position and force. To confirm its applicability to the actual abdomen, we tested the total system by following the gallbladder as a moving target, keeping its position in the echogram while minimizing the variation of reaction force on the abdomen. The results show that the system has the potential to be applied to automatic detection of human internal organs.
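One way to picture the hybrid control of position and force is the following sketch; the gains, the choice of contact axis, and the feature-error input are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def hybrid_servo_step(pos, feature_err, force_err, k_v=0.5, k_f=0.01):
        # pos and feature_err are 3-vectors in probe coordinates; axis 2
        # is the contact axis. force_err = reference force - measured force.
        dp = -k_v * np.asarray(feature_err, dtype=float)  # visual servo term
        dp[2] += k_f * force_err                          # force servo term
        return np.asarray(pos, dtype=float) + dp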
Prey Capture Behavior Evoked by Simple Visual Stimuli in Larval Zebrafish
Bianco, Isaac H.; Kampff, Adam R.; Engert, Florian
2011-01-01
Understanding how the nervous system recognizes salient stimuli in the environment and selects and executes the appropriate behavioral responses is a fundamental question in systems neuroscience. To facilitate the neuroethological study of visually guided behavior in larval zebrafish, we developed “virtual reality” assays in which precisely controlled visual cues can be presented to larvae whilst their behavior is automatically monitored using machine vision algorithms. Freely swimming larvae responded to moving stimuli in a size-dependent manner: they directed multiple low amplitude orienting turns (∼20°) toward small moving spots (1°) but reacted to larger spots (10°) with high-amplitude aversive turns (∼60°). The tracking of small spots led us to examine how larvae respond to prey during hunting routines. By analyzing movie sequences of larvae hunting paramecia, we discovered that all prey capture routines commence with eye convergence and larvae maintain their eyes in a highly converged position for the duration of the prey-tracking and capture swim phases. We adapted our virtual reality assay to deliver artificial visual cues to partially restrained larvae and found that small moving spots evoked convergent eye movements and J-turns of the tail, which are defining features of natural hunting. We propose that eye convergence represents the engagement of a predatory mode of behavior in larval fish and serves to increase the region of binocular visual space to enable stereoscopic targeting of prey. PMID:22203793
Visual encoding and fixation target selection in free viewing: presaccadic brain potentials
Nikolaev, Andrey R.; Jurica, Peter; Nakatani, Chie; Plomp, Gijs; van Leeuwen, Cees
2013-01-01
In scrutinizing a scene, the eyes alternate between fixations and saccades. During a fixation, two component processes can be distinguished: visual encoding and selection of the next fixation target. We aimed to distinguish the neural correlates of these processes in the electrical brain activity prior to a saccade onset. Participants viewed color photographs of natural scenes, in preparation for a change detection task. Then, for each participant and each scene we computed an image heat map, with temperature representing the duration and density of fixations. The temperature difference between the start and end points of saccades was taken as a measure of the expected task-relevance of the information concentrated in specific regions of a scene. Visual encoding was evaluated according to whether subsequent change was correctly detected. Saccades with larger temperature difference were more likely to be followed by correct detection than ones with smaller temperature differences. The amplitude of presaccadic activity over anterior brain areas was larger for correct detection than for detection failure. This difference was observed for short “scrutinizing” but not for long “explorative” saccades, suggesting that presaccadic activity reflects top-down saccade guidance. Thus, successful encoding requires local scanning of scene regions which are expected to be task-relevant. Next, we evaluated fixation target selection. Saccades “moving up” in temperature were preceded by presaccadic activity of higher amplitude than those “moving down”. This finding suggests that presaccadic activity reflects attention deployed to the following fixation location. Our findings illustrate how presaccadic activity can elucidate concurrent brain processes related to the immediate goal of planning the next saccade and the larger-scale goal of constructing a robust representation of the visual scene. PMID:23818877
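The heat-map construction and the saccade "temperature difference" lend themselves to a compact sketch; the smoothing width and the (x, y, duration) fixation format are assumptions, not the authors' parameters.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fixation_heat_map(fixations, shape, sigma=30):
        # Temperature encodes the duration and density of fixations.
        heat = np.zeros(shape)
        for (x, y, duration) in fixations:
            heat[int(y), int(x)] += duration
        return gaussian_filter(heat, sigma)

    def temperature_difference(heat, start, end):
        # Positive values mark saccades "moving up" in temperature,
        # i.e., toward regions of higher expected task relevance.
        (x0, y0), (x1, y1) = start, end
        return heat[int(y1), int(x1)] - heat[int(y0), int(x0)]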
Huang, Chien-Ting; Hwang, Ing-Shiou
2012-01-01
Visual feedback and non-visual information play different roles in tracking of an external target. This study explored the respective roles of the visual and non-visual information in eleven healthy volunteers who coupled the manual cursor to a rhythmically moving target of 0.5 Hz under three sensorimotor conditions: eye-alone tracking (EA), eye-hand tracking with visual feedback of manual outputs (EH tracking), and the same tracking without such feedback (EHM tracking). Tracking error, kinematic variables, and movement intermittency (saccade and speed pulse) were contrasted among tracking conditions. The results showed that EHM tracking exhibited larger pursuit gain, less tracking error, and less movement intermittency for the ocular plant than EA tracking. With the vision of manual cursor, EH tracking achieved superior tracking congruency of the ocular and manual effectors with smaller movement intermittency than EHM tracking, except that the rate precision of manual action was similar for both types of tracking. The present study demonstrated that visibility of manual consequences altered mutual relationships between movement intermittency and tracking error. The speed pulse metrics of manual output were linked to ocular tracking error, and saccade events were time-locked to the positional error of manual tracking during EH tracking. In conclusion, peripheral non-visual information is critical to smooth pursuit characteristics and rate control of rhythmic manual tracking. Visual information adds to eye-hand synchrony, underlying improved amplitude control and elaborate error interpretation during oculo-manual tracking. PMID:23236498
A binary motor imagery tasks based brain-computer interface for two-dimensional movement control
NASA Astrophysics Data System (ADS)
Xia, Bin; Cao, Lei; Maysam, Oladazimi; Li, Jie; Xie, Hong; Su, Caixia; Birbaumer, Niels
2017-12-01
Objective. Two-dimensional movement control is a popular issue in brain-computer interface (BCI) research and has many applications in the real world. In this paper, we introduce a combined control strategy to a binary class-based BCI system that allows the user to move a cursor in a two-dimensional (2D) plane. Users focus on a single moving vector to control 2D movement instead of controlling vertical and horizontal movement separately. Approach. Five participants took part in a fixed-target experiment and random-target experiment to verify the effectiveness of the combination control strategy under the fixed and random routine conditions. Both experiments were performed in a virtual 2D dimensional environment and visual feedback was provided on the screen. Main results. The five participants achieved an average hit rate of 98.9% and 99.4% for the fixed-target experiment and the random-target experiment, respectively. Significance. The results demonstrate that participants could move the cursor in the 2D plane effectively. The proposed control strategy is based only on a basic two-motor imagery BCI, which enables more people to use it in real-life applications.
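The abstract does not detail the mapping from the binary classifier to the single moving vector; one plausible reading, shown purely as an illustration and not as the authors' exact rule, is that one imagery class advances the cursor along the current vector while the other rotates it.

    import math

    def update_cursor(x, y, theta, mi_class, step=2.0, dtheta=0.1):
        # mi_class is the binary motor-imagery decision for this time step.
        if mi_class == 1:
            x += step * math.cos(theta)   # advance along the current vector
            y += step * math.sin(theta)
        else:
            theta += dtheta               # rotate the vector
        return x, y, theta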
Contrast and assimilation in motion perception and smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2007-09-01
The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
Target size matters: target errors contribute to the generalization of implicit visuomotor learning.
Reichenthal, Maayan; Avraham, Guy; Karniel, Amir; Shmuelof, Lior
2016-08-01
The process of sensorimotor adaptation is considered to be driven by errors. While sensory prediction errors, defined as the difference between the planned and the actual movement of the cursor, drive implicit learning processes, target errors (e.g., the distance of the cursor from the target) are thought to drive explicit learning mechanisms. This distinction was mainly studied in the context of arm reaching tasks where the position and the size of the target were constant. We hypothesize that in a dynamic reaching environment, where subjects have to hit moving targets and the targets' dynamic characteristics affect task success, implicit processes will benefit from target errors as well. We examine the effect of target errors on learning of an unnoticed perturbation during unconstrained reaching movements. Subjects played a Pong game, in which they had to hit a moving ball by moving a paddle controlled by their hand. During the game, the movement of the paddle was gradually rotated with respect to the hand, reaching a final rotation of 25°. Subjects were assigned to one of two groups: The high-target error group played the Pong with a small ball, and the low-target error group played with a big ball. Before and after the Pong game, subjects performed open-loop reaching movements toward static targets with no visual feedback. While both groups adapted to the rotation, the postrotation reaching movements were directionally biased only in the small-ball group. This result provides evidence that implicit adaptation is sensitive to target errors. Copyright © 2016 the American Physiological Society.
Delle Monache, Sergio; Lacquaniti, Francesco; Bosco, Gianfranco
2015-02-01
Manual interceptions are known to depend critically on the integration of visual feedback information and experience-based predictions of the interceptive event. Within this framework, coupling between gaze and limb movements might also contribute to the interceptive outcome, since eye movements afford acquisition of high-resolution visual information. We investigated this issue by analyzing subjects' head-fixed oculomotor behavior during manual interceptions. Subjects moved a mouse cursor to intercept computer-generated ballistic trajectories either congruent with Earth's gravity or perturbed with weightlessness (0 g) or hypergravity (2 g) effects. In separate sessions, trajectories were either fully visible or occluded before interception to enforce visual prediction. Subjects' oculomotor behavior was classified in terms of the amount of time they gazed at different visual targets and the overall number of saccades. Then, by way of multivariate analyses, we assessed the following: (1) whether eye movement patterns depended on targets' laws of motion and occlusions; and (2) whether interceptive performance was related to the oculomotor behavior. First, we found that eye movement patterns depended significantly on targets' laws of motion and occlusion, suggesting predictive mechanisms. Second, subjects coupled oculomotor and interceptive behavior differently depending on whether targets were visible or occluded. With visible targets, subjects made smaller interceptive errors if they gazed longer at the mouse cursor. Instead, with occluded targets, they achieved better performance by increasing the target's tracking accuracy and by avoiding gaze shifts near interception, suggesting that precise ocular tracking provided better trajectory predictions for the interceptive response.
Thinking of God Moves Attention
ERIC Educational Resources Information Center
Chasteen, Alison L.; Burdzy, Donna C.; Pratt, Jay
2010-01-01
The concepts of God and Devil are well known across many cultures and religions, and often involve spatial metaphors, but it is not well known if our mental representations of these concepts affect visual cognition. To examine if exposure to divine concepts produces shifts of attention, participants completed a target detection task in which they…
Visuo-motor coordination and internal models for object interception.
Zago, Myrka; McIntyre, Joseph; Senot, Patrice; Lacquaniti, Francesco
2009-02-01
Intercepting and avoiding collisions with moving objects are fundamental skills in daily life. Anticipatory behavior is required because of significant delays in transforming sensory information about target and body motion into a timed motor response. The ability to predict the kinematics and kinetics of interception or avoidance hundreds of milliseconds before the event may depend on several different sources of information and on different strategies of sensory-motor coordination. Exactly what the sources of spatio-temporal information are, and what control strategies are used, remain controversial issues. Indeed, these topics have been the battlefield of contrasting views on how the brain interprets visual information to guide movement. Here we attempt a synthetic overview of the vast literature on interception. We discuss in detail the behavioral and neurophysiological aspects of interception of targets falling under gravity, as this topic has received special attention in recent years. We show that visual cues alone are insufficient to predict the time and place of interception or avoidance, and they need to be supplemented by prior knowledge (or internal models) about several features of the dynamic interaction with the moving object.
The Speed of Serial Attention Shifts in Visual Search: Evidence from the N2pc Component.
Grubert, Anna; Eimer, Martin
2016-02-01
Finding target objects among distractors in a visual search display is often assumed to be based on sequential movements of attention between different objects. However, the speed of such serial attention shifts is still under dispute. We employed a search task that encouraged the successive allocation of attention to two target objects in the same search display and measured N2pc components to determine how fast attention moved between these objects. Each display contained one digit in a known color (fixed-color target) and another digit whose color changed unpredictably across trials (variable-color target) together with two gray distractor digits. Participants' task was to find the fixed-color digit and compare its numerical value with that of the variable-color digit. N2pc components to fixed-color targets preceded N2pc components to variable-color digits, demonstrating that these two targets were indeed selected in a fixed serial order. The N2pc to variable-color digits emerged approximately 60 msec after the N2pc to fixed-color digits, which shows that attention can be reallocated very rapidly between different target objects in the visual field. When search display durations were increased, thereby relaxing the temporal demands on serial selection, the two N2pc components to fixed-color and variable-color targets were elicited within 90 msec of each other. Results demonstrate that sequential shifts of attention between different target locations can operate very rapidly at speeds that are in line with the assumptions of serial selection models of visual search.
Properties of visual evoked potentials to onset of movement on a television screen.
Kubová, Z; Kuba, M; Hubacek, J; Vít, F
1990-08-01
In 80 subjects, we examined the dependence of movement-onset visual evoked potentials on several stimulation parameters, and compared these responses with pattern-reversal visual evoked potentials to verify the effectiveness of pattern movement for visual evoked potential acquisition. Horizontally moving vertical gratings were generated on a television screen. The typical movement-onset reactions were characterized by one marked negative peak only, with a peak time between 140 and 200 ms. In all subjects the sufficient stimulus duration for acquisition of movement-onset-related visual evoked potentials was 100 ms; in some cases it was only 20 ms. The higher velocity (5.6 degrees/s) produced higher amplitudes of movement-onset visual evoked potentials than did the lower velocity (2.8 degrees/s). In 80% of subjects, the more distinct reactions were found in the leads from lateral occipital areas (in 60% from the right hemisphere), with no correlation to the handedness of subjects. Unlike pattern-reversal visual evoked potentials, the movement-onset responses tended to be larger to extramacular stimulation (annular target of 5 degrees-9 degrees) than to macular stimulation (circular target of 5 degrees diameter).
de Senneville, Baudouin Denis; Mougenot, Charles; Moonen, Chrit T W
2007-02-01
Focused ultrasound (US) is a unique and noninvasive technique for local deposition of thermal energy deep inside the body. MRI guidance offers the additional benefits of excellent target visualization and continuous temperature mapping. However, treating a moving target poses severe problems because 1) motion-related thermometry artifacts must be corrected, 2) the US focal point must be relocated according to the target displacement. In this paper a complete MRI-compatible, high-intensity focused US (HIFU) system is described together with adaptive methods that allow continuous MR thermometry and therapeutic US with real-time tracking of a moving target, online motion correction of the thermometry maps, and regional temperature control based on the proportional, integral, and derivative method. The hardware is based on a 256-element phased-array transducer with rapid electronic displacement of the focal point. The exact location of the target during US firing is anticipated using automatic analysis of periodic motions. The methods were tested with moving phantoms undergoing either rigid body or elastic periodical motions. The results show accurate tracking of the focal point. Focal and regional temperature control is demonstrated with a performance similar to that obtained with stationary phantoms. Copyright (c) 2007 Wiley-Liss, Inc.
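The regional temperature control cited above is a textbook proportional-integral-derivative loop; a minimal sketch follows, with placeholder gains, where the output would modulate US power at the motion-corrected focus.

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def step(self, target_temp, measured_temp):
            # Classic PID law on the temperature error.
            err = target_temp - measured_temp
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv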
Can representational trajectory reveal the nature of an internal model of gravity?
De Sá Teixeira, Nuno; Hecht, Heiko
2014-05-01
The memory for the vanishing location of a horizontally moving target is usually displaced forward in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, this downward displacement has been shown to increase with time (representational trajectory). However, the degree to which different kinematic events change the temporal profile of these displacements remains to be determined. The present article attempts to fill this gap. In the first experiment, we replicate the finding that representational momentum for downward-moving targets is bigger than for upward motions, showing, moreover, that it increases rapidly during the first 300 ms, stabilizing afterward. This temporal profile, but not the increased error for descending targets, is shown to be disrupted when eye movements are not allowed. In the second experiment, we show that the downward drift with time emerges even for static targets. Finally, in the third experiment, we report an increased error for upward-moving targets, as compared with downward movements, when the display is compatible with a downward ego-motion by including vection cues. Thus, the errors in the direction of gravity are compatible with the perceived event and do not merely reflect a retinotopic bias. Overall, these results provide further evidence for an internal model of gravity in the visual representational system.
Task relevance predicts gaze in videos of real moving scenes.
Howard, Christina J; Gilchrist, Iain D; Troscianko, Tom; Behera, Ardhendu; Hogg, David C
2011-09-01
Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382-390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search where stimuli are in constant motion and where the 'target' for the visual search is abstract and semantic in nature. Here, we investigate this issue when participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice the amount of variance in gaze likelihood as the amount of low-level visual changes over time in the video stimuli.
Eye movements and the span of the effective stimulus in visual search.
Bertera, J H; Rayner, K
2000-04-01
The span of the effective stimulus during visual search through an unstructured alphanumeric array was investigated by using eye-contingent-display changes while the subjects searched for a target letter. In one condition, a window exposing the search array moved in synchrony with the subjects' eye movements, and the size of the window was varied. Performance reached asymptotic levels when the window was 5 degrees. In another condition, a foveal mask moved in synchrony with each eye movement, and the size of the mask was varied. The foveal mask conditions were much more detrimental to search behavior than the window conditions, indicating the importance of foveal vision during search. The size of the array also influenced performance, but performance reached asymptote for all array sizes tested at the same window size, and the effect of the foveal mask was the same for all array sizes. The results indicate that both acuity and difficulty of the search task influenced the span of the effective stimulus during visual search.
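A gaze-contingent moving window of the kind used here can be expressed in a few lines; the circular window shape and the grayscale-or-color frame format are assumptions (the study specifies window size in degrees, not shape).

    import numpy as np

    def moving_window(frame, gaze_xy, radius):
        # Only a region around the current gaze position stays visible;
        # everything else is blanked. A foveal mask is the complement:
        # blank inside the region, show the rest.
        h, w = frame.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w]
        mask = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius ** 2
        out = np.zeros_like(frame)
        out[mask] = frame[mask]
        return out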
Moving attention - Evidence for time-invariant shifts of visual selective attention
NASA Technical Reports Server (NTRS)
Remington, R.; Pierce, L.
1984-01-01
Two experiments measured the time to shift spatial selective attention across the visual field to targets 2 or 10 deg from central fixation. A central arrow cued the most likely target location. The direction of attention was inferred from reaction times to expected, unexpected, and neutral locations. The development of a spatial attentional set with time was examined by presenting target probes at varying times after the cue. There were no effects of distance on the time course of the attentional set. Reaction times for far locations were slower than for near, but the effects of attention were evident by 150 msec in both cases. Spatial attention does not shift with a characteristic, fixed velocity. Rather, velocity is proportional to distance, resulting in a movement time that is invariant over the distances tested.
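The reported invariance follows in one line: if shift velocity is proportional to distance, $v = k\,d$, then movement time is $t = d/v = d/(k\,d) = 1/k$, a constant independent of the 2 deg and 10 deg eccentricities tested.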
The Orbital Maneuvering Vehicle Training Facility visual system concept
NASA Technical Reports Server (NTRS)
Williams, Keith
1989-01-01
The purpose of the Orbital Maneuvering Vehicle (OMV) Training Facility (OTF) is to provide effective training for OMV pilots. A critical part of the training environment is the Visual System, which will simulate the video scenes produced by the OMV Closed-Circuit Television (CCTV) system. The simulation will include camera models, dynamic target models, moving appendages, and scene degradation due to the compression/decompression of video signal. Video system malfunctions will also be provided to ensure that the pilot is ready to meet all challenges the real-world might provide. One possible visual system configuration for the training facility that will meet existing requirements is described.
Dealing with delays does not transfer across sensorimotor tasks.
de la Malla, Cristina; López-Moliner, Joan; Brenner, Eli
2014-10-09
It is known that people can learn to deal with delays between their actions and the consequences of such actions. We wondered whether they do so by adjusting their anticipations about the sensory consequences of their actions or whether they simply learn to move in certain ways when performing specific tasks. To find out, we examined details of how people learn to intercept a moving target with a cursor that follows the hand with a delay and examined the transfer of learning between this task and various other tasks that require temporal precision. Subjects readily learned to intercept the moving target with the delayed cursor. The compensation for the delay generalized across modifications of the task, so subjects did not simply learn to move in a certain way in specific circumstances. The compensation did not generalize to completely different timing tasks, so subjects did not generally expect the consequences of their motor commands to be delayed. We conclude that people specifically learn to control the delayed visual consequences of their actions to perform certain tasks. © 2014 ARVO.
Extrafoveal preview benefit during free-viewing visual search in the monkey
Krishna, B. Suresh; Ipata, Anna E.; Bisley, James W.; Gottlieb, Jacqueline; Goldberg, Michael E.
2014-01-01
Previous studies have shown that subjects require less time to process a stimulus at the fovea after a saccade if they have viewed the same stimulus in the periphery immediately prior to the saccade. This extrafoveal preview benefit indicates that information about the visual form of an extrafoveally viewed stimulus can be transferred across a saccade. Here, we extend these findings by demonstrating and characterizing a similar extrafoveal preview benefit in monkeys during a free-viewing visual search task. We trained two monkeys to report the orientation of a target among distractors by releasing one of two bars with their hand; monkeys were free to move their eyes during the task. Both monkeys took less time to indicate the orientation of the target after foveating it, when the target lay closer to the fovea during the previous fixation. An extrafoveal preview benefit emerged even if there was more than one intervening saccade between the preview and the target fixation, indicating that information about target identity could be transferred across more than one saccade and could be obtained even if the search target was not the goal of the next saccade. An extrafoveal preview benefit was also found for distractor stimuli. These results aid future physiological investigations of the extrafoveal preview benefit. PMID:24403392
Wang, Ching-Yi; Hwang, Wen-Juh; Fang, Jing-Jing; Sheu, Ching-Fan; Leong, Iat-Fai; Ma, Hui-Ing
2011-08-01
Objective: To compare the performance of reaching for stationary and moving targets in virtual reality (VR) and physical reality in persons with Parkinson's disease (PD). Design: Repeated measures, in which all participants reached in physical reality and VR under 5 conditions: 1 stationary ball condition and 4 conditions with the ball moving at different speeds. Setting: University research laboratory. Participants: Persons with idiopathic PD (n=29) and age-matched controls (n=25). Interventions: Not applicable. Main Outcome Measures: Success rates and kinematics of arm movement (movement time, amplitude of peak velocity, and percentage of movement time for acceleration phase). Results: In both VR and physical reality, the PD group had longer movement time (P<.001) and lower peak velocity (P<.001) than the controls when reaching for stationary balls. When moving targets were provided, the PD group improved more than the controls did in movement time (P<.001) and peak velocity (P<.001), and reached a performance level similar to that of the controls. Except for the fastest moving ball condition (0.5-s target viewing time), which elicited worse performance in VR than in physical reality, most cueing conditions in VR elicited performance generally similar to that in physical reality. Conclusions: Although slower than the controls when reaching for stationary balls, persons with PD increased movement speed in response to fast moving balls in both VR and physical reality. This suggests that with an appropriate choice of cueing speed, VR is a promising tool for providing visual motion stimuli to improve movement speed in persons with PD. More research on the long-term effect of this type of VR training program is needed. Copyright © 2011 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Hillstrom, Anne P; Segabinazi, Joice D; Godwin, Hayward J; Liversedge, Simon P; Benson, Valerie
2017-02-19
We explored the influence of early scene analysis and visible object characteristics on eye movements when searching for objects in photographs of scenes. On each trial, participants were shown sequentially either a scene preview or a uniform grey screen (250 ms), a visual mask, the name of the target, and the scene, now including the target at a likely location. During the participant's first saccade of the search, the target location was changed to: (i) a different likely location, (ii) an unlikely but possible location or (iii) a very implausible location. The results showed that the first saccade landed more often on the likely location in which the target re-appeared than on unlikely or implausible locations, and overall the first saccade landed nearer the first target location with a preview than without. Hence, rapid scene analysis influenced initial eye movement planning, but availability of the target rapidly modified that plan. After the target moved, it was found more quickly when it appeared in a likely location than when it appeared in an unlikely or implausible location. The findings show that both scene gist and object properties are extracted rapidly, and are used in conjunction to guide saccadic eye movements during visual search. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
How barn owls (Tyto alba) visually follow moving voles (Microtus socialis) before attacking them.
Fux, Michal; Eilam, David
2009-09-07
The present study focused on the movements that owls perform before they swoop down on their prey. The working hypothesis was that owl head movements reflect the capacity to efficiently follow a moving prey both visually and auditorily. To test this hypothesis, five tame barn owls (Tyto alba) were each exposed 10 times to a live vole in a laboratory setting that enabled us to simultaneously record the behavior of both owl and vole. Bi-dimensional analysis of the horizontal and vertical projections of movements revealed that owl head movements increased in amplitude parallel to the vole's direction of movement (sideways or away from/toward the owl). However, the owls also performed relatively large repetitive horizontal head movements when the voles were progressing in any direction, suggesting that these movements were critical for the owl to accurately locate the prey, independent of prey behavior. From the pattern of head movements we conclude that owls orient toward the prospective clash point, and then return to the target itself (the vole) - a pattern that fits an interception rather than a tracking mode of following a moving target. The large horizontal component of head movement in following live prey may indicate either that barn owls have a horizontally narrow fovea or that these movements serve to create motion parallax while preserving image acuity on a horizontally wide fovea.
View-Dependent Streamline Deformation and Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, Xin; Edwards, John; Chen, Chun-Ming; Shen, Han-Wei; Johnson, Chris R.; Wong, Pak Chung
Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few streams risks losing important features. We propose a new streamline exploration approach by visually manipulating the cluttered streamlines, pulling visible layers apart and revealing the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual clutter in 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, the users can move their focus and examine the vector or tensor field freely. PMID:26600061
Lawton, Teri
2016-01-01
There is an ongoing debate about whether the cause of dyslexia lies in linguistic, auditory, or visual timing deficits. To investigate this issue, three interventions were compared in 58 dyslexics in second grade (7 years old on average): two targeting the temporal dynamics (timing) of either the auditory or the visual pathways, and a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency (both speed and comprehension), phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used not only to detect dyslexia early, but also to treat it successfully, so that reading problems do not prevent children from readily learning.
NASA Technical Reports Server (NTRS)
Grant, Michael P.; Leigh, R. John; Seidman, Scott H.; Riley, David E.; Hanna, Joseph P.
1992-01-01
We compared the ability of eight normal subjects and 15 patients with brainstem or cerebellar disease to follow a moving visual stimulus smoothly with either the eyes alone or with combined eye-head tracking. The visual stimulus was either a laser spot (horizontal and vertical planes) or a large rotating disc (torsional plane), which moved at one sinusoidal frequency for each subject. The visually enhanced vestibulo-ocular reflex (VOR) was also measured in each plane. In the horizontal and vertical planes, we found that if the tracking gain (gaze velocity/target velocity) for smooth pursuit was close to 1, the gain of combined eye-head tracking was similar. If the tracking gain during smooth pursuit was less than about 0.7, combined eye-head tracking was usually superior. Most patients, irrespective of diagnosis, showed combined eye-head tracking that was superior to smooth pursuit; only two patients showed the converse. In the torsional plane, in which optokinetic responses were weak, combined eye-head tracking was much superior, and this was the case in both subjects and patients. We found that a linear model, in which an internal ocular tracking signal cancelled the VOR, could account for our findings in most normal subjects in the horizontal and vertical planes, but not in the torsional plane. The model failed to account for tracking behaviour in most patients in any plane, and suggested that the brain may use additional mechanisms to reduce the internal gain of the VOR during combined eye-head tracking. Our results confirm that certain patients who show impairment of smooth-pursuit eye movements preserve their ability to smoothly track a moving target with combined eye-head tracking.
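One way to write the linear cancellation model tested here: with gaze velocity $\dot{g} = \dot{e} + \dot{h}$ and the VOR contributing $-G_{\mathrm{VOR}}\,\dot{h}$ to eye velocity, an internal tracking command $\dot{e}_t$ gives $\dot{e} = \dot{e}_t - G_{\mathrm{VOR}}\,\dot{h}$. If $\dot{e}_t$ contains a term $+G_{\mathrm{VOR}}\,\dot{h}$ that exactly cancels the VOR, combined eye-head tracking can be no better than pursuit itself; the patients' superior eye-head tracking therefore falls outside this model, consistent with the suggestion that the brain reduces the internal VOR gain instead.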
The monocular visual imaging technology model applied in the airport surface surveillance
NASA Astrophysics Data System (ADS)
Qin, Zhe; Wang, Jian; Huang, Chao
2013-08-01
At present, civil aviation airports use surface surveillance radar monitoring and positioning systems to monitor aircraft, vehicles, and other moving objects. Surface surveillance radars can cover most of the airport scene, but because of the geometry of terminals, covered bridges, and other buildings, these systems inevitably have some small blind spots. This paper presents a monocular vision imaging technology model for airport surface surveillance, achieving perception of moving objects in the scene, such as aircraft, vehicles, and personnel, and of their locations. This new model provides an important complement to airport surface surveillance, distinct from traditional surface surveillance radar techniques. Such a technique not only provides a clear display of object activity for the ATC, but also provides image recognition and positioning of moving targets in this area, thereby improving the efficiency of airport operations and helping to avoid conflicts between aircraft and vehicles. This paper first introduces the monocular visual imaging technology model applied to airport surface surveillance and then analyzes the measurement accuracy of the model. The monocular visual imaging technology model is simple, low cost, and highly efficient. It is an advanced monitoring technique that can cover the blind-spot areas of surface surveillance radar monitoring and positioning systems.
Visuomotor adaptation needs a validation of prediction error by feedback error
Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle
2014-01-01
The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, subjects were allowed to view their hand only at movement end, simultaneously with viewing of the target. In the “movement prediction error” condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and attributed pointing errors to themselves, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated. PMID:25408644
Goh, Rachel L Z; Kong, Yu Xiang George; McAlinden, Colm; Liu, John; Crowston, Jonathan G; Skalicky, Simon E
2018-01-01
Purpose: To evaluate the use of smartphone-based virtual reality to objectively assess activity limitation in glaucoma. Methods: Cross-sectional study of 93 patients (54 mild, 22 moderate, 17 severe glaucoma). Sociodemographics, visual parameters, Glaucoma Activity Limitation-9 and Visual Function Questionnaire - Utility Index (VFQ-UI) were collected. Mean age was 67.4 ± 13.2 years; 52.7% were male; 65.6% were driving. A smartphone placed inside virtual reality goggles was used to administer the Virtual Reality Glaucoma Visual Function Test (VR-GVFT) to participants, consisting of three parts: stationary, moving ball, driving. Rasch analysis and classical validity tests were conducted to assess performance of VR-GVFT. Results: Twenty-four of 28 stationary test items showed acceptable fit to the Rasch model (person separation 3.02, targeting 0). Eleven of 12 moving ball test items showed acceptable fit (person separation 3.05, targeting 0). No driving test items showed acceptable fit. Stationary test person scores showed good criterion validity, differentiating between glaucoma severity groups (P = 0.014); modest convergence validity, with mild to moderate correlation with VFQ-UI, better eye (BE) mean deviation, BE pattern deviation, BE central scotoma, worse eye (WE) visual acuity, and contrast sensitivity (CS) in both eyes (R = 0.243-0.381); and suboptimal divergent validity. Multivariate analysis showed that lower WE CS (P = 0.044) and greater age (P = 0.009) were associated with worse stationary test person scores. Conclusions: Smartphone-based virtual reality may be a portable objective simulation test of activity limitation related to glaucomatous visual loss. Translational Relevance: The use of simulated virtual environments could help better understand the activity limitations that affect patients with glaucoma.
B-spline based image tracking by detection
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman
2016-05-01
Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-Spline based image tracking is implemented. The novel method models the background and foreground using the B-Spline method followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.
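The standard background subtraction step the abstract contrasts with reduces, in the simplest case, to thresholding a frame difference; the sketch below assumes a grayscale frame, a fixed background model, and an arbitrary threshold.

    import numpy as np

    def moving_target_mask(frame, background, thresh=25):
        # Pixels that differ from the background model by more than the
        # threshold are flagged as moving-target candidates.
        diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
        return diff > thresh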
Marginally perceptible outcome feedback, motor learning and implicit processes.
Masters, Rich S W; Maxwell, Jon P; Eves, Frank F
2009-09-01
Participants struck 500 golf balls to a concealed target. Outcome feedback was presented at the subjective or objective threshold of awareness of each participant or at a supraliminal threshold. Participants who received fully perceptible (supraliminal) feedback learned to strike the ball onto the target, as did participants who received feedback that was only marginally perceptible (subjective threshold). Participants who received feedback that was not perceptible (objective threshold) showed no learning. Upon transfer to a condition in which the target was unconcealed, performance increased in both the subjective and the objective threshold condition, but decreased in the supraliminal condition. In all three conditions, participants reported minimal declarative knowledge of their movements, suggesting that deliberate hypothesis testing about how best to move in order to perform the motor task successfully was disrupted by the impoverished disposition of the visual outcome feedback. It was concluded that sub-optimally perceptible visual feedback evokes implicit processes.
Catching What We Can't See: Manual Interception of Occluded Fly-Ball Trajectories
Bosco, Gianfranco; Delle Monache, Sergio; Lacquaniti, Francesco
2012-01-01
Control of interceptive actions may involve fine interplay between feedback-based and predictive mechanisms. These processes rely heavily on target motion information available when the target is visible. However, short-term visual memory signals as well as implicit knowledge about the environment may also contribute to elaborate a predictive representation of the target trajectory, especially when visual feedback is partially unavailable because other objects occlude the visual target. To determine how different processes and information sources are integrated in the control of the interceptive action, we manipulated a computer-generated visual environment representing a baseball game. Twenty-four subjects intercepted fly-ball trajectories by moving a mouse cursor and by indicating the interception with a button press. In two separate sessions, fly-ball trajectories were either fully visible or occluded for 750, 1000 or 1250 ms before ball landing. Natural ball motion was perturbed during the descending trajectory with effects of either weightlessness (0 g) or increased gravity (2 g) at times such that, for occluded trajectories, 500 ms of perturbed motion were visible before ball disappearance. To examine the contribution of previous visual experience with the perturbed trajectories to the interception of invisible targets, the order of visible and occluded sessions was permuted among subjects. Under these experimental conditions, we showed that, with fully visible targets, subjects combined servo-control and predictive strategies. Instead, when intercepting occluded targets, subjects relied mostly on predictive mechanisms based, however, on different type of information depending on previous visual experience. In fact, subjects without prior experience of the perturbed trajectories showed interceptive errors consistent with predictive estimates of the ball trajectory based on a-priori knowledge of gravity. Conversely, the interceptive responses of subjects previously exposed to fully visible trajectories were compatible with the fact that implicit knowledge of the perturbed motion was also taken into account for the extrapolation of occluded trajectories. PMID:23166653
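The gravity-prior extrapolation invoked here is a one-line computation: with remaining drop height $h$ and downward speed $v_0$ at occlusion onset, solving $\tfrac{1}{2} g t^2 + v_0 t - h = 0$ gives the predicted remaining flight time $t = \bigl(-v_0 + \sqrt{v_0^2 + 2 g h}\bigr)/g$, whereas a $0g$ target arrives later, at $t = h/v_0$; the reported interceptive errors on perturbed trials are consistent with subjects defaulting to the former estimate.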
Maffei, Vincenzo; Macaluso, Emiliano; Indovina, Iole; Orban, Guy; Lacquaniti, Francesco
2010-01-01
Neural substrates for processing constant speed visual motion have been extensively studied. Less is known about the brain activity patterns when the target speed changes continuously, for instance under the influence of gravity. Using functional MRI (fMRI), here we compared brain responses to accelerating/decelerating targets with the responses to constant speed targets. The target could move along the vertical under gravity (1g), under reversed gravity (-1g), or at constant speed (0g). In the first experiment, subjects observed targets moving in smooth motion and responded to a GO signal delivered at a random time after target arrival. As expected, we found that the timing of the motor responses did not depend significantly on the specific motion law. Therefore brain activity in the contrast between different motion laws was not related to motor timing responses. Average BOLD signals were significantly greater for 1g targets than either 0g or -1g targets in a distributed network including bilateral insulae, left lingual gyrus, and brain stem. Moreover, in these regions, the mean activity decreased monotonically from 1g to 0g and to -1g. In the second experiment, subjects intercepted 1g, 0g, and -1g targets either in smooth motion (RM) or in long-range apparent motion (LAM). We found that the sites in the right insula and left lingual gyrus, which were selectively engaged by 1g targets in the first experiment, were also significantly more active during 1g trials than during -1g trials both in RM and LAM. The activity in 0g trials was again intermediate between that in 1g trials and that in -1g trials. Therefore in these regions the global activity modulation with the law of vertical motion appears to hold for both RM and LAM. Instead, a region in the inferior parietal lobule showed a preference for visual gravitational motion only in LAM but not RM.
View-Dependent Streamline Deformation and Exploration
Tong, Xin; Edwards, John; Chen, Chun-Ming; Shen, Han-Wei; Johnson, Chris R.; Wong, Pak Chung
2016-01-01
Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few streams risks losing important features. We propose a new streamline exploration approach by visually manipulating the cluttered streamlines by pulling visible layers apart and revealing the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual clutter in 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, the users can move their focus and examine the vector or tensor field freely. PMID:26600061
Hamilton, Roy H; Stark, Marianna; Coslett, H Branch
2010-01-01
Debate continues regarding the mechanisms underlying covert shifts of visual attention. We examined the relationship between target eccentricity and the speed of covert shifts of attention in normal subjects and patients with brain lesions using a cued-response task in which cues and targets were presented at 2 degrees or 8 degrees lateral to the fixation point. Normal subjects were slower on invalid trials in the 8 degrees as compared to 2 degrees condition. Patients with right-hemisphere stroke with neglect were slower in their responses to left-sided invalid targets compared to valid targets, and demonstrated a significant increase in the effect of target validity as a function of target eccentricity. Additional data from one neglect patient (JM) demonstrated an exaggerated validity × eccentricity × side interaction for contralesional targets on a cued reaction time task with a central (arrow) cue. We frame these results in the context of a continuous 'moving spotlight' model of attention, and also consider the potential role of spatial saliency maps. By either account, we argue that neglect is characterized by an eccentricity-dependent deficit in the allocation of attention.
Quantitative analysis of catch-up saccades during sustained pursuit.
de Brouwer, Sophie; Missal, Marcus; Barnes, Graham; Lefèvre, Philippe
2002-04-01
During visual tracking of a moving stimulus, primates orient their visual axis by combining two very different types of eye movements, smooth pursuit and saccades. The purpose of this paper was to investigate quantitatively the catch-up saccades occurring during sustained pursuit. We used a ramp-step-ramp paradigm to evoke catch-up saccades during sustained pursuit. In general, catch-up saccades followed the unexpected steps in position and velocity of the target. We observed catch-up saccades in the same direction as the smooth eye movement (forward saccades) as well as in the opposite direction (reverse saccades). We compared the main sequences of forward saccades, reverse saccades, and control saccades made to stationary targets. All three were significantly different from one another and were fully compatible with the hypothesis that the smooth pursuit component is added to the saccadic component during catch-up saccades. A multiple linear regression analysis was performed on the saccadic component to find the parameters determining the amplitude of catch-up saccades. We found that both position error and retinal slip are taken into account in catch-up saccade programming to predict the future trajectory of the moving target. We also demonstrated that the saccadic system needs a minimum period of approximately 90 ms to take into account changes in target trajectory. Finally, we reported a saturation (above 15 degrees/s) in the contribution of retinal slip to the amplitude of catch-up saccades.
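A minimal sketch of the kind of multiple linear regression described above, fit with NumPy on synthetic data, may help make the analysis concrete. The coefficients, noise level, and variable names are invented for illustration; the reported saturation of retinal slip above 15 degrees/s is modeled with a simple clip.

```python
# Illustrative regression of catch-up saccade amplitude on position error
# and (saturated) retinal slip; data and coefficients are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200
pe = rng.uniform(-4, 4, n)        # position error (deg)
rs = rng.uniform(-20, 20, n)      # retinal slip (deg/s)
rs_sat = np.clip(rs, -15, 15)     # saturation above 15 deg/s, as reported
amp = 0.9 * pe + 0.09 * rs_sat + rng.normal(0, 0.3, n)  # synthetic amplitudes

X = np.column_stack([np.ones(n), pe, rs_sat])
beta, *_ = np.linalg.lstsq(X, amp, rcond=None)
print("intercept, PE weight, RS weight:", beta)
```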
The Benefit of a Visually Guided Beamformer in a Dynamic Speech Task
Roverud, Elin; Streeter, Timothy; Mason, Christine R.; Kidd, Gerald
2017-01-01
The aim of this study was to evaluate the performance of a visually guided hearing aid (VGHA) under conditions designed to capture some aspects of “real-world” communication settings. The VGHA uses eye gaze to steer the acoustic look direction of a highly directional beamforming microphone array. Although the VGHA has been shown to enhance speech intelligibility for fixed-location, frontal targets, it is currently not known whether these benefits persist in the face of frequent changes in location of the target talker that are typical of conversational turn-taking. Participants were 14 young adults, 7 with normal hearing and 7 with bilateral sensorineural hearing impairment. Target stimuli were sequences of 12 question–answer pairs that were embedded in a mixture of competing conversations. The participant’s task was to respond via a key press after each answer indicating whether it was correct or not. Spatialization of the stimuli and microphone array processing were done offline using recorded impulse responses, before presentation over headphones. The look direction of the array was steered according to the eye movements of the participant as they followed a visual cue presented on a widescreen monitor. Performance was compared for a “dynamic” condition in which the target stimulus moved between three locations, and a “fixed” condition with a single target location. The benefits of the VGHA over natural binaural listening observed in the fixed condition were reduced in the dynamic condition, largely because visual fixation was less accurate. PMID:28758567
Motion-induced blindness and microsaccades: cause and effect
Bonneh, Yoram S.; Donner, Tobias H.; Sagi, Dov; Fried, Moshe; Cooperman, Alexander; Heeger, David J.; Arieli, Amos
2010-01-01
It has been suggested that subjective disappearance of visual stimuli results from a spontaneous reduction of microsaccade rate causing image stabilization, enhanced adaptation, and consequent fading. In motion-induced blindness (MIB), salient visual targets disappear intermittently when surrounded by a moving pattern. We investigated whether changes in microsaccade rate can account for MIB. We first determined that the moving mask does not affect microsaccade metrics (rate, magnitude, and temporal distribution). We then compared the dynamics of microsaccades during reported illusory disappearance (MIB) and physical disappearance (Replay) of a salient peripheral target. We found large modulations of microsaccade rate following perceptual transitions, whether illusory (MIB) or real (Replay). For MIB, the rate also decreased prior to disappearance and increased prior to reappearance. Importantly, MIB persisted in the presence of microsaccades, although the sustained microsaccade rate was lower during invisible than visible periods. These results suggest that the microsaccade system reacts to changes in visibility, but microsaccades also modulate MIB. The latter modulation is well described by a Poisson model of the perceptual transitions assuming that the probability for reappearance and disappearance is modulated following a microsaccade. Our results show that microsaccades counteract disappearance, but are neither necessary nor sufficient to account for MIB. PMID:21172899
Multiple Concurrent Visual-Motor Mappings: Implications for Models of Adaptation
NASA Technical Reports Server (NTRS)
Cunningham, H. A.; Welch, Robert B.
1994-01-01
Previous research on adaptation to visual-motor rearrangement suggests that the central nervous system represents accurately only 1 visual-motor mapping at a time. This idea was examined in 3 experiments where subjects tracked a moving target under repeated alternations between 2 initially interfering mappings (the 'normal' mapping characteristic of computer input devices and a 108° rotation of the normal mapping). Alternation between the 2 mappings led to significant reduction in error under the rotated mapping and significant reduction in the adaptation aftereffect ordinarily caused by switching between mappings. Color as a discriminative cue, interference versus decay in adaptation aftereffect, and intermanual transfer were also examined. The results reveal a capacity for multiple concurrent visual-motor mappings, possibly controlled by a parametric process near the motor output stage of processing.
Modeling peripheral vision for moving target search and detection.
Yang, Ji Hyun; Huston, Jesse; Day, Michael; Balogh, Imre
2012-06-01
Most target search and detection models focus on foveal vision. In reality, peripheral vision plays a significant role, especially in detecting moving objects. Twenty-three subjects participated in experiments simulating target detection tasks in urban and rural environments while their gaze parameters were tracked. Button responses associated with foveal object and peripheral object (PO) detection and recognition were recorded. In the urban scenario, pedestrians appearing in the periphery holding guns were threats and pedestrians with empty hands were non-threats. In the rural scenario, non-U.S. unmanned aerial vehicles (UAVs) were considered threats and U.S. UAVs non-threats. On average, subjects missed detecting 2.48 of 50 POs in the urban scenario and 5.39 POs in the rural scenario. Both saccade reaction time and button reaction time can be predicted by the peripheral angle and entrance speed of POs. Fast-moving objects were detected faster than slower objects, and POs appearing at wider angles took longer to detect than those closer to the gaze center. A second-order mixed-effect model was applied to provide each subject's prediction model for peripheral target detection performance as a function of eccentricity angle and speed. About half the subjects used active search patterns while the other half used passive search patterns. An interactive 3-D visualization tool was developed to provide a representation of macro-scale head and gaze movement in the search and target detection task. An experimentally validated stochastic model of peripheral vision in realistic target detection scenarios was developed.
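One plausible way to express a per-subject, second-order mixed-effects model of the kind mentioned above is sketched below, assuming statsmodels; the data file and column names (button_rt, ecc_angle, speed, subject) are hypothetical, and the quadratic terms stand in for the "second-order" structure.

```python
# Hypothetical mixed-effects fit: reaction time as a second-order function
# of eccentricity angle and entrance speed, with per-subject random effects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("peripheral_detection.csv")  # hypothetical data file
model = smf.mixedlm(
    "button_rt ~ ecc_angle + I(ecc_angle**2) + speed + I(speed**2)",
    data=df,
    groups=df["subject"],
    re_formula="~ecc_angle + speed",          # per-subject slopes
)
print(model.fit().summary())
```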
Focused ultrasound: concept for automated transcutaneous control of hemorrhage in austere settings.
Kucewicz, John C; Bailey, Michael R; Kaczkowski, Peter J; Carter, Stephen J
2009-04-01
High intensity focused ultrasound (HIFU) is being developed for a range of clinical applications. Of particular interest to NASA and the military is the use of HIFU for traumatic injuries because HIFU has the unique ability to transcutaneously stop bleeding. Automation of this technology would make possible its use in remote, austere settings by personnel not specialized in medical ultrasound. Here a system to automatically detect and target bleeding is tested and reported. The system uses Doppler ultrasound images from a clinical ultrasound scanner for bleeding detection and hardware for HIFU therapy. The system was tested using a moving string to simulate blood flow and targeting was visualized by Schlieren imaging to show the focusing of the HIFU acoustic waves. When instructed by the operator, a Doppler ultrasound image is acquired and processed to detect and localize the moving string, and the focus of the HIFU array is electronically adjusted to target the string. Precise and accurate targeting was verified in the Schlieren images. An automated system to detect and target simulated bleeding has been built and tested. The system could be combined with existing algorithms to detect, target, and treat clinical bleeding.
De Sá Teixeira, Nuno Alexandre; Hecht, Heiko
2014-01-01
When people are asked to indicate the vanishing location of a moving target, errors in the direction of motion (representational momentum) and in the direction of gravity (representational gravity) are usually found. These errors possess a temporal course wherein the memory for the location of the target drifts downwards with increasing temporal intervals between the target's disappearance and the participant's response (representational trajectory). The aim was to assess whether the representational trajectory is a body-referenced or a world-referenced phenomenon. A behavioral localization method was employed with retention times between 0 and 1400 ms systematically imposed after the target's disappearance. The target could move horizontally (rightwards or leftwards) or vertically (upwards or downwards). Body posture was varied in a counterbalanced order between sitting upright and lying on the side (left lateral decubitus position). In the upright task, the memory for target location drifted downwards with time in the direction of gravity. This time course did not emerge for the decubitus task, where idiotropic dominance was found. The dynamic visual representation of gravity is thus neither purely body-referenced nor world-referenced; it seems to be modulated instead by the relationship between the idiotropic vector and physical gravity.
Sacrey, Lori-Ann R; Bryson, Susan E; Zwaigenbaum, Lonnie
2013-11-01
Regulation of visual attention is essential to learning about one's environment. Children with autism spectrum disorder (ASD) exhibit impairments in regulating their visual attention, but little is known about how such impairments develop over time. This prospective longitudinal study is the first to describe the development of components of visual attention, including engaging, sustaining, and disengaging attention, in infants at high-risk of developing ASD (each with an older sibling with ASD). Non-sibling controls and high-risk infant siblings were filmed at 6, 9, 12, 15, 18, 24, and 36 months of age as they engaged in play with small, easily graspable toys. Duration of time spent looking at toy targets before moving the hand toward the target and the duration of time spent looking at the target after grasp were measured. At 36 months of age, an independent, gold standard diagnostic assessment for ASD was conducted for all participants. As predicted, infant siblings subsequently diagnosed with ASD were distinguished by prolonged latency to disengage ('sticky attention') by 12 months of age, and continued to show this characteristic at 15, 18, and 24 months of age. The results are discussed in relation to how the development of visual attention may impact later cognitive outcomes of children diagnosed with ASD.
Effect of visuomotor-map uncertainty on visuomotor adaptation.
Saijo, Naoki; Gomi, Hiroaki
2012-03-01
Vision and proprioception contribute to generating hand movement. If a conflict between the visual and proprioceptive feedback of hand position is given, reaching movement is disturbed initially but recovers after training. Although previous studies have predominantly investigated the adaptive change in the motor output, it is unclear whether the contributions of visual and proprioceptive feedback controls to the reaching movement are modified by visuomotor adaptation. To investigate this, we focused on the change in proprioceptive feedback control associated with visuomotor adaptation. After the adaptation to gradually introduce visuomotor rotation, the hand reached the shifted position of the visual target to move the cursor to the visual target correctly. When the cursor feedback was occasionally eliminated (probe trial), the end point of the hand movement was biased in the visual-target direction, while the movement was initiated in the adapted direction, suggesting the incomplete adaptation of proprioceptive feedback control. Moreover, after the learning of uncertain visuomotor rotation, in which the rotation angle was randomly fluctuated on a trial-by-trial basis, the end-point bias in the probe trial increased, but the initial movement direction was not affected, suggesting a reduction in the adaptation level of proprioceptive feedback control. These results suggest that the change in the relative contribution of visual and proprioceptive feedback controls to the reaching movement in response to the visuomotor-map uncertainty is involved in visuomotor adaptation, whereas feedforward control might adapt in a manner different from that of the feedback control.
Serchi, V; Peruzzi, A; Cereatti, A; Della Croce, U
2016-01-01
The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point of gaze measurements declined as the distance from the remote eye-tracker increased and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence the point of gaze accuracy, precision, and trackability during both standing and walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and "region of interest" analysis was applied. These findings foster the feasibility of the use of a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.
Lawton, Teri
2016-01-01
There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue three interventions were compared in 58 dyslexics in second grade (7 years on average), two targeting the temporal dynamics (timing) of either the auditory or visual pathways with a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency, both speed and comprehension, phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used to not only detect dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning. PMID:27551263
A ground moving target emergency tracking method for catastrophe rescue
NASA Astrophysics Data System (ADS)
Zhou, X.; Li, D.; Li, G.
2014-11-01
In recent years, great disasters have occurred from time to time, and disaster management tests the emergency-response capabilities of governments and societies all over the world. Immediately after the occurrence of a great disaster (e.g., an earthquake), a massive nationwide rescue and relief operation needs to be kicked off instantly. To improve the efficiency of the emergency rescue organization, the organizers need to manage information about the rescue teams, including their real-time locations, the equipment they carry, the technical skills of the rescuers, and so on. One key factor in the success of emergency operations is knowing the rescuers' locations in real time. Professional rescue teams are already tracked in real time, but volunteers play an increasingly important role in great disasters, and continuous real-time tracking of volunteers raises many problems (e.g., privacy leakage and expensive data consumption) that may dampen volunteers' enthusiasm for participating in catastrophe rescue. Since a great disaster is a low-probability event, it is not necessary to track the volunteers (or even the rescue teams) at all times. To solve this problem, a ground moving target emergency tracking method for catastrophe rescue is presented in this paper. In this method, handheld GPS-equipped devices (e.g., smartphones) serve as the positioning equipment. An emergency tracking information database, containing the ID of each ground moving target (rescue teams and volunteers), the communication number of the target's handheld device, the target's usual living region, and so on, is built in advance by registration. When a catastrophe happens, the ground moving targets living close to the disaster area are filtered by their usual living region, and an activation short message is sent to the selected targets through the communication numbers of their handheld devices. Each handheld device receives and identifies the activation message and sends its current location to the server, triggering the emergency tracking mode. The real-time locations of the filtered targets can be shown on the organizer's screen, and the organizer can assign rescue tasks to the rescue teams and volunteers based on their real-time locations. A prototype of the ground moving target emergency tracking system was implemented using Oracle 11g, Visual Studio 2010 C#, Android, an SMS modem, and the Google Maps API.
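In outline, the register-filter-activate flow described above might look like the following sketch; the class and function names and the distance threshold are hypothetical, and the SMS call is a stub standing in for a real SMS modem.

```python
# Sketch of the emergency tracking flow: filter registered targets by their
# usual living region, then send an activation SMS. Names are hypothetical.
from dataclasses import dataclass
from math import hypot

@dataclass
class MovingTarget:
    target_id: str
    phone: str
    home_lat: float
    home_lon: float

def near_disaster(t: MovingTarget, lat: float, lon: float, radius: float) -> bool:
    # Crude planar lat/lon distance; a real system would use geodesics.
    return hypot(t.home_lat - lat, t.home_lon - lon) <= radius

def send_sms(phone: str, text: str) -> None:
    print(f"SMS to {phone}: {text}")  # stub for the SMS modem

def trigger_emergency_tracking(registry, lat, lon, radius=1.0):
    for t in registry:
        if near_disaster(t, lat, lon, radius):
            # The device replies with its GPS fix, entering tracking mode.
            send_sms(t.phone, "ACTIVATE_TRACKING")
```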
Binocular Perception of 2D Lateral Motion and Guidance of Coordinated Motor Behavior.
Fath, Aaron J; Snapp-Childs, Winona; Kountouriotis, Georgios K; Bingham, Geoffrey P
2016-04-01
Zannoli, Cass, Alais, and Mamassian (2012) found greater audiovisual lag between a tone and disparity-defined stimuli moving laterally (90-170 ms) than for disparity-defined stimuli moving in depth or luminance-defined stimuli moving laterally or in depth (50-60 ms). We tested if this increased lag presents an impediment to visually guided coordination with laterally moving objects. Participants used a joystick to move a virtual object in several constant relative phases with a laterally oscillating stimulus. Both the participant-controlled object and the target object were presented using a disparity-defined display that yielded information through changes in disparity over time (CDOT) or using a luminance-defined display that additionally provided information through monocular motion and interocular velocity differences (IOVD). Performance was comparable for both disparity-defined and luminance-defined displays in all relative phases. This suggests that, despite lag, perception of lateral motion through CDOT is generally sufficient to guide coordinated motor behavior.
Steering a virtual blowfly: simulation of visual pursuit.
Boeddeker, Norbert; Egelhaaf, Martin
2003-09-22
The behavioural repertoire of male flies includes visually guided chasing after moving targets. The visuomotor control system for these pursuits belongs to the fastest found in the animal kingdom. We simulated a virtual fly, to test whether or not experimentally established hypotheses on the underlying control system are sufficient to explain chasing behaviour. Two operating instructions for steering the chasing virtual fly were derived from behavioural experiments: (i) the retinal size of the target controls the fly's forward speed and, thus, indirectly its distance to the target; and (ii) a smooth pursuit system uses the retinal position of the target to regulate the fly's flight direction. Low-pass filters implement neuronal processing time. Treating the virtual fly as a point mass, its kinematics are modelled in consideration of the effects of translatory inertia and air friction. Despite its simplicity, the model shows behaviour similar to that of real flies. Depending on its starting position and orientation as well as on target size and speed, the virtual fly either catches the target or follows it indefinitely without capture. These two behavioural modes of the virtual fly emerge from the control system for flight steering without implementation of an explicit decision maker.
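The two operating instructions lend themselves to a toy re-implementation. The sketch below steers a point mass whose heading is driven by low-pass-filtered retinal target position and whose forward speed is throttled by filtered retinal target size; all gains, time constants, and the friction coefficient are invented for illustration, not taken from the paper.

```python
# Toy chasing controller: (i) retinal size -> forward speed,
# (ii) retinal position -> turning. All constants are invented.
import numpy as np

def simulate(target_path, dt=0.001, tau=0.02, k_turn=10.0,
             k_speed=50.0, friction=5.0, target_r=0.005):
    pos = np.array([0.0, -0.2]); heading = np.pi / 2; speed = 0.0
    f_angle = f_size = 0.0
    trace = []
    for target in target_path:
        rel = target - pos
        dist = np.linalg.norm(rel)
        err = np.arctan2(rel[1], rel[0]) - heading
        err = (err + np.pi) % (2 * np.pi) - np.pi    # wrap to [-pi, pi)
        size = 2 * np.arctan2(target_r, dist)        # retinal size (rad)
        f_angle += dt / tau * (err - f_angle)        # low-pass = processing time
        f_size += dt / tau * (size - f_size)
        heading += dt * k_turn * f_angle             # (ii) position -> turning
        thrust = k_speed * max(0.0, 0.1 - f_size)    # (i) size -> speed
        speed += dt * (thrust - friction * speed)    # inertia and air friction
        pos = pos + dt * speed * np.array([np.cos(heading), np.sin(heading)])
        trace.append(pos)
    return np.array(trace)
```

Depending on the starting state and target speed, such a controller either closes the distance (capture) or settles into stable pursuit at a fixed distance, mirroring the two behavioural modes reported above.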
Reaching with cerebral tunnel vision.
Rizzo, M; Darling, W
1997-01-01
We studied reaching movements in a 48-year-old man with bilateral lesions of the calcarine cortex which spared the foveal representation and caused severe tunnel vision. Three-dimensional (3D) reconstruction of brain MR images showed no evidence of damage beyond area 18. The patient could not see his hand during reaching movements, providing a unique opportunity to test the role of peripheral visual cues in limb control. Optoelectronic recordings of upper limb movements showed normal hand paths and trajectories to fixated extrinsic targets. There was no slowing, tremor, or ataxia. Self-bound movements were also preserved. Analyses of limb orientation at the endpoints of reaches showed that the patient could transform an extrinsic target's visual coordinates to an appropriate upper limb configuration for target acquisition. There was no disadvantage created by blocking the view of the reaching arm. Moreover, the patient could not locate targets presented in the hemianopic fields by pointing. Thus, residual nonconscious vision or 'blindsight' in the aberrant fields was not a factor in our patient's reaching performance. The findings in this study show that peripheral visual cues on the position and velocity of the moving limb are not critical to the control of goal directed reaches, at least not until the hand is close to target. Other cues such as kinesthetic feedback can suffice. It also appears that the visuomotor transformations for reaching do not take place before area 19 in humans.
Visual attention is required for multiple object tracking.
Tran, Annie; Hoffman, James E
2016-12-01
In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects.
Zago, Myrka; Lacquaniti, Francesco
2005-09-01
Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. However, there are limitations in the visual system that raise questions about the general validity of these theories. Most notably, vision is poorly sensitive to arbitrary accelerations. How then does the brain deal with the motion of objects accelerated by Earth's gravity? Here we review evidence in favor of the view that the brain makes the best estimate about target motion based on visually measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from the expected kinetics in the Earth's gravitational field.
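A small worked example makes the gravity-prior idea concrete: for a ball first seen falling from 2 m at 3 m/s, a first-order (constant-velocity) extrapolation and an internal-model estimate assuming 1 g yield noticeably different times-to-contact. The numbers are illustrative, not from the paper.

```python
# Time-to-contact (TTC) for a falling ball: constant-velocity extrapolation
# versus a 1 g internal-model prediction. Illustrative numbers only.
from math import sqrt

g, h, v = 9.81, 2.0, 3.0      # gravity (m/s^2), height (m), current speed (m/s)

ttc_first_order = h / v                           # ignores acceleration
ttc_gravity = (-v + sqrt(v * v + 2 * g * h)) / g  # solves h = v*t + g*t**2/2

print(f"first-order: {ttc_first_order:.3f} s, 1g prior: {ttc_gravity:.3f} s")
# first-order: 0.667 s, 1g prior: 0.402 s -> the prior anticipates earlier arrival
```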
Directional asymmetries in human smooth pursuit eye movements.
Ke, Sally R; Lam, Jessica; Pai, Dinesh K; Spering, Miriam
2013-06-27
Humans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit. In experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field. Pursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. Our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.
Fusion-based multi-target tracking and localization for intelligent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2008-04-01
In this paper, we present two approaches addressing visual target tracking and localization in a complex urban environment: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background was achieved by a Gaussian Mixture Model. Foreground segmentation, on the other hand, was achieved by the Connected Components Analysis (CCA) technique. The tracking of individual targets was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image pixel to world coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on a nearest 4-neighbor method; in fine estimation, we use Euclidean interpolation to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in a complex urban environment.
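The RGB-histogram association matrix could be sketched with OpenCV as follows; the detection stage (Gaussian mixture background model, connected components) is omitted, and the bin counts and similarity metric are illustrative choices rather than the authors' exact implementation.

```python
# Sketch: score each tracked target against each detection by RGB-histogram
# similarity; the result is fused with spatial gating before assignment.
import cv2
import numpy as np

def rgb_hist(patch):
    # 8x8x8-bin RGB histogram, normalized for scale invariance.
    h = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8],
                     [0, 256, 0, 256, 0, 256])
    return cv2.normalize(h, h).flatten()

def association_matrix(track_patches, detection_patches):
    t_hists = [rgb_hist(p) for p in track_patches]
    d_hists = [rgb_hist(p) for p in detection_patches]
    m = np.zeros((len(t_hists), len(d_hists)))
    for i, th in enumerate(t_hists):
        for j, dh in enumerate(d_hists):
            m[i, j] = cv2.compareHist(th, dh, cv2.HISTCMP_CORREL)
    return m  # fuse with centroid/spatial gating, then assign greedily
```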
Ochiai, Tetsuji; Mushiake, Hajime; Tanji, Jun
2005-07-01
The ventral premotor cortex (PMv) has been implicated in the visual guidance of movement. To examine whether neuronal activity in the PMv is involved in controlling the direction of motion of a visual image of the hand or the actual movement of the hand, we trained a monkey to capture a target that was presented on a video display using the same side of its hand as was displayed on the video display. We found that PMv neurons predominantly exhibited premovement activity that reflected the image motion to be controlled, rather than the physical motion of the hand. We also found that the activity of half of such direction-selective PMv neurons depended on which side (left versus right) of the video image of the hand was used to capture the target. Furthermore, this selectivity for a portion of the hand was not affected by changing the starting position of the hand movement. These findings suggest that PMv neurons play a crucial role in determining which part of the body moves in which direction, at least under conditions in which a visual image of a limb is used to guide limb movements.
Eye-Head Coordination in 31 Space Shuttle Astronauts during Visual Target Acquisition.
Reschke, Millard F; Kolev, Ognyan I; Clément, Gilles
2017-10-27
Between 1989 and 1995, NASA evaluated how increases in flight duration of up to 17 days affected the health and performance of Space Shuttle astronauts. Thirty-one Space Shuttle pilots participating in 17 space missions were tested at 3 different times before flight and 3 different times after flight, starting within a few hours of return to Earth. The astronauts moved their head and eyes as quickly as possible from the central fixation point to a specified target located 20°, 30°, or 60° off center. Eye movements were measured with electro-oculography (EOG). Head movements were measured with a triaxial rate sensor system mounted on a headband. The mean time to visually acquire the targets immediately after landing was 7-10% (30-34 ms) slower than mean preflight values, but results returned to baseline after 48 hours. This increase in gaze latency was due to a decrease in velocity and amplitude of both the eye saccade and head movement toward the target. Results were similar after all space missions, regardless of length.
Direct evidence for a position input to the smooth pursuit system.
Blohm, Gunnar; Missal, Marcus; Lefèvre, Philippe
2005-07-01
When objects move in our environment, the orientation of the visual axis in space requires the coordination of two types of eye movements: saccades and smooth pursuit. The principal input to the saccadic system is position error, whereas it is velocity error for the smooth pursuit system. Recently, it has been shown that catch-up saccades to moving targets are triggered and programmed by using velocity error in addition to position error. Here, we show that, when a visual target is flashed during ongoing smooth pursuit, it evokes a smooth eye movement toward the flash. The velocity of this evoked smooth movement is proportional to the position error of the flash; it is neither influenced by the velocity of the ongoing smooth pursuit eye movement nor by the occurrence of a saccade, but the effect is absent if the flash is ignored by the subject. Furthermore, the response started around 85 ms after the flash presentation and decayed with an average time constant of 276 ms. Thus this is the first direct evidence of a position input to the smooth pursuit system. This study shows further evidence for a coupling between saccadic and smooth pursuit systems. It also suggests that there is an interaction between position and velocity error signals in the control of more complex movements.
3D Visual Tracking of an Articulated Robot in Precision Automated Tasks
Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.
2017-01-01
The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. However, combining the two requirements in real-world applications is very challenging: more accurate tracking tasks often require longer processing times, while quicker responses are more prone to errors, so a trade-off between accuracy and speed is required. This paper aims to meet the two requirements together by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that automatically tracks a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which combines a colour segmentation method (Delta E) with the CHT algorithm to find the proper colour of the tracked target. The target was attached to the end-effector of a six degree-of-freedom (DOF) robot performing a pick-and-place task. Two cooperating eye-to-hand cameras, each with an image-averaging filter, are used to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during object tracking. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results show the applicability of the proposed approach for tracking the moving robot with an overall tracking error of 0.25 mm, and the effectiveness of the CRCHT technique in saving up to 60% of the overall time required for image processing. PMID:28067860
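In the spirit of the CRCHT idea, though not the authors' implementation, the sketch below restricts OpenCV's Circular Hough Transform to a search window centered on the last known target position, which is where the claimed processing-time savings come from; all parameters are illustrative.

```python
# Sketch: Circular Hough Transform inside a controllable search window.
import cv2
import numpy as np

def detect_sphere(frame_gray, last_xy, win=80):
    x0 = max(0, last_xy[0] - win)
    y0 = max(0, last_xy[1] - win)
    roi = cv2.medianBlur(frame_gray[y0:y0 + 2 * win, x0:x0 + 2 * win], 5)
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                               param1=100, param2=30, minRadius=5, maxRadius=60)
    if circles is None:
        return None                     # fall back to a full-frame search
    cx, cy, r = np.round(circles[0, 0]).astype(int)
    return x0 + cx, y0 + cy, r          # map back to full-frame coordinates
```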
Receptive fields for smooth pursuit eye movements and motion perception.
Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R
2010-12-01
Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT).
Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald
2017-12-15
The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30, 0, and 30 azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some individuals showed BEAM benefits relative to KEMAR. Under dynamic conditions, BEAM and BEAMAR performance dropped significantly immediately following a target location transition. However, performance recovered by the second word in the sequence and was sustained until the next transition. When performance was assessed using an auditory-visual word congruence task, the benefits of beamforming reported previously were generally preserved under dynamic conditions in which the target source could move unpredictably from one location to another (i.e., performance recovered rapidly following source transitions) while the observer steered the beamforming via eye gaze, for both young NH and young HI groups.
Effect of Target Location on Dynamic Visual Acuity During Passive Horizontal Rotation
NASA Technical Reports Server (NTRS)
Appelbaum, Meghan; DeDios, Yiri; Kulecz, Walter; Peters, Brian; Wood, Scott
2010-01-01
The vestibulo-ocular reflex (VOR) generates eye rotation to compensate for potential retinal slip in the specific plane of head movement. Dynamic visual acuity (DVA) has been utilized as a functional measure of the VOR. The purpose of this study was to examine changes in accuracy and reaction time when performing a DVA task with targets offset from the plane of rotation, e.g. offset vertically during horizontal rotation. Visual acuity was measured in 12 healthy subjects as they moved a hand-held joystick to indicate the orientation of a computer-generated Landolt C "as quickly and accurately as possible." Acuity thresholds were established with optotypes presented centrally on a wall-mounted LCD screen at 1.3 m distance, first without motion (static condition) and then while oscillating at 0.8 Hz (DVA, peak velocity 60 deg/s). The effect of target location was then measured during horizontal rotation with the optotypes randomly presented in one of nine different locations on the screen (offset up to 10 deg). The optotype size (logMAR 0, 0.2 or 0.4, corresponding to Snellen range 20/20 to 20/50) and presentation duration (150, 300 and 450 ms) were counter-balanced across five trials, each utilizing horizontal rotation at 0.8 Hz. Dynamic acuity was reduced relative to static acuity in 7 of 12 subjects by one step size. During the random target trials, both accuracy and reaction time improved proportional to optotype size. Accuracy and reaction time also improved between 150 ms and 300 ms presentation durations. The main finding was that both accuracy and reaction time varied as a function of target location, with greater performance decrements when acquiring vertical targets. We conclude that dynamic visual acuity varies with target location, with acuity optimized for targets in the plane of motion. Both reaction time and accuracy are functionally relevant DVA parameters of VOR function.
Pavan, Andrea; Boyce, Matthew; Ghin, Filippo
2016-10-01
Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than response speed.
Predicting the 'where' and resolving the 'what' of a moving target: a dichotomy of abilities.
Long, G M; Vogel, C A
1998-01-01
Anticipation timing (AT) and dynamic visual acuity (DVA) were assessed in a group of college students (n = 60) under a range of velocity and duration conditions. Subjects participated in two identical sessions 1 week apart. Consistent with previous work, DVA performance worsened as velocity increased and as target duration decreased, and there was a significant improvement from the first to the second session. In contrast, AT performance improved as velocity increased, whereas no improvement from the first to the second session was indicated; but increasing duration again benefited performance. Correlational analyses comparing DVA and AT did not reveal any systematic relationship between the two visual tasks. A follow-up study with different instructions on the AT task revealed the same pattern of AT performance, suggesting the generalizability of the obtained stimulus relationships for the AT task. The importance of the often-overlooked role of stimulus variables on the AT task is discussed.
Realization of the ergonomics design and automatic control of the fundus cameras
NASA Astrophysics Data System (ADS)
Zeng, Chi-liang; Xiao, Ze-xin; Deng, Shi-chao; Yu, Xin-ye
2012-12-01
Ergonomic design of fundus cameras should extend user comfort through automatic control. Firstly, a 3D positional numerical control system is designed for positioning the eye pupils of patients undergoing fundus examinations. This system consists of an electronically controlled chin bracket that moves up and down, lateral movement of the binocular assembly with the detector, and automatic refocusing on the edges of the eye pupils. Secondly, an auto-focusing device for the object plane of the patient's fundus is designed, which collects fundus images automatically whether or not the patient's eyes are ametropic. Finally, a moving visual target is developed for expanding the fields of the fundus images.
Robotic astrobiology - prospects for enhancing scientific productivity of mars rover missions
NASA Astrophysics Data System (ADS)
Ellery, A. A.
2018-07-01
Robotic astrobiology involves the remote projection of intelligent capabilities to planetary missions in the search for life, preferably with human-level intelligence. Planetary rovers would be true human surrogates capable of sophisticated decision-making to enhance their scientific productivity. We explore several key aspects of this capability: (i) visual texture analysis of rocks to enable their geological classification and so, astrobiological potential; (ii) serendipitous target acquisition whilst on the move; (iii) continuous extraction of regolith properties, including water ice whilst on the move; and (iv) deep learning-capable Bayesian net expert systems. Individually, these capabilities will provide enhanced scientific return for astrobiology missions, but together, they will provide full autonomous science capability.
Bomber: The Formation and Early Years of Strategic Air Command
2012-11-01
…projectile from high altitude, from a moving and unstable platform, in strong and unpredictable winds, against ground targets four to five miles below…aircraft that could carry the requisite bomb load over long distances and do so without incurring prohibitive losses to enemy defenses were not yet
Annotated Bibliography of Reports: Supplement No. 7, 1 July 1974 - 30 June 1975,
1975-06-30
studies have shown that alcohol interferes with visual control of vestibular nystagmus. The present study was designed to assess three partially inde…suppression of vestibular nystagmus; a second involved smooth oculomotor tracking of a moving target; and a third required repetitive rapid voluntary shifts in…gaze. Oculomotor control was degraded on the first two tasks with recovery toward the initial performance level 4 hours after drinking. Performance on
Zariwala, Hatim A.; Madisen, Linda; Ahrens, Kurt F.; Bernard, Amy; Lein, Edward S.; Jones, Allan R.; Zeng, Hongkui
2011-01-01
The putative excitatory and inhibitory cell classes within the mouse primary visual cortex (V1) have different functional properties as studied using microelectrode recordings. Excitatory neurons show high selectivity for the orientation angle of moving gratings, while the putative inhibitory neurons show poor selectivity. However, the selectivity of genetically identified interneurons and their subtypes remains controversial. Here we use novel Cre-driver and reporter mice to identify genetic subpopulations in vivo for two-photon calcium dye imaging: Wfs1(+)/Gad1(−) mice that label the layer 2/3 excitatory cell population and Pvalb(+)/Gad1(+) mice that label a genetic subpopulation of inhibitory neurons. The cells in both mice were identically labeled with a tdTomato protein, visible in vivo, using a Cre-reporter line. We found that the Wfs1(+) cells exhibited visual tuning properties comparable to the excitatory population, i.e., high selectivity and tuning to the angle, direction, and spatial frequency of oriented moving gratings. The functional tuning of Pvalb(+) neurons was consistent with previously reported narrow-spiking interneurons in microelectrode studies, exhibiting poorer selectivity than the excitatory neurons. This study demonstrates the utility of Cre-transgenic mouse technology in selective targeting of subpopulations of neurons and makes them amenable to structural, functional, and connectivity studies. PMID:21283555
Dokka, Kalpana; DeAngelis, Gregory C.
2015-01-01
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion. PMID:26446214
Kerzel, Dirk
2003-05-01
Observers' judgments of the final position of a moving target are typically shifted in the direction of implied motion ("representational momentum"). The role of attention is unclear: visual attention may be necessary to maintain or halt target displacement. When attention was captured by irrelevant distractors presented during the retention interval, forward displacement after implied target motion disappeared, suggesting that attention may be necessary to maintain mental extrapolation of target motion. In a further corroborative experiment, the deployment of attention was measured after a sequence of implied motion, and faster responses were observed to stimuli appearing in the direction of motion. Thus, attention may guide the mental extrapolation of target motion. Additionally, eye movements were measured during stimulus presentation and retention interval. The results showed that forward displacement with implied motion does not depend on eye movements. Differences between implied and smooth motion are discussed with respect to recent neurophysiological findings.
Meghdadi, Amir H; Irani, Pourang
2013-12-01
We propose a novel video visual analytics system for interactive exploration of surveillance video data. Our approach consists of providing analysts with various views of information related to moving objects in a video. To do this we first extract each object's movement path. We visualize each movement by (a) creating a single action shot image (a still image that coalesces multiple frames), (b) plotting its trajectory in a space-time cube, and (c) displaying an overall timeline view of all the movements. The action shots provide a still view of the moving object while the path view presents movement properties such as speed and location. We also provide tools for spatial and temporal filtering based on regions of interest. This allows analysts to filter out large amounts of movement activity while the action shot representation summarizes the content of each movement. We incorporated this multi-part visual representation of moving objects in sViSIT, a tool to facilitate browsing through video content by interactive querying and retrieval of data. Based on our interaction with security personnel who routinely work with surveillance video data, we identified some of the most common tasks performed. This resulted in designing a user study to measure time-to-completion of the various tasks, which generally required searching for specific events of interest (targets) in videos. Fourteen different tasks were designed, and a total of 120 min of surveillance video was recorded (indoor and outdoor locations recording movements of people and vehicles). The time-to-completion of these tasks was compared against manual fast-forward video browsing guided by movement detection. We demonstrate how our system can facilitate lengthy video exploration and significantly reduce browsing time to find events of interest. Reports from expert users identify positive aspects of our approach, which we summarize in our recommendations for future video visual analytics systems.
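A minimal sketch of the "action shot" idea in Python with OpenCV and NumPy (the paper publishes no code, so the function name and parameters here are illustrative): the static scene is estimated as the per-pixel median over the clip, and the foreground of every step-th frame is pasted back onto it.

import cv2
import numpy as np

def action_shot(frames, step=10, thresh=30):
    # Coalesce a moving object from several frames into one still image.
    # `frames` is a list of equally sized BGR images from the clip.
    stack = np.stack(frames, axis=0)
    background = np.median(stack, axis=0).astype(np.uint8)  # static scene estimate
    shot = background.copy()
    for frame in frames[::step]:
        # foreground = pixels that differ strongly from the background
        diff = cv2.absdiff(frame, background)
        mask = (diff.max(axis=2) > thresh).astype(np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        shot[mask > 0] = frame[mask > 0]  # paste the object at this time step
    return shot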
Bremmer, Frank; Kaminiarz, Andre; Klingenhoefer, Steffen; Churan, Jan
2016-01-01
Primates perform saccadic eye movements in order to bring the image of an interesting target onto the fovea. Compared to stationary targets, saccades toward moving targets are computationally more demanding, since the oculomotor system must use speed and direction information about the target, as well as knowledge about its own processing latency, to program an adequate, predictive saccade vector. In monkeys, different brain regions have been implicated in the control of voluntary saccades, among them the lateral intraparietal area (LIP). Here we asked whether activity in area LIP reflects the distance between fovea and saccade target, the amplitude of an upcoming saccade, or both. We recorded single-unit activity in area LIP of two macaque monkeys. First, we determined each neuron's preferred saccade direction. Then, monkeys performed visually guided saccades along the preferred direction toward either stationary or moving targets in pseudo-randomized order. LIP population activity allowed decoding of both the distance between fovea and saccade target and the size of an upcoming saccade. Previous work has shown comparable results for saccade direction (Graf and Andersen, 2014a,b). Hence, LIP population activity allows prediction of any two-dimensional saccade vector. Functional equivalents of macaque area LIP have been identified in humans. Accordingly, our results provide further support for the concept of activity from area LIP as a neural basis for the control of an oculomotor brain-machine interface. PMID:27630547
Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon
2014-11-01
Voluntary selective attention can prioritize different features in a visual scene. The frontal eye fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased blood oxygen level-dependent (BOLD) signals in visual areas processing the "target feature" but not in "distracter feature"-processing regions. TMS increased BOLD signals in motion-responsive visual cortex (MT+) when motion was attended in a display of moving dots superimposed on face stimuli, but in the face-responsive fusiform face area (FFA) when faces were attended. These TMS effects on BOLD signals in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the role of human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property. © The Author 2013. Published by Oxford University Press.
Effects of reward on the accuracy and dynamics of smooth pursuit eye movements.
Brielmann, Aenne A; Spering, Miriam
2015-08-01
Reward modulates behavioral choices and biases goal-oriented behavior, such as eye or hand movements, toward locations or stimuli associated with higher rewards. We investigated reward effects on the accuracy and timing of smooth pursuit eye movements in 4 experiments. Eye movements were recorded in participants tracking a moving visual target on a computer monitor. Before target motion onset, a monetary reward cue indicated whether participants could earn money by tracking accurately, or whether the trial was unrewarded (Experiments 1 and 2, n = 11 each). Reward significantly improved eye-movement accuracy across different levels of task difficulty. Improvements were seen even in the earliest phase of the eye movement, within 70 ms of tracking onset, indicating that reward impacts visual-motor processing at an early level. We obtained similar findings when reward was not precued but explicitly associated with the pursuit target (Experiment 3, n = 16); critically, these results were not driven by stimulus prevalence or other factors such as preparation or motivation. Numerical cues (Experiment 4, n = 9) were not effective. (c) 2015 APA, all rights reserved.
High-performance object tracking and fixation with an online neural estimator.
Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian
2007-02-01
Vision-based target tracking and fixation, to keep objects that move in three dimensions in view, is important for many tasks in several fields, including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. Accordingly, this paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimal sensory information. The procedure allows the designer to take the physical (Lagrangian dynamics) properties of the vision system into account in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system's dynamics into account, yet needs no knowledge of dynamic nonlinearities or joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee uniform ultimate boundedness of the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation tests in the presence of severe target motion changes.
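The paper's binocular controller is not reproduced here, but a minimal single-joint Python sketch illustrates the general scheme such controllers build on: an RBF network estimates the unknown dynamics online while a filtered-error feedback term keeps tracking errors ultimately bounded. All gains, basis-function settings, and the "true" plant below are made up for illustration.

import numpy as np

dt, lam, kd, gamma = 0.001, 5.0, 20.0, 50.0
centers = np.linspace(-2, 2, 11)              # RBF centers covering the workspace
width = 0.5
w = np.zeros_like(centers)                    # NN weight estimates

def rbf(x):                                   # Gaussian basis functions
    return np.exp(-(x - centers) ** 2 / (2 * width ** 2))

q = qd = 0.0                                  # joint angle and velocity
for k in range(20000):
    t = k * dt
    qr, qr_d, qr_dd = np.sin(t), np.cos(t), -np.sin(t)   # reference target motion
    e, e_d = q - qr, qd - qr_d
    s = e_d + lam * e                          # filtered tracking error
    f_hat = w @ rbf(q)                         # NN estimate of the unknown dynamics
    u = -kd * s - lam * e_d + qr_dd - f_hat    # control law: feedback + NN compensation
    w += gamma * rbf(q) * s * dt               # online weight adaptation
    f_true = -3.0 * np.sin(q) - 0.5 * q        # "unknown" plant nonlinearity
    q_dd = f_true + u
    qd += q_dd * dt
    q += qd * dt

With this construction the filtered error obeys s_dot = (f_true - f_hat) - kd*s, so the tracking error shrinks as the network's estimate improves, mirroring the uniform-ultimate-boundedness guarantee the abstract describes.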
Walk this way: approaching bodies can influence the processing of faces.
Pilz, Karin S; Vuong, Quoc C; Bülthoff, Heinrich H; Thornton, Ian M
2011-01-01
A highly familiar type of movement occurs whenever a person walks towards you. In the present study, we investigated whether this type of motion has an effect on face processing. We took a range of different 3D head models and placed them on a single, identical 3D body model. The resulting figures were animated to approach the observer. In a first series of experiments, we used a sequential matching task to investigate how the motion of an approaching person affects immediate responses to faces. We compared observers' responses following approach sequences to their performance with figures walking backwards (receding motion) or remaining still. Observers were significantly faster in responding to a target face that followed an approach sequence, compared to both receding and static primes. In a second series of experiments, we investigated long-term effects of motion using a delayed visual search paradigm. After studying moving or static avatars, observers searched for target faces in static arrays of varying set sizes. Again, observers were faster at responding to faces that had been learned in the context of an approach sequence. Together these results suggest that the context of a moving body influences face processing, and support the hypothesis that our visual system has mechanisms that aid the encoding of behaviourally-relevant and familiar dynamic events. Copyright © 2010 Elsevier B.V. All rights reserved.
De Sá Teixeira, Nuno Alexandre
2016-09-01
The memory for the final position of a moving object which suddenly disappears has been found to be displaced forward, in the direction of motion, and downwards, in the direction of gravity. These phenomena were coined, respectively, Representational Momentum and Representational Gravity. Although both these and similar effects have been systematically linked with the functioning of internal representations of physical variables (e.g. momentum and gravity), serious doubts have been raised about a cognitively based interpretation, favouring instead a major role for oculomotor and perceptual factors which, more often than not, were left uncontrolled or even ignored. The present work aims to determine the degree to which Representational Momentum and Representational Gravity are epiphenomenal to smooth pursuit eye movements. Observers were required to indicate the offset locations of targets moving along systematically varied directions after a variable imposed retention interval. Each participant completed the task twice, under different eye-movement instructions: gaze was either constrained or left free to track the targets. A Fourier decomposition analysis of the localization responses was used to disentangle the two phenomena. The results show unambiguously that constraining eye movements eliminates the harmonic components which index Representational Momentum, but has no effect on Representational Gravity or its time course. These outcomes offer promising prospects for the study of the visual representation of gravity and its neurological substrates.
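One plausible reading of the decomposition, sketched in Python on synthetic data (the paper's exact harmonic bookkeeping may differ, and all magnitudes below are hypothetical): the localization error at each motion direction is projected onto the motion axis, whose mean indexes Representational Momentum, while the direction-independent downward offset indexes Representational Gravity.

import numpy as np

np.random.seed(0)                                        # reproducible synthetic data
theta = np.deg2rad(np.arange(0, 360, 30))                # tested motion directions
# synthetic localization errors: 0.4 deg forward shift plus a 0.3 deg
# downward offset, with measurement noise (values are illustrative)
err_x = 0.4 * np.cos(theta) + 0.05 * np.random.randn(theta.size)
err_y = 0.4 * np.sin(theta) - 0.3 + 0.05 * np.random.randn(theta.size)

along = err_x * np.cos(theta) + err_y * np.sin(theta)    # error along the motion axis
rm = along.mean()     # direction-locked (first-harmonic) term: Representational Momentum
rg = -err_y.mean()    # direction-independent downward (DC) term: Representational Gravity
print(f"RM ~ {rm:.2f} deg forward, RG ~ {rg:.2f} deg downward")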
A vision fusion treatment system based on ATtiny26L
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqing; Zhang, Chunxi; Wang, Jiqiang
2006-11-01
Vision fusion treatment is an important and effective intervention for children with strabismus. We first present a vision fusion treatment system based on the principle that the eyes follow a moving visual survey pole. In this system, the visual survey pole starts about 35 centimeters from the patient's face and moves toward the midpoint between the two eyes. The patient's eyes follow the movement of the pole; when an eye can no longer follow, it deviates from the pole, and this displacement is recorded each time. A popular single-chip microcomputer, the ATtiny26L, is used in this system; its PWM output signal drives the visual survey pole at a continuously variable speed, so that the pole's movement follows the modulation law of the eyes' pursuit of the pole.
The Initiation of Smooth Pursuit is Delayed in Anisometropic Amblyopia.
Raashid, Rana Arham; Liu, Ivy Ziqian; Blakeman, Alan; Goltz, Herbert C; Wong, Agnes M F
2016-04-01
Several behavioral studies have shown that the reaction times of visually guided movements are slower in people with amblyopia, particularly during amblyopic eye viewing. Here, we tested the hypothesis that the initiation of smooth pursuit eye movements, which are responsible for accurately keeping moving objects on the fovea, is delayed in people with anisometropic amblyopia. Eleven participants with anisometropic amblyopia and 14 visually normal observers were asked to track a step-ramp target moving at ±15°/s horizontally as quickly and as accurately as possible. The experiment was conducted under three viewing conditions: amblyopic/nondominant eye, binocular, and fellow/dominant eye viewing. Outcome measures were smooth pursuit latency, open-loop gain, steady state gain, and catch-up saccade frequency. Participants with anisometropic amblyopia initiated smooth pursuit significantly slower during amblyopic eye viewing (206 ± 20 ms) than visually normal observers viewing with their nondominant eye (183 ± 17 ms, P = 0.002). However, mean pursuit latency in the anisometropic amblyopia group during binocular and monocular fellow eye viewing was comparable to the visually normal group. Mean open-loop gain, steady state gain, and catch-up saccade frequency were similar between the two groups, but participants with anisometropic amblyopia exhibited more variable steady state gain (P = 0.045). This study provides evidence of temporally delayed smooth pursuit initiation in anisometropic amblyopia. After initiation, the smooth pursuit velocity profile in anisometropic amblyopia participants is similar to visually normal controls. This finding differs from what has been observed previously in participants with strabismic amblyopia who exhibit reduced smooth pursuit velocity gains with more catch-up saccades.
Visual strategies underpinning the development of visual-motor expertise when hitting a ball.
Sarpeshkar, Vishnu; Abernethy, Bruce; Mann, David L
2017-10-01
It is well known that skilled batters in fast-ball sports do not align their gaze with the ball throughout ball-flight, but instead adopt a unique sequence of eye and head movements that contribute toward their skill. However, much of what we know about visual-motor behavior in hitting is based on studies that have employed case study designs, and/or used simplified tasks that fall short of replicating the spatiotemporal demands experienced in the natural environment. The aim of this study was to provide a comprehensive examination of the eye and head movement strategies that underpin the development of visual-motor expertise when intercepting a fast-moving target. Eye and head movements were examined in situ for 4 groups of cricket batters, who were crossed for playing level (elite or club) and age (U19 or adult), when hitting balls that followed either straight or curving ('swinging') trajectories. The results provide support for some widely cited markers of expertise in batting, while questioning the legitimacy of others. Swinging trajectories alter the visual-motor behavior of all batters, though in large part because of the uncertainty generated by the possibility of a variation in trajectory rather than any actual change in trajectory per se. Moreover, curving trajectories influence visual-motor behavior in a nonlinear fashion, with targets that curve away from the observer influencing behavior more than those that curve inward. The findings provide a more comprehensive understanding of the development of visual-motor expertise in interception. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Cortico-fugal output from visual cortex promotes plasticity of innate motor behaviour.
Liu, Bao-Hua; Huberman, Andrew D; Scanziani, Massimo
2016-10-20
The mammalian visual cortex massively innervates the brainstem, a phylogenetically older structure, via cortico-fugal axonal projections. Many cortico-fugal projections target brainstem nuclei that mediate innate motor behaviours, but the function of these projections remains poorly understood. A prime example of such behaviours is the optokinetic reflex (OKR), an innate eye movement mediated by the brainstem accessory optic system, that stabilizes images on the retina as the animal moves through the environment and is thus crucial for vision. The OKR is plastic, allowing the amplitude of this reflex to be adaptively adjusted relative to other oculomotor reflexes and thereby ensuring image stability throughout life. Although the plasticity of the OKR is thought to involve subcortical structures such as the cerebellum and vestibular nuclei, cortical lesions have suggested that the visual cortex might also be involved. Here we show that projections from the mouse visual cortex to the accessory optic system promote the adaptive plasticity of the OKR. OKR potentiation, a compensatory plastic increase in the amplitude of the OKR in response to vestibular impairment, is diminished by silencing visual cortex. Furthermore, targeted ablation of a sparse population of cortico-fugal neurons that specifically project to the accessory optic system severely impairs OKR potentiation. Finally, OKR potentiation results from an enhanced drive exerted by the visual cortex onto the accessory optic system. Thus, cortico-fugal projections to the brainstem enable the visual cortex, an area that has been principally studied for its sensory processing function, to plastically adapt the execution of innate motor behaviours.
A new measure for the assessment of visual awareness in individuals with tunnel vision.
AlSaqr, Ali M; Dickinson, Chris M
2017-01-01
Individuals with a restricted peripheral visual field or tunnel vision (TV) have problems moving about and avoiding obstacles. Some individuals adapt better than others and some use assistive optical aids, so measurement of the visual field alone is not sufficient to describe their performance. In the present study, we developed a new clinical test, the 'Assessment of Visual Awareness' (AVA), which can be used to measure detection of peripheral targets. The participants were 20 patients with TV due to retinitis pigmentosa (PTV) and 50 normally sighted participants with simulated tunnel vision (STV) using goggles. In the AVA test, detection times were measured while subjects searched for 24 individually presented, one-degree targets randomly positioned in a 60-degree noise background. Head and eye movements were allowed and the presentation time was unlimited. Test validity was investigated by correlating the detection times with the 'percentage of preferred walking speed' (PPWS) and the 'number of collisions' on an indoor mobility course. In both PTV and STV, detection times correlated significantly and negatively with the field of view, and significantly and positively with target location. In the STV group, detection time was significantly negatively correlated with the PPWS and significantly positively correlated with the collision score on the indoor mobility course; in the PTV group, the relationship was not statistically significant. No significant difference in STV performance was found when the test was repeated one to two weeks later. The proposed AVA test was sensitive to field of view and target location. The test is unique in design, quick, simple to deliver, and both repeatable and valid. It could be a valuable tool for testing different rehabilitation strategies in patients with TV. © 2016 Optometry Australia.
Burton, Brian G; Laughlin, Simon B
2003-11-01
Male houseflies use a sex-specific frontal eye region, the lovespot, to detect and pursue mates. We recorded the electrical responses of photoreceptors to optical stimuli that simulate the signals received by a male or female photoreceptor as a conspecific passes through its field of view. We analysed the ability of male and female frontal photoreceptors to code conspecifics over the range of speeds and distances encountered during pursuit, and reconstructed the neural images of these targets in photoreceptor arrays. A male's lovespot photoreceptor detects a conspecific at twice the distance of a female photoreceptor, largely through better optics. This detection distance greatly exceeds those reported in previous behavioural studies. Lovespot photoreceptors respond more strongly than female photoreceptors to targets tracked during pursuit, with amplitudes reaching 25 mV. The male photoreceptor also has a faster response, exhibits a unique preference for stimuli of 20-30 ms duration that selects for conspecifics and deblurs moving images with response transients. White-noise analysis substantially underestimates these improvements. We conclude that in the lovespot, both optics and phototransduction are specialised to enhance and deblur the neural images of moving targets, and propose that analogous mechanisms may sharpen the neural image still further as it is transferred to visual interneurones.
Rosenblatt, Steven David; Crane, Benjamin Thomas
2015-01-01
A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
Drew, Trafton; Boettcher, Sage E P; Wolfe, Jeremy M
2016-02-01
In "hybrid search" tasks, such as finding items on a grocery list, one must search the scene for targets while also searching the list in memory. How is the representation of a visual item compared with the representations of items in the memory set? Predominant theories would propose a role for visual working memory (VWM) either as the site of the comparison or as a conduit between visual and memory systems. In seven experiments, we loaded VWM in different ways and found little or no effect on hybrid search performance. However, the presence of a hybrid search task did reduce the measured capacity of VWM by a constant amount regardless of the size of the memory or visual sets. These data are broadly consistent with an account in which VWM must dedicate a fixed amount of its capacity to passing visual representations to long-term memory for comparison to the items in the memory set. The data cast doubt on models in which the search template resides in VWM or where memory set item representations are moved from LTM through VWM to earlier areas for comparison to visual items.
Graci, Valentina
2011-10-01
It has been previously suggested that coupled upper- and lower-limb movements require visuomotor coordination, but previous studies have not investigated the role that visual cues may play in coordinating locomotion and prehension. The aim of this study was to investigate whether lower peripheral visual cues provide online control of the coordination of locomotion and prehension, as they have been shown to do during adaptive gait and level walking. Twelve subjects reached for a semi-empty or a full glass with their dominant or non-dominant hand at gait termination. Two binocular visual conditions were investigated: normal vision and lower visual occlusion. Outcome measures were determined using 3D motion capture techniques. Results showed that although the subjects were able to successfully complete the task without spilling water from the glass under lower visual occlusion, they increased the margin of safety between final foot placement and the glass. These findings suggest that lower visual cues are mainly used online to fine-tune the trajectory of the upper and lower limbs moving toward the target. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.
2014-11-01
The objective of this multiple-subject functional magnetic resonance imaging (fMRI) study was to identify the common brain areas activated when viewing black-and-white checkerboard stimuli of various shapes, patterns and sizes, and to investigate the specific brain areas involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hour glass and bowtie, and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli had a black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used to generate brain activation maps. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, patterns and sizes activated multiple brain areas, mostly in the left hemisphere. Activation in the right middle temporal gyrus (MTG) was significantly higher for moving than for static visual stimuli. In contrast, activation in the left calcarine sulcus and left lingual gyrus was significantly higher for the static stimulus than for moving stimuli. Visual stimulation of various shapes, patterns and sizes used in this study indicated left-lateralized activation. The involvement of the right MTG in processing moving visual information was evident from the differential analysis, while the left calcarine sulcus and left lingual gyrus were the areas involved in processing the static visual stimulus.
Hybrid value foraging: How the value of targets shapes human foraging behavior.
Wolfe, Jeremy M; Cain, Matthew S; Alaoui-Soce, Abla
2018-04-01
In hybrid foraging, observers search visual displays for multiple instances of multiple target types. In previous hybrid foraging experiments, although there were multiple types of target, all instances of all targets had the same value. Under such conditions, behavior was well described by the marginal value theorem (MVT). Foragers left the current "patch" for the next patch when the instantaneous rate of collection dropped below their average rate of collection. An observer's specific target selections were shaped by previous target selections. Observers were biased toward picking another instance of the same target. In the present work, observers forage for instances of four target types whose value and prevalence can vary. If value is kept constant and prevalence manipulated, participants consistently show a preference for the most common targets. Patch-leaving behavior follows MVT. When value is manipulated, observers favor more valuable targets, though individual foraging strategies become more diverse, with some observers favoring the most valuable target types very strongly, sometimes moving to the next patch without collecting any of the less valuable targets.
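A minimal Python simulation of the MVT patch-leaving rule the authors invoke, with made-up numbers: the forager collects from a depleting patch and departs once its instantaneous collection rate falls to its long-run average rate.

def forage(initial_rate=1.0, decay=0.9, travel_time=5.0, n_patches=50):
    # Marginal value theorem sketch: leave the patch when the instantaneous
    # rate drops below the running average rate (items per unit time).
    total_items, total_time = 0.0, 0.0
    for _ in range(n_patches):
        total_time += travel_time                 # travel to the next patch
        rate = initial_rate
        while True:
            avg_rate = total_items / total_time if total_time > 0 else 0.0
            if rate <= avg_rate:                  # MVT patch-leaving condition
                break
            total_items += rate                   # collect for one time step
            total_time += 1.0
            rate *= decay                         # patch depletes as it is harvested
    return total_items / total_time

print(f"long-run rate: {forage():.3f} items per unit time")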
Fractional-order information in the visual control of lateral locomotor interception.
Bootsma, Reinoud J; Ledouit, Simon; Casanova, Remy; Zaal, Frank T J M
2016-04-01
Previous work on locomotor interception of a target moving in the transverse plane has suggested that interception is achieved by maintaining the target's bearing angle (often inadvertently confused and/or confounded with the target heading angle) at a constant value. However, dynamics-based model simulations testing the veracity of the underlying control strategy of nulling the rate of change in the bearing angle have been restricted to limited conditions of target motion, and only a few alternatives have been considered. Exploring a wide range of target motion characteristics with straight and curving ball trajectories in a virtual reality setting, we examined how soccer goalkeepers moved along the goal line to intercept long-range shots on goal, a situation in which interception is naturally constrained to movement along a single dimension. Analyses of the movement patterns suggested reliance on combinations of optical position and velocity for straight trajectories and optical velocity and acceleration for curving trajectories. As an alternative to combining such standard integer-order derivatives, we demonstrate with a simple dynamical model that nulling a single informational variable of a self-tuned fractional (rather than integer) order efficiently captures the timing and patterning of the observed interception behaviors. This new perspective could fundamentally change the conception of what perceptual systems may actually provide, both in humans and in other animals. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
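For readers unfamiliar with fractional-order signals, the Python sketch below (hypothetical data) computes a Grünwald-Letnikov estimate of the order-alpha derivative of a target position signal. Order 0 recovers position and order 1 velocity, so a fractional alpha blends the two; a single self-tuned variable of this kind is what the authors propose the observer nulls.

import numpy as np

def gl_fractional_derivative(x, alpha, dt):
    # Grünwald-Letnikov estimate: D^a x(t) ~ dt^-a * sum_k c_k x(t - k*dt),
    # with c_k = (-1)^k * binom(alpha, k) built by recurrence.
    n = len(x)
    coeffs = np.ones(n)
    for k in range(1, n):
        coeffs[k] = coeffs[k - 1] * (k - 1 - alpha) / k
    return np.array([coeffs[:i + 1] @ x[i::-1] for i in range(n)]) / dt ** alpha

t = np.arange(0.0, 2.0, 0.01)
pos = np.sin(2 * np.pi * t)                          # hypothetical lateral target position
d_half = gl_fractional_derivative(pos, 0.5, 0.01)    # order-0.5 position-velocity blend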
Target-responsive DNAzyme cross-linked hydrogel for visual quantitative detection of lead.
Huang, Yishun; Ma, Yanli; Chen, Yahong; Wu, Xuemeng; Fang, Luting; Zhu, Zhi; Yang, Chaoyong James
2014-11-18
Because of the severe health risks associated with lead pollution, rapid, sensitive, and portable detection of low levels of Pb(2+) in biological and environmental samples is of great importance. In this work, a Pb(2+)-responsive hydrogel was prepared using a DNAzyme and its substrate as cross-linker for rapid, sensitive, portable, and quantitative detection of Pb(2+). Gold nanoparticles (AuNPs) were first encapsulated in the hydrogel as an indicator for colorimetric analysis. In the absence of lead, the DNAzyme is inactive, and the substrate cross-linker maintains the hydrogel in the gel form. In contrast, the presence of lead activates the DNAzyme to cleave the substrate, decreasing the cross-linking density of the hydrogel and resulting in dissolution of the hydrogel and release of AuNPs for visual detection. As low as 10 nM Pb(2+) can be detected by the naked eye. Furthermore, to realize quantitative visual detection, a volumetric bar-chart chip (V-chip) was used for quantitative readout of the hydrogel system by replacing AuNPs with gold-platinum core-shell nanoparticles (Au@PtNPs). The Au@PtNPs released from the hydrogel upon target activation can efficiently catalyze the decomposition of H2O2 to generate a large volume of O2. The gas pressure moves an ink bar in the V-chip for portable visual quantitative detection of lead with a detection limit less than 5 nM. The device was able to detect lead in digested blood with excellent accuracy. The method developed can be used for portable lead quantitation in many applications. Furthermore, the method can be further extended to portable visual quantitative detection of a variety of targets by replacing the lead-responsive DNAzyme with other DNAzymes.
When up is down in 0g: how gravity sensing affects the timing of interceptive actions.
Senot, Patrice; Zago, Myrka; Le Séac'h, Anne; Zaoui, Mohammed; Berthoz, Alain; Lacquaniti, Francesco; McIntyre, Joseph
2012-02-08
Humans are known to regulate the timing of interceptive actions by modeling, in a simplified way, Newtonian mechanics. Specifically, when intercepting an approaching ball, humans trigger their movements a bit earlier when the target arrives from above than from below. This bias occurs regardless of the ball's true kinetics, and thus appears to reflect an a priori expectation that a downward moving object will accelerate. We postulate that gravito-inertial information is used to tune visuomotor responses to match the target's most likely acceleration. Here we used the peculiar conditions of parabolic flight--where gravity's effects change every 20 s--to test this hypothesis. We found a striking reversal in the timing of interceptive responses performed in weightlessness compared with trials performed on ground, indicating a role of gravity sensing in the tuning of this response. Parallels between these observations and the properties of otolith receptors suggest that vestibular signals themselves might plausibly provide the critical input. Thus, in addition to its acknowledged importance for postural control, gaze stabilization, and spatial navigation, we propose that detecting the direction of gravity's pull plays a role in coordinating quick reactions intended to intercept a fast-moving visual target.
Visual Detection and Tracking System for a Spherical Amphibious Robot
Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun
2017-01-01
With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation. PMID:28420134
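A hedged Python/OpenCV sketch of the detection-plus-prediction stages described above, using off-the-shelf stand-ins: MOG2 plays the role of the Gaussian mixture detector and a constant-velocity Kalman filter supplies the prediction step. OpenCV ships no fast compressive tracker, so that stage (and the retinex enhancement) is omitted, and the input filename is hypothetical.

import cv2
import numpy as np

mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
kf = cv2.KalmanFilter(4, 2)                       # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

cap = cv2.VideoCapture("underwater.avi")          # hypothetical input clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = mog.apply(frame)                       # GMM foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    prediction = kf.predict()                     # tracker would search near this point
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))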
Moving Target Techniques: Cyber Resilience through Randomization, Diversity, and Dynamism
2017-03-03
Moving Target Techniques: Cyber Resilience through Randomization, Diversity, and Dynamism. Hamed Okhravi and Howard Shrobe. Overview: The static nature of computer systems makes them vulnerable to cyber attacks. Consider a situation where an attacker wants to compromise a remote system running... An approach to cyber resilience that attempts to rebalance the cyber landscape is known as cyber moving target (MT) (or just moving target) techniques. Moving target...
Multisensory Motion Perception in 3–4 Month-Old Infants
Nava, Elena; Grassi, Massimo; Brenna, Viola; Croci, Emanuela; Turati, Chiara
2017-01-01
Human infants begin very early in life to take advantage of multisensory information by extracting the invariant amodal information that is conveyed redundantly by multiple senses. Here we addressed whether infants can bind multisensory moving stimuli, and whether this occurs even if the motion produced by the stimuli is only illusory. Three- to 4-month-old infants were presented with two bimodal pairings: visuo-tactile and audio-visual. Visuo-tactile pairings consisted of apparently vertically moving bars (the Barber Pole illusion) moving in either the same or opposite direction as a concurrent tactile stimulus consisting of strokes given on the infant's back. Audio-visual pairings consisted of the Barber Pole illusion in its visual and auditory versions, the latter giving the impression of a continuously rising or descending pitch. We found that infants were able to discriminate congruently (same direction) vs. incongruently (opposite direction) moving pairs irrespective of modality (Experiment 1). Importantly, we also found that congruently moving visuo-tactile and audio-visual stimuli were preferred over incongruently moving bimodal stimuli (Experiment 2). Our findings suggest that very young infants are able to extract motion as an amodal component and use it to match stimuli that only apparently move in the same direction. PMID:29187829
Construction and testing of a Scanning Laser Radar (SLR), phase 2
NASA Technical Reports Server (NTRS)
Flom, T.; Coombes, H. D.
1971-01-01
The scanning laser radar overall system is described. Block diagrams and photographs of the hardware are included with the system description. Detailed descriptions of all the subsystems that make up the scanning laser radar system are included. Block diagrams, photographs, and detailed optical and electronic schematics are used to help describe such subsystem hardware as the laser, beam steerer, receiver optics and detector, control and processing electronics, visual data displays, and the equipment used on the target. Tests were performed on the scanning laser radar to determine its acquisition and tracking performance and to determine its range and angle accuracies while tracking a moving target. The tests and test results are described.
Proposals of observations with the space telescope in the domain of astrometry
NASA Astrophysics Data System (ADS)
Fresneau, A.
The use of the Hubble Space Telescope for astrometry is promoted at the same level as photometry, spectroscopy, and polarimetry. The prime instrument for that purpose is one of the three fine guidance sensors. The interferometric design of the stellar sensor is adequate for stellar diameter measurements (>0.01 arcsec), determination of close-binary separations (<0.1 arcsec), and differential astrometry on targets in a field of view of 60 square arcmin, in the visual magnitude range from 3 to 18. Moving targets brighter than magnitude 14 with an apparent motion slower than 150 arcsec per hour can be tracked at the same level of accuracy.
Visualizing Special Relativity: The Field of An Electric Dipole Moving at Relativistic Speed
ERIC Educational Resources Information Center
Smith, Glenn S.
2011-01-01
The electromagnetic field is determined for a time-varying electric dipole moving with a constant velocity that is parallel to its moment. Graphics are used to visualize this field in the rest frame of the dipole and in the laboratory frame when the dipole is moving at relativistic speed. Various phenomena from special relativity are clearly…
Development of kinesthetic-motor and auditory-motor representations in school-aged children.
Kagerer, Florian A; Clark, Jane E
2015-07-01
In two experiments using a center-out task, we investigated kinesthetic-motor and auditory-motor integrations in 5- to 12-year-old children and young adults. In experiment 1, participants moved a pen on a digitizing tablet from a starting position to one of three targets (visuo-motor condition), and then to one of four targets without visual feedback of the movement. In both conditions, we found that with increasing age, the children moved faster and straighter, and became less variable in their feedforward control. Higher control demands for movements toward the contralateral side were reflected in longer movement times and decreased spatial accuracy across all age groups. When feedforward control relies predominantly on kinesthesia, 7- to 10-year-old children were more variable, indicating difficulties in switching between feedforward and feedback control efficiently during that age. An inverse age progression was found for directional endpoint error; larger errors increasing with age likely reflect stronger functional lateralization for the dominant hand. In experiment 2, the same visuo-motor condition was followed by an auditory-motor condition in which participants had to move to acoustic targets (either white band or one-third octave noise). Since in the latter directional cues come exclusively from transcallosally mediated interaural time differences, we hypothesized that auditory-motor representations would show age effects. The results did not show a clear age effect, suggesting that corpus callosum functionality is sufficient in children to allow them to form accurate auditory-motor maps already at a young age.
Carlini, Alessandro; Actis-Grosso, Rossana; Stucchi, Natale; Pozzo, Thierry
2012-01-01
Our daily experience shows that the CNS is a highly efficient machine for predicting the effect of actions into the future; are we as efficient in reconstructing the past of an action? Previous studies demonstrated that we are more effective in extrapolating the final position of a stimulus moving according to biological kinematic laws. Here we address the complementary question: are we also more effective in extrapolating the starting position (SP) of a motion that follows a biological velocity profile? We presented a dot moving upward, corresponding to vertical arm movements, that was masked in the first part of its trajectory. The stimulus could move according to either biological or non-biological kinematic laws of motion. Results show better efficacy in reconstructing the SP of natural motion: participants coherently reconstructed the SP only in the biological condition. When the motion violated the biological kinematic law, responses were scattered and showed a tendency toward larger errors. By contrast, in a control experiment where the full motions were displayed, no difference between biological and non-biological motions was found. Results are discussed in light of potential mechanisms involved in visual inference. We propose that as soon as the target appears, the cortical motor areas generate an internal representation of a reaching movement. When the visual input and the stored kinematic template match, the SP is traced back on the basis of this memory template, making the SP reconstruction more effective. PMID:22712012
Heterogeneous CPU-GPU moving targets detection for UAV video
NASA Astrophysics Data System (ADS)
Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan
2017-07-01
Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. The pixels belonging to moving targets in HD video taken by a UAV are always in the minority, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of the algorithm prevents running it at the full resolution of the frame. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough for real-time use.
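A CPU-only Python/OpenCV sketch of the two stages named above, background registration followed by frame differencing (the paper's heterogeneous CPU-GPU implementation and its exact parameters are not reproduced): camera motion between frames is modeled by a feature-based homography, the previous frame is warped into register, and the residual difference exposes small moving targets.

import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def moving_target_mask(prev_gray, cur_gray, thresh=25):
    # Register the previous frame to the current one to cancel camera motion.
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # background motion model
    h, w = cur_gray.shape
    warped = cv2.warpPerspective(prev_gray, H, (w, h))     # registered previous frame
    diff = cv2.absdiff(cur_gray, warped)                   # residual = moving targets
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask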
2D/3D Visual Tracker for Rover Mast
NASA Technical Reports Server (NTRS)
Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria
2006-01-01
A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match. The program could be a core for building application programs for systems that require coordination of vision and robotic motion.
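A short Python/OpenCV sketch of the depth-scaled template-matching step described at the end of the abstract; the function and variable names are illustrative, and the full system repeats this over the search window using the depth at each candidate pixel.

import cv2
import numpy as np

def depth_scaled_match(image, template, template_depth, candidate_depth):
    # Rescale the stored template by the ratio of its original depth to the
    # candidate depth (a nearer target appears larger), then run normalized
    # cross-correlation and report the best match.
    scale = template_depth / candidate_depth
    h = max(1, int(round(template.shape[0] * scale)))
    w = max(1, int(round(template.shape[1] * scale)))
    scaled = cv2.resize(template, (w, h))
    result = cv2.matchTemplate(image, scaled, cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(result)        # peak correlation and its location
    return best, loc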
Miall, R Chris; Kitchen, Nick M; Nam, Se-Ho; Lefumat, Hannah; Renault, Alix G; Ørstavik, Kristin; Cole, Jonathan D; Sarlegna, Fabrice R
2018-05-19
It is uncertain how vision and proprioception contribute to adaptation of voluntary arm movements. In normal participants, adaptation to imposed forces is possible with or without vision, suggesting that proprioception is sufficient; in participants with proprioceptive loss (PL), adaptation is possible with visual feedback, suggesting that proprioception is unnecessary. In experiment 1 adaptation to, and retention of, perturbing forces were evaluated in three chronically deafferented participants. They made rapid reaching movements to move a cursor toward a visual target, and a planar robot arm applied orthogonal velocity-dependent forces. Trial-by-trial error correction was observed in all participants. Such adaptation has been characterized with a dual-rate model: a fast process that learns quickly, but retains poorly and a slow process that learns slowly and retains well. Experiment 2 showed that the PL participants had large individual differences in learning and retention rates compared to normal controls. Experiment 3 tested participants' perception of applied forces. With visual feedback, the PL participants could report the perturbation's direction as well as controls; without visual feedback, thresholds were elevated. Experiment 4 showed, in healthy participants, that force direction could be estimated from head motion, at levels close to the no-vision threshold for the PL participants. Our results show that proprioceptive loss influences perception, motor control and adaptation but that proprioception from the moving limb is not essential for adaptation to, or detection of, force fields. The differences in learning and retention seen between the three deafferented participants suggest that they achieve these tasks in idiosyncratic ways after proprioceptive loss, possibly integrating visual and vestibular information with individual cognitive strategies.
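The dual-rate model referenced here is usually written as two adaptive states driven by the same trial error, one with a high learning rate but poor retention and one with the opposite profile; a minimal Python simulation with illustrative (not fitted) parameters is sketched below.

A_fast, B_fast = 0.80, 0.20   # fast process: low retention, high learning rate
A_slow, B_slow = 0.99, 0.02   # slow process: high retention, low learning rate

x_fast = x_slow = 0.0
perturbation = 1.0            # normalized force-field strength
for trial in range(200):      # adaptation phase
    error = perturbation - (x_fast + x_slow)   # movement error on this trial
    x_fast = A_fast * x_fast + B_fast * error
    x_slow = A_slow * x_slow + B_slow * error
for trial in range(50):       # break with no error feedback: states decay
    x_fast *= A_fast          # fast component decays quickly
    x_slow *= A_slow          # slow component persists, producing retention
print(f"retained adaptation after the break: {x_fast + x_slow:.3f}")

Large individual differences of the kind reported in Experiment 2 would correspond to different participants being best fitted by quite different (A, B) pairs for the two processes.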
Motor effects from visually induced disorientation in man.
DOT National Transportation Integrated Search
1969-11-01
The problem of disorientation in a moving optical environment was examined. Egocentric disorientation can be experienced by a pilot if the entire visual environment moves relative to his body without a clue as to the objective position of the airplane in respect to the ground.
Ahrens, Merle-Marie; Veniero, Domenica; Gross, Joachim; Harvey, Monika; Thut, Gregor
2015-01-01
Many behaviourally relevant sensory events such as motion stimuli and speech have an intrinsic spatio-temporal structure. This will engage intentional and most likely unintentional (automatic) prediction mechanisms enhancing the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes that are automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli either moved towards or away from peripheral target positions (valid vs. invalid spatial motion cueing) at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing). Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of upcoming target position (by design) and task-irrelevant (by instruction), and by creating instead endogenous (orthogonal) expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralisation of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams. This contrasts with previous findings that instead suggest an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need for isolating intentional from unintentional processes for better understanding the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with predictable spatio-temporal structure such as motion and speech. PMID:26623650
Contribution of the cerebellar flocculus to gaze control during active head movements
NASA Technical Reports Server (NTRS)
Belton, T.; McCrea, R. A.; Peterson, B. W. (Principal Investigator)
1999-01-01
The flocculus and ventral paraflocculus are adjacent regions of the cerebellar cortex that are essential for controlling smooth pursuit eye movements and for altering the performance of the vestibulo-ocular reflex (VOR). The question addressed in this study is whether these regions of the cerebellum are more globally involved in controlling gaze, regardless of whether eye or active head movements are used to pursue moving visual targets. Single-unit recordings were obtained from Purkinje (Pk) cells in the floccular region of squirrel monkeys that were trained to fixate and pursue small visual targets. Cell firing rate was recorded during smooth pursuit eye movements, cancellation of the VOR, combined eye-head pursuit, and spontaneous gaze shifts in the absence of targets. Pk cells were found to be much less sensitive to gaze velocity during combined eye-head pursuit than during ocular pursuit. They were not sensitive to gaze or head velocity during gaze saccades. Temporary inactivation of the floccular region by muscimol injection compromised ocular pursuit but had little effect on the ability of monkeys to pursue visual targets with head movements or to cancel the VOR during active head movements. Thus the signals produced by Pk cells in the floccular region are necessary for controlling smooth pursuit eye movements but not for coordinating gaze during active head movements. The results imply that individual functional modules in the cerebellar cortex are less involved in the global organization and coordination of movements than in the parametric control of movements produced by a specific part of the body.
Eye tracking a self-moved target with complex hand-target dynamics
Landelle, Caroline; Montagnini, Anna; Madelain, Laurent
2016-01-01
Previous work has shown that the ability to track with the eye a moving target is substantially improved when the target is self-moved by the subject's hand compared with when being externally moved. Here, we explored a situation in which the mapping between hand movement and target motion was perturbed by simulating an elastic relationship between the hand and target. Our objective was to determine whether the predictive mechanisms driving eye-hand coordination could be updated to accommodate this complex hand-target dynamics. To fully appreciate the behavioral effects of this perturbation, we compared eye tracking performance when self-moving a target with a rigid mapping (simple) and a spring mapping as well as when the subject tracked target trajectories that he/she had previously generated when using the rigid or spring mapping. Concerning the rigid mapping, our results confirmed that smooth pursuit was more accurate when the target was self-moved than externally moved. In contrast, with the spring mapping, eye tracking had initially similar low spatial accuracy (though shorter temporal lag) in the self versus externally moved conditions. However, within ∼5 min of practice, smooth pursuit improved in the self-moved spring condition, up to a level similar to the self-moved rigid condition. Subsequently, when the mapping unexpectedly switched from spring to rigid, the eye initially followed the expected target trajectory and not the real one, thereby suggesting that subjects used an internal representation of the new hand-target dynamics. Overall, these results emphasize the stunning adaptability of smooth pursuit when self-maneuvering objects with complex dynamics. PMID:27466129
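To make the perturbation concrete: an elastic hand-target link of the kind described can be simulated as a damped spring pulling the target toward the hand. The sketch below assumes arbitrary stiffness, damping, and mass values, not the study's actual mapping parameters.

```python
import numpy as np

def simulate_spring_target(hand_pos, dt=0.01, k=40.0, c=4.0, m=1.0):
    """Integrate target motion for a simulated elastic hand-target link.

    The target behaves as a damped mass pulled toward the hand by a spring;
    k, c, and m are illustrative values, not those used in the study.
    """
    x, v = hand_pos[0], 0.0
    target = []
    for h in hand_pos:
        a = (k * (h - x) - c * v) / m   # spring pull plus damping
        v += a * dt
        x += v * dt
        target.append(x)
    return np.array(target)

t = np.arange(0.0, 5.0, 0.01)
hand = np.sin(2 * np.pi * 0.5 * t)      # 0.5 Hz sinusoidal hand movement
target = simulate_spring_target(hand)   # lags and overshoots the hand
```

Because the target lags and overshoots the hand, accurate pursuit requires an internal model of the spring dynamics rather than a copy of the hand motor command alone, which is what the adaptation and after-effect results above suggest subjects acquired.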
Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion
Fajen, Brett R.; Matthis, Jonathan S.
2013-01-01
Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
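Expressed in vector terms, "factoring out" self-motion amounts to subtracting the self-motion component of flow from the measured retinal motion of the object; a minimal sketch with invented numbers follows.

```python
import numpy as np

# A toy decomposition at a single retinal location: the locally measured
# optic flow is the sum of a self-motion component and an object-motion
# component, so the latter is recovered by subtraction. The vectors below
# are invented (deg/s); real models must estimate the self-motion flow
# from heading, speed, and scene depth.
measured_flow = np.array([1.2, -0.3])       # image motion of the object
self_motion_flow = np.array([0.9, -0.3])    # flow expected from self-motion alone
object_motion = measured_flow - self_motion_flow   # world-relative component
```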
Revisiting Huey: on the importance of the upper part of words during reading.
Perea, Manuel
2012-12-01
Recent research has shown that the upper part of words enjoys an advantage over the lower part of words in the recognition of isolated words. The goal of the present article was to examine how removing the upper/lower part of the words influences eye movement control during silent normal reading. The participants' eye movements were monitored when reading intact sentences and when reading sentences in which the upper or the lower portion of the text was deleted. Results showed a greater reading cost (longer fixations) when the upper part of the text was removed than when the lower part of the text was removed (i.e., it influenced when to move the eyes). However, there was little influence on the initial landing position on a target word (i.e., on the decision as to where to move the eyes). In addition, lexical-processing difficulty (as inferred from the magnitude of the word frequency effect on a target word) was affected by text degradation. The implications of these findings for models of visual-word recognition and reading are discussed.
Sensitivity of vergence responses of 5- to 10-week-old human infants
Seemiller, Eric S.; Wang, Jingyun; Candy, T. Rowan
2016-01-01
Infants have been shown to make vergence eye movements by 1 month of age to stimulation with prisms or targets moving in depth. However, little is currently understood about the threshold sensitivity of the maturing visual system to such stimulation. In this study, 5- to 10-week-old human infants and adults viewed a target moving in depth as a triangle wave of three amplitudes (1.0, 0.5, and 0.25 meter angles). Their horizontal eye position and the refractive state of both eyes were measured simultaneously. The vergence responses of the infants and adults varied at the same frequency as the stimulus at the three tested modulation amplitudes. For a typical infant of this age, the smallest amplitude is equivalent to an interocular change of approximately 2° of retinal disparity, from nearest to farthest points. The infants' accommodation responses only modulated reliably to the largest stimulus, while adults responded to all three amplitudes. Although the accommodative system appears relatively insensitive, the sensitivity of the vergence responses suggests that subtle cues are available to drive vergence in the second month after birth. PMID:26891827
Moving target parameter estimation of SAR after two looks cancellation
NASA Astrophysics Data System (ADS)
Gan, Rongbing; Wang, Jianguo; Gao, Xiang
2005-11-01
Moving target detection in synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are obtained from the first and second halves of the synthetic aperture. After two-look cancellation, moving targets are preserved while stationary targets are removed. A Constant False Alarm Rate (CFAR) detector then detects the moving targets. The ground-range velocity and cross-range velocity of a moving target can be estimated from the position shift between the two looks. We developed a method to estimate the cross-range shift due to slant-range motion, based on the Doppler frequency center (DFC), which is estimated with the Wigner-Ville Distribution (WVD). Because the range position and cross-range position before correction are known, estimation of the DFC is much easier and more efficient. Finally, experimental results show that the algorithms perform well and estimate moving-target parameters accurately.
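The cancellation step itself is simple to sketch: two co-registered complex sub-aperture looks are differenced so that stationary scatterers cancel and movers leave a residue. The fixed threshold below stands in for the CFAR detector, and all values are illustrative.

```python
import numpy as np

def two_look_cancellation(look1, look2, threshold):
    """Suppress stationary clutter by differencing two co-registered SAR looks.

    look1, look2: complex sub-aperture images formed from the first and second
    halves of the synthetic aperture. Stationary scatterers are (ideally)
    identical in both looks and cancel; movers leave a residue. The constant
    threshold is a crude stand-in for the CFAR detector in the abstract.
    """
    difference = np.abs(look1 - look2)      # stationary scene cancels
    detections = difference > threshold     # stand-in for CFAR detection
    return difference, detections

rng = np.random.default_rng(0)
scene = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
look1 = scene.copy()
look2 = scene.copy()
look2[30, 20] += 5.0      # a mover changes/shifts between the two looks
_, hits = two_look_cancellation(look1, look2, threshold=3.0)
```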
Reitsamer, H; Groiss, H P; Franz, M; Pflug, R
2000-01-31
We present a computer-guided microelectrode positioning system that is routinely used in our laboratory for intracellular electrophysiology and functional staining of retinal neurons. Wholemount preparations of isolated retina are kept in a superfusion chamber on the stage of an inverted microscope. Cells and layers of the retina are visualized by Nomarski interference contrast using infrared light in combination with a CCD camera system. After five-point calibration has been performed the electrode can be guided to any point inside the calibrated volume without moving the retina. Electrode deviations from target cells can be corrected by the software further improving the precision of this system. The good visibility of cells avoids prelabeling with fluorescent dyes and makes it possible to work under completely dark adapted conditions.
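A generic way to implement such a multi-point calibration is to fit a 3-D affine transform between manipulator and microscope coordinates by linear least squares; the sketch below shows that general idea under stated assumptions, not the authors' published procedure.

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Fit a 3-D affine transform dst ~ [src, 1] @ M from point correspondences.

    Five well-spread, non-coplanar calibration points over-determine the 12
    affine unknowns, so the fit is solved by linear least squares. This is a
    generic sketch of such a calibration, not the authors' exact method.
    """
    src = np.asarray(src, float)                   # (n, 3) manipulator coords
    dst = np.asarray(dst, float)                   # (n, 3) microscope coords
    X = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (4, 3) affine parameters
    return M

def apply_affine(M, p):
    return np.append(np.asarray(p, float), 1.0) @ M

# After calibration, any target inside the calibrated volume can be reached:
# M = fit_affine_3d(manipulator_points, microscope_points)
# electrode_goal = apply_affine(M, cell_position)
```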
Robotic Automation of In Vivo Two-Photon Targeted Whole-Cell Patch-Clamp Electrophysiology.
Annecchino, Luca A; Morris, Alexander R; Copeland, Caroline S; Agabi, Oshiorenoya E; Chadderton, Paul; Schultz, Simon R
2017-08-30
Whole-cell patch-clamp electrophysiological recording is a powerful technique for studying cellular function. While in vivo patch-clamp recording has recently benefited from automation, it is normally performed "blind," meaning that throughput for sampling some genetically or morphologically defined cell types is unacceptably low. One solution to this problem is to use two-photon microscopy to target fluorescently labeled neurons. Combining this with robotic automation is difficult, however, as micropipette penetration induces tissue deformation, moving target cells from their initial location. Here we describe a platform for automated two-photon targeted patch-clamp recording, which solves this problem by making use of a closed loop visual servo algorithm. Our system keeps the target cell in focus while iteratively adjusting the pipette approach trajectory to compensate for tissue motion. We demonstrate platform validation with patch-clamp recordings from a variety of cells in the mouse neocortex and cerebellum. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
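The closed-loop correction can be schematized as feeding the imaged cell's displacement back into the pipette goal on every iteration; the gain and coordinates below are hypothetical placeholders, not the published control law.

```python
import numpy as np

def servo_step(pipette_goal, cell_now, cell_initial, gain=0.5):
    """One iteration of a closed-loop visual servo correction.

    The measured displacement of the imaged target cell from its initial
    location is fed back to shift the pipette approach trajectory. A gain
    below 1 damps the correction; all values here are illustrative only.
    """
    tissue_drift = cell_now - cell_initial        # motion induced by penetration
    return pipette_goal + gain * tissue_drift     # re-aim at the moved cell

goal = np.array([120.0, 85.0, 40.0])     # initial pipette goal (um, hypothetical)
cell0 = np.array([120.0, 85.0, 40.0])
cell1 = np.array([123.0, 84.0, 42.0])    # cell position after tissue deformation
goal = servo_step(goal, cell1, cell0)    # repeated each imaging cycle
```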
Influence of moving visual environment on sit-to-stand kinematics in children and adults.
Slaboda, Jill C; Barton, Joseph E; Keshner, Emily A
2009-08-01
The effect of visual field motion on the sit-to-stand kinematics of adults and children was investigated. Children (8 to 12 years of age) and adults (21 to 49 years of age) were seated in a virtual environment that rotated in the pitch and roll directions. Participants stood up either (1) concurrent with onset of visual motion or (2) after an immersion period in the moving visual environment, and (3) without visual input. Angular velocities of the head with respect to the trunk, and of the trunk with respect to the environment, were calculated, as were the head and trunk centers of mass. Both adults and children reduced head and trunk angular velocity after immersion in the moving visual environment. Unlike adults, children demonstrated significant differences in displacement of the head center of mass during the immersion and concurrent trials when compared to trials without visual input. Results suggest a time-dependent effect of vision on sit-to-stand kinematics in adults, whereas children are influenced by the immediate presence or absence of vision.
ERIC Educational Resources Information Center
Smorenburg, Ana R. P.; Ledebt, Annick; Deconinck, Frederik J. A.; Savelsbergh, Geert J. P.
2011-01-01
This study examined the active joint-position sense in children with Spastic Hemiparetic Cerebral Palsy (SHCP) and the effect of static visual feedback and static mirror visual feedback, of the non-moving limb, on the joint-position sense. Participants were asked to match the position of one upper limb with that of the contralateral limb. The task…
NASA Technical Reports Server (NTRS)
Liao, Min-Ju; Johnson, Walter W.
2004-01-01
The present study investigated the effects of droplines on target acquisition performance on a 3-D perspective display in which participants were required to move a cursor into a target cube as quickly as possible. Participants' performance and coordination strategies were characterized using both Fitts' law and acquisition patterns of the 3 viewer-centered target display dimensions (azimuth, elevation, and range). Participants' movement trajectories were recorded and used to determine movement times for acquisitions of the entire target and of each of its display dimensions. The goodness of fit of the data to a modified Fitts function varied widely among participants, and the presence of droplines did not have observable impacts on the goodness of fit. However, droplines helped participants navigate via straighter paths and particularly benefited range dimension acquisition. A general preference for visually overlapping the target with the cursor prior to capturing the target was found. Potential applications of this research include the design of interactive 3-D perspective displays in which fast and accurate selection and manipulation of content residing at multiple ranges may be a challenge.
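For reference, Fitts' law predicts movement time from an index of difficulty, MT = a + b·log2(2D/W); below is a minimal fit of the two coefficients on invented data (the study itself used a modified form of this function).

```python
import numpy as np

# Fitts' law: MT = a + b * ID, with index of difficulty ID = log2(2D / W).
# The (distance, width, time) data below are made up for illustration.
D = np.array([0.10, 0.20, 0.40, 0.40])   # cursor-to-target distances (m)
W = np.array([0.02, 0.02, 0.02, 0.04])   # target widths (m)
MT = np.array([0.45, 0.58, 0.74, 0.61])  # observed movement times (s)

ID = np.log2(2 * D / W)                  # index of difficulty (bits)
b, a = np.polyfit(ID, MT, 1)             # slope and intercept of the law
r = np.corrcoef(ID, MT)[0, 1]            # goodness of fit, as assessed above
```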
Neural Extrapolation of Motion for a Ball Rolling Down an Inclined Plane
La Scaleia, Barbara; Lacquaniti, Francesco; Zago, Myrka
2014-01-01
It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion. PMID:24940874
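The kinematics participants had to extrapolate follow from standard rigid-body mechanics (this derivation is textbook physics, not part of the paper): for a uniform solid sphere rolling without slipping down a slope of angle θ,

```latex
a \;=\; \frac{g\sin\theta}{1 + I/(m r^{2})}
  \;=\; \frac{g\sin\theta}{1 + \tfrac{2}{5}}
  \;=\; \frac{5}{7}\, g\sin\theta,
\qquad v(t) = a\,t, \qquad s(t) = \tfrac{1}{2}\, a\, t^{2}.
```

For the three slopes used (30°, 45°, 60°), this gives accelerations of roughly 3.5, 5.0, and 6.1 m/s², which is the family of motions the interception and drawing responses had to extrapolate.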
Medendorp, W. P.
2015-01-01
It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms. PMID:26490289
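Reliability-weighted combination of the kind described is usually modeled as inverse-variance (maximum-likelihood) weighting of independent Gaussian estimates; the following is a generic sketch of that computation, not the authors' actual model code.

```python
def integrate_estimates(x_eye, var_eye, x_body, var_body):
    """Combine eye- and body-centered updated estimates of target location.

    Weights are inversely proportional to each representation's variance,
    the statistically optimal combination for independent Gaussian
    estimates. A generic sketch of such an optimal-integration model.
    """
    w_eye = (1 / var_eye) / (1 / var_eye + 1 / var_body)
    x_hat = w_eye * x_eye + (1 - w_eye) * x_body
    var_hat = 1 / (1 / var_eye + 1 / var_body)   # never worse than either alone
    return x_hat, var_hat

# Hypothetical values: a noisier eye-centered estimate pulls less weight.
x_hat, var_hat = integrate_estimates(x_eye=2.1, var_eye=0.5,
                                     x_body=1.6, var_body=1.0)
```

The reduced combined variance (`var_hat`) is exactly why keeping both representations in sync yields more precise localization than any single-frame updating scheme.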
Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2016-05-01
Motion video analysis is a challenging task, particularly if real-time analysis is required. An important issue is therefore how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive, and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch tracking in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.
Weighted feature selection criteria for visual servoing of a telerobot
NASA Technical Reports Server (NTRS)
Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.
1989-01-01
Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.
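A weighted criteria function of this kind can be sketched as a scored search over feature combinations; the feature names, per-feature recognition and control scores, and weights below are invented placeholders for the criteria defined in the paper.

```python
from itertools import combinations

def select_features(features, w_recognition=0.5, w_control=0.5, k=3):
    """Rank k-feature combinations by a weighted criteria function.

    Each candidate image feature carries a recognition score (how reliably
    it can be extracted) and a control score (how well it constrains the
    relative pose). Scores and weights are illustrative stand-ins for the
    image-recognition and control components described in the abstract.
    """
    def score(combo):
        rec = sum(f["recognition"] for f in combo) / len(combo)
        ctl = sum(f["control"] for f in combo) / len(combo)
        return w_recognition * rec + w_control * ctl

    return max(combinations(features, k), key=score)

features = [   # hypothetical features on a gasket-like object
    {"name": "hole_1", "recognition": 0.9, "control": 0.4},
    {"name": "hole_2", "recognition": 0.7, "control": 0.8},
    {"name": "edge_a", "recognition": 0.5, "control": 0.9},
    {"name": "corner", "recognition": 0.8, "control": 0.6},
]
best = select_features(features, k=3)
```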
de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.
2016-01-01
The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target action and direction discrimination specific visual processes. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633
Detection of Moving Targets Using Soliton Resonance Effect
NASA Technical Reports Server (NTRS)
Kulikov, Igor K.; Zak, Michail
2013-01-01
The objective of this research was to develop a fundamentally new method for detecting hidden moving targets within noisy and cluttered data-streams using a novel "soliton resonance" effect in nonlinear dynamical systems. The technique uses an inhomogeneous Korteweg de Vries (KdV) equation containing moving-target information. Solution of the KdV equation will describe a soliton propagating with the same kinematic characteristics as the target. The approach uses the time-dependent data stream obtained with a sensor in the form of the "forcing function," which is incorporated in an inhomogeneous KdV equation. When a hidden moving target (which in many ways resembles a soliton) encounters the natural "probe" soliton solution of the KdV equation, a strong resonance phenomenon results that makes the location and motion of the target apparent. The soliton resonance method amplifies the moving-target signal while suppressing the noise. The method will be a very effective tool for locating and identifying diverse, highly dynamic targets with ill-defined characteristics in a noisy environment. The soliton resonance method for the detection of moving targets was developed in one and two dimensions. Computer simulations proved that the method could be used for detection of single point-like targets moving with constant velocities and accelerations in 1D and along straight lines or curved trajectories in 2D. The method also allows estimation of the kinematic characteristics of moving targets, and reconstruction of target trajectories in 2D. The method could be very effective for target detection in the presence of clutter and for the case of target obscurations.
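In standard notation (coefficient conventions vary), the forced KdV equation and the "probe" soliton of its homogeneous form look as follows; this is a sketch of the general mathematical setup, not the authors' exact formulation:

```latex
u_t + 6\,u\,u_x + u_{xxx} = F(x,t), \qquad
u_{\text{probe}}(x,t) \;=\; \frac{c}{2}\,
  \operatorname{sech}^{2}\!\left[\frac{\sqrt{c}}{2}\,(x - c\,t)\right],
```

where the sensor data stream enters as the forcing term F(x, t), and resonance occurs when the forcing moves with kinematics matching a soliton of speed c.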
When hawks attack: animal-borne video studies of goshawk pursuit and prey-evasion strategies
Kane, Suzanne Amador; Fulton, Andrew H.; Rosenthal, Lee J.
2015-01-01
Video filmed by a camera mounted on the head of a Northern Goshawk (Accipiter gentilis) was used to study how the raptor used visual guidance to pursue prey and land on perches. A combination of novel image analysis methods and numerical simulations of mathematical pursuit models was used to determine the goshawk's pursuit strategy. The goshawk flew to intercept targets by fixing the prey at a constant visual angle, using classical pursuit for stationary prey, lures or perches, and usually using constant absolute target direction (CATD) for moving prey. Visual fixation was better maintained along the horizontal than vertical direction. In some cases, we observed oscillations in the visual fix on the prey, suggesting that the goshawk used finite-feedback steering. Video filmed from the ground gave similar results. In most cases, it showed goshawks intercepting prey using a trajectory consistent with CATD, then turning rapidly to attack by classical pursuit; in a few cases, it showed them using curving non-CATD trajectories. Analysis of the prey's evasive tactics indicated that only sharp sideways turns caused the goshawk to lose visual fixation on the prey, supporting a sensory basis for the surprising frequency and effectiveness of this tactic found by previous studies. The dynamics of the prey's looming image also suggested that the goshawk used a tau-based interception strategy. We interpret these results in the context of a concise review of pursuit–evasion in biology, and conjecture that some prey deimatic ‘startle’ displays may exploit tau-based interception. PMID:25609783
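The two guidance laws can be contrasted in a small simulation: classical pursuit always heads at the prey's current position, while CATD (equivalently, parallel navigation) steers so the line of sight does not rotate. The geometry below is a sketch with arbitrary speeds and positions, not parameters from the video analysis.

```python
import numpy as np

def pursuer_velocity(pursuer, prey, prey_v, speed, strategy):
    """Velocity command for one step under two pursuit laws (geometric sketch).

    "classical": head straight at the prey's current position.
    "CATD": constant absolute target direction / parallel navigation --
    match the prey's velocity component perpendicular to the line of sight
    and spend the remaining speed closing along it, so the line of sight
    keeps a fixed compass direction.
    """
    los = prey - pursuer
    los_hat = los / np.linalg.norm(los)
    if strategy == "classical":
        return speed * los_hat
    v_perp = prey_v - np.dot(prey_v, los_hat) * los_hat   # sideways matching
    closing = np.sqrt(max(speed**2 - np.dot(v_perp, v_perp), 0.0))
    return v_perp + closing * los_hat

dt, speed = 0.02, 15.0
hawk, prey = np.array([0.0, 0.0]), np.array([20.0, 5.0])
prey_v = np.array([0.0, 6.0])                # prey flees sideways
for _ in range(200):
    hawk = hawk + pursuer_velocity(hawk, prey, prey_v, speed, "CATD") * dt
    prey = prey + prey_v * dt
    if np.linalg.norm(prey - hawk) < 0.5:    # interception
        break
```

Against a straight-moving prey, CATD yields a near-straight interception course, while classical pursuit produces the curving tail-chase; the sharp sideways turns reported above are effective precisely because they break the perpendicular-velocity matching that CATD requires.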
Effects of continuous visual feedback during sitting balance training in chronic stroke survivors.
Pellegrino, Laura; Giannoni, Psiche; Marinelli, Lucio; Casadio, Maura
2017-10-16
Postural control deficits are common in stroke survivors, and rehabilitation programs often include balance training based on visual feedback to improve the control of body position or of the voluntary shift of body weight in space. In the present work, a group of chronic stroke survivors, while sitting on a force plate, exercised the ability to control their Center of Pressure with a training based on continuous visual feedback. The goal of this study was to test whether, and to what extent, chronic stroke survivors were able to learn the task and transfer the learned ability to a condition without visual feedback and to directions and displacement amplitudes different from those experienced during training. Eleven chronic stroke survivors (5 male, 6 female; age: 59.72 ± 12.84 years) participated in this study. Subjects were seated on a stool positioned on top of a custom-built force platform. Their Center of Pressure positions were mapped to the coordinates of a cursor on a computer monitor. During training, the cursor position was always displayed and the subjects were to reach targets by shifting their Center of Pressure, moving their trunk. Before and after training, subjects were required to reach the training targets, as well as targets positioned in different directions and at different displacement amplitudes, without visual feedback of the cursor. During training, most stroke survivors were able to perform the required task and to improve their performance in terms of duration, smoothness, and movement extent, although not in terms of movement direction. However, when the visual feedback was removed, most of them showed no improvement with respect to their pre-training performance. This study suggests that postural training based exclusively on continuous visual feedback provides limited benefits for stroke survivors if administered alone. However, the positive gains observed during training justify integrating this technology-based protocol into a well-structured, personalized physiotherapy program, where the combination of the two approaches may lead to functional recovery.
Discrimination of curvature from motion during smooth pursuit eye movements and fixation.
Ross, Nicholas M; Goettker, Alexander; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R
2017-09-01
Smooth pursuit and motion perception have mainly been investigated with stimuli moving along linear trajectories. Here we studied the quality of pursuit movements to curved motion trajectories in human observers and examined whether the pursuit responses would be sensitive enough to discriminate various degrees of curvature. In a two-interval forced-choice task subjects pursued a Gaussian blob moving along a curved trajectory and then indicated in which interval the curve was flatter. We also measured discrimination thresholds for the same curvatures during fixation. Motion curvature had some specific effects on smooth pursuit properties: trajectories with larger amounts of curvature elicited lower open-loop acceleration, lower pursuit gain, and larger catch-up saccades compared with less curved trajectories. Initially, target motion curvatures were underestimated; however, ∼300 ms after pursuit onset pursuit responses closely matched the actual curved trajectory. We calculated perceptual thresholds for curvature discrimination, which were on the order of 1.5 degrees of visual angle (°) for a 7.9° curvature standard. Oculometric sensitivity to curvature discrimination based on the whole pursuit trajectory was quite similar to perceptual performance. Oculometric thresholds based on smaller time windows were higher. Thus smooth pursuit can quite accurately follow moving targets with curved trajectories, but temporal integration over longer periods is necessary to reach perceptual thresholds for curvature discrimination. NEW & NOTEWORTHY Even though motion trajectories in the real world are frequently curved, most studies of smooth pursuit and motion perception have investigated linear motion. We show that pursuit initially underestimates the curvature of target motion and is able to reproduce the target curvature ∼300 ms after pursuit onset. Temporal integration of target motion over longer periods is necessary for pursuit to reach the level of precision found in perceptual discrimination of curvature. Copyright © 2017 the American Physiological Society.
Velocity and Structure Estimation of a Moving Object Using a Moving Monocular Camera
2006-01-01
NASA Astrophysics Data System (ADS)
Wilson, John J.; Palaniappan, Ramaswamy
2011-04-01
The steady state visual evoked protocol has recently become a popular paradigm in brain-computer interface (BCI) applications. Typically (regardless of function) these applications offer the user a binary selection of targets that perform correspondingly discrete actions. Such discrete control systems are appropriate for applications that are inherently isolated in nature, such as selecting numbers from a keypad to be dialled or letters from an alphabet to be spelled. However, motivation exists for users to employ proportional control methods in intrinsically analogue tasks such as the movement of a mouse pointer. This paper introduces an online BCI in which control of a mouse pointer is directly proportional to a user's intent. Performance is measured over a series of pointer movement tasks and compared to the traditional discrete output approach. Analogue control allowed subjects to move the pointer faster to the cued target location compared to discrete output but suffered more undesired movements overall. Best performance is achieved when combining the threshold to movement of traditional discrete techniques with the range of movement offered by proportional control.
Visualization of a radical B12 enzyme with its G-protein chaperone
Jost, Marco; Cracan, Valentin; Hubbard, Paul A.; ...
2015-02-09
G-protein metallochaperones ensure fidelity during cofactor assembly for a variety of metalloproteins, including adenosylcobalamin (AdoCbl)-dependent methylmalonyl-CoA mutase and hydrogenase, and thus have both medical and biofuel development applications. In this paper, we present crystal structures of IcmF, a natural fusion protein of AdoCbl-dependent isobutyryl-CoA mutase and its corresponding G-protein chaperone, which reveal the molecular architecture of a G-protein metallochaperone in complex with its target protein. These structures show that conserved G-protein elements become ordered upon target protein association, creating the molecular pathways that both sense and report on the cofactor loading state. Structures determined of both apo- and holo-forms of IcmF depict both open and closed enzyme states, in which the cofactor-binding domain is alternatively positioned for cofactor loading and for catalysis. Finally and notably, the G protein moves as a unit with the cofactor-binding domain, providing a visualization of how a chaperone assists in the sequestering of a precious cofactor inside an enzyme active site.
Keenan, Kevin G; Huddleston, Wendy E; Ernest, Bradley E
2017-11-01
The purpose of the study was to determine the visual strategies used by older adults during a pinch grip task and to assess the relations between visual strategy, deficits in attention, and increased force fluctuations in older adults. Eye movements of 23 older adults (>65 yr) were monitored during a low-force pinch grip task while subjects viewed three common visual feedback displays. Performance on the Grooved Pegboard test and an attention task (which required no concurrent hand movements) was also measured. Visual strategies varied across subjects and depended on the type of visual feedback provided to the subjects. First, while viewing a high-gain compensatory feedback display (horizontal bar moving up and down with force), 9 of 23 older subjects adopted a strategy of performing saccades during the task, which resulted in 2.5 times greater force fluctuations in those that exhibited saccades compared with those who maintained fixation near the target line. Second, during pursuit feedback displays (force trace moving left to right across screen and up and down with force), all subjects exhibited multiple saccades, and increased force fluctuations were associated (rs = 0.6; P = 0.002) with fewer saccades during the pursuit task. Also, decreased low-frequency (<4 Hz) force fluctuations and Grooved Pegboard times were significantly related (P = 0.033 and P = 0.005, respectively) with higher (i.e., better) attention z scores. Comparison of these results with our previously published results in young subjects indicates that saccadic eye movements and attention are related to force control in older adults. NEW & NOTEWORTHY The significant contributions of the study are the addition of eye movement data and an attention task to explain differences in hand motor control across different visual displays in older adults. Older participants used different visual strategies across varying feedback displays, and saccadic eye movements were related with motor performance. In addition, those older individuals with deficits in attention had impaired motor performance on two different hand motor control tasks, including the Grooved Pegboard test. Copyright © 2017 the American Physiological Society.
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
…target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking.
Rhesus Monkeys Behave As If They Perceive the Duncker Illusion
Zivotofsky, A. Z.; Goldberg, M. E.; Powell, K. D.
2008-01-01
The visual system uses the pattern of motion on the retina to analyze the motion of objects in the world, and the motion of the observer him/herself. Distinguishing between retinal motion evoked by movement of the retina in space and retinal motion evoked by movement of objects in the environment is computationally difficult, and the human visual system frequently misinterprets the meaning of retinal motion. In this study, we demonstrate that the visual system of the Rhesus monkey also misinterprets retinal motion. We show that monkeys erroneously report the trajectories of pursuit targets or their own pursuit eye movements during an epoch of smooth pursuit across an orthogonally moving background. Furthermore, when they make saccades to the spatial location of stimuli that flashed early in an epoch of smooth pursuit or fixation, they make large errors that appear to take into account the erroneous smooth eye movement that they report in the first experiment, and not the eye movement that they actually make. PMID:16102233
Barnes, G; Goodbody, S; Collins, S
1995-01-01
Ocular pursuit responses have been examined in humans in three experiments in which the pursuit target image has been fully or partially stabilised on the fovea by feeding a recorded eye movement signal back to drive the target motion. The objective was to establish whether subjects could volitionally control smooth eye movement to reproduce trajectories of target motion in the absence of a concurrent target motion stimulus. In experiment 1 subjects were presented with a target moving with a triangular waveform in the horizontal axis with a frequency of 0.325 Hz and velocities of +/- 10-50 degrees/s. The target was illuminated twice per cycle for pulse durations (PD) of 160-640 ms as it passed through the centre position; otherwise subjects were in darkness. Subjects initially tracked the target motion in a conventional closed-loop mode for four cycles. Prior to the next target presentation the target image was stabilised on the fovea, so that any target motion generated resulted solely from volitional eye movement. Subjects continued to make anticipatory smooth eye movements both to the left and the right with a velocity trajectory similar to that observed in the closed-loop phase. Peak velocity in the stabilised-image mode was highly correlated with that in the prior closed-loop phase, but was slightly less (84% on average). In experiment 2 subjects were presented with a continuously illuminated target that was oscillated sinusoidally at frequencies of 0.2-1.34 Hz and amplitudes of +/- 5-20 degrees. After four cycles of closed-loop stimulation the image was stabilised on the fovea at the time of peak target displacement. Subjects continued to generate an oscillatory smooth eye velocity pattern that mimicked the sinusoidal motion of the previous closed-loop phase for at least three further cycles. The peak eye velocity generated ranged from 57-95% of that in the closed-loop phase at frequencies up to 0.8 Hz but decreased significantly at 1.34 Hz. In experiment 3 subjects were presented with a stabilised display throughout and generated smooth eye movements with peak velocity up to 84 degrees/s in the complete absence of any prior external target motion stimulus, by transferring their attention alternately to left and right of the centre of the display. Eye velocity was found to be dependent on the eccentricity of the centre of attention and the frequency of alternation. When the target was partially stabilised on the retina by feeding back only a proportion (Kf = 0.6-0.9) of the eye movement signal to drive the target, subjects were still able to generate smooth movements at will, even though the display did not move as far or as fast as the eye. Peak eye velocity decreased as Kf decreased, suggesting that there was a continuous competitive interaction between the volitional drive and the visual feedback provided by the relative motion of the display with respect to the retina. These results support the evidence for two separate mechanisms of smooth eye movement control in ocular pursuit: reflex control from retinal velocity error feedback and volitional control from an internal source. Arguments are presented to indicate how smooth pursuit may be controlled by matching a voluntarily initiated estimate of the required smooth movement, normally derived from storage of past re-afferent information, against current visual feedback information. Such a mechanism allows preemptive smooth eye movements to be made that can overcome the inherent delays in the visual feedback pathway.
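The feedback arrangement is easy to schematize: the recorded eye signal drives the target with gain Kf, so the retinal slip is (Kf − 1) times the eye movement and vanishes under full stabilization (Kf = 1). Below is a minimal sketch with assumed values.

```python
import numpy as np

def stabilized_target(eye_pos, Kf):
    """Target trajectory when the display is driven by the recorded eye signal.

    Kf = 1 is full stabilization: the target tracks the eye exactly, retinal
    error is zero, and any smooth movement is purely volitional. Kf < 1
    leaves a retinal slip of (Kf - 1) * eye movement opposing that drive.
    The eye trace and gains below are assumptions for illustration.
    """
    return Kf * np.asarray(eye_pos)

t = np.arange(0.0, 3.0, 0.001)
eye = 10.0 * np.sin(2 * np.pi * 0.4 * t)    # deg; a volitional sinusoid
slip = {Kf: stabilized_target(eye, Kf) - eye for Kf in (1.0, 0.8, 0.6)}
# slip[1.0] is identically zero; the opposing slip grows as Kf decreases
```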
Human Visuospatial Updating After Passive Translations In Three-Dimensional Space
Klier, Eliana M.; Hess, Bernhard J. M.; Angelaki, Dora E.
2013-01-01
To maintain a stable representation of the visual environment as we move, the brain must update the locations of targets in space using extra-retinal signals. Humans can accurately update after intervening active whole-body translations. But can they also update for passive translations (i.e., without efference copy signals of an outgoing motor command)? We asked six head-fixed subjects to remember the location of a briefly flashed target (five possible targets were located at depths of 23, 33, 43, 63, and 150 cm in front of the cyclopean eye) as they moved 10 cm left, right, up, down, forward, or backward, while fixating a head-fixed target at 53 cm. After the movement, the subjects made a saccade to the remembered location of the flash with a combination of version and vergence eye movements. We computed an updating ratio where 0 indicates no updating and 1 indicates perfect updating. For lateral and vertical whole-body motion, where updating performance is judged by the size of the version movement, the updating ratios were similar for leftward and rightward translations, averaging 0.84±0.28 (mean±SD), as compared to 0.51±0.33 for downward and 1.05±0.50 for upward translations. For forward/backward movements, where updating performance is judged by the size of the vergence movement, the average updating ratio was 1.12±0.45. Updating ratios tended to be larger for far targets than near targets, although both intra- and inter-subject variabilities were smallest for near targets. Thus, in addition to self-generated movements, extra-retinal signals involving otolith and proprioceptive cues can also be used for spatial constancy. PMID:18256164
Nonhuman Primate Studies to Advance Vision Science and Prevent Blindness.
Mustari, Michael J
2017-12-01
Most primate behavior is dependent on high acuity vision. Optimal visual performance in primates depends heavily upon frontally placed eyes, retinal specializations, and binocular vision. To see an object clearly its image must be placed on or near the fovea of each eye. The oculomotor system is responsible for maintaining precise eye alignment during fixation and generating eye movements to track moving targets. The visual system of nonhuman primates has a similar anatomical organization and functional capability to that of humans. This allows results obtained in nonhuman primates to be applied to humans. The visual and oculomotor systems of primates are immature at birth and sensitive to the quality of binocular visual and eye movement experience during the first months of life. Disruption of postnatal experience can lead to problems in eye alignment (strabismus), amblyopia, unsteady gaze (nystagmus), and defective eye movements. Recent studies in nonhuman primates have begun to discover the neural mechanisms associated with these conditions. In addition, genetic defects that target the retina can lead to blindness. A variety of approaches including gene therapy, stem cell treatment, neuroprosthetics, and optogenetics are currently being used to restore function associated with retinal diseases. Nonhuman primates often provide the best animal model for advancing fundamental knowledge and developing new treatments and cures for blinding diseases. © The Author(s) 2017. Published by Oxford University Press on behalf of the National Academy of Sciences. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Command Wire Sensor Measurements
2012-09-01
coupled with the extreme harsh terrain has meant that few of these techniques have proved robust enough when moved from the laboratory to the field...to image stationary objects and does not accurately image moving targets. Moving targets can be seriously distorted and displaced from their true...battlefield and for imaging of fixed targets. Moving targets can be detected with a SAR if they have a Doppler frequency shift greater than the
Modality-dependent effect of motion information in sensory-motor synchronised tapping.
Ono, Kentaro
2018-05-14
Synchronised action is important for everyday life. Generally, the auditory domain is more sensitive for coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remains unknown. This study compared tapping accuracy with a stationary or moving stimulus in both auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired audio-motor synchronisation, an advantage of auditory modality compared to visual modality still existed. These findings are likely the result of higher temporal resolution in the auditory domain, which is likely due to the physiological and structural differences in the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.
Motor Effects from Visually Induced Disorientation in Man.
ERIC Educational Resources Information Center
Brecher, M. Herbert; Brecher, Gerhard A.
The problem of disorientation in a moving optical environment was examined. A pilot can experience egocentric disorientation if the entire visual environment moves relative to his body without a clue as to the objective position of the airplane in respect to the ground. A simple method of measuring disorientation was devised. In this method…
Intercepting a sound without vision
Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica
2017-01-01
Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might nevertheless be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals' performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a smaller bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939
Context cue-dependent saccadic adaptation in rhesus macaques cannot be elicited using color
Smalianchuk, Ivan; Khanna, Sanjeev B.; Smith, Matthew A.; Gandhi, Neeraj J.
2015-01-01
When the head does not move, rapid movements of the eyes called saccades are used to redirect the line of sight. Saccades are defined by a series of metrical and kinematic (evolution of a movement as a function of time) relationships. For example, the amplitude of a saccade made from one visual target to another is roughly 90% of the distance between the initial fixation point (T0) and the peripheral target (T1). However, this stereotypical relationship between saccade amplitude and initial retinal error (T1-T0) may be altered, either increased or decreased, by surreptitiously displacing a visual target during an ongoing saccade. This form of motor learning (called saccadic adaptation) has been described in both humans and monkeys. Recent experiments in humans and monkeys have suggested that internal (proprioceptive) and external (target shape, color, and/or motion) cues may be used to produce context-dependent adaptation. We tested the hypothesis that an external contextual cue (target color) could be used to evoke differential gain (actual saccade/initial retinal error) states in rhesus monkeys. We did not observe differential gain states correlated with target color regardless of whether targets were displaced along the same vector as the primary saccade or perpendicular to it. Furthermore, this observation held true regardless of whether adaptation trials using various colors and intrasaccade target displacements were randomly intermixed or presented in short or long blocks of trials. These results are consistent with hypotheses that state that color cannot be used as a contextual cue and are interpreted in light of previous studies of saccadic adaptation in both humans and monkeys. PMID:25995353
Extrapolation of vertical target motion through a brief visual occlusion.
Zago, Myrka; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco
2010-03-01
It is known that arbitrary target accelerations along the horizontal generally are extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). Probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects.
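The three motion laws imply different remaining flight times once the target is occluded; the kinematic solver below mirrors the 1g/0g/−1g conditions (the distances and speeds are invented, and the solver is plain kinematics rather than the authors' threshold models).

```python
import math

def remaining_flight_time(d, v, model, g=9.81):
    """Time for a descending target to cover distance d (m) from speed v (m/s).

    model "1g": solve d = v*t + g*t**2/2 (gravitational acceleration);
    model "0g": constant speed; model "-1g": deceleration under reversed
    gravity. Mirrors the three motion laws tested in the study.
    """
    if model == "0g":
        return d / v
    a = g if model == "1g" else -g
    disc = v**2 + 2 * a * d
    if disc < 0:
        return math.nan          # decelerating target never reaches arrival
    return (-v + math.sqrt(disc)) / a    # first (earliest) arrival time

d, v = 1.0, 2.0                  # occluded distance and speed at occlusion
times = {m: remaining_flight_time(d, v, m) for m in ("1g", "0g", "-1g")}
# an internal model of gravity predicts the shortest remaining time for 1g
```

With these example values, the 1g target arrives in about 0.29 s versus 0.50 s at constant speed, which illustrates why a response strategy tuned to gravity produces early responses for 0g and −1g targets.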
Tracking the impact of depression in a perspective-taking task.
Ferguson, Heather J; Cane, James
2017-11-01
Research has identified impairments in Theory of Mind (ToM) abilities in depressed patients, particularly in relation to tasks involving empathetic responses and belief reasoning. We aimed to build on this research by exploring the relationship between depressed mood and cognitive ToM, specifically visual perspective-taking ability. High and low depressed participants were eye-tracked as they completed a perspective-taking task, in which they followed the instructions of a 'director' to move target objects (e.g. a "teapot with spots on") around a grid, in the presence of a temporarily-ambiguous competitor object (e.g. a "teapot with stars on"). Importantly, some of the objects in the grid were occluded from the director's (but not the participant's) view. Results revealed no group-based difference in participants' ability to use perspective cues to identify the target object. All participants were faster to select the target object when the competitor was only available to the participant, compared to when the competitor was mutually available to the participant and director. Eye-tracking measures supported this pattern, revealing that perspective directed participants' visual search immediately upon hearing the ambiguous object's name (e.g. "teapot"). We discuss how these results fit with previous studies that have shown a negative relationship between depression and ToM.
Ivancevich, Nikolas M.; Dahl, Jeremy J.; Smith, Stephen W.
2010-01-01
Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively. PMID:19942503
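The cross-correlation approach can be sketched roughly as follows. This is a simplified single-lag version under an assumed data layout, not the authors' implementation (their multi-lag variant additionally averages estimates over several element spacings):

```python
import numpy as np

def neighbor_delay(a, b, fs):
    """Delay of trace b relative to trace a, in seconds, from the peak
    of their full cross-correlation."""
    xc = np.correlate(b, a, mode="full")
    lag = int(np.argmax(xc)) - (len(a) - 1)
    return lag / fs

def aberration_profile(rf, fs):
    """Estimate per-element arrival-time errors from RF traces of a
    common target (rf: n_elements x n_samples). Neighbor delays are
    cumulatively summed into a zero-mean profile, which can then be
    applied as per-element transmit/receive delay corrections."""
    d = [neighbor_delay(rf[i], rf[i + 1], fs) for i in range(len(rf) - 1)]
    profile = np.concatenate(([0.0], np.cumsum(d)))
    return profile - profile.mean()
```

Using signal from moving (flow) targets instead of static ones changes only where `rf` comes from; the abstract's results suggest the static-target signal yields the more reliable estimates.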
Effects of parietal injury on covert orienting of attention.
Posner, M I; Walker, J A; Friedrich, F J; Rafal, R D
1984-07-01
The cognitive act of shifting attention from one place in the visual field to another can be accomplished covertly, without muscular changes. The act can be viewed in terms of three internal mental operations: disengagement of attention from its current focus, moving attention to the target, and engagement of the target. Our results show that damage to the parietal lobe produces a deficit in the disengage operation when the target is contralateral to the lesion. Effects may also be found on engagement with the target. The effects of brain injury on disengagement of attention seem to be unique to the parietal lobe and do not appear to occur in our frontal, midbrain, and temporal control series. These results confirm the close connection between the parietal lobes and selective attention suggested by single-cell recording. They indicate more specifically the role that parietal function plays in attention and suggest one mechanism for the effects of parietal lesions reported in clinical neurology.
NASA Astrophysics Data System (ADS)
Anderson, Monica; David, Phillip
2007-04-01
Implementation of an intelligent, automated target acquisition and tracking system alleviates the need for operators to monitor video continuously. Such a system could identify situations that fatigued operators could easily miss. If an automated acquisition and tracking system plans motions to maximize a coverage metric, how does the performance of that system change when the user intervenes and manually moves the camera? How can the operator give input to the system about what is important, and understand how that relates to the overall task balance between surveillance and coverage? In this paper, we address these issues by introducing a new formulation of the average linear uncovered length (ALUL) metric, specially designed for use in surveilling urban environments. This metric coordinates the often competing goals of acquiring new targets and tracking existing targets. In addition, it provides feedback on current system performance to system users in terms of the system's theoretical maximum and minimum performance. We show the successful integration of the algorithm via simulation.
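The paper's ALUL formulation is not reproduced here; the toy 1-D function below only illustrates what an average-uncovered-length style coverage score measures. All names and the interval geometry are invented for illustration:

```python
def avg_linear_uncovered_length(street_len, covered):
    """Toy 1-D stand-in for an ALUL-style metric (the paper's exact
    formulation differs): the mean length of the gaps the camera
    footprint leaves along a street of length street_len.
    covered: list of (start, end) intervals currently in view."""
    ivs = sorted((max(0.0, s), min(street_len, e)) for s, e in covered)
    gaps, cursor = [], 0.0
    for s, e in ivs:
        if s > cursor:
            gaps.append(s - cursor)  # uncovered stretch before this interval
        cursor = max(cursor, e)
    if cursor < street_len:
        gaps.append(street_len - cursor)  # uncovered tail
    return sum(gaps) / len(gaps) if gaps else 0.0

# one camera covering [20, 60] m of a 100 m street -> gaps of 20 and 40 m
print(avg_linear_uncovered_length(100.0, [(20.0, 60.0)]))  # 30.0
```

A planner that minimizes such a score trades off pointing the camera at known targets (tracking) against sweeping uncovered stretches (acquisition), which is the balance the abstract describes.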
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Bowles, R. L.
1983-01-01
This paper addresses the issues of motion/visual cueing fidelity requirements for vortex encounters during simulated transport visual approaches and landings. Four simulator configurations were utilized to provide objective performance measures during simulated vortex penetrations, and subjective comments from pilots were collected. The configurations used were as follows: fixed base with visual degradation (delay), fixed base with no visual degradation, moving base with visual degradation (delay), and moving base with no visual degradation. The statistical comparisons of the objective measures and the subjective pilot opinions indicated that although both minimum visual delay and motion cueing are recommended for the vortex penetration task, the visual-scene delay characteristics were not as significant a fidelity factor as was the presence of motion cues. However, this indication was applicable to a restricted task, and to transport aircraft. Although they were statistically significant, the effects of visual delay and motion cueing on the touchdown-related measures were considered to be of no practical consequence.
Temporal order judgments are disrupted more by reflexive than by voluntary saccades.
Yabe, Yoshiko; Goodale, Melvyn A; Shigemasu, Hiroaki
2014-05-01
We do not always perceive the sequence of events as they actually unfold. For example, when two events occur before a rapid eye movement (saccade), the interval between them is often perceived as shorter than it really is and the order of those events can sometimes be reversed (Morrone MC, Ross J, Burr DC. Nat Neurosci 8: 950-954, 2005). In the present article we show that these misperceptions of the temporal order of events critically depend on whether the saccade is reflexive or voluntary. In the first experiment, participants judged the temporal order of two visual stimuli that were presented one after the other just before a reflexive or voluntary saccadic eye movement. In the reflexive saccade condition, participants moved their eyes to a target that suddenly appeared. In the voluntary saccade condition, participants moved their eyes to a target that was already present. Similarly to the above-cited study, we found that the temporal order of events was often misjudged just before a reflexive saccade to a suddenly appearing target. However, when people made a voluntary saccade to a target that was already present, there was a significant reduction in the probability of misjudging the temporal order of the same events. In the second experiment, the reduction was seen in a memory-delay task. It is likely that the nature of the motor command and its origin determine how time is perceived during the moments preceding the motor act.
NASA Astrophysics Data System (ADS)
Scopatz, Stephen D.; Mendez, Michael; Trent, Randall
2015-05-01
The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for Motion Imagery. This presentation discusses several implementations of target projectors that use moving targets, or apparent moving targets, to create motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable, and programmable way; several short videos are included in the presentation. Among the technical approaches are targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons lasting fewer than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal-mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide counterparts to MTF (resolution), SNR, and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (a measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development, as well as to compare various systems by presenting identical scenes to the cameras in a repeatable way.
Trejo, Leonard J; Rosipal, Roman; Matthews, Bryan
2006-06-01
We have developed and tested two electroencephalogram (EEG)-based brain-computer interfaces (BCI) for users to control a cursor on a computer display. Our system uses an adaptive algorithm, based on kernel partial least squares classification (KPLS), to associate patterns in multichannel EEG frequency spectra with cursor controls. Our first BCI, Target Practice, is a system for one-dimensional device control, in which participants use biofeedback to learn voluntary control of their EEG spectra. Target Practice uses a KPLS classifier to map power spectra of 62-electrode EEG signals to rightward or leftward position of a moving cursor on a computer display. Three subjects learned to control motion of a cursor on a video display in multiple blocks of 60 trials over periods of up to six weeks. The best subject's average skill in correct selection of the cursor direction grew from 58% to 88% after 13 training sessions. Target Practice also implements online control of two artifact sources: 1) removal of ocular artifact by linear subtraction of wavelet-smoothed vertical and horizontal electrooculograms (EOG) signals, 2) control of muscle artifact by inhibition of BCI training during periods of relatively high power in the 40-64 Hz band. The second BCI, Think Pointer, is a system for two-dimensional cursor control. Steady-state visual evoked potentials (SSVEP) are triggered by four flickering checkerboard stimuli located in narrow strips at each edge of the display. The user attends to one of the four beacons to initiate motion in the desired direction. The SSVEP signals are recorded from 12 electrodes located over the occipital region. A KPLS classifier is individually calibrated to map multichannel frequency bands of the SSVEP signals to right-left or up-down motion of a cursor on a computer display. The display stops moving when the user attends to a central fixation point. As for Target Practice, Think Pointer also implements wavelet-based online removal of ocular artifact; however, in Think Pointer muscle artifact is controlled via adaptive normalization of the SSVEP. Training of the classifier requires about 3 min. We have tested our system in real-time operation in three human subjects. Across subjects and sessions, control accuracy ranged from 80% to 100% correct with lags of 1-5 s for movement initiation and turning. We have also developed a realistic demonstration of our system for control of a moving map display (http://ti.arc.nasa.gov/).
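The SSVEP decoding stage can be sketched roughly as below, with an ordinary logistic-regression classifier standing in for the paper's KPLS classifier, and with example flicker frequencies and sampling rate (both assumptions, not the study's values). The features are band power around each beacon's flicker rate:

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

def band_power(eeg, fs, lo, hi):
    """Mean power spectral density in [lo, hi] Hz for each channel;
    eeg has shape (channels, samples)."""
    f, p = welch(eeg, fs=fs, nperseg=int(fs))
    band = (f >= lo) & (f <= hi)
    return p[:, band].mean(axis=1)

def ssvep_features(eeg, fs, flicker_hz=(8.0, 10.0, 12.0, 15.0)):
    """Stack power near each beacon's (assumed) flicker frequency."""
    return np.concatenate([band_power(eeg, fs, f0 - 0.5, f0 + 0.5)
                           for f0 in flicker_hz])

def train_decoder(trials, labels, fs=256):
    """trials: (n_trials, channels, samples); labels: attended beacon 0-3.
    Logistic regression is a stand-in for the paper's KPLS classifier."""
    X = np.array([ssvep_features(t, fs) for t in trials])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```

The attended beacon dominates the occipital SSVEP at its own flicker frequency, so even this simple feature map separates the four directions; the paper's adaptive normalization and artifact handling sit on top of such a pipeline.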
Pilots' Visual Scan Patterns and Attention Distribution During the Pursuit of a Dynamic Target.
Yu, Chung-San; Wang, Eric Min-Yang; Li, Wen-Chin; Braithwaite, Graham; Greaves, Matthew
2016-01-01
The current research investigated pilots' visual scan patterns in order to assess attention distribution during air-to-air maneuvers. A total of 30 qualified mission-ready fighter pilots participated in this research. Eye movement data were collected by a portable head-mounted eye-tracking device combined with a jet fighter simulator. To complete the task, pilots had to search for, pursue, and lock on to a moving target while performing air-to-air tasks. There were significant differences in pilots' saccade duration (ms) across three operating phases: searching (M = 241, SD = 332), pursuing (M = 311, SD = 392), and lock-on (M = 191, SD = 226). Also, there were significant differences in pilots' pupil sizes (pixel²), of which the lock-on phase was the largest (M = 27,237, SD = 6457), followed by pursuit (M = 26,232, SD = 6070), then searching (M = 25,858, SD = 6137). Furthermore, there were significant differences between expert and novice pilots in the percentage of fixations on the head-up display (HUD), time spent looking outside the cockpit, and the performance of situational awareness (SA). Experienced pilots had better SA performance and paid more attention to the HUD, but focused less outside the cockpit, when compared with novice pilots. Furthermore, pilots with better SA performance exhibited a smaller pupil size during the operational phase of locking on while pursuing a dynamic target. Understanding pilots' visual scan patterns and attention distribution is beneficial to the design of interface displays in the cockpit and to developing human factors training syllabi to improve the safety of flight operations.
The Syntax of Moving Images: Principles and Applications.
ERIC Educational Resources Information Center
Metallinos, Nikos
This paper examines the various theories of motion relating to visual communication media, discusses the syntactic rules of moving images derived from those of still pictures, and underlines the motions employed in the construction of moving images, primarily television pictures. The following theories of motion and moving images are presented:…
NASA Astrophysics Data System (ADS)
Zou, Tianhao; Zuo, Zhengrong
2018-02-01
Target detection is a basic and important problem in computer vision and image processing. The case most often met in the real world is detection of a small moving target from a moving platform. Commonly used methods, such as registration-based suppression, can hardly achieve the desired result. To address this problem, we introduce a global-local registration based suppression method. Unlike traditional approaches, the proposed global-local registration strategy considers both the global consistency and the local diversity of the background, obtaining better performance than standard background suppression methods. In this paper, we first discuss the characteristics of small moving-target detection on an unstable platform. We then introduce the new strategy and conduct an experiment to confirm its stability under noise. Finally, we confirm that the background suppression method based on the global-local registration strategy performs better for moving-target detection on a moving platform.
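A minimal numpy sketch of the global-local idea follows (the authors' exact algorithm is not specified in the abstract, so this is an illustration under simplifying assumptions): one global registration by phase correlation, a small per-block local search, then subtraction so that residual bright spots are moving-target candidates.

```python
import numpy as np

def global_shift(prev, cur):
    """Integer global translation of cur relative to prev, from the
    peak of the normalized cross-power spectrum (phase correlation)."""
    F = np.fft.fft2(cur) * np.conj(np.fft.fft2(prev))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    h, w = prev.shape
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)

def suppress_background(prev, cur, block=32, search=2):
    """Global-local suppression: align prev to cur once globally, then
    refine each block with a small local search before subtracting."""
    dy, dx = global_shift(prev, cur)
    reg = np.roll(prev, (dy, dx), axis=(0, 1))  # globally aligned background
    out = np.empty_like(cur, dtype=float)
    h, w = cur.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            c = cur[y:y + block, x:x + block].astype(float)
            best, best_sad = None, np.inf
            for sy in range(-search, search + 1):
                for sx in range(-search, search + 1):
                    p = np.roll(reg, (sy, sx), axis=(0, 1))[y:y + block, x:x + block]
                    sad = np.abs(c - p).sum()  # local misalignment cost
                    if sad < best_sad:
                        best, best_sad = c - p, sad
            out[y:y + block, x:x + block] = best
    return out
```

The global step handles platform motion shared by the whole scene; the per-block step absorbs the local diversity (parallax, independently deforming background regions) that a single global transform cannot.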
Visual and somatic sensory feedback of brain activity for intuitive surgical robot manipulation.
Miura, Satoshi; Matsumoto, Yuya; Kobayashi, Yo; Kawamura, Kazuya; Nakashima, Yasutaka; Fujie, Masakatsu G
2015-01-01
This paper presents a method to evaluate the hand-eye coordination of a master-slave surgical robot by measuring the activation of the intraparietal sulcus in users' brain activity during control of a virtual manipulator. The objective is to examine the changes in activity of the intraparietal sulcus when the user's visual or somatic feedback is passed through or intercepted. The hypothesis is that the intraparietal sulcus activates significantly when both visual and somatic feedback are passed, but deactivates when either is intercepted. The brain activity of three subjects was measured by functional near-infrared spectroscopic-topography brain imaging while they used a hand controller to move a virtual arm of a surgical simulator. The experiment was performed several times under three conditions: (i) the user controlled the virtual arm naturally, with both visual and somatic feedback passed; (ii) the user moved with closed eyes, with only somatic feedback passed; (iii) the user only gazed at the screen, with only visual feedback passed. Brain activity was significantly higher when controlling the virtual arm naturally (p < 0.05) than when moving with closed eyes or only gazing, among all participants. In conclusion, the brain activates according to the agreement of visual and somatic sensory feedback.
De Sá Teixeira, Nuno
2016-01-01
Visual memory for the spatial location where a moving target vanishes has been found to be systematically displaced downward in the direction of gravity. Moreover, it was recently reported that the magnitude of the downward error increases steadily with increasing retention intervals imposed after object’s offset and before observers are allowed to perform the spatial localization task, in a pattern where the remembered vanishing location drifts downward as if following a falling trajectory. This outcome was taken to reflect the dynamics of a representational model of earth’s gravity. The present study aims to establish the spatial and temporal features of this downward drift by taking into account the dynamics of the motor response. The obtained results show that the memory for the last location of the target drifts downward with time, thus replicating previous results. Moreover, the time taken for completion of the behavioural localization movements seems to add to the imposed retention intervals in determining the temporal frame during which the visual memory is updated. Overall, it is reported that the representation of spatial location drifts downward by about 3 pixels for each two-fold increase of time until response. The outcomes are discussed in relation to a predictive internal model of gravity which outputs an on-line spatial update of remembered objects’ location. PMID:26910260
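The reported scaling, roughly 3 pixels of additional downward displacement per doubling of the total time until response, amounts to a logarithmic drift law. A tiny sketch, with an assumed reference time at which the drift is taken as zero:

```python
import math

def downward_drift_px(t_response, t_ref=0.3, k=3.0):
    """Remembered-position drift implied by the reported scaling:
    about k = 3 px per doubling of the time until response. The
    reference time t_ref (s), where drift is taken as zero, is an
    assumption for illustration, not a value from the paper."""
    return k * math.log2(t_response / t_ref)

print(downward_drift_px(0.6))  # one doubling  -> ~3 px downward
print(downward_drift_px(1.2))  # two doublings -> ~6 px downward
```

The key point the abstract makes is that `t_response` includes not only the imposed retention interval but also the duration of the localization movement itself.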
Role of Alpha-Band Oscillations in Spatial Updating across Whole Body Motion
Gutteling, Tjerk P.; Medendorp, W. P.
2016-01-01
When moving around in the world, we have to keep track of important locations in our surroundings. In this process, called spatial updating, we must estimate our body motion and correct representations of memorized spatial locations in accordance with this motion. While the behavioral characteristics of spatial updating across whole body motion have been studied in detail, their neural implementation has received far less study. Here we use electroencephalography (EEG) to distinguish various spectral components of this process. Subjects gazed at a central body-fixed point in otherwise complete darkness, while a target was briefly flashed, either left or right of this point. Subjects had to remember the location of this target as either moving along with the body or remaining fixed in the world while being translated sideways on a passive motion platform. After the motion, subjects had to indicate the remembered target location in the instructed reference frame using a mouse response. While the body motion, as detected by the vestibular system, should not affect the representation of body-fixed targets, it should interact with the representation of a world-centered target to update its location relative to the body. We show that the initial presentation of the visual target induced a reduction of alpha band power in contralateral parieto-occipital areas, which evolved to a sustained increase during the subsequent memory period. Motion of the body led to a reduction of alpha band power in central parietal areas extending to lateral parieto-temporal areas, irrespective of whether the targets had to be memorized relative to world or body. When updating a world-fixed target, its internal representation shifts hemispheres, but only when subjects' behavioral responses suggested an update across the body midline. Our results suggest that parietal cortex is involved in both self-motion estimation and the selective application of this motion information to maintaining target locations as fixed in the world or fixed to the body. PMID:27199882
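For readers unfamiliar with the measure, alpha-band power of the kind analyzed here is conventionally computed along these lines; the band edges (8-12 Hz), filter order, and baseline window below are common defaults, not values taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power(eeg, fs, lo=8.0, hi=12.0):
    """Alpha-band power envelope of one EEG channel: band-pass filter,
    then squared magnitude of the analytic (Hilbert) signal."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    analytic = hilbert(filtfilt(b, a, eeg))
    return np.abs(analytic) ** 2

def erd_percent(power, baseline_idx):
    """Event-related power change relative to a baseline window, in
    percent; negative values are the reductions the study reports."""
    base = power[baseline_idx].mean()
    return 100.0 * (power - base) / base

# toy demo: strong 10 Hz oscillation for 2 s, then desynchronization
fs = 500.0
t = np.arange(0, 4.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) * (t < 2) + 0.1 * np.random.randn(t.size)
print(erd_percent(alpha_power(eeg, fs), slice(0, int(fs)))[-int(fs):].mean())
```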
Zhu, Lin L; Beauchamp, Michael S
2017-03-08
Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. PMID:28179553
On the Adaptation of Pelvic Motion by Applying 3-dimensional Guidance Forces Using TPAD.
Kang, Jiyeon; Vashista, Vineet; Agrawal, Sunil K
2017-09-01
Pelvic movement is important to human locomotion, as the center of mass is located near the center of the pelvis. Lateral pelvic motion plays a crucial role in shifting the center of mass over the stance leg while swinging the other leg and keeping the body balanced. In addition, vertical pelvic movement helps to reduce metabolic energy expenditure by exchanging potential and kinetic energy during the gait cycle. However, patient groups with cerebral palsy or stroke have excessive pelvic motion that leads to high energy expenditure. They also have higher chances of falls, as the center of mass can deviate outside the base of support. In this paper, a novel control method is suggested using the tethered pelvic assist device (TPAD) to teach subjects to walk with a specified target pelvic trajectory while walking on a treadmill. In this method, a force field is applied to the pelvis to guide it along a target trajectory, and correctional forces are applied if the pelvic motion deviates excessively from the target trajectory. Three different experiments with healthy subjects were conducted to teach them to walk with a new target pelvic trajectory using the presented control method. For all three experiments, the baseline trajectory of the pelvis was experimentally determined for each participating subject. To design a target pelvic trajectory different from the baseline, Experiment I scaled up the lateral component of the baseline pelvic trajectory, while Experiment II scaled down the lateral component of the baseline trajectory. For both Experiments I and II, the controller generated a 2-D force field in the transverse plane to provide the guidance force. Seven subjects were recruited for each experiment, who walked on the treadmill with the suggested control methods and visual feedback of their pelvic trajectory. The results show that the subjects were able to learn the target pelvic trajectory in each experiment and retained the training effects after the completion of the experiment. In Experiment III, both the lateral and vertical components of the pelvic trajectory were scaled down from the baseline trajectory. The force field was extended to three dimensions in order to correct the vertical pelvic movement as well. Three subgroups (force feedback alone, visual feedback alone, and both force and visual feedback) were recruited to distinguish the effects of force feedback and visual feedback from the results of Experiments I and II. The results show that a training method combining visual and force feedback is superior to training methods with visual or force feedback alone. We believe that the present control strategy holds potential for training and correcting abnormal pelvic movements in different patient populations.
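Conceptually, such a guidance controller combines an attractive force toward the target trajectory with a tolerance zone in which no correction is applied. A minimal sketch of that idea, with an illustrative gain and deadband rather than the study's values:

```python
import numpy as np

def guidance_force(pos, target, deadband=0.01, k=300.0):
    """Force-field correction toward the target pelvic position, applied
    only when the deviation exceeds a tolerance, mirroring the paper's
    notion of correctional forces for excessive deviations. The gain k
    (N/m) and deadband (m) are illustrative assumptions."""
    err = np.asarray(target, dtype=float) - np.asarray(pos, dtype=float)
    dist = float(np.linalg.norm(err))
    if dist <= deadband:
        return np.zeros_like(err)  # inside tolerance: no corrective force
    return k * (dist - deadband) * err / dist  # pull back toward trajectory

# pelvis 3 cm to the right of the target point in the transverse plane
print(guidance_force([0.03, 0.0], [0.0, 0.0]))  # -> [-6.  0.] N
```

Extending the same law to a third (vertical) component is all that separates the 2-D field of Experiments I and II from the 3-D field of Experiment III.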
The effect of visual context on manual localization of remembered targets
NASA Technical Reports Server (NTRS)
Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.
1997-01-01
This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.
NASA Astrophysics Data System (ADS)
Chen, Ho-Hsing; Wu, Jay; Chuang, Keh-Shih; Kuo, Hsiang-Chi
2007-07-01
Intensity-modulated radiation therapy (IMRT) utilizes a nonuniform beam profile to deliver precise radiation doses to a tumor while minimizing radiation exposure to surrounding normal tissues. However, intrafraction organ motion distorts the dose distribution and leads to significant dosimetric errors. In this research, we applied an aperture adaptive technique with a visual guiding system to tackle the problem of respiratory motion. A homemade computer program showing a cyclic moving pattern was projected onto the ceiling to visually help patients adjust their respiratory patterns. Once the respiratory motion becomes regular, the leaf sequence can be synchronized with the target motion. An oscillator was employed to simulate the patient's breathing pattern. Two simple fields and one IMRT field were measured to verify the accuracy. Preliminary results showed that after appropriate training, the amplitude and duration of a volunteer's breathing could be well controlled by the visual guiding system. The sharp dose gradient at the edge of the radiation fields was successfully restored. The maximum dosimetric error in the IMRT field was significantly decreased, from 63% to 3%. We conclude that the aperture adaptive technique with the visual guiding system can be an inexpensive and feasible alternative that does not compromise delivery efficiency in clinical practice.
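The synchronization idea can be made concrete with a toy model: the visual guide displays a regular trace, and delivery can be treated as synchronized only while the measured breathing stays within a tolerance of that trace. All shapes and numbers below are assumptions for illustration, not the study's parameters:

```python
import numpy as np

def guide_trace(t, period=4.0, amplitude=1.0):
    """Idealized regular breathing pattern a visual guide could display
    (a raised cosine; the study's actual guide shape is not specified)."""
    return amplitude * 0.5 * (1.0 - np.cos(2.0 * np.pi * t / period))

def in_sync(measured, t, tol=0.1):
    """True wherever the measured breathing amplitude stays within tol
    of the guided trace, i.e., where the leaf sequence can be assumed
    to remain synchronized with the target motion."""
    return np.abs(measured - guide_trace(t)) < tol

t = np.linspace(0.0, 8.0, 200)
measured = guide_trace(t) + 0.12 * np.sin(7.0 * t)  # slightly irregular breathing
print(f"in sync {100.0 * in_sync(measured, t).mean():.0f}% of the time")
```

The training with the ceiling-projected guide is what drives the fraction of in-tolerance time toward 100%, which in turn is what restores the planned dose gradients.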
Effects of Hand Proximity and Movement Direction in Spatial and Temporal Gap Discrimination.
Wiemers, Michael; Fischer, Martin H
2016-01-01
Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward compared to away from the stimulus (direction effect). The "modulated visual pathways" hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate and not near hand proximity. In spatial gap discrimination, a direction effect without hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.
Tracking moving identities: after attending the right location, the identity does not come for free.
Pinto, Yaïr; Scholte, H Steven; Lamme, V A F
2012-01-01
Although the tracking of identical moving objects has been studied since the 1980s, the study of tracking moving objects with distinct identities (referred to as Multiple Identity Tracking, MIT) has begun only recently. So far, only behavioral studies of MIT have been undertaken. These studies have left a fundamental question about MIT unanswered: is MIT a one-stage or a two-stage process? According to the one-stage model, after a location has been attended, the identity is released without effort. According to the two-stage model, however, there are two effortful stages in MIT: attending to a location, and attending to the identity of the object at that location. In the current study we investigated this question by measuring brain activity in response to tracking familiar and unfamiliar targets. Familiarity is known to automate effortful processes, so if attention is needed to identify the object, identification should become easier with familiar targets. If no such attention is needed, however, familiarity can only affect other processes (such as memory for the target set). Our results revealed that on unfamiliar trials, neural activity was higher in both attentional networks and visual identification networks. These results suggest that familiarity in MIT automates attentional identification processes, implying that attentional identification is needed in MIT. This in turn implies that MIT is essentially a two-stage process, since after attending the location, the identity does not come for free.
NASA Astrophysics Data System (ADS)
Iwasaki, Ryosuke; Takagi, Ryo; Tomiyasu, Kentaro; Yoshizawa, Shin; Umemura, Shin-ichiro
2017-07-01
Accurate targeting of the ultrasound beam and advance prediction of thermal lesion formation are required for safe and reproducible monitoring of high-intensity focused ultrasound (HIFU) treatment. To visualize the HIFU focal zone, we utilized a method based on acoustic radiation force impulse (ARFI) imaging. After displacements were induced inside tissues with pulsed HIFU (the push pulse exposure), the distribution of axial displacements started expanding and moving. To acquire RF data immediately after and during the HIFU push pulse exposure and thereby improve prediction accuracy, we applied methods using extrapolation estimation and HIFU noise elimination. The distributions extrapolated back in the time domain from the end of the push pulse exposure are in good agreement with tissue coagulation at the center. The results suggest that the proposed focal zone visualization, employing pulsed HIFU together with the high-speed ARFI imaging method, is useful for predicting thermal coagulation in advance.
Control of humanoid robot via motion-onset visual evoked potentials
Li, Wei; Li, Mengfan; Zhao, Jing
2015-01-01
This paper investigates controlling humanoid robot behavior via motion-onset specific N200 potentials. In this study, N200 potentials are induced by moving a blue bar across robot images that intuitively represent the robot behaviors to be controlled by mind. We present the individual impact of each subject on N200 potentials and discuss how to deal with individuality to obtain high accuracy. The study documents an off-line average accuracy of 93% for hitting targets across five subjects, so we use this major component of the motion-onset visual evoked potential (mVEP) to code people's mental activities and to perform two types of on-line operation tasks: navigating a humanoid robot in an office environment with an obstacle and picking up an object. We discuss the factors that affect the on-line control success rate and the total time for completing an on-line operation task. PMID:25620918
Visual context modulates potentiation of grasp types during semantic object categorization.
Kalénine, Solène; Shapiro, Allison D; Flumini, Andrea; Borghi, Anna M; Buxbaum, Laurel J
2014-06-01
Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions.
Local and Global Correlations between Neurons in the Middle Temporal Area of Primate Visual Cortex.
Solomon, Selina S; Chen, Spencer C; Morley, John W; Solomon, Samuel G
2015-09-01
In humans and other primates, the analysis of visual motion includes populations of neurons in the middle-temporal (MT) area of visual cortex. Motion analysis will be constrained by the structure of neural correlations in these populations. Here, we use multi-electrode arrays to measure correlations in anesthetized marmoset, a New World monkey where area MT lies exposed on the cortical surface. We measured correlations in the spike count between pairs of neurons and within populations of neurons, for moving dot fields and moving gratings. Correlations were weaker in area MT than in area V1. The magnitude of correlations in area MT diminished with distance between receptive fields, and difference in preferred direction. Correlations during presentation of moving gratings were stronger than those during presentation of moving dot fields, extended further across cortex, and were less dependent on the functional properties of neurons. Analysis of the timescales of correlation suggests the presence of two mechanisms. A local mechanism, associated with near-synchronous spiking activity, is strongest in nearby neurons with similar direction preference and is independent of visual stimulus. A global mechanism, operating over larger spatial scales and longer timescales, is independent of direction preference and is modulated by the type of visual stimulus presented.
Biases in rhythmic sensorimotor coordination: effects of modality and intentionality.
Debats, Nienke B; Ridderikhoff, Arne; de Boer, Betteco J; Peper, C Lieke E
2013-08-01
Sensorimotor biases were examined for intentional (tracking task) and unintentional (distractor task) rhythmic coordination. The tracking task involved unimanual tracking of either an oscillating visual signal or the passive movements of the contralateral hand (proprioceptive signal). In both conditions the required coordination patterns (isodirectional and mirror-symmetric) were defined relative to the body midline and the hands were not visible. For proprioceptive tracking the two patterns did not differ in stability, whereas for visual tracking the isodirectional pattern was performed more stably than the mirror-symmetric pattern. However, when visual feedback about the unimanual hand movements was provided during visual tracking, the isodirectional pattern ceased to be dominant. Together these results indicated that the stability of the coordination patterns did not depend on the modality of the target signal per se, but on the combination of sensory signals that needed to be processed (unimodal vs. cross-modal). The distractor task entailed rhythmic unimanual movements during which a rhythmic visual or proprioceptive distractor signal had to be ignored. The observed biases were similar to those for intentional coordination, suggesting that intentionality did not affect the underlying sensorimotor processes qualitatively. Intentional tracking was characterized by active sensory pursuit, through muscle activity in the passively moved arm (proprioceptive tracking task) and rhythmic eye movements (visual tracking task). Presumably this pursuit afforded predictive information serving the coordination process.
Sensory factors limiting horizontal and vertical visual span for letter recognition
Yu, Deyue; Legge, Gordon E.; Wagoner, Gunther; Chung, Susana T. L.
2014-01-01
Reading speed for English text is slower for text oriented vertically than horizontally. Yu, Park, Gerold, and Legge (2010) showed that slower reading of vertical text is associated with a smaller visual span (the number of letters recognized with high accuracy without moving the eyes). Three possible sensory determinants of the size of the visual span are: resolution (decreasing acuity at letter positions farther from the midline), mislocations (uncertainty about the relative position of letters in strings), and crowding (interference from flanking letters in recognizing the target letter). In the present study, we asked which of these factors is most important in determining the size of the visual span, and likely in turn in determining the horizontal/vertical difference in reading when letter size is above the critical print size for reading. We used a decomposition analysis to represent constraints due to resolution, mislocations, and crowding as losses in information transmitted (in bits) about letter recognition. Across vertical and horizontal conditions, crowding accounted for 75% of the loss in information, mislocations accounted for 19% of the loss, and declining acuity away from fixation accounted for only 6%. We conclude that crowding is the major factor limiting the size of the visual span, and that the horizontal/vertical difference in the size of the visual span is associated with stronger crowding along the vertical midline. PMID:25187253
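The "information transmitted" quantity underlying this decomposition is the mutual information, in bits, between presented and reported letters. A minimal sketch computed from a confusion matrix (the study's full decomposition procedure is more involved):

```python
import numpy as np

def transmitted_bits(conf):
    """Mutual information (bits) between stimulus and response, from a
    confusion matrix conf[i, j] = count of stimulus i -> response j.
    Comparing this value across conditions gives the kind of bit-loss
    accounting (resolution, mislocations, crowding) the study uses."""
    p = conf / conf.sum()
    ps = p.sum(axis=1, keepdims=True)   # stimulus marginal
    pr = p.sum(axis=0, keepdims=True)   # response marginal
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# perfect 26-letter identification transmits log2(26) ~ 4.7 bits
print(transmitted_bits(np.eye(26)))
```

Crowding, mislocation, and acuity losses can then be read off as the successive drops in this quantity between progressively degraded conditions.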
Visualization of pass-by noise by means of moving frame acoustic holography.
Park, S H; Kim, Y H
2001-11-01
The noise generated during the pass-by test (ISO 362) was visualized. Moving frame acoustic holography was improved to visualize the pass-by noise and predict its level. The proposed method allowed us to visualize tire and engine noise generated during the pass-by test under the assumption that the noise is quasistationary; this holds first because the speed change during the period of interest is negligible, and second because the frequency change of the noise is also negligible. The proposed method was verified by a controlled loudspeaker experiment. Effects of running condition on the radiated noise, e.g., accelerating according to ISO 362, cruising at constant speed, and coasting down, were also visualized. The visualized results show where the tire noise is generated and how it propagates.
Viewer-centered and body-centered frames of reference in direct visuomotor transformations.
Carrozzo, M; McIntyre, J; Zago, M; Lacquaniti, F
1999-11-01
It has been hypothesized that the end-point position of reaching may be specified in an egocentric frame of reference. In most previous studies, however, reaching was toward a memorized target, rather than an actual target. Thus, the role played by sensorimotor transformation could not be disassociated from the role played by storage in short-term memory. In the present study the direct process of sensorimotor transformation was investigated in reaching toward continuously visible targets that need not be stored in memory. A virtual reality system was used to present visual targets in different three-dimensional (3D) locations in two different tasks, one with visual feedback of the hand and arm position (Seen Hand) and the other without such feedback (Unseen Hand). In the Seen Hand task, the axes of maximum variability and of maximum contraction converge toward the mid-point between the eyes. In the Unseen Hand task only the maximum contraction correlates with the sight-line and the axes of maximum variability are not viewer-centered but rotate anti-clockwise around the body and the effector arm during the move from the right to the left workspace. The bulk of findings from these and previous experiments support the hypothesis of a two-stage process, with a gradual transformation from viewer-centered to body-centered and arm-centered coordinates. Retinal, extra-retinal and arm-related signals appear to be progressively combined in superior and inferior parietal areas, giving rise to egocentric representations of the end-point position of reaching.
Binocular coordination in response to stereoscopic stimuli
NASA Astrophysics Data System (ADS)
Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.
2009-02-01
Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.
Azizi, Elham; Abel, Larry A; Stainer, Matthew J
2017-02-01
Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only skilled participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.
Helland, Magne; Horgen, Gunnar; Kvikstad, Tor Martin; Garthus, Tore; Aarås, Arne
2011-11-01
This study investigated the effect of moving from small offices to a landscape environment for 19 Visual Display Unit (VDU) operators at Alcatel Denmark AS. The operators reported significantly improved lighting conditions and glare situation. Visual discomfort was also significantly reduced on a Visual Analogue Scale (VAS). There was no significant correlation between lighting condition and visual discomfort in either the small offices or the office landscape. However, visual discomfort correlated significantly with glare in the small offices, i.e., more glare was related to more visual discomfort. This correlation disappeared after the lighting system in the office landscape had been improved. There was also a significant correlation between glare and itching of the eyes, as well as blurred vision, in the small offices, i.e., more glare, more visual symptoms. Experience of pain was found to reduce the subjective assessment of work capacity during VDU tasks. There was a significant correlation between visual discomfort and reduced work capacity in both the small offices and the office landscape. When moving from the small offices to the office landscape, there was a significant reduction in headache as well as back pain. No significant changes in pain intensity in the neck, shoulder, forearm, and wrist/hand were observed. Pain levels in different body areas were significantly correlated with subjective assessments of reduced work capacity in the small offices and in the office landscape. With careful design and construction of an office landscape with regard to lighting and visual conditions, transfer from small offices may be acceptable from a visual-ergonomic point of view.
Realism and Effectiveness of Robotic Moving Targets
2017-04-01
scenario or be manually controlled. The targets can communicate with other nearby targets, which means they can move independently, as a group, or... present a realistic three-dimensional human-sized target that can freely move with semi-autonomous control. The U.S. Army Research Institute for... Procedure: Performance and survey data were collected during multiple training exercises from Soldiers who engaged the RHTTs. Different groups
Human image tracking technique applied to remote collaborative environments
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Suzuki, Gen
1993-10-01
To support various kinds of collaboration over long distances using visual telecommunication, it is necessary to transmit visual information related to the participants and the topical materials. When people collaborate in the same workspace, they use visual cues such as facial expressions and eye movement. The realization of coexistence in a collaborative workspace requires the support of these visual cues. Therefore, it is important that the facial images be large enough to be useful. During collaborations, especially dynamic collaborative activities such as equipment operation or lectures, the participants often move within the workspace. When people move frequently or over a wide area, the necessity for automatic human tracking increases. Using the movement area of the person and the resolution of the extracted area, we have developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results using a real-time tracking system show that the extracted area follows the movement of the human head well.
Space moving target detection using time domain feature
NASA Astrophysics Data System (ADS)
Wang, Min; Chen, Jin-yong; Gao, Feng; Zhao, Jin-yu
2018-01-01
Traditional space target detection methods mainly use the spatial characteristics of the star map to detect targets and cannot make full use of time-domain information. This paper presents a new space moving target detection method based on time-domain features. We first construct the time-spectral data of the star map, then analyze the time-domain features of the main objects in star maps (targets, stars, and background), and finally detect moving targets using the single-pulse feature of the time-domain signal. Experimental results on real star-map data show that the proposed method can effectively detect the trajectory of moving targets in a star-map sequence, reaching a detection probability of 99% at a false alarm rate of about 8×10⁻⁵, which outperforms the compared algorithms.
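The single-pulse idea lends itself to a compact illustration: a star stays bright in a pixel's time series (and is absorbed into the baseline), noise rarely exceeds a robust threshold, and a crossing target lights a pixel only once and briefly. A minimal NumPy sketch of that logic, assuming registered frames stacked along time; the threshold and pulse-width parameters are illustrative, not the paper's values:

```python
import numpy as np

def detect_single_pulse_pixels(stack, k=5.0, max_width=3):
    """stack: (T, H, W) array of registered star-map frames.
    Flags pixels whose time series contains one short above-baseline
    pulse -- the time-domain signature of a crossing moving target."""
    med = np.median(stack, axis=0)                    # per-pixel baseline
    mad = np.median(np.abs(stack - med), axis=0) + 1e-6
    hot = (stack - med) > k * 1.4826 * mad            # robust k-sigma test
    n_hot = hot.sum(axis=0)                           # samples above baseline
    # stars stay bright and sink into the median baseline; noise rarely
    # crosses the threshold; a target crosses a given pixel only briefly
    return (n_hot >= 1) & (n_hot <= max_width)
```

Linking the flagged pixels frame to frame would then recover the target trajectory across the sequence.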
A new standard of visual data representation for imaging mass spectrometry.
O'Rourke, Matthew B; Padula, Matthew P
2017-03-01
MALDI imaging MS (IMS) is principally used for cancer diagnostics. In our own experience with publishing IMS data, we have been requested to modify our protocols with respect to the areas of the tissue that are imaged in order to comply with the wider literature. In light of this, we have determined that current methodologies lack effective controls and can potentially introduce bias by only imaging specific areas of the targeted tissue. EXPERIMENTAL DESIGN: A previously imaged sample was selected and then cropped in different ways to show the potential effect of only imaging targeted areas. By using a model sample, we were able to show how selective imaging of samples can lead to misinterpretation of tissue features, and how changing the areas that are acquired, according to our new standard, introduces an effective internal control. Current IMS sampling convention relies on the assumption that sample preparation has been performed correctly. This prevents users from checking whether molecules have moved beyond the borders of the tissue due to delocalization; consequently, products of improper sample preparation could be interpreted as biological features that are of critical importance when encountered in a visual diagnostic. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Augmented Endoscopic Images Overlaying Shape Changes in Bone Cutting Procedures.
Nakao, Megumi; Endo, Shota; Nakao, Shinichi; Yoshida, Munehito; Matsuda, Tetsuya
2016-01-01
In microendoscopic discectomy for spinal disorders, bone cutting procedures are performed in tight spaces while observing a small portion of the target structures. Although optical tracking systems are able to measure the tip of the surgical tool during surgery, the poor shape information available during surgery makes accurate cutting difficult, even if preoperative computed tomography and magnetic resonance images are used for reference. Shape estimation and visualization of the target structures are essential for accurate cutting. However, time-varying shape changes during cutting procedures are still challenging issues for intraoperative navigation. This paper introduces a concept of endoscopic image augmentation that overlays shape changes to support bone cutting procedures. This framework records the history of measured drill tip locations as a volume label and visualizes the regions remaining to be cut, overlaid on the endoscopic image in real time. A cutting experiment was performed with volunteers, and the feasibility of this concept was examined using a clinical navigation system. The efficacy of the cutting aid was evaluated with respect to shape similarity, the total distance moved by the cutting tool, and the required cutting time. The results of the experiments showed that cutting performance was significantly improved by the proposed framework.
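The volume-label bookkeeping described above can be pictured as a voxel mask that the tracked drill tip carves out over time. A minimal sketch, assuming a regular voxel grid and a spherical tip; the grid resolution, field size, and tip radius are illustrative values, not the system's:

```python
import numpy as np

# Voxel grid over the surgical field; True = bone remaining to be cut.
VOXEL_MM = 0.5
remaining = np.ones((128, 128, 128), dtype=bool)

def record_tip(remaining, tip_mm, radius_mm=1.5):
    """Mark voxels within the drill-tip radius as removed, keeping a
    running history of everything the tracked tip has passed through.
    (Coordinates are recomputed per call for clarity, not speed.)"""
    coords = np.indices(remaining.shape).transpose(1, 2, 3, 0) * VOXEL_MM
    dist = np.linalg.norm(coords - np.asarray(tip_mm), axis=-1)
    remaining[dist <= radius_mm] = False
    return remaining

# Each tracked tip sample updates the label volume; a renderer can then
# overlay `remaining` (the region still to be cut) on the endoscopic image.
remaining = record_tip(remaining, tip_mm=(32.0, 30.5, 31.0))
```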
Ben-Simon, Avi; Ben-Shahar, Ohad; Vasserman, Genadiy; Segev, Ronen
2012-12-15
Interception of fast-moving targets is a demanding task many animals solve. To handle it successfully, mammals employ both saccadic and smooth pursuit eye movements in order to confine the target to their area centralis. But how can non-mammalian vertebrates, which lack smooth pursuit, intercept moving targets? We studied this question by exploring eye movement strategies employed by archer fish, an animal that possesses an area centralis, lacks smooth pursuit eye movements, but can intercept moving targets by shooting jets of water at them. We tracked the gaze direction of fish during interception of moving targets and found that they employ saccadic eye movements based on prediction of target position when it is hit. The fish fixates on the target's initial position for ∼0.2 s from the onset of its motion, a time period used to predict whether a shot can be made before the projection of the target exits the area centralis. If the prediction indicates otherwise, the fish performs a saccade that overshoots the center of gaze beyond the present target projection on the retina, such that after the saccade the moving target remains inside the area centralis long enough to prepare and perform a shot. These results add to the growing body of knowledge on biological target tracking and may shed light on the mechanism underlying this behavior in other animals with no neural system for the generation of smooth pursuit eye movements.
Cross-Modal Attention Effects in the Vestibular Cortex during Attentive Tracking of Moving Objects.
Frank, Sebastian M; Sun, Liwei; Forster, Lisa; Tse, Peter U; Greenlee, Mark W
2016-12-14
The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking. In this study we investigate cross-modal attention effects in the human vestibular cortex. We applied the visual multiple-object tracking task because it is known to evoke attentional load effects on neural activity in visual motion-processing and attention-processing areas. Here we demonstrate a load-dependent effect of attention on the activation in the vestibular cortex, despite constant visual motion stimulation. We find that activity in the parietoinsular vestibular cortex is more strongly suppressed the greater the attentional load on the visual tracking task. These findings suggest cross-modal attentional modulation in the vestibular cortex. Copyright © 2016 the authors.
A Competition Model of Exogenous Orienting in 3.5-Month-Old Infants.
ERIC Educational Resources Information Center
Dannemiller, James L.
1998-01-01
Four experiments examined exogenous orienting in 3.5-month-olds. Found that sensitivity to a small moving bar was lower when most of the red bars were in the visual field contralateral to this probe. The distribution of color within the visual field biased attention, making it either more or less likely that the infant detected a moving stimulus.…
Nicotinic Receptor Gene CHRNA4 Interacts with Processing Load in Attention
Espeseth, Thomas; Sneve, Markus Handal; Rootwelt, Helge; Laeng, Bruno
2010-01-01
Background: Pharmacological studies suggest that cholinergic neurotransmission mediates increases in attentional effort in response to high processing load during attention demanding tasks [1]. Methodology/Principal Findings: In the present study we tested whether individual variation in CHRNA4, a gene coding for a subcomponent in α4β2 nicotinic receptors in the human brain, interacted with processing load in multiple-object tracking (MOT) and visual search (VS). We hypothesized that the impact of genotype would increase with greater processing load in the MOT task. Similarly, we predicted that genotype would influence performance under high but not low load in the VS task. Two hundred and two healthy persons (age range = 39–77, Mean = 57.5, SD = 9.4) performed the MOT task in which twelve identical circular objects moved about the display in an independent and unpredictable manner. Two to six objects were designated as targets and the remaining objects were distracters. The same observers also performed a visual search for a target letter (i.e. X or Z) presented together with five non-targets while ignoring centrally presented distracters (i.e. X, Z, or L). Targets differed from non-targets by a unique feature in the low load condition, whereas they shared features in the high load condition. CHRNA4 genotype interacted with processing load in both tasks. Homozygotes for the T allele (N = 62) had better tracking capacity in the MOT task and identified targets faster in the high load trials of the VS task. Conclusion: The results support the hypothesis that the cholinergic system modulates attentional effort, and that common genetic variation can be used to study the molecular biology of cognition. PMID:21203548
Anticipatory Smooth Eye Movements in Autism Spectrum Disorder
Aitkin, Cordelia D.; Santos, Elio M.; Kowler, Eileen
2013-01-01
Smooth pursuit eye movements are important for vision because they maintain the line of sight on targets that move smoothly within the visual field. Smooth pursuit is driven by neural representations of motion, including a surprisingly strong influence of high-level signals representing expected motion. We studied anticipatory smooth eye movements (defined as smooth eye movements in the direction of expected future motion) produced by salient visual cues in a group of high-functioning observers with Autism Spectrum Disorder (ASD), a condition that has been associated with difficulties in either generating predictions, or translating predictions into effective motor commands. Eye movements were recorded while participants pursued the motion of a disc that moved within an outline drawing of an inverted Y-shaped tube. The cue to the motion path was a visual barrier that blocked the untraveled branch (right or left) of the tube. ASD participants showed strong anticipatory smooth eye movements whose velocity was the same as that of a group of neurotypical participants. Anticipatory smooth eye movements appeared on the very first cued trial, indicating that trial-by-trial learning was not responsible for the responses. These results are significant because they show that anticipatory capacities are intact in high-functioning ASD in cases where the cue to the motion path is highly salient and unambiguous. Once the ability to generate anticipatory pursuit is demonstrated, the study of the anticipatory responses with a variety of types of cues provides a window into the perceptual or cognitive processes that underlie the interpretation of events in natural environments or social situations. PMID:24376667
2009-12-01
facilitating reliable stereo matching, occlusion handling, accurate 3D reconstruction and robust moving target detection. We use the fact that all the ... a moving platform, we will have to naturally and effectively handle obvious motion parallax and object occlusions in order to be able to detect ... facilitating reliable stereo matching, occlusion handling, accurate 3D reconstruction and robust moving target detection. Based on the above two
Semantics of directly manipulating spatializations.
Hu, Xinran; Bradel, Lauren; Maiti, Dipayan; House, Leanna; North, Chris; Leman, Scotland
2013-12-01
When high-dimensional data is visualized in a 2D plane by using parametric projection algorithms, users may wish to manipulate the layout of the data points to better reflect their domain knowledge or to explore alternative structures. However, few users are well-versed in the algorithms behind the visualizations, making parameter tweaking more of a guessing game than a series of decisive interactions. Translating user interactions into algorithmic input is a key component of Visual to Parametric Interaction (V2PI) [13]. Instead of adjusting parameters, users directly move data points on the screen, which then updates the underlying statistical model. However, we have found that some data points that are not moved by the user are just as important in the interactions as the data points that are moved. Users frequently move some data points with respect to some other 'unmoved' data points that they consider as spatially contextual. However, in current V2PI interactions, these points are not explicitly identified when directly manipulating the moved points. We design a richer set of interactions that makes this context more explicit, and a new algorithm and sophisticated weighting scheme that incorporates the importance of these unmoved data points into V2PI.
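The translation from dragged points to model parameters can be sketched as a distance-metric update in which pairs drawn from the moved points and the user's spatially contextual unmoved points steer per-attribute weights. This is a minimal gradient sketch under a linear weighted-distance assumption, not the paper's actual V2PI algorithm or weighting scheme:

```python
import numpy as np

def update_weights(X, layout, moved, context, w, lr=0.01, iters=200):
    """X: (n, d) high-dim data; layout: (n, 2) user-adjusted 2D positions;
    moved/context: index arrays of dragged and contextual points.
    Nudges per-attribute weights w so that weighted high-dim distances
    match the 2D distances among moved + context pairs."""
    pts = np.concatenate([moved, context])
    for _ in range(iters):
        grad = np.zeros_like(w)
        for i in pts:
            for j in pts:
                if i >= j:
                    continue
                diff2 = (X[i] - X[j]) ** 2           # per-attribute squared gaps
                dw = np.sqrt(diff2 @ w + 1e-12)      # weighted high-dim distance
                d2 = np.linalg.norm(layout[i] - layout[j])
                grad += (dw - d2) / dw * diff2       # gradient up to a constant
        w = np.clip(w - lr * grad, 1e-6, None)
    return w / w.sum()                               # keep weights normalized
```

Re-projecting all points under the updated weights then lets the untouched data respond to the interaction.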
Marine Targets Classification in PolInSAR Data
NASA Astrophysics Data System (ADS)
Chen, Peng; Yang, Jingsong; Ren, Lin
2014-11-01
In this paper, marine stationary targets and moving targets are studied using PolInSAR data from Radarsat-2. A new method of stationary target detection is proposed. The method computes the correlation coefficient image of the InSAR data and uses the histogram of that image. A Constant False Alarm Rate (CFAR) algorithm and a Probabilistic Neural Network model are then applied to detect stationary targets. To find moving targets, azimuth ambiguity is shown to be an important feature: the length of the azimuth ambiguity is used to obtain the target's moving direction and speed. Furthermore, target classification is studied by rebuilding the surface elevation of marine targets.
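The imported CFAR stage is a standard detector. For concreteness, here is a minimal one-dimensional cell-averaging CFAR sketch over a power profile; the window sizes and false-alarm rate are illustrative, and the paper's exact CFAR variant is not specified:

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-4):
    """Cell-averaging CFAR over a 1D power profile: each cell is compared
    with a threshold scaled from the mean of the training cells on both
    sides, excluding guard cells adjacent to the cell under test."""
    n = len(power)
    n_train = 2 * train
    # scale factor for exponentially distributed (single-look) clutter
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    hits = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        lead = power[i - train - guard : i - guard]
        lag = power[i + guard + 1 : i + guard + train + 1]
        noise = (lead.sum() + lag.sum()) / n_train
        hits[i] = power[i] > alpha * noise
    return hits
```

The same windowed comparison extends directly to 2D (range × azimuth) by replacing the 1D training cells with an annulus around the cell under test.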
Development and learning of saccadic eye movements in 7- to 42-month-old children.
Alahyane, Nadia; Lemoine-Lardennois, Christelle; Tailhefer, Coline; Collins, Thérèse; Fagard, Jacqueline; Doré-Mazars, Karine
2016-01-01
From birth, infants move their eyes to explore their environment, interact with it, and progressively develop a multitude of motor and cognitive abilities. The characteristics and development of oculomotor control in early childhood remain poorly understood today. Here, we examined reaction time and amplitude of saccadic eye movements in 93 7- to 42-month-old children while they oriented toward visual animated cartoon characters appearing at unpredictable locations on a computer screen over 140 trials. Results revealed that saccade performance is immature in children compared to a group of adults: Saccade reaction times were longer, and saccade amplitude relative to target location (10° eccentricity) was shorter. Results also indicated that performance is flexible in children. Although saccade reaction time decreased as age increased, suggesting developmental improvements in saccade control, saccade amplitude gradually improved over trials. Moreover, similar to adults, children were able to modify saccade amplitude based on the visual error made in the previous trial. This second set of results suggests that short visual experience and/or rapid sensorimotor learning are functional in children and can also affect saccade performance.
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
The Neural Correlates of Inhibiting Pursuit to Smoothly Moving Targets
ERIC Educational Resources Information Center
Burke, Melanie Rose; Barnes, Graham R.
2011-01-01
A previous study has shown that actively pursuing a moving target provides a predictive motor advantage when compared with passive observation of the moving target while keeping the eyes still [Burke, M. R., & Barnes, G. R. Anticipatory eye movements evoked after active following versus passive observation of a predictable motion stimulus. "Brain…
Wang, Zhirui; Xu, Jia; Huang, Zuzhen; Zhang, Xudong; Xia, Xiang-Gen; Long, Teng; Bao, Qian
2016-03-16
To detect and estimate slowly moving ground targets in airborne single-channel synthetic aperture radar (SAR), a road-aided ground moving target indication (GMTI) algorithm is proposed in this paper. First, the road area is extracted from a focused SAR image based on radar vision. Second, after stationary clutter suppression in the range-Doppler domain, a moving target is detected and located in the image domain via the watershed method. The target's position on the road, as well as its radial velocity, can be determined according to the target's offset distance and traffic rules. Furthermore, the target's azimuth velocity is estimated based on the road slope obtained via polynomial fitting. Compared with traditional algorithms, the proposed method can effectively cope with slowly moving targets partly submerged in the stationary clutter spectrum. In addition, the proposed method can be easily extended to a multi-channel system to further improve the performance of clutter suppression and motion estimation. Finally, the results of numerical experiments are provided to demonstrate the effectiveness of the proposed algorithm.
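The step from offset distance to radial velocity rests on the standard SAR/GMTI relation that a mover's along-track image position is displaced in proportion to its line-of-sight velocity. A sketch of the relation, using the usual symbols rather than this paper's notation (the sign depends on the imaging geometry):

```latex
% Azimuth displacement of a moving target in a focused SAR image.
% v_r : target radial (line-of-sight) velocity
% V_p : platform velocity, R : slant range, \Delta x : azimuth offset
\[
  \Delta x \;\approx\; -\,\frac{v_r}{V_p}\, R
  \qquad\Longrightarrow\qquad
  v_r \;\approx\; -\,\frac{V_p}{R}\,\Delta x .
\]
```

Measuring how far the detected target sits from the extracted road thus gives the radial velocity directly, which is why the road mask doubles as a velocity reference.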
Human discrimination of visual direction of motion with and without smooth pursuit eye movements
NASA Technical Reports Server (NTRS)
Krukowski, Anton E.; Pirog, Kathleen A.; Beutter, Brent R.; Brooks, Kevin R.; Stone, Leland S.
2003-01-01
It has long been known that ocular pursuit of a moving target has a major influence on its perceived speed (Aubert, 1886; Fleischl, 1882). However, little is known about the effect of smooth pursuit on the perception of target direction. Here we compare the precision of human visual-direction judgments under two oculomotor conditions (pursuit vs. fixation). We also examine the impact of stimulus duration (200 ms vs. 800 ms) and absolute direction (cardinal vs. oblique). Our main finding is that direction discrimination thresholds in the fixation and pursuit conditions are indistinguishable. Furthermore, the two oculomotor conditions showed oblique effects of similar magnitudes. These data suggest that the neural direction signals supporting perception are the same with or without pursuit, despite remarkably different retinal stimulation. During fixation, the stimulus information is restricted to large, purely peripheral retinal motion, while during steady-state pursuit, the stimulus information consists of small, unreliable foveal retinal motion and a large efference-copy signal. A parsimonious explanation of our findings is that the signal limiting the precision of direction judgments is a neural estimate of target motion in head-centered (or world-centered) coordinates (i.e., a combined retinal and eye motion signal) as found in the medial superior temporal area (MST), and not simply an estimate of retinal motion as found in the middle temporal area (MT).
Infantile nystagmus syndrome is associated with inefficiency of goal-directed hand movements.
Liebrand-Schurink, Joyce; Cox, Ralf F A; van Rens, Ger H M B; Cillessen, Antonius H N; Meulenbroek, Ruud G J; Boonstra, F Nienke
2014-12-23
The effect of infantile nystagmus syndrome (INS) on the efficiency of goal-directed hand movements was examined. We recruited 37 children with INS and 65 control subjects with normal vision, aged 4 to 8 years. Participants performed horizontally-oriented, goal-directed cylinder displacements as if they displaced a low-vision aid. The first 10 movements of 20 back-and-forth displacements in a trial were performed between two visually presented target areas, and the second 10 between remembered target locations (not visible). Motor performance was examined in terms of movement time, endpoint accuracy, and a harmonicity index reflecting energetic efficiency. Compared to the control group, the children with INS performed the cylinder displacements more slowly (using more time), less accurately (specifically in small-amplitude movements), and with less harmonic acceleration profiles. Their poor visual acuity proved to correlate with slower and less accurate movements, but did not correlate with harmonicity. When moving between remembered target locations, the performance of children with INS was less accurate than that of the children with normal vision. In both groups, movement speed and harmonicity increased with age to a similar extent. Collectively, the findings suggest that, in addition to the visuospatial homing-in problems associated with the syndrome, INS is associated with inefficiency of goal-directed hand movements. (http://www.trialregister.nl number, NTR2380.) Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
Vision-based control for flight relative to dynamic environments
NASA Astrophysics Data System (ADS)
Causey, Ryan Scott
The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capabilities through more advanced systems that are useful for missions that require autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent in vision-based state estimation of moving objects for a monocular camera configuration. The process consists of several stages involving image processing, such as detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. State estimation of moving targets is derived using a homography approach. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform, for which the system dynamics are derived to relate vehicle states to image plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation. Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature point computations and the effects of uncertainty. The third simulation demonstrates open-loop estimation of a target ground vehicle in pursuit, whereas the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.
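The homography-based estimation stage has a standard form that can be illustrated with OpenCV. A minimal sketch, assuming known camera intrinsics K and matched feature points on the (planar) target between two frames; this is the generic plane-induced decomposition, not the dissertation's specific derivation:

```python
import numpy as np
import cv2

def relative_target_motion(pts_prev, pts_curr, K):
    """pts_prev, pts_curr: (n, 2) matched pixel coordinates of target
    features in two frames; K: 3x3 camera intrinsic matrix.
    Returns candidate rotations, translations, and plane normals."""
    # Robustly fit the plane-induced homography between the two views.
    H, inliers = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)
    # Decompose into candidate (R, t, n) triples relative to the plane.
    _, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # Up to four physically ambiguous solutions come back; extra
    # constraints (e.g., points lying in front of the camera) pick one.
    return Rs, ts, normals
```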
Radar study of seabirds and bats on windward Hawai'i
Reynolds, M.H.; Cooper, B.A.; Day, Robert H.
1997-01-01
Modified marine surveillance radar was used to study the presence/ absence, abundance, and flight activity of four nocturnal species: Hawaiian darkrumped petrel [Pterodroma phaeopygia sandwichensis (Ridgeway)], Newell's shearwater [Puffinus auricularis newelli (Henshaw)], Band-rumped storm-petrel [Oceanodroma castro (Harcourt)], and Hawaiian hoary bat (Lasiurus cinereus semotus Sanborn & Crespo). Hawaiian seabirds were recorded flying to or from inland nesting colonies at seven sampling sites on the windward side of the island of Hawai'i. In total, 527 radar "targets" identified as petrel or shearwater-type on the basis of speed, flight behavior, and radar signal strength were observed during eight nights of sampling. Mean movement rates (targets per minute) for seabird targets were 0.1, 0.1, 0.3, 3.8, 0.9, and 2.2 for surveys at Kahakai, Kapoho, Mauna Loa, Pali Uli, Pu'ulena Crater, and Waipi'o Valley, respectively. Two percent of the petrel and shearwater-type targets detected on radar were confirmed visually or aurally. Flight paths for seabird targets showed strong directionality at six sampling sites. Mean flight speed for seabird targets (n = 524) was 61 km/hr for all survey areas. Peak detection times for seabirds were from 0430 to 0530 hours for birds flying to sea and 2000 to 2150 hours for birds returning to colonies. Most inland, low-elevation sampling sites could not be surveyed reliably for seabirds during the evening activity periods because of radar interference from insects and rapidly flying bats. At those inland sites predawn sampling was the best time for using radar to detect Hawaiian seabirds moving seaward. Hawaiian hoary bats were recorded at eight sampling sites. Eighty-six to 89 radar targets that exhibited erratic flight behavior were identified as "batlike" targets; 17% of these batlike radar targets were confirmed visually. Band-rumped storm-petrels were not identified during our surveys.
Collaborative real-time motion video analysis by human observer and image exploitation algorithms
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2015-05-01
Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.
Fukushima, Kikuro; Fukushima, Junko; Barnes, Graham R
2017-05-01
Parkinson's disease (PD) is a progressive neurodegenerative disorder of the basal ganglia. Most PD patients suffer from somatomotor and oculomotor disorders. The oculomotor system facilitates obtaining accurate information from the visual world. If a target moves slowly in the fronto-parallel plane, tracking eye movements occur that consist primarily of smooth-pursuit interspersed with corrective saccades. Efficient smooth-pursuit requires appropriate target selection and predictive compensation for inherent processing delays. Although pursuit impairment, e.g. as latency prolongation or low gain (eye velocity/target velocity), is well known in PD, normal aging alone results in such changes. In this article, we first briefly review some basic features of smooth-pursuit, then review recent results showing the specific nature of impaired pursuit in PD using a cue-dependent memory-based smooth-pursuit task. This task was initially used for monkeys to separate two major components of prediction (image-motion direction memory and movement preparation), and neural correlates were examined in major pursuit pathways. Most PD patients possessed normal cue-information memory but extra-retinal mechanisms for pursuit preparation and execution were dysfunctional. A minority of PD patients had abnormal cue-information memory or difficulty in understanding the task. Some PD patients with normal cue-information memory changed strategy to initiate smooth tracking. Strategy changes were also observed to compensate for impaired pursuit during whole body rotation while the target moved with the head. We discuss PD pathophysiology by comparing eye movement task results with neuropsychological and motor symptom evaluations of individual patients and further with monkey results, and suggest possible neural circuits for these functions/dysfunctions.
Data, Analysis, and Visualization | Computational Science | NREL
At NREL, data management, data analysis, and scientific visualization capabilities support computational science, including approaches to image analysis and computer vision, as well as big-data systems, software, and tools.
NASA Technical Reports Server (NTRS)
Berthoz, A.; Pavard, B.; Young, L. R.
1975-01-01
The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec, and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential although not independent role of vision in self motion perception.
Remapping of border ownership in the visual cortex.
O'Herron, Philip; von der Heydt, Rüdiger
2013-01-30
We see objects as having continuity although the retinal image changes frequently. How such continuity is achieved is hard to understand, because neurons in the visual cortex have small receptive fields that are fixed on the retina, which means that a different set of neurons is activated every time the eyes move. Neurons in areas V1 and V2 of the visual cortex signal the local features that are currently in their receptive fields and do not show "remapping" when the image moves. However, subsets of neurons in these areas also carry information about global aspects, such as figure-ground organization. Here we performed experiments to find out whether figure-ground organization is remapped. We recorded single neurons in macaque V1 and V2 in which figure-ground organization is represented by assignment of contours to regions (border ownership). We found previously that border-ownership signals persist when a figure edge is switched to an ambiguous edge by removing the context. We now used this paradigm to see whether border ownership transfers when the ambiguous edge is moved across the retina. In the new position, the edge activated a different set of neurons at a different location in cortex. We found that border ownership was transferred to the newly activated neurons. The transfer occurred whether the edge was moved by a saccade or by moving the visual display. Thus, although the contours are coded in retinal coordinates, their assignment to objects is maintained across movements of the retinal image.
Moving Stimuli Facilitate Synchronization But Not Temporal Perception
Silva, Susana; Castro, São Luís
2016-01-01
Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap. PMID:27909419
Words, shape, visual search and visual working memory in 3-year-old children.
Vales, Catarina; Smith, Linda B
2015-01-01
Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.
Humanoid Mobile Manipulation Using Controller Refinement
NASA Technical Reports Server (NTRS)
Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver
2006-01-01
An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
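The notion of a refining sequence can be stated compactly: each controller runs until its own convergence predicate holds, then hands the (now less variable) state to the next, finer controller. A minimal sketch of that control flow; the stage names and predicates are illustrative, not Robonaut's actual controller stack:

```python
from typing import Callable, Iterable, Tuple

# (control_step, converged) pairs, ordered coarse to fine.
Controller = Tuple[Callable[[dict], None], Callable[[dict], bool]]

def run_refining_sequence(stages: Iterable[Controller], state: dict) -> dict:
    """Run each controller until its convergence test passes, so every
    stage shrinks the variance the next, finer stage must handle."""
    for step, converged in stages:
        while not converged(state):
            step(state)   # e.g., drive base, visually servo arm, close grasp
    return state

# Usage sketch: run_refining_sequence(
#     [(navigate, near_target), (servo, aligned), (grasp, grasped)], state)
```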
A Bilateral Advantage for Storage in Visual Working Memory
ERIC Educational Resources Information Center
Umemoto, Akina; Drew, Trafton; Ester, Edward F.; Awh, Edward
2010-01-01
Various studies have demonstrated enhanced visual processing when information is presented across both visual hemifields rather than in a single hemifield (the "bilateral advantage"). For example, Alvarez and Cavanagh (2005) reported that observers were able to track twice as many moving visual stimuli when the tracked items were presented…
Visual Attention to Movement and Color in Children with Cortical Visual Impairment
ERIC Educational Resources Information Center
Cohen-Maitre, Stacey Ann; Haerich, Paul
2005-01-01
This study investigated the ability of color and motion to elicit and maintain visual attention in a sample of children with cortical visual impairment (CVI). It found that colorful and moving objects may be used to engage children with CVI, increase their motivation to use their residual vision, and promote visual learning.
Research on measurement method of optical camouflage effect of moving object
NASA Astrophysics Data System (ADS)
Wang, Juntang; Xu, Weidong; Qu, Yang; Cui, Guangzhen
2016-10-01
Camouflage effectiveness measurement is an important part of camouflage technology: it tests and measures the camouflage effect of a target and the performance of camouflage equipment according to tactical and technical requirements. Current camouflage effectiveness measurement in the optical band is mainly aimed at static targets and cannot objectively reflect the dynamic camouflage effect of a moving target. This paper combines dynamic object detection with camouflage effect measurement, taking the digital camouflage of a moving object as the research object. The adaptive background update algorithm of Surendra was improved, and a method of optical camouflage effect measurement using the Lab color space for moving-object detection is presented. The binary image of the moving object is extracted, and, over the image sequence, characteristic parameters such as dispersion, eccentricity, complexity, and moment invariants are used to construct a feature vector space. The Euclidean distance for the moving target with digital camouflage was calculated; the results show that the average Euclidean distance over 375 frames was 189.45, indicating that the dispersion, eccentricity, complexity, and moment invariants of the digitally camouflaged target differ greatly from those of a moving target without digital camouflage. The measurement results showed that the camouflage effect was good. Meanwhile, with the performance evaluation module, the correlation coefficient of the dynamic target image ranged from 0.0035 to 0.1275, with some fluctuation, reflecting the adaptability of target and background under dynamic conditions. As a next step, given existing infrared camouflage technology, we plan to extend this camouflage effect measurement technology for moving targets to the infrared band.
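The per-frame feature vector named above can be assembled with standard OpenCV calls. A minimal sketch, assuming a binary mask of the extracted moving object; the complexity and eccentricity formulas here are common choices standing in for the paper's exact definitions, and dispersion is omitted for brevity:

```python
import numpy as np
import cv2

def shape_features(mask):
    """mask: binary uint8 image of the extracted moving object.
    Returns a feature vector: complexity, eccentricity, Hu moments."""
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(cnts, key=cv2.contourArea)
    area = max(cv2.contourArea(c), 1e-6)
    perim = cv2.arcLength(c, True)
    complexity = perim ** 2 / (4.0 * np.pi * area)    # 1.0 for a circle
    (_, _), (w, h), _ = cv2.minAreaRect(c)
    ecc = max(w, h) / (min(w, h) + 1e-6)              # elongation proxy
    hu = cv2.HuMoments(cv2.moments(c)).flatten()      # moment invariants
    return np.hstack([complexity, ecc, hu])

# Euclidean distance between feature vectors of the camouflaged and
# uncamouflaged target quantifies how well the pattern breaks up the shape:
# dist = np.linalg.norm(shape_features(mask_a) - shape_features(mask_b))
```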
Control Program for an Optical-Calibration Robot
NASA Technical Reports Server (NTRS)
Johnston, Albert
2005-01-01
A computer program provides semiautomatic control of a moveable robot used to perform optical calibration of video-camera-based optoelectronic sensor systems that will be used to guide automated rendezvous maneuvers of spacecraft. The function of the robot is to move a target and hold it at specified positions. With the help of limit switches, the software first centers or finds the target. Then the target is moved to a starting position. Thereafter, with the help of an intuitive graphical user interface, an operator types in coordinates of specified positions, and the software responds by commanding the robot to move the target to the positions. The software has capabilities for correcting errors and for recording data from the guidance-sensor system being calibrated. The software can also command that the target be moved in a predetermined sequence of motions between specified positions and can be run in an advanced control mode in which, among other things, the target can be moved beyond the limits set by the limit switches.
Modulation of visually evoked movement responses in moving virtual environments.
Reed-Jones, Rebecca J; Vallis, Lori Ann
2009-01-01
Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided 'real-world' visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.
Both hand position and movement direction modulate visual attention
Festman, Yariv; Adam, Jos J.; Pratt, Jay; Fischer, Martin H.
2013-01-01
The current study explored effects of continuous hand motion on the allocation of visual attention. A concurrent paradigm was used to combine visually concealed continuous hand movements with an attentionally demanding letter discrimination task. The letter probe appeared contingent upon the moving right hand passing through one of six positions. Discrimination responses were then collected via a keyboard press with the static left hand. Both the right hand's position and its movement direction systematically contributed to participants' visual sensitivity. Discrimination performance increased substantially when the right hand was distant from, but moving toward the visual probe location (replicating the far-hand effect, Festman et al., 2013). However, this effect disappeared when the probe appeared close to the static left hand, supporting the view that static and dynamic features of both hands combine in modulating pragmatic maps of attention. PMID:24098288
Lee, Young-Sook; Chung, Wan-Young
2012-01-01
Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on visual sensors, using shape feature variation and 3-D trajectory, is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97%, with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities such as forward falls, backward falls, and sideways falls from normal activities. PMID:22368486
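Cast-shadow removal of this kind is commonly done by testing whether a foreground pixel darkens the background without changing its chromaticity. A minimal HSV-based sketch of that generic heuristic, assuming a background model is available; the thresholds are illustrative, hue wraparound is ignored for brevity, and this is a standard technique rather than necessarily the authors' algorithm:

```python
import numpy as np
import cv2

def shadow_mask(frame_bgr, bg_bgr, v_lo=0.5, v_hi=0.95, s_tol=40, h_tol=15):
    """Flag foreground pixels as cast shadow when they keep the background's
    hue and saturation but are darker -- the classic chromaticity test."""
    f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.int32)
    b = cv2.cvtColor(bg_bgr, cv2.COLOR_BGR2HSV).astype(np.int32)
    ratio = (f[..., 2] + 1) / (b[..., 2] + 1)         # brightness ratio
    return ((ratio > v_lo) & (ratio < v_hi) &
            (np.abs(f[..., 1] - b[..., 1]) < s_tol) &
            (np.abs(f[..., 0] - b[..., 0]) < h_tol))

# Pixels in shadow_mask(...) are dropped from the foreground before shape
# features and 3-D trajectories are computed, sharpening object boundaries.
```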
Vestibular stimulation interferes with the dynamics of an internal representation of gravity.
De Sá Teixeira, Nuno Alexandre; Hecht, Heiko; Diaz Artiles, Ana; Seyedmadani, Kimia; Sherwood, David P; Young, Laurence R
2017-11-01
The remembered vanishing location of a moving target has been found to be displaced downward in the direction of gravity (representational gravity) and more so with increasing retention intervals, suggesting that the visual spatial updating recruits an internal model of gravity. Despite being consistently linked with gravity, few inquiries have been made about the role of vestibular information in these trends. Previous experiments with static tilting of observers' bodies suggest that under conflicting cues between the idiotropic vector and vestibular signals, the dynamic drift in memory is reduced to a constant displacement along the body's main axis. The present experiment aims to replicate and extend these outcomes while keeping the observers' bodies unchanged in relation to physical gravity by varying the gravito-inertial acceleration using a short-radius centrifuge. Observers were shown, while accelerated to varying degrees, targets moving along several directions and were required to indicate the perceived vanishing location after a variable interval. Increases of the gravito-inertial force (up to 1.4G), orthogonal to the idiotropic vector, did not affect the direction of representational gravity, but significantly disrupted its time course. The role and functioning of an internal model of gravity for spatial perception and orientation are discussed in light of the results.
Effects of a Moving Distractor Object on Time-to-Contact Judgments
ERIC Educational Resources Information Center
Oberfeld, Daniel; Hecht, Heiko
2008-01-01
The effects of moving task-irrelevant objects on time-to-contact (TTC) judgments were examined in 5 experiments. Observers viewed a directly approaching target in the presence of a distractor object moving in parallel with the target. In Experiments 1 to 4, observers decided whether the target would have collided with them earlier or later than a…
Radar Imaging for Moving Targets
2009-06-01
Thesis by Teo Beng Koon William, June 2009. Thesis Advisor: Brett H. Borden; Second Reader: Donald L. Walters.
Study of target and non-target interplay in spatial attention task.
Sweeti; Joshi, Deepak; Panigrahi, B K; Anand, Sneh; Santhosh, Jayasree
2018-02-01
Selective visual attention is the ability to selectively pay attention to targets while inhibiting distractors. This paper studies the interplay of targets and non-targets in a spatial attention task in which the subject attends to a target object present in one visual hemifield and ignores a distractor present in the other visual hemifield. We perform averaged event-related potential (ERP) analysis and time-frequency analysis. The ERP analysis supports left-hemisphere superiority in the late potentials for targets presented in the right visual hemifield. The time-frequency analysis yields two parameters, event-related spectral perturbation (ERSP) and inter-trial coherence (ITC). These parameters show the same properties for targets present in either visual hemifield but differ when comparing activity corresponding to targets and non-targets. In this way, this study helps to visualise the difference between targets present in the left and right visual hemifields, and also between targets and non-targets present in the left and right visual hemifields. These results could be used to monitor subjects' performance in brain-computer interfaces (BCI) and neurorehabilitation.
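Both time-frequency measures have compact definitions: ERSP is the trial-averaged change in log spectral power, and ITC is the magnitude of the trial-averaged unit-normalized complex spectrum (1 = perfectly phase-locked across trials, 0 = random phase). A minimal NumPy/SciPy sketch, with illustrative window parameters and a whole-epoch baseline standing in for the usual pre-stimulus baseline:

```python
import numpy as np
from scipy.signal import stft

def ersp_itc(trials, fs, nperseg=128):
    """trials: (n_trials, n_samples) single-channel epochs.
    Returns ERSP (mean log power change vs. the epoch-wide mean) and
    ITC, both shaped (n_freqs, n_times)."""
    Z = np.stack([stft(x, fs=fs, nperseg=nperseg)[2] for x in trials])
    power = np.abs(Z) ** 2
    baseline = power.mean(axis=(0, 2), keepdims=True)   # per-frequency mean
    ersp = (10 * np.log10(power / baseline)).mean(axis=0)
    itc = np.abs((Z / (np.abs(Z) + 1e-12)).mean(axis=0))  # |mean unit phasor|
    return ersp, itc
```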
Functional specialization and generalization for grouping of stimuli based on colour and motion
Zeki, Semir; Stutters, Jonathan
2013-01-01
This study was undertaken to learn whether the principle of functional specialization that is evident at the level of the prestriate visual cortex extends to areas that are involved in grouping visual stimuli according to attribute, and specifically according to colour and motion. Subjects viewed, in an fMRI scanner, visual stimuli composed of moving dots, which could be either coloured or achromatic; in some stimuli the moving coloured dots were randomly distributed or moved in random directions; in others, some of the moving dots were grouped together according to colour or to direction of motion, with the number of groupings varying from 1 to 3. Increased activation was observed in area V4 in response to colour grouping and in V5 in response to motion grouping, while both groupings led to activity in separate though contiguous compartments within the intraparietal cortex. The activity in all the above areas was parametrically related to the number of groupings, as was the prominent activity in Crus I of the cerebellum, where the activity resulting from the two types of grouping overlapped. This suggests (a) that the specialized visual areas of the prestriate cortex have functions beyond the processing of visual signals according to attribute, namely that of grouping signals according to colour (V4) or motion (V5); (b) that the functional separation evident in visual cortical areas devoted to motion and colour, respectively, is maintained at the level of parietal cortex, at least as far as grouping according to attribute is concerned; and (c) that, by contrast, this grouping-related functional segregation is not maintained at the level of the cerebellum. PMID:23415950
Multisensory and Modality-Specific Influences on Adaptation to Optical Prisms
Calzolari, Elena; Albini, Federica; Bolognini, Nadia; Vallar, Giuseppe
2017-01-01
Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointings to visual targets, while wearing optical prisms displacing the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target. Visual, auditory, and multisensory – audio-visual – targets in the adaptation phase were used, while participants wore prisms displacing the visual field rightward by 10°. Proprioceptive, visual, visual-proprioceptive, auditory-proprioceptive straight-ahead shifts were measured. Pointing to auditory and to audio-visual targets in the adaptation phase produces proprioceptive, visual-proprioceptive, and auditory-proprioceptive AEs, as the typical visual targets did. This finding reveals that cross-modal plasticity effects involve both the auditory and the visual modality, and their interactions (Experiment 1). Even a shortened PA phase, requiring only 24 pointings to visual and audio-visual targets (Experiment 2), is sufficient to bring about AEs, as compared to the standard 92-pointings procedure. Finally, pointings to auditory targets cause AEs, although PA with a reduced number of pointings (24) to auditory targets brings about smaller AEs, as compared to the 92-pointings procedure (Experiment 3). Together, results from the three experiments extend to the auditory modality the sensorimotor plasticity underlying the typical AEs produced by PA to visual targets. Importantly, PA to auditory targets appears characterized by less accurate pointings and error correction, suggesting that the auditory component of the PA process may be less central to the building up of the AEs, than the sensorimotor pointing activity per se. These findings highlight both the effectiveness of a reduced number of pointings for bringing about AEs, and the possibility of inducing PA with auditory targets, which may be used as a compensatory route in patients with visual deficits. PMID:29213233
[The Performance Analysis for Lighting Sources in Highway Tunnel Based on Visual Function].
Yang, Yong; Han, Wen-yuan; Yan, Ming; Jiang, Hai-feng; Zhu, Li-wei
2015-10-01
Under mesopic vision, the spectral luminous efficiency function forms a family of curves whose peak wavelength and magnitude depend on the light spectrum, the background luminance, and other factors, so the visibility provided by a light source cannot be characterized by a single photometric parameter. In this experiment, visual-cognition reaction time was used as the evaluation index and was tested with the visual-function method under different speeds and luminous environments. The light sources were high-pressure sodium, an electrodeless fluorescent lamp, and white LEDs at three color temperatures (ranging from 1958 to 5537 K). The background luminance values, between 1 and 5 cd/m^2, are typical of the basic section of highway-tunnel lighting and of general outdoor lighting, and all fall within the mesopic range. The results show that, at the same speed and luminance, reaction times were shorter for high-color-temperature sources than for low-color-temperature ones, and shorter at high speed than at low speed; at the final moment, however, the visual angle subtended by the target in the observer's visual field was larger at low speed than at high speed. Based on the MOVE model, the mesopic equivalent luminance was calculated for the emission spectra and background luminances produced by the test sources. Compared with the photopic result, the coefficient of variation (CV) of the reaction-time curve corresponding to mesopic equivalent luminance was smaller. Under mesopic conditions, the discrepancy between the equivalent luminance of different light sources and their photopic luminance is one of the main causes of differences in visual recognition. Because the emission peak of the GaN chip is close to the peak wavelength of the photopic efficiency function, the visual lighting effect of white LEDs at high color temperature is better than at low color temperature or with the electrodeless fluorescent lamp, while high-pressure sodium performs poorly because its emission peak lies near the Na+ characteristic spectral lines.
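The abstract applies the MOVE model but reports no equations. Structurally, MOVE-style mesopic photometry defines the photopic/scotopic weighting m through the mesopic luminance that m itself produces, so the value is found by fixed-point iteration. The linear blend and the coefficients below are simplified placeholders, not the published MOVE constants:

import numpy as np

def mesopic_luminance(L_p, L_s, a=0.33, b=0.05, iters=50):
    # L_p: photopic luminance, L_s: scotopic luminance (cd/m^2).
    # Placeholder model: L_mes is a weighted blend, and the weight m is
    # itself a function of L_mes, hence the fixed-point loop.
    m = 0.5
    for _ in range(iters):
        L_mes = m * L_p + (1.0 - m) * L_s                      # simplified blend (assumption)
        m = float(np.clip(a + b * np.log10(L_mes), 0.0, 1.0))
    return L_mes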
Control of a 7-DOF Robotic Arm System With an SSVEP-Based BCI.
Chen, Xiaogang; Zhao, Bing; Wang, Yijun; Xu, Shengpu; Gao, Xiaorong
2018-04-12
Although robot technology has been successfully used to empower people with motor disabilities to increase their interaction with their physical environment, it remains a challenge for individuals with severe motor impairment, who lack the motor control needed to operate robots or prosthetic devices manually. In this study, to mitigate this issue, a noninvasive brain-computer interface (BCI)-based robotic arm control system using gaze-based steady-state visual evoked potentials (SSVEP) was designed and implemented with a portable wireless electroencephalogram (EEG) system. A 15-target SSVEP-based BCI using a filter bank canonical correlation analysis (FBCCA) method allowed users to control the robotic arm directly, without system calibration. Online results from 12 healthy subjects indicated that a command for the proposed brain-controlled robot system could be selected from 15 possible choices in 4 s (i.e., 2 s for visual stimulation and 2 s for gaze shifting) with an average accuracy of 92.78%, yielding a transfer rate of 15 commands/min. Furthermore, all subjects (even naive users) successfully completed the entire move-grasp-lift task without user training. These results demonstrate that an SSVEP-based BCI can provide accurate and efficient high-level control of a robotic arm, showing the feasibility of a BCI-based robotic arm control system for hand assistance.
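The abstract does not spell out FBCCA; its core step scores each candidate stimulation frequency by the canonical correlation between the multichannel EEG and sinusoidal references at that frequency and its harmonics, and the full method additionally filters the EEG into sub-bands and combines weighted sub-band scores. A plain-CCA sketch of the identification step (sub-band weighting omitted):

import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, ref):
    # Largest canonical correlation between EEG (samples x channels)
    # and a sinusoidal reference set (samples x 2*harmonics).
    u, v = CCA(n_components=1).fit_transform(eeg, ref)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

def detect_frequency(eeg, fs, candidate_freqs, n_harmonics=3):
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in candidate_freqs:
        ref = np.column_stack([fn(2 * np.pi * f * (h + 1) * t)
                               for h in range(n_harmonics)
                               for fn in (np.sin, np.cos)])
        scores.append(cca_score(eeg, ref))
    return candidate_freqs[int(np.argmax(scores))]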
Janssen, Malou; Ischebeck, Britta K; de Vries, Jurryt; Kleinrensink, Gert-Jan; Frens, Maarten A; van der Geest, Jos N
2015-10-01
This is a cross-sectional study. Its purpose is to support and extend previous observations on oculomotor disturbances in patients with neck pain and whiplash-associated disorders (WADs) by systematically investigating the effect of static neck torsion on smooth pursuit in response to both predictably and unpredictably moving targets, using video-oculography. Previous studies showed that in patients with neck complaints, for instance due to WAD, extreme static neck torsion deteriorates smooth pursuit eye movements in response to predictably moving targets compared with healthy controls. Eye movements in response to a smoothly moving target were recorded with video-oculography in a heterogeneous group of 55 patients with neck pain (including 11 patients with WAD) and 20 healthy controls. Smooth pursuit performance was determined while the trunk was fixed in 7 static rotations relative to the head (from 45° to the left to 45° to the right), using both predictably and unpredictably moving stimuli. Patients had reduced smooth pursuit gains, and smooth pursuit gain decreased with neck torsion. Healthy controls showed higher gains for predictably moving targets than for unpredictably moving targets, whereas patients with neck pain had similar gains for both types of target movement. In 11 patients with WAD, increased neck torsion decreased smooth pursuit performance, but only for predictably moving targets. Smooth pursuit of patients with neck pain is thus affected. The previously reported WAD-specific decline in smooth pursuit under increased neck torsion seems to be modulated by the predictability of the target's movement. The observed oculomotor disturbances in patients with WAD are therefore unlikely to be induced by impaired neck proprioception alone. Level of evidence: 3.
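Pursuit gain, the dependent measure here, is conventionally the ratio of eye velocity to target velocity computed on desaccaded data; a minimal sketch under that convention (threshold value illustrative):

import numpy as np

def pursuit_gain(eye_pos, target_pos, fs):
    # eye_pos, target_pos: 1-D position traces (deg) sampled at fs (Hz);
    # saccades are assumed to have been removed from eye_pos beforehand.
    eye_v = np.gradient(eye_pos) * fs
    tgt_v = np.gradient(target_pos) * fs
    moving = np.abs(tgt_v) > 1.0      # ignore near-zero target motion
    return float(np.median(eye_v[moving] / tgt_v[moving]))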
Representational momentum and Michotte's (1946/1963) "launching effect" paradigm.
Hubbard, T L; Blessum, J A; Ruppel, S E
2001-01-01
In A. Michotte's (1946/1963) launching effect, a moving launcher contacts a stationary target, and then the launcher becomes stationary and the target begins to move. In this experiment, observers viewed modifications of a launching effect display, and displacement in memory for the location of targets was measured. Forward displacement of targets in launching effect displays was decreased relative to that of targets (a) that were presented in isolation and either moved at a constant fast or slow velocity or decelerated or (b) that moved in a direction orthogonal to previous motion of the launcher. Possible explanations involving a deceleration of motion or landmark attraction effects were ruled out. Displacement patterns were consistent with naive impetus theory and the hypothesis that observers believed impetus from the launcher was imparted to the target and then dissipated.
Trivedi, Chintan A; Bollmann, Johann H
2013-01-01
Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback.
Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model
Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal
2016-01-01
In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomous tracking and chasing of a moving target. The first main component of the algorithm is a global matching and local tracking approach: the algorithm initially finds feature correspondences using an improved binary descriptor developed for global feature matching, and employs an iterative Lucas–Kanade optical flow algorithm for local feature tracking. The second main module is an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third module is a heuristic local outlier factor (to the best of our knowledge, used for the first time to deal with outlier features in a visual tracking application), which further improves the representation of the target object; here, outlier feature detection is formulated as a binary classification problem over the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor with 640 × 512 image resolution and outperforms the most popular state-of-the-art trackers in terms of robustness, efficiency and accuracy. PMID:27589769
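The paper's LGF is built on a forward-backward pairwise dissimilarity between feature correspondences; a simpler, related idea that is easy to sketch is the per-point forward-backward round-trip check on pyramidal Lucas-Kanade flow (threshold illustrative; this is not the authors' filter itself):

import numpy as np
import cv2

def fb_flow_check(prev_img, next_img, pts, max_err=1.0):
    # pts: (N, 1, 2) float32 feature locations in prev_img. Track forward,
    # then backward; keep points whose round trip lands near where it started.
    fwd, st1, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img, pts, None)
    bwd, st2, _ = cv2.calcOpticalFlowPyrLK(next_img, prev_img, fwd, None)
    err = np.linalg.norm(pts - bwd, axis=2).ravel()
    good = (st1.ravel() == 1) & (st2.ravel() == 1) & (err < max_err)
    return fwd, good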
Statistical Regularities Attract Attention when Task-Relevant.
Alamia, Andrea; Zénon, Alexandre
2016-01-01
Visual attention seems essential for learning the statistical regularities in our environment, a process known as statistical learning. However, how attention is allocated when exploring a novel visual scene whose statistical structure is unknown remains unclear. In order to address this question, we investigated visual attention allocation during a task in which we manipulated the conditional probability of occurrence of colored stimuli, unbeknown to the subjects. Participants were instructed to detect a target colored dot among two dots moving along separate circular paths. We evaluated implicit statistical learning, i.e., the effect of color predictability on reaction times (RTs), and recorded eye position concurrently. Attention allocation was indexed by comparing the Mahalanobis distance between the position, velocity and acceleration of the eyes and the two colored dots. We found that learning the conditional probabilities occurred very early during the course of the experiment as shown by the fact that, starting already from the first block, predictable stimuli were detected with shorter RT than unpredictable ones. In terms of attentional allocation, we found that the predictive stimulus attracted gaze only when it was informative about the occurrence of the target but not when it predicted the occurrence of a task-irrelevant stimulus. This suggests that attention allocation was influenced by regularities only when they were instrumental in performing the task. Moreover, we found that the attentional bias towards task-relevant predictive stimuli occurred at a very early stage of learning, concomitantly with the first effects of learning on RT. In conclusion, these results show that statistical regularities capture visual attention only after a few occurrences, provided these regularities are instrumental to perform the task.
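The attention index described here compares eye kinematics with each dot's kinematics by Mahalanobis distance. A minimal sketch, assuming position, velocity and acceleration are stacked into one feature vector and the covariance has been estimated elsewhere; gaze is then attributed to whichever dot yields the smaller distance:

import numpy as np

def mahalanobis(eye_feats, dot_feats, cov):
    # eye_feats, dot_feats: 1-D vectors of matched kinematic features
    # (e.g., x, y position, velocity, acceleration); cov: their covariance.
    d = eye_feats - dot_feats
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))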
The representational dynamics of remembered projectile locations.
De Sá Teixeira, Nuno Alexandre; Hecht, Heiko; Oliveira, Armando Mónica
2013-12-01
When people are instructed to locate the vanishing location of a moving target, systematic errors forward in the direction of motion (M-displacement) and downward in the direction of gravity (O-displacement) are found. These phenomena came to be linked with the notion that physical invariants are embedded in the dynamic representations generated by the perceptual system. We explore the nature of these invariants that determine the representational mechanics of projectiles. By manipulating the retention intervals between the target's disappearance and the participant's responses, while measuring both M- and O-displacements, we were able to uncover a representational analogue of the trajectory of a projectile. The outcomes of three experiments revealed that the shape of this trajectory is discontinuous. Although the horizontal component of such trajectory can be accounted for by perceptual and oculomotor factors, its vertical component cannot. Taken together, the outcomes support an internalization of gravity in the visual representation of projectiles.
Apollo Docking with the LEM Target
2012-09-07
Originally, the Rendezvous Docking Simulator was used by astronauts preparing for Gemini missions. It was then modified and used to develop docking techniques for the Apollo program. This picture shows a later configuration of the Apollo docking with the LEM target. A.W. Vogeley described the simulator as follows: "The Rendezvous Docking Simulator and also the Lunar Landing Research Facility are both rather large moving-base simulators. It should be noted, however, that neither was built primarily because of its motion characteristics. The main reason they were built was to provide a realistic visual scene. A secondary reason was that they would provide correct angular motion cues (important in control of vehicle short-period motions) even though the linear acceleration cues would be incorrect." -- Published in A.W. Vogeley, Piloted Space-Flight Simulation at Langley Research Center, Paper presented at the American Society of Mechanical Engineers, 1966 Winter Meeting, New York, NY, November 27 - December 1, 1966.
Thinking of God moves attention.
Chasteen, Alison L; Burdzy, Donna C; Pratt, Jay
2010-01-01
The concepts of God and Devil are well known across many cultures and religions, and often involve spatial metaphors, but it is not well known if our mental representations of these concepts affect visual cognition. To examine if exposure to divine concepts produces shifts of attention, participants completed a target detection task in which they were first presented with God- and Devil-related words. We found faster RTs when targets appeared at compatible locations with the concepts of God (up/right locations) or Devil (down/left locations), and also found that these results do not vary by participants' religiosity. These results indicate that metaphors associated with the divine have strong spatial components that can produce shifts of attention, and add to the growing evidence for an extremely robust connection between internal spatial representations and where attention is allocated in the external environment. 2009 Elsevier Ltd. All rights reserved.
Perception of 3-D location based on vision, touch, and extended touch
Giudice, Nicholas A.; Klatzky, Roberta L.; Bennett, Christopher R.; Loomis, Jack M.
2012-01-01
Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate. PMID:23070234
Verification of target motion effects on SAR imagery using the Gotcha GMTI challenge dataset
NASA Astrophysics Data System (ADS)
Hack, Dan E.; Saville, Michael A.
2010-04-01
This paper investigates the relationship between a ground moving target's kinematic state and its SAR image. While effects such as cross-range offset, defocus, and smearing appear well understood, their derivations in the literature typically employ simplifications of the radar/target geometry and assume point scattering targets. This study adopts a geometrical model for understanding target motion effects in SAR imagery, termed the target migration path, and focuses on experimental verification of predicted motion effects using both simulated and empirical datasets based on the Gotcha GMTI challenge dataset. Specifically, moving target imagery is generated from three data sources: first, simulated phase history for a moving point target; second, simulated phase history for a moving vehicle derived from a simulated Mazda MPV X-band signature; and third, empirical phase history from the Gotcha GMTI challenge dataset. Both simulated target trajectories match the truth GPS target position history from the Gotcha GMTI challenge dataset, allowing direct comparison between all three imagery sets and the predicted target migration path. This paper concludes with a discussion of the parallels between the target migration path and the measurement model within a Kalman filtering framework, followed by conclusions.
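The closing parallel with Kalman filtering is easiest to see against a concrete filter; as context only, here is a generic constant-velocity predict/update step for a mover's along-track state, with illustrative noise values (this is not the paper's measurement model):

import numpy as np

def kalman_cv_step(x, P, z, dt, q=1.0, r=25.0):
    # x: state [position, velocity]; P: covariance; z: position measurement
    # (e.g., a geolocated GMTI detection); q, r: process/measurement noise.
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])
    x = F @ x                                  # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + r                        # innovation covariance
    K = P @ H.T / S                            # Kalman gain
    x = x + (K * (z - H @ x)).ravel()          # update
    P = (np.eye(2) - K @ H) @ P
    return x, P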
Hummingbirds control hovering flight by stabilizing visual motion.
Goller, Benjamin; Altshuler, Douglas L
2014-12-23
Relatively little is known about how sensory information is used for controlling flight in birds. A powerful method is to immerse an animal in a dynamic virtual reality environment to examine behavioral responses. Here, we investigated the role of vision during free-flight hovering in hummingbirds to determine how optic flow--image movement across the retina--is used to control body position. We filmed hummingbirds hovering in front of a projection screen with the prediction that projecting moving patterns would disrupt hovering stability but stationary patterns would allow the hummingbird to stabilize position. When hovering in the presence of moving gratings and spirals, hummingbirds lost positional stability and responded to the specific orientation of the moving visual stimulus. There was no loss of stability with stationary versions of the same stimulus patterns. When exposed to a single stimulus many times or to a weakened stimulus that combined a moving spiral with a stationary checkerboard, the response to looming motion declined. However, even minimal visual motion was sufficient to cause a loss of positional stability despite prominent stationary features. Collectively, these experiments demonstrate that hummingbirds control hovering position by stabilizing motions in their visual field. The high sensitivity and persistence of this disruptive response is surprising, given that the hummingbird brain is highly specialized for sensory processing and spatial mapping, providing other potential mechanisms for controlling position.
Nakashima, Ryoichi; Shioiri, Satoshi
2014-01-01
Why do we frequently fixate an object of interest presented peripherally by moving our head as well as our eyes, even when we are capable of fixating the object with an eye movement alone (lateral viewing)? Studies of eye-head coordination for gaze shifts have suggested that the degree of eye-head coupling could be determined by an unconscious weighing of the motor costs and benefits of executing a head movement. The present study investigated visual perceptual effects of head direction as an additional factor impacting on a cost-benefit organization of eye-head control. Three experiments using visual search tasks were conducted, manipulating eye direction relative to head orientation (front or lateral viewing). Results show that lateral viewing increased the time required to detect a target in a search for the letter T among letter L distractors, a serial attentive search task, but not in a search for T among letter O distractors, a parallel preattentive search task (Experiment 1). The interference could not be attributed to either a deleterious effect of lateral gaze on the accuracy of saccadic eye movements, nor to potentially problematic optical effects of binocular lateral viewing, because effect of head directions was obtained under conditions in which the task was accomplished without saccades (Experiment 2), and during monocular viewing (Experiment 3). These results suggest that a difference between the head and eye directions interferes with visual processing, and that the interference can be explained by the modulation of attention by the relative positions of the eyes and head (or head direction). PMID:24647634
Huth, Véronique; Sanchez, Yann; Brusque, Corinne
2015-01-01
Phone use while driving has become one of the priority issues in road safety, given that it may lead to decreased situation awareness and deteriorated driving performance. It has been suggested that drivers can regulate their exposure to secondary tasks and seek for compatibility of phone use and driving. Phone use strategies include the choice of driving situations with low demands and interruptions of the interaction when the context changes. Traffic light situations at urban intersections imply both a temptation to use the phone while waiting at the red traffic light and a potential threat due to the incompatibility of phone use and driving when the traffic light turns green. These two situations were targeted in a roadside observation study, with the aim to investigate the existence of a phone use strategy at the red traffic light and to test its effectiveness. N=124 phone users and a corresponding control group of non-users were observed. Strategic phone use behaviour was detected for visual-manual interactions, which are more likely to be initiated at the red traffic light and tend to be stopped before the vehicle moves off, while calls are less likely to be limited to the red traffic light situation. As an indicator of impaired situation awareness, delayed start was associated to phone use and in particular to visual-manual interactions, whether phone use was interrupted before moving off or not. Traffic light situations do not seem to allow effective application of phone use strategies, although drivers attempt to do so for the most demanding phone use mode. The underlying factors of phone use need to be studied so as to reduce the temptation of phone use and facilitate exposure regulation strategies. Copyright © 2014. Published by Elsevier Ltd.
76 FR 27898 - Registration and Recordation Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-13
... to reflect a reorganization that has moved the Recordation function from the Visual Arts and... function from the Visual Arts and Recordation Division of the Registration and Recordation Program to the... Visual Arts Division of the Registration and Recordation Program, has been renamed the Recordation...
Going, Going, Gone: Localizing Abrupt Offsets of Moving Objects
ERIC Educational Resources Information Center
Maus, Gerrit W.; Nijhawan, Romi
2009-01-01
When a moving object abruptly disappears, this profoundly influences its localization by the visual system. In Experiment 1, 2 aligned objects moved across the screen, and 1 of them abruptly disappeared. Observers reported seeing the objects misaligned at the time of the offset, with the continuing object leading. Experiment 2 showed that the…
Changes to online control and eye-hand coordination with healthy ageing.
O'Rielly, Jessica L; Ma-Wyatt, Anna
2018-06-01
Goal directed movements are typically accompanied by a saccade to the target location. Online control plays an important part in correction of a reach, especially if the target or goal of the reach moves during the reach. While there are notable changes to visual processing and motor control with healthy ageing, there is limited evidence about how eye-hand coordination during online updating changes with healthy ageing. We sought to quantify differences between older and younger people for eye-hand coordination during online updating. Participants completed a double step reaching task implemented under time pressure. The target perturbation could occur 200, 400 and 600 ms into a reach. We measured eye position and hand position throughout the trials to investigate changes to saccade latency, movement latency, movement time, reach characteristics and eye-hand latency and accuracy. Both groups were able to update their reach in response to a target perturbation that occurred at 200 or 400 ms into the reach. All participants demonstrated incomplete online updating for the 600 ms perturbation time. Saccade latencies, measured from the first target presentation, were generally longer for older participants. Older participants had significantly increased movement times but there was no significant difference between groups for touch accuracy. We speculate that the longer movement times enable the use of new visual information about the target location for online updating towards the end of the movement. Interestingly, older participants also produced a greater proportion of secondary saccades within the target perturbation condition and had generally shorter eye-hand latencies. This is perhaps a compensatory mechanism as there was no significant group effect on final saccade accuracy. Overall, the pattern of results suggests that online control of movements may be qualitatively different in older participants. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
Architecture-Based Self-Adaptation for Moving Target Defense
2014-08-01
using stochastic multiplayer games to verify the behavior of a variety of MTD scenarios, from uninformed to predictive-reactive. This work is applied in the context...
Remaud, Anthony; Thuong-Cong, Cécile; Bilodeau, Martin
2016-01-01
Normal aging results in alterations in the visual, vestibular and somatosensory systems, which in turn modify the control of balance. Muscle fatigue may exacerbate these age-related changes in sensory and motor functions, and also increase the attentional demands associated with dynamic postural control. The purpose of this study was to investigate the effect of aging on dynamic postural control and posture-related attentional demands before and after a plantar flexor fatigue protocol. Participants (young adults: n = 15; healthy seniors: n = 13) performed a dynamic postural task along the antero-posterior (AP) and medio-lateral (ML) axes, with and without the addition of a simple reaction time (RT) task. The dynamic postural task consisted of following a moving circle on a computer screen with the representation of the center of pressure (COP). This protocol was repeated before and after a fatigue task targeting the ankle plantar flexor muscles. The mean COP-target distance and the mean COP velocity were calculated for each trial. Cross-correlation analyses between the COP and target displacements were also performed. RTs were recorded during dual-task trials. Results showed that while young adults adopted an anticipatory control mode to move their COP as close as possible to the target center, seniors adopted a reactive control mode, lagging behind the target center. This resulted in a longer COP-target distance and higher COP velocity in the latter group. Concurrently, RT increased more in seniors when switching from static stance to dynamic postural conditions, suggesting potential alterations in central nervous system (CNS) functions. Finally, plantar flexor muscle fatigue and dual-tasking had only minor effects on the dynamic postural control of both young adults and seniors. Future studies should investigate why fatigue-induced changes in quiet-standing postural control do not seem to transfer to dynamic balance tasks. PMID:26834626
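The cross-correlation analysis between COP and target displacement can be summarized by the lag at the correlation peak, where a positive lag (COP trailing the target) corresponds to the reactive control mode reported for seniors. A minimal sketch (normalization choice ours):

import numpy as np

def cop_target_lag(cop, target, fs):
    # cop, target: 1-D displacement traces sampled at fs (Hz).
    cop = cop - cop.mean()
    target = target - target.mean()
    xc = np.correlate(cop, target, mode='full')
    lag = int(np.argmax(xc)) - (len(target) - 1)   # samples; >0 means COP lags
    peak_r = xc.max() / (np.linalg.norm(cop) * np.linalg.norm(target))
    return lag / fs, peak_r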
Does the perception of moving eyes trigger reflexive visual orienting in autism?
Swettenham, John; Condie, Samantha; Campbell, Ruth; Milne, Elizabeth; Coleman, Mike
2003-01-01
Does movement of the eyes in one or another direction function as an automatic attentional cue to a location of interest? Two experiments explored the directional movement of the eyes in a full face for speed of detection of an aftercoming location target in young people with autism and in control participants. Our aim was to investigate whether a low-level perceptual impairment underlies the delay in gaze following characteristic of autism. The participants' task was to detect a target appearing on the left or right of the screen either 100 ms or 800 ms after a face cue appeared with eyes averting to the left or right. Despite instructions to ignore eye-movement in the face cue, people with autism and control adolescents were quicker to detect targets that had been preceded by an eye movement cue congruent with target location compared with targets preceded by an incongruent eye movement cue. The attention shifts are thought to be reflexive because the cue was to be ignored, and because the effect was found even when cue-target duration was short (100 ms). Because (experiment two) the effect persisted even when the face was inverted, it would seem that the direction of movement of eyes can provide a powerful (involuntary) cue to a location. PMID:12639330
NASA Technical Reports Server (NTRS)
Trejo, Leonard J.; Matthews, Bryan; Rosipal, Roman
2005-01-01
We have developed and tested two EEG-based brain-computer interfaces (BCI) for users to control a cursor on a computer display. Our system uses an adaptive algorithm, based on kernel partial least squares classification (KPLS), to associate patterns in multichannel EEG frequency spectra with cursor controls. Our first BCI, Target Practice, is a system for one-dimensional device control, in which participants use biofeedback to learn voluntary control of their EEG spectra. Target Practice uses a KPLS classifier to map power spectra of 30-electrode EEG signals to rightward or leftward position of a moving cursor on a computer display. Three subjects learned to control motion of a cursor on a video display in multiple blocks of 60 trials over periods of up to six weeks. The best subject's average skill in correct selection of the cursor direction grew from 58% to 88% after 13 training sessions. Target Practice also implements online control of two artifact sources: (a) removal of ocular artifact by linear subtraction of wavelet-smoothed vertical and horizontal EOG signals, and (b) control of muscle artifact by inhibition of BCI training during periods of relatively high power in the 40-64 Hz band. The second BCI, Think Pointer, is a system for two-dimensional cursor control. Steady-state visual evoked potentials (SSVEP) are triggered by four flickering checkerboard stimuli located in narrow strips at each edge of the display. The user attends to one of the four beacons to initiate motion in the desired direction. The SSVEP signals are recorded from eight electrodes located over the occipital region. A KPLS classifier is individually calibrated to map multichannel frequency bands of the SSVEP signals to right-left or up-down motion of a cursor on a computer display. The display stops moving when the user attends to a central fixation point. As in Target Practice, Think Pointer also implements wavelet-based online removal of ocular artifact; however, in Think Pointer muscle artifact is controlled via adaptive normalization of the SSVEP. Training of the classifier requires about three minutes. We have tested our system in real-time operation in three human subjects. Across subjects and sessions, control accuracy ranged from 80% to 100% correct with lags of 1-5 seconds for movement initiation and turning.
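The classifier described is a kernel PLS; as a simplified stand-in (linear PLS rather than KPLS, and an assumed feature layout), the mapping from EEG spectral features to a left/right cursor command might be sketched as:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

def train_pls_decoder(X, y, n_components=5):
    # X: (trials, channels * frequency bins) spectral features;
    # y: +1 for rightward, -1 for leftward cursor motion.
    return PLSRegression(n_components=n_components).fit(X, y)

def decode(pls, X_new):
    # Threshold the regression output at zero to get a direction command.
    return np.sign(pls.predict(X_new).ravel())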
Language-Mediated Visual Orienting Behavior in Low and High Literates
Huettig, Falk; Singh, Niharika; Mishra, Ramesh Kumar
2011-01-01
The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task which resembles every day behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., “magar,” crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., “matar,” peas; a semantic competitor, e.g., “kachuwa,” turtle, and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze toward phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates in contrast only used phonological information when semantic matches between spoken word and visual referent were not present (Experiment 2) but in contrast to high literates these phonologically mediated shifts in eye gaze were not closely time-locked to the speech input. These data provide further evidence that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word–object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts. PMID:22059083
Flash trajectory imaging of target 3D motion
NASA Astrophysics Data System (ADS)
Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang
2011-03-01
We present a flash trajectory imaging technique that can directly obtain a target's trajectory and realize non-contact measurement of motion parameters through range-gated imaging and time-delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from a complex background and decrease the complexity of moving-target image processing. Time-delay integration increases the information in a single image frame so that the moving trajectory can be obtained directly. In this paper, we study the algorithm behind flash trajectory imaging and report initial experiments that successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can yield the motion parameters of moving targets.
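The pairing described, range gating to silhouette the target plus time-delay integration to accumulate successive gated frames, can be illustrated with a toy accumulation step (thresholding choice ours):

import numpy as np

def trajectory_image(gated_frames, threshold):
    # gated_frames: sequence of 2-D range-gated images in which the target
    # appears as a bright silhouette. Summing the thresholded frames paints
    # the target's positions over time into one trajectory image.
    stack = np.asarray(gated_frames, dtype=float)
    return (stack > threshold).sum(axis=0)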
NASA Technical Reports Server (NTRS)
Tonkay, Gregory
1990-01-01
The following separate topics are addressed: (1) improving a robotic tracking system; and (2) providing insights into orbiter position calibration for radiator inspection. The objective of the tracking system project was to provide the capability to track moving targets more accurately by adjusting parameters in the control system and implementing a predictive algorithm. A computer model was developed to emulate the tracking system. Using this model as a test bed, a self-tuning algorithm was developed to tune the system gains. The model yielded important findings concerning factors that affect the gains. The self-tuning algorithms will provide the concepts to write a program to automatically tune the gains in the real system. The section concerning orbiter position calibration provides a comparison to previous work that had been performed for plant growth. It provided the conceptualized routines required to visually determine the orbiter position and orientation. Furthermore, it identified the types of information which are required to flow between the robot controller and the vision system.
Robust sampling of decision information during perceptual choice
Vandormael, Hildward; Herce Castañón, Santiago; Balaguer, Jan; Li, Vickie; Summerfield, Christopher
2017-01-01
Humans move their eyes to gather information about the visual world. However, saccadic sampling has largely been explored in paradigms that involve searching for a lone target in a cluttered array or natural scene. Here, we investigated the policy that humans use to overtly sample information in a perceptual decision task that required information from across multiple spatial locations to be combined. Participants viewed a spatial array of numbers and judged whether the average was greater or smaller than a reference value. Participants preferentially sampled items that were less diagnostic of the correct answer (“inlying” elements; that is, elements closer to the reference value). This preference to sample inlying items was linked to decisions, enhancing the tendency to give more weight to inlying elements in the final choice (“robust averaging”). These findings contrast with a large body of evidence indicating that gaze is directed preferentially to deviant information during natural scene viewing and visual search, and suggest that humans may sample information “robustly” with their eyes during perceptual decision-making. PMID:28223519
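"Robust averaging" here means down-weighting outlying elements before combining. The paper infers the weighting from behavior rather than stating it as code, so the exponential weight below is purely illustrative:

import numpy as np

def robust_average_decision(samples, reference, slope=1.0):
    # Weight each element by its proximity to the reference, then compare
    # the weighted mean with the reference value.
    w = np.exp(-slope * np.abs(samples - reference))
    estimate = np.sum(w * samples) / np.sum(w)
    return 'greater' if estimate > reference else 'smaller'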
Visualization of membrane protein crystals in lipid cubic phase using X-ray imaging
Warren, Anna J.; Armour, Wes; Axford, Danny; Basham, Mark; Connolley, Thomas; Hall, David R.; Horrell, Sam; McAuley, Katherine E.; Mykhaylyk, Vitaliy; Wagner, Armin; Evans, Gwyndaf
2013-01-01
The focus in macromolecular crystallography is moving towards even more challenging target proteins that often crystallize on much smaller scales and are frequently mounted in opaque or highly refractive materials. It is therefore essential that X-ray beamline technology develops in parallel to accommodate such difficult samples. In this paper, the use of X-ray microradiography and microtomography is reported as a tool for crystal visualization, location and characterization on the macromolecular crystallography beamlines at the Diamond Light Source. The technique is particularly useful for microcrystals and for crystals mounted in opaque materials such as lipid cubic phase. X-ray diffraction raster scanning can be used in combination with radiography to allow informed decision-making at the beamline prior to diffraction data collection. It is demonstrated that the X-ray dose required for a full tomography measurement is similar to that for a diffraction grid-scan, but for sample location and shape estimation alone just a few radiographic projections may be required. PMID:23793151
NASA Astrophysics Data System (ADS)
Brattico, Elvira; Brattico, Pauli; Vuust, Peter
2017-07-01
In their target article published in this journal issue, Pelowski et al. [1] address the question of how humans experience, and respond to, visual art. They propose a multi-layered model of the representations and processes involved in assessing visual art objects that, furthermore, involves both bottom-up and top-down elements. Their model provides predictions for seven different outcomes of human aesthetic experience, based on a few distinct features (schema congruence, self-relevance, and coping necessity), and connects the underlying processing stages to "specific correlates of the brain" (a similar attempt was previously made for music by [2-4]). In doing this, the model aims to account for the (often profound) experience of an individual viewer in front of an art object.
Eye movements: The past 25 years
Kowler, Eileen
2011-01-01
This article reviews the past 25 years of research on eye movements (1986–2011). Emphasis is on three oculomotor behaviors: gaze control, smooth pursuit and saccades, and on their interactions with vision. Focus over the past 25 years has remained on the fundamental and classical questions: What are the mechanisms that keep gaze stable with either stationary or moving targets? How does the motion of the image on the retina affect vision? Where do we look – and why – when performing a complex task? How can the world appear clear and stable despite continual movements of the eyes? The past 25 years of investigation of these questions have seen progress and transformations at all levels due to new approaches (behavioral, neural and theoretical) aimed at studying how eye movements cope with real-world visual and cognitive demands. The work has led to a better understanding of how prediction, learning and attention work with sensory signals to contribute to the effective operation of eye movements in visually rich environments. PMID:21237189
Deictic primitives for general purpose navigation
NASA Technical Reports Server (NTRS)
Crismann, Jill D.
1994-01-01
A visually-based deictic primitive used as an elementary command set for general purpose navigation was investigated. It was shown that a simple 'follow your eyes' scenario is sufficient for tracking a moving target. Limits on velocity and acceleration were enforced, and the response of the mechanical systems was modeled. Realistic robot paths were produced during the simulation. Scientists could remotely command a planetary rover to go to a particular rock formation that may be of interest. Similarly, an expert in plant maintenance could obtain diagnostic information remotely by using deictic primitives on a mobile robot. Because the same deictic primitives are used, we could imagine that the exact same control software could be used for all of these applications.
Reduced Distractibility in a Remote Culture
de Fockert, Jan W.; Caparos, Serge; Linnell, Karina J.; Davidoff, Jules
2011-01-01
Background In visual processing, there are marked cultural differences in the tendency to adopt either a global or local processing style. A remote culture (the Himba) has recently been reported to have a greater local bias in visual processing than Westerners. Here we give the first evidence that a greater, and remarkable, attentional selectivity provides the basis for this local bias. Methodology/Principal Findings In Experiment 1, Eriksen-type flanker interference was measured in the Himba and in Western controls. In both groups, responses to the direction of a task-relevant target arrow were affected by the compatibility of task-irrelevant distractor arrows. However, the Himba showed a marked reduction in overall flanker interference compared to Westerners. The smaller interference effect in the Himba occurred despite their overall slower performance than Westerners, and was evident even at a low level of perceptual load of the displays. In Experiment 2, the attentional selectivity of the Himba was further demonstrated by showing that their attention was not even captured by a moving singleton distractor. Conclusions/Significance We argue that the reduced distractibility in the Himba is clearly consistent with their tendency to prioritize the analysis of local details in visual processing. PMID:22046275
2009 Combat Vehicles Conference (BRIEFING CHARTS)
2009-10-14
Briefing charts covering: a strategy to field 531 systems; Targeting Under Armor and FS3 integration on the A3 BFIST on the move; the stated #1 priority of supporting engaged units; the Armored Knight program, in which a Targeting Under Armor / On the Move effort is underway to increase the survivability of the M1200 Armored Knight; and a BFIST program overview with the same Targeting Under Armor / On the Move survivability effort.
How visual cues for when to listen aid selective auditory attention.
Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G
2012-06-01
Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.
Zabierek, Kristina C; Gabor, Caitlin R
2016-09-01
Prey may use multiple sensory channels to detect predators, whose cues may differ in altered sensory environments, such as turbid conditions. Depending on the environment, prey may use cues in an additive/complementary manner or in a compensatory manner. First, to determine whether the purely aquatic Barton Springs salamander, Eurycea sosorum, show an antipredator response to visual cues, we examined their activity when exposed to either visual cues of a predatory fish (Lepomis cyanellus) or a non-predatory fish (Etheostoma lepidum). Salamanders decreased activity in response to predator visual cues only. Then, we examined the antipredator response of these salamanders to all matched and mismatched combinations of chemical and visual cues of the same predatory and non-predatory fish in clear and low turbidity conditions. Salamanders decreased activity in response to predator chemical cues matched with predator visual cues or mismatched with non-predator visual cues. Salamanders also increased latency to first move to predator chemical cues mismatched with non-predator visual cues. Salamanders decreased activity and increased latency to first move more in clear as opposed to turbid conditions in all treatment combinations. Our results indicate that salamanders under all conditions and treatments preferentially rely on chemical cues to determine antipredator behavior, although visual cues are potentially utilized in conjunction for latency to first move. Our results also have potential conservation implications, as decreased antipredator behavior was seen in turbid conditions. These results reveal complexity of antipredator behavior in response to multiple cues under different environmental conditions, which is especially important when considering endangered species. Copyright © 2016 Elsevier B.V. All rights reserved.
Perception of Visual Speed While Moving
ERIC Educational Resources Information Center
Durgin, Frank H.; Gigone, Krista; Scott, Rebecca
2005-01-01
During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone…
Rethinking Reader Response with Fifth Graders' Semiotic Interpretations
ERIC Educational Resources Information Center
Barone, Diane; Barone, Rebecca
2017-01-01
Fifth graders interpreted the book "Doll Bones" by Holly Black through visual representations from the beginning to the end of the book. Each visual representation was analyzed to determine how students responded. Most frequently, they moved to inferential ways of understanding. Students often visually interpreted emotional plot elements…
Gao, Han; Li, Jingwen
2014-01-01
A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking low signal-to-noise ratio (SNR) moving targets with SAR systems, for which the traditional track-after-detect (TAD) approach is inadequate. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to the true estimate. By calculating the likelihood ratio over a sub-area rather than the whole area and choosing the number of particles appropriately, computational efficiency is improved with little loss in detection and tracking performance. The feasibility of the approach is validated and its performance evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB. PMID:24949640
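The core PF-TBD recursion outlined above can be sketched compactly. The following is a minimal illustration, not the authors' implementation: the Gaussian point-target likelihood, the nearly-constant-velocity motion model, and the target amplitude are assumptions, and the SAR-specific azimuth/velocity ambiguity resolution is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_tbd_step(particles, weights, frame, dt=1.0, sigma_n=1.0, q=0.1):
    """One predict/update cycle of a particle-filter track-before-detect.

    particles: (N, 4) array of [x, y, vx, vy] state hypotheses.
    frame: 2-D intensity image for the current look.
    The likelihood ratio is evaluated only in the pixel under each
    particle (a stand-in for the paper's sub-area) to keep it cheap.
    """
    n = len(particles)
    # Predict: nearly-constant-velocity motion with process noise.
    particles[:, 0] += particles[:, 2] * dt + q * rng.standard_normal(n)
    particles[:, 1] += particles[:, 3] * dt + q * rng.standard_normal(n)
    # Update: likelihood ratio of "target present" vs "noise only".
    xi = np.clip(particles[:, 0].astype(int), 0, frame.shape[1] - 1)
    yi = np.clip(particles[:, 1].astype(int), 0, frame.shape[0] - 1)
    z = frame[yi, xi]
    a = 2.0  # assumed mean target amplitude (illustrative)
    lr = np.exp((2 * a * z - a**2) / (2 * sigma_n**2))
    weights = weights * lr
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx].copy(), np.full(n, 1.0 / n)
    return particles, weights
```

A detection would be declared when the particle cloud's weight mass concentrates, e.g. when the posterior spread of [x, y] stays below a preset threshold over several consecutive looks.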
NASA Astrophysics Data System (ADS)
Page, Douglas; Owirka, Gregory; Nichols, Howard; Scarborough, Steven
2014-06-01
We describe techniques for improving ground moving target indication (GMTI) performance in multi-channel synthetic aperture radar (SAR) systems. Our approach employs a combination of moving reference processing (MRP) to compensate for defocus of moving target SAR responses and space-time adaptive processing (STAP) to mitigate the effects of strong clutter interference. Using simulated moving target and clutter returns, we demonstrate focusing of the target return using MRP, and discuss the effect of MRP on the clutter response. We also describe formation of adaptive degrees of freedom (DOFs) for STAP filtering of MRP-processed data. For the simulated moving target in clutter example, we demonstrate improvement in the signal-to-interference-plus-noise ratio (SINR) loss compared to more standard algorithm configurations. In addition to MRP and STAP, the use of tracker feedback, false alarm mitigation, and parameter estimation techniques are also described. A change detection approach for reducing false alarms from clutter discretes is outlined, and processing of a measured data coherent processing interval (CPI) from a continuously orbiting platform is described. The results demonstrate detection and geolocation of a high-value target under track. The endoclutter target is not clearly visible in single-channel SAR chips centered on the GMTI track prediction. Detections are compared to truth data before and after geolocation using measured angle of arrival (AOA).
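The STAP stage is standard adaptive filtering; a minimal sketch of an MVDR-style weight computation and the SINR-loss figure of merit follows (textbook formulas, not the authors' MRP-specific degrees-of-freedom design):

```python
import numpy as np

def stap_weights(R, v):
    """MVDR/STAP weight vector for space-time steering vector v and
    interference-plus-noise covariance R (both complex-valued)."""
    Rinv_v = np.linalg.solve(R, v)
    return Rinv_v / (v.conj() @ Rinv_v)   # unit gain on the target

def sinr_loss(R, v, noise_power=1.0):
    """SINR loss: output SINR relative to the noise-only matched filter.
    Equals 1 (0 dB) when R contains only white noise of noise_power."""
    Rinv_v = np.linalg.solve(R, v)
    return np.real(noise_power * (v.conj() @ Rinv_v) / (v.conj() @ v))
```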
DVA as a Diagnostic Test for Vestibulo-Ocular Reflex Function
NASA Technical Reports Server (NTRS)
Wood, Scott J.; Appelbaum, Meghan
2010-01-01
The vestibulo-ocular reflex (VOR) stabilizes vision on earth-fixed targets by eliciting eye movements in response to changes in head position. How well the eyes perform this task can be functionally measured by the dynamic visual acuity (DVA) test. We designed a passive, horizontal DVA test to specifically study acuity and reaction time when looking in different target locations. Visual acuity was compared among 12 subjects using a standard Landolt C wall chart, a computerized static (no rotation) acuity test, and a dynamic acuity test while oscillating at 0.8 Hz (+/-60 deg/s). In addition, five trials with yaw oscillation randomly presented a visual target in one of nine different locations, with the size and presentation duration of the visual target varying across trials. The results showed a significant difference between the static and dynamic threshold acuities, as well as a significant difference between the visual targets presented in the horizontal plane versus those in the vertical plane when comparing accuracy of vision and reaction time of the response. Visual acuity increased in proportion to the size of the visual target and improved between 150 and 300 msec presentation durations. We conclude that dynamic visual acuity varies with target location, with acuity optimized for targets in the plane of rotation. This DVA test could be used as a functional diagnostic test for visual-vestibular and neuro-cognitive impairments by assessing both accuracy and reaction time to acquire visual targets.
Haptic guidance of overt visual attention.
List, Alexandra; Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru
2014-11-01
Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held (never viewed and out of sight) an item of a specific shape in their hands. In two experiments, we demonstrated that the time for the eyes to reach a target (a measure of overt visual attention) was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention.
Feature-aided multiple target tracking in the image plane
NASA Astrophysics Data System (ADS)
Brown, Andrew P.; Sullivan, Kevin J.; Miller, David J.
2006-05-01
Vast quantities of EO and IR data are collected on airborne platforms (manned and unmanned) and terrestrial platforms (including fixed installations, e.g., at street intersections), and can be exploited to aid in the global war on terrorism. However, intelligent preprocessing is required to enable operator efficiency and to provide commanders with actionable target information. To this end, we have developed an image plane tracker which automatically detects and tracks multiple targets in image sequences using both motion and feature information. The effects of platform and camera motion are compensated via image registration, and a novel change detection algorithm is applied for accurate moving target detection. The contiguous pixel blob on each moving target is segmented for use in target feature extraction and model learning. Feature-based target location measurements are used for tracking through move-stop-move maneuvers, close target spacing, and occlusion. Effective clutter suppression is achieved using joint probabilistic data association (JPDA), and confirmed target tracks are indicated for further processing or operator review. In this paper we describe the algorithms implemented in the image plane tracker and present performance results obtained with video clips from the DARPA VIVID program data collection and from a miniature unmanned aerial vehicle (UAV) flight.
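A compact version of the register-then-difference front end described above might look like the following OpenCV sketch; the affine ECC model, thresholds, and blob-size cutoff are illustrative choices, and the feature extraction and JPDA stages are not shown:

```python
import cv2
import numpy as np

def detect_moving_blobs(prev_gray, curr_gray, diff_thresh=25, min_area=20):
    """Register consecutive grayscale frames, then difference them.

    Platform/camera motion is compensated with an ECC affine fit;
    residual differences are thresholded and grouped into pixel blobs.
    Returns bounding boxes of blobs large enough to be targets.
    """
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-4)
    _, warp = cv2.findTransformECC(curr_gray, prev_gray, warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)
    # Warp the previous frame into the current frame's coordinates.
    aligned_prev = cv2.warpAffine(
        prev_gray, warp, (curr_gray.shape[1], curr_gray.shape[0]),
        flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    diff = cv2.absdiff(curr_gray, aligned_prev)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```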
Dent, Kevin
2014-05-01
Dent, Humphreys, and Braithwaite (2011) showed substantial costs to search when a moving target shared its color with a group of ignored static distractors. The present study further explored the conditions under which such costs to performance occur. Experiment 1 tested whether the negative color-sharing effect was specific to cases in which search showed a highly serial pattern. The results showed that the negative color-sharing effect persisted in the case of a target defined as a conjunction of movement and form, even when search was highly efficient. In Experiment 2, the ease with which participants could find an odd-colored target amongst a moving group was examined. Participants searched for a moving target amongst moving and stationary distractors. In Experiment 2A, participants performed a highly serial search through a group of similarly shaped moving letters. Performance was much slower when the target shared its color with a set of ignored static distractors. The exact same displays were used in Experiment 2B; however, participants now responded "present" for targets that shared the color of the static distractors. The same targets that had previously been difficult to find were now found efficiently. The results are interpreted in a flexible framework for attentional control. Targets that are linked with irrelevant distractors by color tend to be ignored. However, this cost can be overridden by top-down control settings.
The Force of Appearance: Gamma Movement, Naive Impetus, and Representational Momentum
ERIC Educational Resources Information Center
Hubbard, Timothy L.; Ruppel, Susan E.; Courtney, Jon R.
2005-01-01
If a moving stimulus (i.e., launcher) contacts a stationary target that subsequently begins to move, observers attribute motion of the target to the launcher (Michotte, 1946/1963). In experiments reported here, a stationary launcher adjacent to the target appeared or vanished and displacement in memory for the position of the target was measured.…
Disruption of State Estimation in the Human Lateral Cerebellum
Miall, R. Chris; Christensen, Lars O. D; Cain, Owen; Stanley, James
2007-01-01
The cerebellum has been proposed to be a crucial component in the state estimation process that combines information from motor efferent and sensory afferent signals to produce a representation of the current state of the motor system. Such a state estimate of the moving human arm would be expected to be used when the arm is rapidly and skillfully reaching to a target. We now report the effects of transcranial magnetic stimulation (TMS) over the ipsilateral cerebellum as healthy humans interrupted a slow voluntary movement to rapidly reach towards a visually defined target. Errors in the initial direction and in the final finger position of this reach-to-target movement were significantly higher for cerebellar stimulation than in control conditions. The average directional errors in the cerebellar TMS condition were consistent with the reaching movements being planned and initiated from an estimated hand position that was 138 ms out of date. We suggest that these results demonstrate that the cerebellum is responsible for estimating the hand position over this time interval and that TMS disrupts this state estimate. PMID:18044990
Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature
Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat
2014-01-01
It is a challenge to represent the target appearance model for moving object tracking in complex environments. This study presents a novel method in which the appearance model is described by double templates based on a timed motion history image with an HSV color histogram feature (tMHI-HSV). The main components include offline and online template initialization, calculation of tMHI-HSV-based candidate patch feature histograms, double templates matching (DTM) for object location, and template updating. First, we initialize the target object region and calculate its HSV color histogram feature as the offline template and the online template. Second, the tMHI-HSV is used to segment the motion region, and the color histograms of the candidate object patches are calculated to represent their appearance models. Finally, we use the DTM method to track the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle scale variation and pose change of rigid and nonrigid objects, even under illumination change and occlusion. PMID:24592185
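The appearance-model side of the method reduces to histogram scoring; a sketch follows, assuming hue-saturation histograms and a fixed blend between the two templates (bin counts and the weighting are illustrative, not the paper's values, and the tMHI motion segmentation that generates the candidate patches is omitted):

```python
import cv2
import numpy as np

def hsv_hist(patch_bgr, bins=(16, 16)):
    """Hue-saturation histogram used as a patch appearance model."""
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def match_candidates(candidate_patches, offline_tpl, online_tpl, alpha=0.5):
    """Score each candidate against both templates; return the best index.

    alpha blends the fixed offline template with the adaptive online
    one; the winner's histogram would then refresh the online template.
    """
    scores = []
    for patch in candidate_patches:
        h = hsv_hist(patch)
        s_off = cv2.compareHist(offline_tpl, h, cv2.HISTCMP_CORREL)
        s_on = cv2.compareHist(online_tpl, h, cv2.HISTCMP_CORREL)
        scores.append(alpha * s_off + (1.0 - alpha) * s_on)
    return int(np.argmax(scores)), scores
```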
Response to reflected-force feedback to fingers in teleoperations
NASA Technical Reports Server (NTRS)
Sutter, P. H.; Iatridis, J. C.; Thakor, N. V.
1989-01-01
Reflected-force feedback is an important aspect of teleoperations. The objective is to determine the ability of the human operator to respond to that force. Telerobotic operation is simulated by computer control of a motor-driven device with capabilities for programmable force feedback and force measurement. A computer-controlled motor drive is developed that provides forces against the fingers as well as (angular) position control. A load cell moves in a circular arc as it is pushed by a finger and measures reaction forces on the finger. The force exerted by the finger on the load cell and the angular position are digitized and recorded as a function of time by the computer. Flexure forces of the index, long, and ring fingers of the human hand in opposition to the motor-driven load cell are investigated. Results of the following experiments are presented: (1) exertion of maximum finger force as a function of angle; (2) exertion of a target finger force against a computer-controlled force; and (3) a test of the ability to move to a target force against a force that is a function of position. Averaged over ten individuals, the maximum force that could be exerted by the index or long finger is about 50 Newtons, while that of the ring finger is about 40 Newtons. From the tests of the ability of a subject to exert a target force, it was concluded that reflected-force feedback can be achieved with the direct kinesthetic perception of force, without the use of tactile or visual cues.
Study of multi-functional precision optical measuring system for large scale equipment
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi
2017-10-01
The effective application of high-performance measurement technology can greatly improve large-scale equipment manufacturing capability. The measurement of geometric parameters such as size, attitude, and position therefore requires a measurement system with high precision, multiple functions, portability, and other characteristics. However, existing measuring instruments, such as the laser tracker, total station, and photogrammetry system, mostly offer a single function, require station moves, and have other shortcomings. A laser tracker must work with a cooperative target and can hardly meet the requirements of measurement in extreme environments. A total station is mainly used for outdoor surveying and mapping and can hardly achieve the accuracy demanded in industrial measurement. A photogrammetry system can achieve wide-range multi-point measurement, but its measuring range is limited and the station must be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can work both by scanning the measurement path and by tracking and measuring a cooperative target. The system is based on several key technologies: absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of a complex mechanical system, and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures high-accuracy measurement, and the two-dimensional angle measurement module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure the quality and performance of large-scale equipment throughout the manufacturing process and improve the manufacturing capability of large-scale and high-end equipment.
Shock-like haemodynamic responses induced in the primary visual cortex by moving visual stimuli
Robinson, P. A.
2016-01-01
It is shown that recently discovered haemodynamic waves can form shock-like fronts when driven by stimuli that excite the cortex in a patch that moves faster than the haemodynamic wave velocity. If stimuli are chosen in order to induce shock-like behaviour, the resulting blood oxygen level-dependent (BOLD) response is enhanced, thereby improving the signal-to-noise ratio of measurements made with functional magnetic resonance imaging. A spatio-temporal haemodynamic model is extended to calculate the BOLD response and determine the main properties of waves induced by moving stimuli. From this, the optimal conditions for stimulating shock-like responses are determined, and ways of inducing these responses in experiments are demonstrated in a pilot study. PMID:27974572
Measuring and tracking eye movements of a behaving archer fish by real-time stereo vision.
Ben-Simon, Avi; Ben-Shahar, Ohad; Segev, Ronen
2009-11-15
The archer fish (Toxotes chatareus) exhibits unique visual behavior in that it is able to aim at insects resting on the foliage above the water level, shoot them down with a squirt of water, and then feed on them. This extreme behavior requires excellent visual acuity, learning, and tight synchronization between the visual system and body motion. This behavior also raises many important questions, such as the fish's ability to compensate for air-water refraction and the neural mechanisms underlying target acquisition. While many such questions remain open, significant insights towards solving them can be obtained by tracking the eye and body movements of freely behaving fish. Unfortunately, existing tracking methods suffer from either a high level of invasiveness or low resolution. Here, we present a video-based eye tracking method for accurately and remotely measuring the eye and body movements of a freely moving, behaving fish. Based on a stereo vision system and a unique triangulation method that corrects for air-glass-water refraction, we are able to measure the full three-dimensional pose of the fish eye and body with high temporal and spatial resolution. Our method, being generic, can be applied to studying the behavior of marine animals in general. We demonstrate how data collected by our method may be used to show that the hunting behavior of the archer fish is composed of surfacing concomitant with rotating the body around the direction of the fish's fixed gaze towards the target, until the snout reaches the correct shooting position at the water level.
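The refraction correction at the heart of such a triangulation rests on Snell's law applied in vector form at each interface; a small sketch (the refractive indices and the example ray are illustrative):

```python
import numpy as np

def refract(ray_dir, normal, n1, n2):
    """Bend a unit ray crossing an interface, per Snell's law.

    ray_dir: unit vector of the incident ray; normal: unit surface
    normal pointing back toward the incident side; n1/n2: refractive
    indices of the incident/transmitted media (air ~1.0, glass ~1.5,
    water ~1.33). Returns None at total internal reflection.
    """
    r = n1 / n2
    cos_i = -float(np.dot(normal, ray_dir))
    sin2_t = r**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * ray_dir + (r * cos_i - cos_t) * normal

# A camera ray entering the tank is bent twice (air->glass->water);
# the two corrected rays from the stereo pair are then intersected.
d = refract(np.array([0.0, -0.6, -0.8]), np.array([0.0, 0.0, 1.0]),
            1.0, 1.33)
```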
An analog retina model for detecting dim moving objects against a bright moving background
NASA Technical Reports Server (NTRS)
Searfus, R. M.; Colvin, M. E.; Eeckman, F. H.; Teeters, J. L.; Axelrod, T. S.
1991-01-01
We are interested in applications that require the ability to track a dim target against a bright, moving background. Since the target signal will be less than or comparable to the variations in the background signal intensity, sophisticated techniques must be employed to detect the target. We present an analog retina model that adapts to the motion of the background in order to enhance targets that have a velocity difference with respect to the background. Computer simulation results and our preliminary concept of an analog 'Z' focal plane implementation are also presented.
On the radar cross section (RCS) prediction of vehicles moving on the ground
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabihi, Ahmad
2014-12-10
As readers should be aware, radar cross section depends on factors such as wave frequency and polarization, target dimensions, the angle of ray incidence, the target's material and covering, the type of radar system (monostatic or bistatic), the medium containing the target and the propagating waves, and so on. Whether a vehicle is moving or stationary can also affect RCS values. Here, we investigate the factors affecting the RCS of targets moving on the ground or sea. Image theory in electromagnetics is applied to take into account the RCS of a target over the ground or sea.
Patient-specific port placement for laparoscopic surgery using atlas-based registration
NASA Astrophysics Data System (ADS)
Enquobahrie, Andinet; Shivaprabhu, Vikas; Aylward, Stephen; Finet, Julien; Cleary, Kevin; Alterovitz, Ron
2013-03-01
Laparoscopic surgery is a minimally invasive surgical approach, in which abdominal surgical procedures are performed through trocars via small incisions. Patients benefit from reduced postoperative pain, shortened hospital stays, improved cosmetic results, and faster recovery times. Optimal port placement can improve surgeon dexterity and avoid the need to move the trocars, which would cause unnecessary trauma to the patient. We are building an intuitive open source visualization system to help surgeons identify ports. Our methodology is based on an intuitive port placement visualization module and an atlas-based registration algorithm to transfer port locations to individual patients. The methodology follows three steps: 1) use the port placement visualization module to manually place ports in an abdominal organ atlas, generating a port-augmented abdominal atlas (done only once for a given patient population); 2) register the atlas data with the patient CT data to transfer the prescribed ports to the individual patient; 3) review and adjust the transferred port locations using the port placement visualization module. Tool maneuverability and target reachability can be tested using the visualization system. Our methodology would decrease the amount of physician input necessary to optimize port placement for each patient case. In follow-up work, we plan to use the transferred ports as a starting point for further optimization of the port locations by formulating a cost function that takes into account factors such as tool dexterity and the likelihood of collision between instruments.
Predicting Moves-on-Stills for Comic Art Using Viewer Gaze Data.
Jain, Eakta; Sheikh, Yaser; Hodgins, Jessica
2016-01-01
Comic art consists of a sequence of panels of different shapes and sizes that visually communicate the narrative to the reader. The move-on-stills technique allows such still images to be retargeted for digital displays via camera moves. Today, moves-on-stills can be created by software applications given user-provided parameters for each desired camera move. The proposed algorithm uses viewer gaze as input to computationally predict camera move parameters. The authors demonstrate their algorithm on various comic book panels and evaluate its performance by comparing their results with a professional DVD.
Evolution of attention mechanisms for early visual processing
NASA Astrophysics Data System (ADS)
Müller, Thomas; Knoll, Alois
2011-03-01
Early visual processing as a method to speed up computations on visual input data has long been discussed in the computer vision community. The general aim of such approaches is to filter nonrelevant information out before the costly higher-level visual processing algorithms run. By inserting this additional filter layer, the overall approach can be sped up without actually changing the visual processing methodology. Inspired by the layered architecture of the human visual processing apparatus, several approaches for early visual processing have recently been proposed. Most promising in this field is the extraction of a saliency map to determine regions of current attention in the visual field. Such saliency can be computed in a bottom-up manner, i.e., the theory claims that static regions of attention emerge from a certain color footprint, and dynamic regions of attention emerge from connected blobs of texture moving in a uniform way in the visual field. Top-down saliency effects are either unconscious, through inherent mechanisms like inhibition of return (within a period of time the attention level paid to a certain region automatically decreases if the properties of that region do not change), or volitional, through cognitive feedback (e.g., if an object moves consistently in the visual field). These bottom-up and top-down saliency effects were implemented and evaluated in a previous computer vision system for the project JAST. In this paper an extension applying evolutionary processes is proposed. The prior vision system utilized multiple threads to analyze the regions of attention delivered by the early processing mechanism. Here, in addition, multiple saliency units, each with a different parameter set, are used to produce these regions of attention. The idea is to let the population of saliency units create regions of attention, evaluate the results with cognitive feedback, and then apply the genetic mechanism: mutation and cloning of the best performers and extinction of the worst performers with respect to the computation of regions of attention. A fitness function can be derived by evaluating whether relevant objects are found in the regions created. Various experiments show that the approach significantly speeds up visual processing, especially for robust real-time object recognition, compared to an approach without saliency-based preprocessing. Furthermore, the evolutionary algorithm improves the overall quality of the preprocessing system, as the system automatically and autonomously tunes the saliency parameters. The computational overhead produced by periodic clone/delete/mutation operations can be handled well within the real-time constraints of the experimental computer vision system. Nevertheless, limitations apply whenever the visual field does not contain significant saliency information for some time but the population still tries to tune the parameters; overfitting then prevents generalization, and the evolutionary process may need to be reset by manual intervention.
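The clone/mutate/cull cycle described above is an ordinary evolutionary loop; a minimal sketch, assuming each saliency unit is a dict of numeric parameters and that a cognitive-feedback fitness function is supplied by the caller:

```python
import random

def evolve_saliency_units(population, fitness, keep=0.5, sigma=0.1):
    """One generation over a population of saliency parameter sets.

    fitness(params) should score how often relevant objects fell
    inside the regions of attention that unit produced (the paper's
    cognitive feedback); its definition is up to the caller.
    """
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: max(1, int(len(ranked) * keep))]
    children = []
    while len(survivors) + len(children) < len(population):
        parent = random.choice(survivors)
        # Clone the parent, mutating each parameter with Gaussian noise.
        children.append({k: v + random.gauss(0.0, sigma)
                         for k, v in parent.items()})
    return survivors + children
```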
Murthy, Aditya; Ray, Supriya; Shorter, Stephanie M; Schall, Jeffrey D; Thompson, Kirk G
2009-05-01
The dynamics of visual selection and saccade preparation by the frontal eye field was investigated in macaque monkeys performing a search-step task combining the classic double-step saccade task with visual search. Reward was earned for producing a saccade to a color singleton. On random trials the target and one distractor swapped locations before the saccade and monkeys were rewarded for shifting gaze to the new singleton location. A race model accounts for the probabilities and latencies of saccades to the initial and final singleton locations and provides a measure of the duration of a covert compensation process: target-step reaction time. When the target stepped out of a movement field, noncompensated saccades to the original location were produced when movement-related activity grew rapidly to a threshold. Compensated saccades to the final location were produced when the growth of the original movement-related activity was interrupted within the target-step reaction time and was replaced by activation of other neurons producing the compensated saccade. When the target stepped into a receptive field, visual neurons selected the new target location regardless of the monkeys' response. When the target stepped out of a receptive field, most visual neurons maintained the representation of the original target location, but a minority of visual neurons showed reduced activity. Chronometric analyses of the neural responses to the target step revealed that the modulation of visually responsive neurons and movement-related neurons occurred early enough to shift attention and saccade preparation from the old to the new target location. These findings indicate that visual activity in the frontal eye field signals the location of targets for orienting, whereas movement-related activity instantiates saccade preparation.
Visuo-oculomotor skills related to the visual demands of sporting environments.
Ceyte, Hadrien; Lion, Alexis; Caudron, Sébastien; Perrin, Philippe; Gauchard, Gérome C
2017-01-01
The aim of this study was to assess the visuo-oculomotor skills of gaze orientation in selected sport activities relative to the visual demands of the sporting environment. Both temporal and spatial demands were investigated: the latency and accuracy of horizontal saccades and the gain of horizontal smooth pursuit were measured in 16 fencers, 19 tennis players, 12 gymnasts, 9 swimmers, and 18 sedentary participants. For the saccade test, two sequences were tested: in the fixed sequence, participants knew in advance the time interval between each target, as well as the direction and the amplitude of its reappearance; in the Freyss sequence, the spatial changes of the target (direction and amplitude) were known in advance by participants but the time interval between each target was unknown. For the smooth-pursuit test, participants were instructed to smoothly track a target moving in a predictable sinusoidal, horizontal way without corrective ocular saccades, anticipation, or head movements. The results showed no significant differences in saccade latency across the selected sporting activities (although latencies were shorter than in non-athletes), in contrast to saccade accuracy and smooth-pursuit gain. Higher saccade accuracy was observed overall in fencers compared to non-athletes and all other sportsmen, with the exception of tennis players. In the smooth-pursuit task, only tennis players presented a significantly higher gain compared to non-athletes and gymnasts. These sport-specific characteristics of visuo-oculomotor skills are discussed with regard to different cognitive skills, such as attentional allocation and cue utilization ability, as well as differences in motor preparation.
Visualizing Energy on Target: Molecular Dynamics Simulations
2017-12-01
ARL-TR-8234, US Army Research Laboratory, December 2017. Technical report by DeCarlos E… Dates covered: 1 October 2015–30 September 2016.
Gravity and perceptual stability during translational head movement on earth and in microgravity.
Jaekl, P; Zikovitz, D C; Jenkin, M R; Jenkin, H L; Zacher, J E; Harris, L R
2005-01-01
We measured the amount of visual movement judged consistent with translational head movement under normal and microgravity conditions. Subjects wore a virtual reality helmet in which the ratio of the movement of the world to the movement of the head (visual gain) was variable. Using the method of adjustment under normal gravity 10 subjects adjusted the visual gain until the visual world appeared stable during head movements that were either parallel or orthogonal to gravity. Using the method of constant stimuli under normal gravity, seven subjects moved their heads and judged whether the virtual world appeared to move "with" or "against" their movement for several visual gains. One subject repeated the constant stimuli judgements in microgravity during parabolic flight. The accuracy of judgements appeared unaffected by the direction or absence of gravity. Only the variability appeared affected by the absence of gravity. These results are discussed in relation to discomfort during head movements in microgravity.
Clinical implications of parallel visual pathways.
Bassi, C J; Lehmkuhle, S
1990-02-01
Visual information travels from the retina to visual cortical areas along at least two parallel pathways. In this paper, anatomical and physiological evidence is presented to demonstrate the existence of, and trace, these two pathways throughout the visual systems of the cat, primate, and human. Physiological and behavioral experiments are discussed which establish that these two pathways are differentially sensitive to stimuli that vary in spatial and temporal frequency. One pathway (M-pathway) is more sensitive to coarse visual form that is modulated or moving at fast rates, whereas the other pathway (P-pathway) is more sensitive to spatial detail that is stationary or moving at slow rates. This difference between the M- and P-pathways is related to some spatial and temporal effects observed in humans. Furthermore, evidence is presented that certain diseases selectively compromise the functioning of the M- or P-pathways (i.e., glaucoma, Alzheimer's disease, and anisometropic amblyopia), and some of the spatial and temporal deficits observed in these patients are presented within the context of the dysfunction of the M- or P-pathway.
VAST Challenge 2016: Streaming Visual Analytics
2016-10-25
…understand rapidly evolving situations. To support such tasks, visual analytics solutions must move well beyond systems that simply provide real-time… Mini-Challenge 1 (Design Challenge) focused on systems to support security and operational analytics at the Euybia… Challenge 1 was to solicit novel approaches for streaming visual analytics that push the boundaries of what constitutes a visual analytics system, and to…
Schwegmann, Alexander; Lindemann, Jens P.; Egelhaaf, Martin
2014-01-01
Knowing the depth structure of the environment is crucial for moving animals in many behavioral contexts, such as collision avoidance, targeting objects, or spatial navigation. An important source of depth information is motion parallax. This powerful cue is generated on the eyes during translatory self-motion with the retinal images of nearby objects moving faster than those of distant ones. To investigate how the visual motion pathway represents motion-based depth information we analyzed its responses to image sequences recorded in natural cluttered environments with a wide range of depth structures. The analysis was done on the basis of an experimentally validated model of the visual motion pathway of insects, with its core elements being correlation-type elementary motion detectors (EMDs). It is the key result of our analysis that the absolute EMD responses, i.e., the motion energy profile, represent the contrast-weighted nearness of environmental structures during translatory self-motion at a roughly constant velocity. In other words, the output of the EMD array highlights contours of nearby objects. This conclusion is largely independent of the scale over which EMDs are spatially pooled and was corroborated by scrutinizing the motion energy profile after eliminating the depth structure from the natural image sequences. Hence, the well-established dependence of correlation-type EMDs on both velocity and textural properties of motion stimuli appears to be advantageous for representing behaviorally relevant information about the environment in a computationally parsimonious way. PMID:25136314
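The model's core element is easy to state in code. Below is a minimal correlation-type (Hassenstein-Reichardt) EMD array with a first-order low-pass delay arm; the time constant and filter order are illustrative rather than the study's fitted values, and the paper's motion energy profile corresponds to the absolute value of this output:

```python
import numpy as np

def emd_responses(stimulus, dt=1e-3, tau=35e-3):
    """Correlation-type EMD array over neighbouring photoreceptors.

    stimulus: (T, N) luminance samples from N photoreceptors over T
    time steps. Each half-detector multiplies the low-pass-filtered
    (delayed) signal of one input with the undelayed neighbour;
    subtracting the mirror-symmetric half gives a signed,
    direction-selective output of shape (T, N-1).
    """
    lp = np.zeros_like(stimulus, dtype=float)
    a = dt / (tau + dt)  # first-order low-pass coefficient
    for t in range(1, len(stimulus)):
        lp[t] = lp[t - 1] + a * (stimulus[t] - lp[t - 1])
    return lp[:, :-1] * stimulus[:, 1:] - stimulus[:, :-1] * lp[:, 1:]
```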
Muiños, Mónica; Ballesteros, Soledad
2015-08-01
A major topic of current research in aging has been to investigate ways to promote healthy aging and neuroplasticity in order to counteract perceptual and cognitive declines. The aim of the present study was to investigate the benefits of intensive, sustained judo and karate martial arts training in young and older athletes and nonathletes of the same age for attenuating age-related dynamic visual acuity (DVA) decline. As a target, we used a moving stimulus similar to a Landolt ring that moved horizontally, vertically, or obliquely across the screen at three possible contrasts and three different speeds. The results indicated that (1) athletes had better DVA than nonathletes; (2) the older adult groups showed a larger oblique effect than the younger groups, regardless of whether or not they practiced a martial art; and (3) age modulated the results of sport under the high-speed condition: The DVA of young karate athletes was superior to that of nonathletes, while both judo and karate older athletes showed better DVA than did sedentary older adults. These findings suggest that in older adults, the practice of a martial art in general, rather than the practice of a particular type of martial art, is the crucial thing. We concluded that the sustained practice of a martial art such as judo or karate attenuates the decline of DVA, suggesting neuroplasticity in the aging human brain.
Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments
ERIC Educational Resources Information Center
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…
Eye Movements Reveal How Task Difficulty Moulds Visual Search
ERIC Educational Resources Information Center
Young, Angela H.; Hulleman, Johan
2013-01-01
In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…
Helland, Magne; Horgen, Gunnar; Kvikstad, Tor Martin; Garthus, Tore; Aarås, Arne
2008-01-01
This study investigated the effect of moving from single-occupancy offices to a landscape environment. Thirty-two visual display unit (VDU) operators reported no significant change in visual discomfort. Lighting conditions and glare reported subjectively showed no significant correlation with visual discomfort. Experience of pain was found to reduce subjectively rated work capacity during VDU tasks. The correlation between visual discomfort and reduced work capacity for single-occupancy offices was rs=.88 (p=.000) and for office landscape rs=.82 (p=.000). Eye blink rate during habitual VDU work was recorded for 12 operators randomly selected from the 32 participants in the office landscape. A marked drop in eye blink rate during VDU work was found compared to eye blink rate during easy conversation. There were no significant changes in pain intensity in the neck, shoulder, forearm, wrist/hand, back or headache (.24
Impact of Target Distance, Target Size, and Visual Acuity on the Video Head Impulse Test.
Judge, Paul D; Rodriguez, Amanda I; Barin, Kamran; Janky, Kristen L
2018-05-01
The video head impulse test (vHIT) assesses the vestibulo-ocular reflex. Few have evaluated whether environmental factors or visual acuity influence the vHIT. The purpose of this study was to evaluate the influence of target distance, target size, and visual acuity on vHIT outcomes. Thirty-eight normal controls and 8 subjects with vestibular loss (VL) participated. vHIT was completed at 3 distances and with 3 target sizes. Normal controls were subdivided on the basis of visual acuity. Corrective saccade frequency, corrective saccade amplitude, and gain were tabulated. In the normal control group, there were no significant effects of target size or visual acuity for any vHIT outcome parameters; however, gain increased as target distance decreased. The VL group demonstrated higher corrective saccade frequency and amplitude and lower gain as compared with controls. In conclusion, decreasing target distance increases gain for normal controls but not subjects with VL. Preliminarily, visual acuity does not affect vHIT outcomes.
NASA Astrophysics Data System (ADS)
Kang, Ziho
This dissertation is divided into four parts: 1) development of effective methods for comparing visual scanning paths (scanpaths) in a dynamic task with multiple moving targets, 2) application of the methods to compare the scanpaths of experts and novices in a conflict detection task with multiple aircraft on a radar screen, 3) a post-hoc analysis of other eye movement characteristics of experts and novices, and 4) determining whether the scanpaths of experts can be used to teach novices. To compare experts' and novices' scanpaths, two methods were developed. The first proposed method is matrix comparison using the Mantel test. The second proposed method is maximum transition-based agglomerative hierarchical clustering (MTAHC), in which comparisons of multi-level visual groupings are carried out. The matrix comparison method was useful for a small number of targets in the preliminary experiment but turned out to be inapplicable to a realistic case in which tens of aircraft were presented on screen; MTAHC, however, remained effective with a large number of aircraft on screen. The experiments with experts and novices on the aircraft conflict detection task showed that their scanpaths differ. The MTAHC result was able to show explicitly how experts visually grouped multiple aircraft based on similar altitudes, while novices tended to group them based on convergence. The MTAHC results also showed that novices paid much attention to converging aircraft groups even when they were safely separated by altitude; consequently, less attention was given to the actual conflicting pairs, resulting in low correct conflict detection rates. Since the analysis showed scanpath differences, experts' scanpaths were shown to novices to assess their usefulness for training. The scanpath treatment group showed indications of changing their visual movements from trajectory-based to altitude-based patterns. Between the treatment and non-treatment groups there were no significant differences in the number of correct detections; however, the treatment group made significantly fewer false alarms.
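The Mantel test itself is a standard permutation procedure on two distance matrices; a minimal sketch (not the dissertation's code) is:

```python
import numpy as np

def mantel(dist_a, dist_b, n_perm=9999, seed=0):
    """Permutation Mantel test between two square distance matrices.

    Correlates the upper triangles, then builds the null distribution
    by jointly permuting rows and columns of one matrix.
    Returns (observed r, two-sided permutation p-value).
    """
    rng = np.random.default_rng(seed)
    n = dist_a.shape[0]
    iu = np.triu_indices(n, k=1)
    r_obs = np.corrcoef(dist_a[iu], dist_b[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        r_perm = np.corrcoef(dist_a[np.ix_(p, p)][iu], dist_b[iu])[0, 1]
        if abs(r_perm) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```

For scanpaths, dist_a and dist_b would be pairwise distance matrices derived from two observers' fixation sequences; how those distances are defined is the substantive modeling choice.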
Burke, M R; Barnes, G R
2008-12-15
We used passive and active following of a predictable smooth pursuit stimulus in order to establish if predictive eye movement responses are equivalent under both passive and active conditions. The smooth pursuit stimulus was presented in pairs that were either 'predictable' in which both presentations were matched in timing and velocity, or 'randomized' in which each presentation in the pair was varied in both timing and velocity. A visual cue signaled the type of response required from the subject; a green cue indicated the subject should follow both the target presentations (Go-Go), a pink cue indicated that the subject should passively observe the 1st target and follow the 2nd target (NoGo-Go), and finally a green cue with a black cross revealed a randomized (Rnd) trial in which the subject should follow both presentations. The results revealed better prediction in the Go-Go trials than in the NoGo-Go trials, as indicated by higher anticipatory velocity and earlier eye movement onset (latency). We conclude that velocity and timing information stored from passive observation of a moving target is diminished when compared to active following of the target. This study has significant consequences for understanding how visuomotor memory is generated, stored and subsequently released from short-term memory.
Eye Tracking of Occluded Self-Moved Targets: Role of Haptic Feedback and Hand-Target Dynamics.
Danion, Frederic; Mathew, James; Flanagan, J Randall
2017-01-01
Previous studies on smooth pursuit eye movements have shown that humans can continue to track the position of their hand, or a target controlled by the hand, after it is occluded, thereby demonstrating that arm motor commands contribute to the prediction of target motion driving pursuit eye movements. Here, we investigated this predictive mechanism by manipulating both the complexity of the hand-target mapping and the provision of haptic feedback. Two hand-target mappings were used, either a rigid (simple) one in which hand and target motion matched perfectly or a nonrigid (complex) one in which the target behaved as a mass attached to the hand by means of a spring. Target animation was obtained by asking participants to oscillate a lightweight robotic device that provided (or not) haptic feedback consistent with the target dynamics. Results showed that as long as 7 s after target occlusion, smooth pursuit continued to be the main contributor to total eye displacement (∼60%). However, the accuracy of eye-tracking varied substantially across experimental conditions. In general, eye-tracking was less accurate under the nonrigid mapping, as reflected by higher positional and velocity errors. Interestingly, haptic feedback helped to reduce the detrimental effects of target occlusion when participants used the nonrigid mapping, but not when they used the rigid one. Overall, we conclude that the ability to maintain smooth pursuit in the absence of visual information can extend to complex hand-target mappings, but the provision of haptic feedback is critical for the maintenance of accurate eye-tracking performance. PMID:28680964
Visual search performance among persons with schizophrenia as a function of target eccentricity.
Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M
2010-03-01
The current study investigated one possible mechanism of impaired visual attention among patients with schizophrenia: a reduced visual span. Visual span is the region of the visual field from which one can extract information during a single eye fixation. This study hypothesized that schizophrenia-related visual search impairment is mediated, in part, by a smaller visual span. To test this hypothesis, 23 patients with schizophrenia and 22 healthy controls completed a visual search task where the target was pseudorandomly presented at different distances from the center of the display. Response times were analyzed as a function of search condition (feature vs. conjunctive), display size, and target eccentricity. Consistent with previous reports, patient search times were more adversely affected as the number of search items increased in the conjunctive search condition. Importantly, however, patients' conjunctive search times were also impacted to a greater degree by target eccentricity. Moreover, a significant impairment in patients' visual search performance was only evident when targets were more eccentric; their performance was more similar to that of healthy controls when the target was located closer to the center of the search display. These results support the hypothesis that a narrower visual span may underlie impaired visual search performance among patients with schizophrenia.
Target-locking acquisition with real-time confocal (TARC) microscopy.
Lu, Peter J; Sims, Peter A; Oki, Hidekazu; Macarthur, James B; Weitz, David A
2007-07-09
We present a real-time target-locking confocal microscope that follows an object moving along an arbitrary path, even as it simultaneously changes its shape, size, and orientation. This Target-locking Acquisition with Realtime Confocal (TARC) microscopy system integrates fast image processing and rapid image acquisition using a Nipkow spinning-disk confocal microscope. The system acquires a 3D stack of images, performs a full structural analysis to locate a feature of interest, moves the sample in response, and then collects the next 3D image stack. In this way, data collection is dynamically adjusted to keep a moving object centered in the field of view. We demonstrate the system's capabilities by target-locking freely diffusing clusters of attractive colloidal particles, and actively transported quantum dots (QDs) endocytosed into live cells free to move in three dimensions, for several hours. During this time, both the colloidal clusters and live cells move distances several times the length of the imaging volume.
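The acquire-locate-recentre cycle can be sketched as follows; acquire_stack and move_stage are hypothetical placeholders for the instrument's camera and stage drivers (not the authors' API), and the brightest-blob centroid stands in for the paper's full structural analysis:

```python
import numpy as np
from scipy import ndimage

def locate_feature(stack, thresh):
    """Centroid of the largest above-threshold blob in a 3-D stack."""
    mask = stack > thresh
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    best = int(np.argmax(sizes)) + 1
    return np.array(ndimage.center_of_mass(stack, labels, best))

def target_lock_loop(acquire_stack, move_stage, n_frames, thresh, gain=1.0):
    """Acquire -> locate -> recentre loop of a target-locking microscope.

    gain < 1 damps the stage correction against noisy centroids.
    Yields each acquired stack, with the feature kept near the centre.
    """
    for _ in range(n_frames):
        stack = acquire_stack()               # 3-D (z, y, x) image stack
        pos = locate_feature(stack, thresh)
        if pos is None:
            continue                          # feature lost this frame
        centre = (np.array(stack.shape) - 1) / 2.0
        move_stage(*(gain * (pos - centre)))  # recentre in the volume
        yield stack
```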
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature-integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex; when the visual but not the auditory stimulus was a target there was an SN over visual cortex; and when both auditory and visual stimuli were targets (i.e., conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual feature alone or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically, with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality: through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
The effects of task difficulty on visual search strategy in virtual 3D displays.
Pomplun, Marc; Garaas, Tyler W; Carrasco, Marisa
2013-08-28
Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios.
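Two of the newly proposed measures reduce to simple geometry on the fixation sequence; one possible reading of them, as a sketch (screen-coordinate conventions assumed):

```python
import numpy as np

def search_dynamics(fix_xy, target_xy):
    """Saccadic step size and x-y target distance from gaze data.

    fix_xy: (F, 2) fixation positions in screen coordinates;
    target_xy: (2,) target position. Returns the per-saccade step
    sizes and the per-fixation Euclidean distance to the target.
    """
    steps = np.linalg.norm(np.diff(fix_xy, axis=0), axis=1)
    dist_to_target = np.linalg.norm(fix_xy - np.asarray(target_xy), axis=1)
    return steps, dist_to_target
```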
Some characteristics of optokinetic eye-movement patterns : a comparative study.
DOT National Transportation Integrated Search
1970-07-01
Long-associated with transportation ('railroad nystagmus'), optokinetic (OPK) nystagmus is an eye-movement reaction which occurs when a series of moving objects crosses the visual field or when an observer moves past a series of objects. Similar cont...
Visual control of prey-capture flight in dragonflies.
Olberg, Robert M
2012-04-01
Interacting with a moving object poses a computational problem for an animal's nervous system. This problem has been elegantly solved by the dragonfly, a formidable visual predator on flying insects. The dragonfly computes an interception flight trajectory and steers to maintain it during its prey-pursuit flight. This review summarizes current knowledge about pursuit behavior and neurons thought to control interception in the dragonfly. When understood, this system has the potential for explaining how a small group of neurons can control complex interactions with moving objects.
Stimulus-dependent modulation of visual neglect in a touch-screen cancellation task.
Keller, Ingo; Volkening, Katharina; Garbacenkaite, Ruta
2015-05-01
Patients with left-sided neglect frequently show omissions and repetitive behavior on cancellation tests. Using a touch-screen-based cancellation task, we tested how visual feedback and distracters influence the number of omissions and perseverations. Eighteen patients with left-sided visual neglect and 18 healthy controls performed four different cancellation tasks on an iPad touch screen: no feedback (the display did not change during the task), visual feedback (touched targets changed their color from black to green), visual feedback with distracters (20 distracters were evenly embedded in the display; detected targets changed their color from black to green), and vanishing targets (touched targets disappeared from the screen). Except for the condition with vanishing targets, neglect patients had significantly more omissions and perseverations than healthy controls in the remaining three subtests. Both conditions providing feedback by changing the target color showed the highest number of omissions. Erasing targets almost completely eliminated omissions. The highest rate of perseverations was observed in the no-feedback condition. The implementation of distracters led to a moderate number of perseverations. Visual feedback without distracters and vanishing targets abolished perseverations almost completely. Visual feedback and the presence of distracters aggravated hemispatial neglect. This finding is compatible with impaired disengagement from the ipsilesional side as an important factor in visual neglect. The improvement of cancellation behavior with vanishing targets could have therapeutic implications.
2016-01-01
Particle therapy of moving targets is still a great challenge. The motion of organs situated in the thorax and abdomen strongly affects the precision of proton and carbon ion radiotherapy. The motion is responsible not only for the dislocation of the tumour but also for alterations in the internal density along the beam path, which influence the range of particle beams. Furthermore, in the case of pencil beam scanning, there is interference between the target movement and the dynamic beam delivery. This review presents the strategies for tumour motion monitoring and moving target irradiation in the context of hadron therapy. Methods enabling the direct determination of tumour position (fluoroscopic imaging of implanted radio-opaque fiducial markers, electromagnetic detection of inserted transponders, and ultrasonic tumour localization systems) are presented. Attention is also drawn to the techniques which use external surrogate motion for an indirect estimation of target displacement during irradiation. The role of respiratory-correlated CT [four-dimensional CT (4DCT)] in the determination of the motion pattern prior to particle treatment is also considered. An essential part of the article is the review of the main approaches to moving target irradiation in hadron therapy: gating, rescanning (repainting), gated rescanning, and tumour tracking. The advantages, drawbacks, and development trends of these methods are discussed. The new accelerators, called “cyclinacs”, are presented because their application to particle therapy would enable a breakthrough in the 4D spot-scanning treatment of moving organs. PMID:27376637
Detection and identification of human targets in radar data
NASA Astrophysics Data System (ADS)
Gürbüz, Sevgi Z.; Melvin, William L.; Williams, Douglas B.
2007-04-01
Radar offers unique advantages over other sensors, such as visual or seismic sensors, for human target detection. Many situations, especially military applications, prevent the placement of video cameras or the emplacement of seismic sensors in the area being observed, because of security or other threats. However, radar can operate far away from potential targets, and functions during daytime as well as nighttime, in virtually all weather conditions. In this paper, we examine the problem of human target detection and identification using single-channel, airborne, synthetic aperture radar (SAR). Human targets are differentiated from other detected slow-moving targets by analyzing the spectrogram of each potential target. Human spectrograms are unique, and can be used not just to identify targets as human, but also to determine features of the human target being observed, such as size, gender, action, and speed. A 12-point human model, together with kinematic equations of motion for each body part, is used to calculate the expected target return and spectrogram. A MATLAB simulation environment, including ground clutter and human and non-human targets, is developed for testing spectrogram-based detection and identification algorithms. Simulations show that spectrograms have some ability to detect and identify human targets in low noise. An example gender discrimination system correctly detected 83.97% of males and 91.11% of females. The problems and limitations of spectrogram-based methods in high-clutter environments are discussed. The SNR loss inherent to spectrogram-based methods is quantified. An alternate detection and identification method that will be used as a basis for future work is proposed.
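As a rough, hedged illustration of the spectrogram analysis described in this abstract: the Python sketch below simulates a complex radar return from a torso plus one swinging limb (a single oscillating scatterer standing in for the paper's 12-point body model; the PRF, wavelength, and motion parameters are invented for illustration) and computes its micro-Doppler spectrogram with SciPy.

```python
# Hedged sketch of the spectrogram analysis: a torso plus one oscillating
# scatterer stands in for the paper's 12-point body model; the PRF,
# wavelength, and motion parameters are assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 2000.0                          # pulse repetition frequency (Hz), assumed
t = np.arange(0, 4.0, 1.0 / fs)      # 4 s observation window
wavelength = 0.03                    # ~10 GHz carrier, assumed

torso_range = 1.5 * t                                          # torso at 1.5 m/s
limb_range = torso_range + 0.3 * np.sin(2 * np.pi * 2.0 * t)   # swinging limb

# Complex baseband return: phase is 4*pi*R(t)/lambda per scatterer.
echo = (np.exp(1j * 4 * np.pi * torso_range / wavelength)
        + 0.5 * np.exp(1j * 4 * np.pi * limb_range / wavelength))

f, tt, Sxx = spectrogram(echo, fs=fs, nperseg=256, noverlap=192,
                         return_onesided=False)
# Ridges in Sxx trace the torso Doppler line plus the limb micro-Doppler
# oscillation used to separate humans from other slow-moving targets.
```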
Physiological importance of RNA and protein mobility in the cell nucleus
2007-01-01
Trafficking of proteins and RNAs is essential for cellular function and homeostasis. While it has long been appreciated that proteins and RNAs move within cells, only recently has it become possible to visualize trafficking events in vivo. Analyses of protein and RNA motion within the cell nucleus have been particularly intriguing, as they have revealed an unanticipated degree of dynamics within the organelle. These methods have revealed that intranuclear trafficking occurs largely by energy-independent mechanisms and is driven by diffusion. RNA molecules and non-DNA-binding proteins undergo constrained diffusion, largely limited by the spatial constraint imposed by chromatin, whereas chromatin-binding proteins move by a stop-and-go mechanism in which their free diffusion is interrupted by random association with the chromatin fiber. The mobility and mode of motion of proteins and RNAs have implications for how they find nuclear targets on chromatin and in nuclear subcompartments, and for how macromolecular complexes are assembled in vivo. Most importantly, the dynamic nature of proteins and RNAs is emerging as a means to control physiological cellular responses and pathways. PMID:17994245
Visual Sensitivities and Discriminations and Their Roles in Aviation.
1986-03-01
D. Low contrast letter charts in early diabetic retinopathy, ocular hypertension, glaucoma and Parkinson’s disease. Br J Ophthalmol, 1984, 68, 885...to detect a camouflaged object that was visible only when moving, and compared these data with similar measurements for conventional objects that were...(3) Compare visual detection (i.e. visual acquisition) of camouflaged objects whose edges are defined by velocity differences with visual detection
Camouflage, detection and identification of moving targets
Hall, Joanna R.; Cuthill, Innes C.; Baddeley, Roland; Shohet, Adam J.; Scott-Samuel, Nicholas E.
2013-01-01
Nearly all research on camouflage has investigated its effectiveness for concealing stationary objects. However, animals have to move, and patterns that only work when the subject is static will heavily constrain behaviour. We investigated the effects of different camouflages on the three stages of predation—detection, identification and capture—in a computer-based task with humans. An initial experiment tested seven camouflage strategies on static stimuli. In line with previous literature, background-matching and disruptive patterns were found to be most successful. Experiment 2 showed that if stimuli move, an isolated moving object on a stationary background cannot avoid detection or capture regardless of the type of camouflage. Experiment 3 used an identification task and showed that while camouflage is unable to slow detection or capture, camouflaged targets are harder to identify than uncamouflaged targets when similar background objects are present. The specific details of the camouflage patterns have little impact on this effect. If one has to move, camouflage cannot impede detection; but if one is surrounded by similar targets (e.g. other animals in a herd, or moving background distractors), then camouflage can slow identification. Despite previous assumptions, motion does not entirely ‘break’ camouflage. PMID:23486439
Ground moving target geo-location from monocular camera mounted on a micro air vehicle
NASA Astrophysics Data System (ADS)
Guo, Li; Ang, Haisong; Zheng, Xiangming
2011-08-01
The usual approaches to unmanned air vehicle (UAV)-to-ground target geo-location impose severe constraints on the system, such as stationary objects, an accurate geo-referenced terrain database, or a ground-plane assumption. Micro air vehicles (MAVs) operate at low altitude with limited payload and low-accuracy onboard sensors. Accordingly, a method is developed to determine the location of a ground moving target imaged from the air by a monocular camera on an MAV. The method eliminates the requirement for a terrain database (elevation maps) and for altimeters that provide the MAV's and target's altitudes; instead, it requires only the MAV flight status provided by the inherent onboard navigation system, which comprises an inertial measurement unit (IMU) and a global positioning system (GPS). The key is obtaining accurate information on the altitude of the ground moving target. First, an optical-flow method extracts static background feature points. Within a local region around the target in the current image, features lying on the same plane as the target are extracted and retained as aided features. An inverse-velocity method then calculates the locations of these points by integrating them with the aircraft status. The altitude of the target, computed from the positions of these aided features, is combined with the aircraft status and image coordinates to geo-locate the target. Meanwhile, a Bayesian estimation framework is employed to suppress noise from the camera, IMU, and GPS. First, an extended Kalman filter (EKF) provides a simultaneous localization and mapping solution for estimating the aircraft states and the aided-feature locations that define the moving target's local environment. Second, an unscented transformation (UT) determines the estimated mean and covariance of the target location from the aircraft states and aided-feature locations, and exports them to the moving-target Kalman filter (KF). Experimental results show that the method can instantaneously geo-locate a moving target from a single operator click and achieves 15-meter accuracy for an MAV flying 200 meters above the ground.
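The unscented transformation step described above can be sketched generically. The following is a minimal, self-contained UT implementation; the geo-location function g() and the state layout in the demo are hypothetical placeholders, not the authors' formulation.

```python
# Generic unscented transform, a minimal sketch of the UT step described
# above. The function g() and the demo's state layout are hypothetical.
import numpy as np

def unscented_transform(mean, cov, g, alpha=1e-3, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)             # scaled matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
    ys = np.array([g(s) for s in sigma])                # propagate through g
    y_mean = wm @ ys
    diff = ys - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Demo with a made-up 3-state input (camera tilt, height, bearing):
g = lambda s: np.array([s[1] * np.tan(s[0]),
                        s[1] * np.tan(s[0]) * np.cos(s[2])])
m, P = unscented_transform(np.array([0.4, 200.0, 0.1]),
                           np.diag([1e-4, 4.0, 1e-4]), g)
```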
Move with Me: A Parents' Guide to Movement Development for Visually Impaired Babies.
ERIC Educational Resources Information Center
Blind Childrens Center, Los Angeles, CA.
This booklet presents suggestions for parents to promote their visually impaired infant's motor development. It is pointed out that babies with serious visual loss often prefer their world to be constant and familiar and may resist change (including change in position); therefore, it is important that a wide range of movement activities be…
First-Person Visualizations of the Special and General Theory of Relativity
ERIC Educational Resources Information Center
Kraus, U.
2008-01-01
Visualizations that adopt a first-person point of view allow observation and, in the case of interactive simulations, experimentation with relativistic scenes. This paper gives examples of three types of first-person visualizations: watching objects that move at nearly the speed of light, being a high-speed observer looking at a static environment…
NASA Astrophysics Data System (ADS)
Oku, H.; Ogawa, N.; Ishikawa, M.; Hashimoto, K.
2005-03-01
In this article, a micro-organism tracking system using a high-speed vision system is reported. This system two-dimensionally tracks a freely swimming micro-organism within the field of an optical microscope by moving a chamber of target micro-organisms based on high-speed visual feedback. The system we developed could track a paramecium using various imaging techniques, including bright-field illumination, dark-field illumination, and differential interference contrast, at magnifications of 5× and 20×. A maximum tracking duration of 300 s was demonstrated. Also, the system could track an object with a velocity of up to 35,000 μm/s (175 diameters/s), which is significantly faster than swimming micro-organisms.
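A minimal sketch of the visual-feedback loop such a tracking system relies on: estimate the organism's image centroid each frame and command the stage to cancel the offset. The gain, sensor size, and sign convention below are assumptions, not the published system's parameters.

```python
# Minimal sketch of a high-speed visual-feedback tracking loop; the gain,
# sensor size, and sign convention are assumptions.
import numpy as np

KP = 0.8                            # proportional gain (assumed)
CENTER = np.array([160.0, 120.0])   # image centre of a 320x240 sensor

def centroid(frame):
    """Intensity-weighted centroid (x, y) of a grayscale frame."""
    ys, xs = np.indices(frame.shape)
    m = frame.sum()
    return np.array([(frame * xs).sum() / m, (frame * ys).sum() / m])

def control_step(frame, stage_xy):
    err = centroid(frame) - CENTER   # pixels off-centre
    # Sign depends on the optical path; here the stage moves opposite
    # to the apparent image offset to re-centre the organism.
    return stage_xy - KP * err
```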
Priming and the guidance by visual and categorical templates in visual search.
Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L
2014-01-01
Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.
Distractor Interference during Smooth Pursuit Eye Movements
ERIC Educational Resources Information Center
Spering, Miriam; Gegenfurtner, Karl R.; Kerzel, Dirk
2006-01-01
When 2 targets for pursuit eye movements move in different directions, the eye velocity follows the vector average (S. G. Lisberger & V. P. Ferrera, 1997). The present study investigates the mechanisms of target selection when observers are instructed to follow a predefined horizontal target and to ignore a moving distractor stimulus. Results show…
Sabbah, P; de, Schonen S; Leveque, C; Gay, S; Pfefer, F; Nioche, C; Sarrazin, J L; Barouti, H; Tadie, M; Cordoliani, Y S
2002-01-01
Residual activation of the cortex was investigated in nine patients with complete spinal cord injury between T6 and L1 by functional magnetic resonance imaging (fMRI). Brain activations were recorded under four conditions: (1) the patient attempting to move his toes in flexion-extension, (2) the patient imagining the same movement, (3) passive proprio-somesthesic stimulation of the big toes without visual control, and (4) passive proprio-somesthesic stimulation of the big toes with visual control by the patient. Passive proprio-somesthesic stimulation of the toes generated activation posterior to the central sulcus in the three patients who also showed a somesthesic evoked potential response to somesthesic stimulation. When performed under visual control, activations were observed in two more patients. In all patients, activations were found in the cortical areas involved in motor control (i.e., primary sensorimotor cortex, premotor regions and supplementary motor area [SMA]) during attempts to move or mental imagery of these tasks. It is concluded that even several years after injury with some local cortical reorganization, activation of lower limb cortical networks can be generated either by the attempt to move, the mental evocation of the action, or the visual feedback of a passive proprio-somesthesic stimulation.
Filling in the gaps: Anticipatory control of eye movements in chronic mild traumatic brain injury.
Diwakar, Mithun; Harrington, Deborah L; Maruta, Jun; Ghajar, Jamshid; El-Gabalawy, Fady; Muzzatti, Laura; Corbetta, Maurizio; Huang, Ming-Xiong; Lee, Roland R
2015-01-01
A barrier in the diagnosis of mild traumatic brain injury (mTBI) stems from the lack of measures that are adequately sensitive in detecting mild head injuries. MRI and CT are typically negative in mTBI patients with persistent symptoms of post-concussive syndrome (PCS), and characteristic difficulties in sustaining attention often go undetected on neuropsychological testing, which can be insensitive to momentary lapses in concentration. Conversely, visual tracking strongly depends on sustained attention over time and is impaired in chronic mTBI patients, especially when tracking an occluded target. This finding suggests deficient internal anticipatory control in mTBI, the neural underpinnings of which are poorly understood. The present study investigated the neuronal bases for deficient anticipatory control during visual tracking in 25 chronic mTBI patients with persistent PCS symptoms and 25 healthy control subjects. The task was performed while undergoing magnetoencephalography (MEG), which allowed us to examine whether neural dysfunction associated with anticipatory control deficits was due to altered alpha, beta, and/or gamma activity. Neuropsychological examinations characterized cognition in both groups. During MEG recordings, subjects tracked a predictably moving target that was either continuously visible or randomly occluded (gap condition). MEG source-imaging analyses tested for group differences in alpha, beta, and gamma frequency bands. The results showed executive functioning, information processing speed, and verbal memory deficits in the mTBI group. Visual tracking was impaired in the mTBI group only in the gap condition. Patients showed greater error than controls before and during target occlusion, and were slower to resynchronize with the target when it reappeared. Impaired tracking concurred with abnormal beta activity, which was suppressed in the parietal cortex, especially the right hemisphere, and enhanced in left caudate and frontal-temporal areas. Regional beta-amplitude demonstrated high classification accuracy (92%) compared to eye-tracking (65%) and neuropsychological variables (80%). These findings show that deficient internal anticipatory control in mTBI is associated with altered beta activity, which is remarkably sensitive given the heterogeneity of injuries.
Global Statistical Learning in a Visual Search Task
ERIC Educational Resources Information Center
Jones, John L.; Kaschak, Michael P.
2012-01-01
Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…
Peripheral prism glasses: effects of moving and stationary backgrounds.
Shen, Jieming; Peli, Eli; Bowers, Alex R
2015-04-01
Unilateral peripheral prisms for homonymous hemianopia (HH) expand the visual field through peripheral binocular visual confusion, a stimulus for binocular rivalry that could lead to reduced predominance and partial suppression of the prism image, thereby limiting device functionality. Using natural-scene images and motion videos, we evaluated whether detection was reduced in binocular compared with monocular viewing. Detection rates of nine participants with HH or quadranopia and normal binocularity wearing peripheral prisms were determined for static checkerboard perimetry targets briefly presented in the prism expansion area and the seeing hemifield. Perimetry was conducted under monocular and binocular viewing with targets presented over videos of real-world driving scenes and still frame images derived from those videos. With unilateral prisms, detection rates in the prism expansion area were significantly lower in binocular than in monocular (prism eye) viewing on the motion background (medians, 13 and 58%, respectively, p = 0.008) but not the still frame background (medians, 63 and 68%, p = 0.123). When the stimulus for binocular rivalry was reduced by fitting prisms bilaterally in one HH and one normally sighted subject with simulated HH, prism-area detection rates on the motion background were not significantly different (p > 0.6) in binocular and monocular viewing. Conflicting binocular motion appears to be a stimulus for reduced predominance of the prism image in binocular viewing when using unilateral peripheral prisms. However, the effect was only found for relatively small targets. Further testing is needed to determine the extent to which this phenomenon might affect the functionality of unilateral peripheral prisms in more real-world situations.
A tactile display for international space station (ISS) extravehicular activity (EVA).
Rochlis, J L; Newman, D J
2000-06-01
A tactile display to increase an astronaut's situational awareness during an extravehicular activity (EVA) has been developed and ground tested. The Tactor Locator System (TLS) is a non-intrusive, intuitive display capable of conveying position and velocity information via a vibrotactile stimulus applied to the subject's neck and torso. In the Earth's 1 G environment, perception of position and velocity is determined by the body's individual sensory systems. Under normal sensory conditions, redundant information from these sensory systems provides humans with an accurate sense of their position and motion. However, altered environments, including exposure to weightlessness, can lead to conflicting visual and vestibular cues, resulting in decreased situational awareness. The TLS was designed to provide somatosensory cues to complement the visual system during EVA operations. An EVA task was simulated on a computer graphics workstation with a display of the International Space Station (ISS) and a target astronaut at an unknown location. Subjects were required to move about the ISS and acquire the target astronaut using either an auditory cue at the outset, or the TLS. Subjects used a 6 degree of freedom input device to command translational and rotational motion. The TLS was configured to act as a position aid, providing target direction information to the subject through a localized stimulus. Results show that the TLS decreases reaction time (p = 0.001) and movement time (p = 0.001) for simulated subject (astronaut) motion around the ISS. The TLS is a useful aid in increasing an astronaut's situational awareness, and warrants further testing to explore other uses, tasks and configurations.
Simulating an underwater vehicle self-correcting guidance system with Simulink
NASA Astrophysics Data System (ADS)
Fan, Hui; Zhang, Yu-Wen; Li, Wen-Zhe
2008-09-01
Underwater vehicles have adopted self-correcting directional guidance algorithms based on multi-beam self-guidance systems, even though research has yet to determine the most effective algorithms. The main challenges facing research on these guidance systems have been effective modeling of the guidance algorithm and a means of analyzing the simulation results. A simulation structure based on Simulink that deals with both issues is proposed. Initially, a mathematical model of relative motion between the vehicle and the target was developed, which was then encapsulated as a subsystem. Next, steps for constructing a model of the self-correcting guidance algorithm based on the Stateflow module were examined in detail. Finally, a 3-D model of the vehicle and target was created in VRML, and by processing the mathematical results, the model was shown moving in a visual environment, which gives more intuitive results for analyzing the simulation. The results showed that the simulation structure performs well. The simulation program makes heavy use of modularization and encapsulation, so it has broad applicability to simulations of other dynamic systems.
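For readers without Simulink, the encapsulated vehicle-target relative-motion model can be sketched in Python; the pursuit-style steering law and all speeds and headings below are illustrative stand-ins for the paper's Stateflow guidance algorithm.

```python
# Sketch of a planar vehicle-target relative-motion model; the steering
# law and numbers are illustrative, not the paper's guidance algorithm.
import numpy as np
from scipy.integrate import solve_ivp

v_vehicle, v_target = 15.0, 5.0        # speeds (m/s), assumed
course_target = np.pi / 3              # target course (rad), assumed

def relative_motion(t, state):
    x, y, heading = state              # target position relative to vehicle
    los = np.arctan2(y, x)             # line-of-sight angle
    # Steer the vehicle heading toward the line of sight (wrapped error).
    dheading = 1.5 * np.arctan2(np.sin(los - heading), np.cos(los - heading))
    dx = v_target * np.cos(course_target) - v_vehicle * np.cos(heading)
    dy = v_target * np.sin(course_target) - v_vehicle * np.sin(heading)
    return [dx, dy, dheading]

sol = solve_ivp(relative_motion, (0.0, 60.0), [400.0, 300.0, 0.0], max_step=0.1)
print(f"closest approach: {np.hypot(sol.y[0], sol.y[1]).min():.1f} m")
```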
Watanabe, Kumiko; Hara, Naoto; Kimijima, Masumi; Kotegawa, Yasue; Ohno, Koji; Arimoto, Ako; Mukuno, Kazuo; Hisahara, Satoru; Horie, Hidenori
2012-10-01
School children with myopia were trained using a visual stimulation device that generated an isolated blur stimulus on a visual target, with a constant retinal image size and constant brightness. Uncorrected visual acuity, cycloplegic refraction, axial length, dynamic accommodation and pupillary reaction were measured to investigate the effectiveness of the training. There were 45 school children with myopia without any other ophthalmic diseases. The mean age of the children was 8.9 +/- 2.0 years (age range, 6-16) and the mean refraction was -1.56 +/- 0.58 D (mean +/- standard deviation). As a visual stimulus, a white ring on a black background with a constant ratio of visual target size to retinal image size, irrespective of the distance, was displayed on a liquid crystal display (LCD), and the LCD was quickly moved from a proximal to a distal position to produce an isolated blur stimulus. Training with this visual stimulus was carried out in the relaxation phase of accommodation. Uncorrected visual acuity, cycloplegic refraction, axial length, dynamic accommodation and pupillary reaction were investigated before training and every 3 months during the training. Of the 45 subjects, 42 (93%) could be trained for 3 consecutive months, 33 (73%) for 6 months, 23 (51%) for 9 months, and 21 (47%) for 12 months. The mean refraction decreased by 0.83 +/- 0.56 D (mean +/- standard deviation) and the mean axial length increased by 0.47 +/- 0.16 mm at 1 year, showing that the training had some effect in improving the visual acuity. In the tests of the dynamic accommodative responses, the latency of the accommodative-phase decreased from 0.4 +/- 0.2 sec to 0.3 +/- 0.1 sec at 1 year, the gain of the accommodative-phase improved from 69.0 +/- 27.0% to 93.3 +/- 13.4%, the maximum speed of the accommodative-phase increased from 5.1 +/- 2.2 D/sec to 6.8 +/- 2.2 D/sec and the gain of the relaxation-phase significantly improved from 52.1 +/- 26.0% to 72.7 +/- 13.7% (corresponding t-test, p < 0.005). No significant changes were observed in the pupillary reaction. The training device was useful for improving the accommodative functions and accommodative excess, suggesting that it may be able to suppress the progression of low myopia, development of which is known to be strongly influenced by environmental factors.
Identification of the ideal clutter metric to predict time dependence of human visual search
NASA Astrophysics Data System (ADS)
Cartier, Joan F.; Hsu, David H.
1995-05-01
The Army Night Vision and Electronic Sensors Directorate (NVESD) has recently performed a human perception experiment in which eye-tracker measurements were made on trained military observers searching for targets in infrared images. These data offered an important opportunity to evaluate a new technique for search modeling. Following the approach taken by Jeff Nicoll, this model treats search as a random walk in which the observers are in one of two states until they quit: they are either examining a point of interest or wandering around looking for one. When wandering, they skip rapidly from point to point; when examining, they move more slowly, reflecting the fact that target discrimination requires additional thought processes. In this paper we simulate the random walk, using a clutter metric to assign relative attractiveness to the points of interest within the image that compete for the observer's attention. The NVESD data indicate that a number of standard clutter metrics are good estimators of the apportionment of observer time between wandering and examining. Conversely, the apportionment of observer time spent wandering and examining could be used to reverse-engineer the ideal clutter metric that most perfectly describes the behavior of the group of observers. It may be possible to use this technique to design the optimal clutter metric for predicting visual search performance.
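The two-state random walk is straightforward to simulate. The sketch below is a toy version, not NVESD's model: the clutter scores, dwell times, and examine probability are invented, and attractiveness is simply the normalized clutter metric.

```python
# Toy two-state (wander/examine) search simulation; all parameters are
# invented stand-ins for a real clutter metric and measured dwell times.
import numpy as np

rng = np.random.default_rng(1)
n_points = 40
clutter = rng.exponential(1.0, n_points)   # stand-in clutter metric scores
attract = clutter / clutter.sum()          # relative attractiveness
target = 0                                 # index of the true target
p_examine = 0.3                            # chance of pausing to examine
t_wander, t_examine = 0.05, 0.5            # dwell times (s), assumed

time_s, found = 0.0, False
while not found and time_s < 60.0:
    poi = rng.choice(n_points, p=attract)  # saccade to a point of interest
    if rng.random() < p_examine:
        time_s += t_examine                # slow, deliberate examination
        found = (poi == target)            # discrimination succeeds here
    else:
        time_s += t_wander                 # rapid skip while wandering
print(f"target found: {found}, search time: {time_s:.2f} s")
```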
Convergence and Accommodation Development Is Preprogrammed in Premature Infants.
Horwood, Anna M; Toor, Sonia S; Riddell, Patricia M
2015-08-01
This study investigated whether vergence and accommodation development in preterm infants is preprogrammed or is driven by experience. Thirty-two healthy infants, born at mean 34 weeks gestation (range, 31.2-36 weeks), were compared with 45 healthy full-term infants (mean 40.0 weeks) over a 6-month period, starting at 4 to 6 weeks postnatally. Simultaneous accommodation and convergence to a detailed target were measured using a Plusoptix PowerRefII infrared photorefractor as a target moved between 0.33 and 2 m. Stimulus/response gains and responses at 0.33 and 2 m were compared by both corrected (gestational) age and chronological (postnatal) age. When compared by their corrected age, preterm and full-term infants showed few significant differences in vergence and accommodation responses after 6 to 7 weeks of age. However, when compared by chronological age, preterm infants' responses were more variable, with significantly reduced vergence gains, reduced vergence response at 0.33 m, reduced accommodation gain, and increased accommodation at 2 m compared to full-term infants between 8 and 13 weeks after birth. When matched by corrected age, vergence and accommodation in preterm infants show few differences from full-term infants' responses. Maturation appears preprogrammed and is not advanced by visual experience. Longer periods of immature visual responses might leave preterm infants more at risk of development of oculomotor deficits such as strabismus.
Yahata, Izumi; Kawase, Tetsuaki; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2017-01-01
The effects of visual speech (the moving image of the speaker's face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions, the moving image of the same speaker's face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from the speaker's face using a strong Gaussian filter: control condition). On average, the latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between the right and left hemispheres. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information.
77 FR 13656 - Call for Papers: National Symposium on Moving Target Research
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-07
... of moving target. There will be an accompanying poster session open for researchers and companies... dates/time 18:00 EDT): Draft Papers due April 2, 2012 Notification April 20, 2012 Poster abstracts due...
Echolocating bats use a nearly time-optimal strategy to intercept prey.
Ghose, Kaushik; Horiuchi, Timothy K; Krishnaprasad, P S; Moss, Cynthia F
2006-05-01
Acquisition of food in many animal species depends on the pursuit and capture of moving prey. Among modern humans, the pursuit and interception of moving targets plays a central role in a variety of sports, such as tennis, football, Frisbee, and baseball. Studies of target pursuit in animals, ranging from dragonflies to fish and dogs to humans, have suggested that they all use a constant bearing (CB) strategy to pursue prey or other moving targets. CB is best known as the interception strategy employed by baseball outfielders to catch ballistic fly balls. CB is a time-optimal solution to catch targets moving along a straight line, or in a predictable fashion--such as a ballistic baseball, or a piece of food sinking in water. Many animals, however, have to capture prey that may make evasive and unpredictable maneuvers. Is CB an optimum solution to pursuing erratically moving targets? Do animals faced with such erratic prey also use CB? In this paper, we address these questions by studying prey capture in an insectivorous echolocating bat. Echolocating bats rely on sonar to pursue and capture flying insects. The bat's prey may emerge from foliage for a brief time, fly in erratic three-dimensional paths before returning to cover. Bats typically take less than one second to detect, localize and capture such insects. We used high speed stereo infra-red videography to study the three dimensional flight paths of the big brown bat, Eptesicus fuscus, as it chased erratically moving insects in a dark laboratory flight room. We quantified the bat's complex pursuit trajectories using a simple delay differential equation. Our analysis of the pursuit trajectories suggests that bats use a constant absolute target direction strategy during pursuit. We show mathematically that, unlike CB, this approach minimizes the time it takes for a pursuer to intercept an unpredictably moving target. Interestingly, the bat's behavior is similar to the interception strategy implemented in some guided missiles. We suggest that the time-optimal strategy adopted by the bat is in response to the evolutionary pressures of having to capture erratic and fast moving insects.
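The constant-absolute-target-direction strategy amounts to nulling the rotation of the line of sight (parallel navigation). A discrete-time toy model, with made-up speeds and an arbitrary erratic target track rather than the paper's delay differential equation, might look like this:

```python
# Toy CATD/parallel-navigation model: the pursuer cancels the target's
# velocity component across the line of sight, so the world-frame bearing
# stays constant. Speeds, target track, and capture radius are invented.
import numpy as np

dt, v_p = 0.01, 6.0                    # time step (s), pursuer speed (m/s)
p = np.array([0.0, 0.0])               # pursuer position
q = np.array([3.0, 2.0])               # target position
for step in range(2000):
    r = q - p
    dist = np.hypot(*r)
    if dist < 0.05:                    # capture radius (assumed)
        print(f"intercept at t = {step * dt:.2f} s")
        break
    los = r / dist                     # unit line-of-sight vector
    perp = np.array([-los[1], los[0]])
    v_t = np.array([1.5 * np.cos(0.8 * step * dt), 1.0])   # erratic target
    v_across = v_t @ perp              # target velocity across the LOS
    v_along = np.sqrt(max(v_p ** 2 - v_across ** 2, 0.0))
    p = p + (v_across * perp + v_along * los) * dt
    q = q + v_t * dt
```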
Teaching Visual Literacy for the 21st Century.
ERIC Educational Resources Information Center
Glasgow, Jacqueline N.
1994-01-01
Discusses teaching visual literacy by teaching students how to decode advertising images, thus enabling them to move away from being passive receivers of messages to active unravelers. Shows how teachers can use concepts from semiotics to deconstruct advertising messages. (SR)
McMahon, Ryan; Papiez, Lech; Rangaraj, Dharanipathy
2007-08-01
An algorithm is presented that allows for the control of multileaf collimation (MLC) leaves based entirely on real-time calculations of the intensity delivered over the target. The algorithm is capable of efficiently correcting generalized delivery errors without requiring the interruption of delivery (self-correcting trajectories), where a generalized delivery error represents anything that causes a discrepancy between the delivered and intended intensity profiles. The intensity actually delivered over the target is continually compared to its intended value. For each pair of leaves, these comparisons are used to guide the control of the following leaf and keep this discrepancy below a user-specified value. To demonstrate the basic principles of the algorithm, results of corrected delivery are shown for a leading leaf positional error during dynamic-MLC (DMLC) IMRT delivery over a rigid moving target. It is then shown that, with slight modifications, the algorithm can be used to track moving targets in real time. The primary results of this article indicate that the algorithm is capable of accurately delivering DMLC IMRT over a rigid moving target whose motion is (1) completely unknown prior to delivery and (2) not faster than the maximum MLC leaf velocity over extended periods of time. These capabilities are demonstrated for clinically derived intensity profiles and actual tumor motion data, including situations when the target moves in some instances faster than the maximum admissible MLC leaf velocity. The results show that using the algorithm while calculating the delivered intensity every 50 ms will provide a good level of accuracy when delivering IMRT over a rigid moving target translating along the direction of MLC leaf travel. When the maximum velocities of the MLC leaves and target were 4 and 4.2 cm/s, respectively, the resulting error in the two intensity profiles used was 0.1 +/- 3.1% and -0.5 +/- 2.8% relative to the maximum of the intensity profiles. For the same target motion, the error was shown to increase rapidly as (1) the maximum MLC leaf velocity was reduced below 75% of the maximum target velocity and (2) the system response time was increased.
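As a heavily simplified caricature of the feedback principle (not the authors' algorithm), the sketch below compares the cumulative fluence delivered behind a single leaf pair's gap with an intended ramp, and speed-limits the following leaf according to the discrepancy; all rates, limits, and the intended profile are illustrative assumptions.

```python
# Toy caricature of discrepancy-driven leaf control for one leaf pair;
# not the authors' DMLC algorithm. All rates and limits are assumed.
import numpy as np

dt = 0.05            # control interval (s), matching the paper's 50 ms
v_max = 4.0          # maximum leaf speed (cm/s)
rate = 1.0           # fluence rate behind the open gap (arbitrary units)
x_lead = x_follow = 0.0
delivered = intended = 0.0

for k in range(50):                                  # lead leaf sweeps 10 cm
    x_lead = min(x_lead + v_max * dt, 10.0)
    gap = x_lead - x_follow
    delivered += rate * gap * dt                     # crude delivered fluence
    intended += 0.8 * rate * dt                      # desired accumulation
    error = delivered - intended
    # Proportional (effectively bang-bang) correction: close the gap when
    # too much has been delivered, hold back when too little.
    v_follow = np.clip(error / dt, -v_max, v_max)
    x_follow = float(np.clip(x_follow + v_follow * dt, 0.0, x_lead))
```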
Search guidance is proportional to the categorical specificity of a target cue.
Schmidt, Joseph; Zelinsky, Gregory J
2009-10-01
Visual search studies typically assume the availability of precise target information to guide search, often a picture of the exact target. However, search targets in the real world are often defined categorically and with varying degrees of visual specificity. In five target preview conditions we manipulated the availability of target visual information in a search task for common real-world objects. Previews were: a picture of the target, an abstract textual description of the target, a precise textual description, an abstract + colour textual description, or a precise + colour textual description. Guidance generally increased as information was added to the target preview. We conclude that the information used for search guidance need not be limited to a picture of the target. Although generally less precise, to the extent that visual information can be extracted from a target label and loaded into working memory, this information too can be used to guide search.
Contingent capture of involuntary visual attention interferes with detection of auditory stimuli
Kamke, Marc R.; Harris, Jill
2014-01-01
The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945
Found and Missed: Failing to Recognize a Search Target despite Moving It
ERIC Educational Resources Information Center
Solman, Grayden J. F.; Cheyne, J. Allan; Smilek, Daniel
2012-01-01
We present results from five search experiments using a novel "unpacking" paradigm in which participants use a mouse to sort through random heaps of distractors to locate the target. We report that during this task participants often fail to recognize the target despite moving it, and despite having looked at the item. Additionally, the missed…
Synthetic perspective optical flow: Influence on pilot control tasks
NASA Technical Reports Server (NTRS)
Bennett, C. Thomas; Johnson, Walter W.; Perrone, John A.; Phatak, Anil V.
1989-01-01
One approach used to better understand the impact of visual flow on control tasks has been to use synthetic perspective flow patterns. Such patterns are the result of apparent motion across a grid or random dot display. Unfortunately, the optical flow so generated is based on a subset of the flow information that exists in the real world. The danger is that the resulting optical motions may not generate the visual flow patterns useful for actual flight control. Researchers conducted a series of studies directed at understanding the characteristics of synthetic perspective flow that support various pilot tasks. In the first of these, they examined the control of altitude over various perspective grid textures (Johnson et al., 1987). Another set of studies was directed at studying the head tracking of targets moving in a 3-D coordinate system. These studies, parametric in nature, utilized both impoverished and complex virtual worlds represented by simple perspective grids at one extreme, and computer-generated terrain at the other. These studies are part of an applied visual research program directed at understanding the design principles required for the development of instruments displaying spatial orientation information. The experiments also highlight the need for modeling the impact of spatial displays on pilot control tasks.
Xiao, Naiqi G.; Quinn, Paul C.; Wheeler, Andrea; Pascalis, Olivier; Lee, Kang
2014-01-01
A left visual field (LVF) bias has been consistently reported in eye movement patterns when adults look at face stimuli, which reflects hemispheric lateralization of face processing and eye movements. However, the emergence of the LVF attentional bias in infancy is less clear. The present study investigated the emergence and development of the LVF attentional bias in infants from 3 to 9 months of age with moving face stimuli. We specifically examined the naturalness of facial movements in infants’ LVF attentional bias by comparing eye movement patterns in naturally and artificially moving faces. Results showed that 3- to 5-month-olds exhibited the LVF attentional bias only in the lower half of naturally moving faces, but not in artificially moving faces. Six- to 9-month-olds showed the LVF attentional bias in both the lower and upper face halves only in naturally moving, but not in artificially moving faces. These results suggest that the LVF attentional bias for face processing may emerge around 3 months of age and is driven by natural facial movements. The LVF attentional bias reflects the role of natural face experience in real life situations that may drive the development of hemispheric lateralization of face processing in infancy. PMID:25064049
Xiong, Ji; Li, Fangmin; Zhao, Ning; Jiang, Na
2014-01-01
With characteristics of low-cost and easy deployment, the distributed wireless pyroelectric infrared sensor network has attracted extensive interest, which aims to make it an alternate infrared video sensor in thermal biometric applications for tracking and identifying human targets. In these applications, effectively processing signals collected from sensors and extracting the features of different human targets has become crucial. This paper proposes the application of empirical mode decomposition and the Hilbert-Huang transform to extract features of moving human targets both in the time domain and the frequency domain. Moreover, the support vector machine is selected as the classifier. The experimental results demonstrate that by using this method the identification rates of multiple moving human targets are around 90%. PMID:24759117
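A hedged sketch of the pipeline this abstract describes (EMD, then Hilbert spectral features, then an SVM): it assumes the PyEMD ("EMD-signal") and scikit-learn packages, and the synthetic traces and feature choices are placeholders for the paper's pyroelectric recordings.

```python
# Hedged sketch: EMD -> Hilbert features -> SVM. Assumes the PyEMD
# ("EMD-signal") and scikit-learn packages; data are synthetic stand-ins.
import numpy as np
from PyEMD import EMD
from scipy.signal import hilbert
from sklearn.svm import SVC

def hht_features(signal, fs, n_imfs=3):
    """Energy and mean instantaneous frequency of the first few IMFs."""
    imfs = EMD()(signal)
    feats = []
    for k in range(n_imfs):
        if k < len(imfs):
            analytic = hilbert(imfs[k])
            inst_f = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
            feats += [float(np.sum(imfs[k] ** 2)), float(np.mean(inst_f))]
        else:
            feats += [0.0, 0.0]          # pad if EMD yields fewer IMFs
    return np.array(feats)

fs = 50.0
rng = np.random.default_rng(0)
# Hypothetical traces: three "targets" with distinct dominant frequencies.
X_raw = [np.sin(2 * np.pi * (1 + i % 3) * np.arange(0, 4, 1 / fs))
         + 0.1 * rng.standard_normal(200) for i in range(30)]
y = [i % 3 for i in range(30)]

X = np.vstack([hht_features(s, fs) for s in X_raw])
clf = SVC(kernel="rbf").fit(X, y)
```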
Effect of a moving optical environment on the subjective median.
DOT National Transportation Integrated Search
1971-04-01
The placement of a point in the median vertical plane under the influence of a moving optical environment was tested in 12 subjects. It was found that the median plane was displaced in the same direction as the movement of the visual environment when...
Teaching the iPhone with Voiceover Accessibility to People with Visual Impairments
ERIC Educational Resources Information Center
Celusnak, Brian M.
2016-01-01
Moving from a conventional telephone keypad to a cellular telephone with a touchscreen can seem quite challenging for some people. When one is visually impaired, there is always the option of using VoiceOver, the iPhone's built-in access technology that is designed to allow individuals with visual impairments the ability to access the visual…
ERIC Educational Resources Information Center
Sapp, Wendy
2011-01-01
Young children with visual impairments face many challenges as they learn to orient to and move through their environment, the beginnings of orientation and mobility (O&M). Children who are visually impaired must learn many concepts (such as body parts and positional words) and skills (like body movement and interpreting sensory information) to…
ERIC Educational Resources Information Center
Micic, Dragana; Ehrlichman, Howard; Chen, Rebecca
2010-01-01
Non-visual gaze patterns (NVGPs) involve saccades and fixations that spontaneously occur in cognitive activities that are not ostensibly visual. While reasons for their appearance remain obscure, convergent empirical evidence suggests that NVGPs change according to processing requirements of tasks. We examined NVGPs in tasks with long-term memory…
Trivedi, Chintan A.; Bollmann, Johann H.
2013-01-01
Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback. PMID:23675322
Comparison of visual sensitivity to human and object motion in autism spectrum disorder.
Kaiser, Martha D; Delmolino, Lara; Tanaka, James W; Shiffrar, Maggie
2010-08-01
Successful social behavior requires the accurate detection of other people's movements. Consistent with this, typical observers demonstrate enhanced visual sensitivity to human movement relative to equally complex, nonhuman movement [e.g., Pinto & Shiffrar, 2009]. A psychophysical study investigated visual sensitivity to human motion relative to object motion in observers with autism spectrum disorder (ASD). Participants viewed point-light depictions of a moving person and, for comparison, a moving tractor and discriminated between coherent and scrambled versions of these stimuli in unmasked and masked displays. There were three groups of participants: young adults with ASD, typically developing young adults, and typically developing children. Across masking conditions, typical observers showed enhanced visual sensitivity to human movement while observers in the ASD group did not. Because the human body is an inherently social stimulus, this result is consistent with social brain theories [e.g., Pelphrey & Carter, 2008; Schultz, 2005] and suggests that the visual systems of individuals with ASD may not be tuned for the detection of socially relevant information such as the presence of another person. Reduced visual sensitivity to human movements could compromise important social behaviors including, for example, gesture comprehension.
Yarossi, Mathew; Manuweera, Thushini; Adamovich, Sergei V.; Tunik, Eugene
2017-01-01
Mirror visual feedback (MVF) training is a promising technique to promote activation in the lesioned hemisphere following stroke, and aid recovery. However, current outcomes of MVF training are mixed, in part, due to variability in the task undertaken during MVF. The present study investigated the hypothesis that movements directed toward visual targets may enhance MVF modulation of motor cortex (M1) excitability ipsilateral to the trained hand compared to movements without visual targets. Ten healthy subjects participated in a 2 × 2 factorial design in which feedback (veridical, mirror) and presence of a visual target (target present, target absent) for a right index-finger flexion task were systematically manipulated in a virtual environment. To measure M1 excitability, transcranial magnetic stimulation (TMS) was applied to the hemisphere ipsilateral to the trained hand to elicit motor evoked potentials (MEPs) in the untrained first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles at rest prior to and following each of four 2-min blocks of 30 movements (B1–B4). Targeted movement kinematics without visual feedback was measured before and after training to assess learning and transfer. FDI MEPs were decreased in B1 and B2 when movements were made with veridical feedback and visual targets were absent. FDI MEPs were decreased in B2 and B3 when movements were made with mirror feedback and visual targets were absent. FDI MEPs were increased in B3 when movements were made with mirror feedback and visual targets were present. Significant MEP changes were not present for the uninvolved ADM, suggesting a task-specific effect. Analysis of kinematics revealed learning occurred in visual target-directed conditions, but transfer was not sensitive to mirror feedback. Results are discussed with respect to current theoretical mechanisms underlying MVF-induced changes in ipsilateral excitability. PMID:28553218
An invisible touch: Body-related multisensory conflicts modulate visual consciousness.
Salomon, Roy; Galli, Giulia; Łukowska, Marta; Faivre, Nathan; Ruiz, Javier Bello; Blanke, Olaf
2016-07-29
The majority of scientific studies on consciousness have focused on vision, exploring the cognitive and neural mechanisms of conscious access to visual stimuli. In parallel, studies on bodily consciousness have revealed that bodily (i.e. tactile, proprioceptive, visceral, vestibular) signals are the basis for the sense of self. However, the role of bodily signals in the formation of visual consciousness is not well understood. Here we investigated how body-related visuo-tactile stimulation modulates conscious access to visual stimuli. We used a robotic platform to apply controlled tactile stimulation to the participants' back while they viewed a dot moving either in synchrony or asynchrony with the touch on their back. Critically, the dot was rendered invisible through continuous flash suppression. Manipulating the visual context by presenting the dot moving on either a body form, or a non-bodily object we show that: (i) conflict induced by synchronous visuo-tactile stimulation in a body context is associated with a delayed conscious access compared to asynchronous visuo-tactile stimulation, (ii) this effect occurs only in the context of a visual body form, and (iii) is not due to detection or response biases. The results indicate that body-related visuo-tactile conflicts impact visual consciousness by facilitating access of non-conflicting visual information to awareness, and that these are sensitive to the visual context in which they are presented, highlighting the interplay between bodily signals and visual experience.
High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.
Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min
2012-01-01
Small target detection in high-resolution remotely sensed data is difficult, and the limitations of existing methods have made it a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper constructs a characterized model of information perception that exploits the fly's fast and accurate small target detection in complex, varying natural environments. The proposed model forms a theoretical basis of small target detection for high-resolution remote sensing data. After comparing the prevailing simulation mechanisms behind fly visual systems, we propose a fly-imitated visual method of information processing for high-resolution remote sensing data. A small target detector and corresponding detection algorithm are designed by simulating the fly visual system's mechanisms of information acquisition, compression, and fusion, the function of pool cells, and their nonlinear self-adaptive characteristics. Experiments verify the feasibility and rationality of the proposed small target detection model and fly-imitated visual perception method.
A direct imaging search for close stellar and sub-stellar companions to young nearby stars
NASA Astrophysics Data System (ADS)
Vogt, N.; Mugrauer, M.; Neuhäuser, R.; Schmidt, T. O. B.; Contreras-Quijada, A.; Schmidt, J. G.
2015-01-01
A total of 28 young nearby stars (ages ≤ 60 Myr) have been observed in the K_s-band with the adaptive optics imager Naos-Conica of the Very Large Telescope at the Paranal Observatory in Chile. Among the targets are ten visual binaries and one triple system at distances between 10 and 130 pc, all previously known. During a first observing epoch a total of 20 faint stellar or sub-stellar companion-candidates were detected around seven of the targets. These fields, as well as most of the stellar binaries, were re-observed with the same instrument during a second epoch, about one year later. We present the astrometric observations of all binaries. Their analysis revealed that all stellar binaries are co-moving. In two cases (HD 119022 AB and FG Aqr B/C) indications for significant orbital motions were found. However, all sub-stellar companion candidates turned out to be non-moving background objects, except that of PZ Tel, which is part of this project but whose results were published elsewhere. Detection limits were determined for all targets, and limiting masses were derived adopting three different age values; they turn out to be less than 10 Jupiter masses in most cases, well below the brown dwarf mass range. The fractions of stellar multiplicity and sub-stellar companion occurrence in the Chamaeleon star-forming regions are compared to the statistics of our search, and possible reasons for the observed differences are discussed. Based on observations made with ESO telescopes at Paranal Observatory under programme IDs 083.C-0150(B), 084.C-0364(A), 084.C-0364(B), 084.C-0364(C), 086.C-0600(A) and 086.C-0600(B).
Matching optical flow to motor speed in virtual reality while running on a treadmill.
Caramenti, Martina; Lafortuna, Claudio L; Mugellini, Elena; Abou Khaled, Omar; Bresciani, Jean-Pierre; Dubois, Amandine
2018-01-01
We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed, i.e., the treadmill's speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and the percentage of underestimation relative to running speed ranged from 15% at 8 km/h to 31% at 12 km/h. We suggest that this fact should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments enhancing engagement in physical activity for healthier lifestyles and disease prevention and care. PMID:29641564
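The staircase convergence to the PSE can be sketched in a few lines. This is a minimal illustration assuming a 1-up/1-down rule with a fixed step; the study's step sizes, reversal criteria, and trial counts are not given here, and the simulated observer's 20% visual-speed underestimation is an assumption chosen to echo the reported direction of bias.

```python
# Minimal 1-up/1-down staircase converging to the Point of Subjective
# Equality (PSE). Step size, trial count, and observer bias are illustrative
# assumptions, not parameters taken from the study.
import random

def simulated_response(visual_speed, running_speed, bias=1.2):
    """Hypothetical observer: visual speed is perceptually scaled down by
    `bias`, mimicking underestimation. Returns True if the scene is judged
    to move faster than the runner."""
    perceived = visual_speed / bias            # underestimated visual speed
    noise = random.gauss(0.0, 0.3)             # trial-to-trial judgment noise
    return perceived + noise > running_speed

def staircase_pse(running_speed, step=0.5, n_trials=40):
    visual_speed = running_speed               # start at the physical match
    for _ in range(n_trials):
        if simulated_response(visual_speed, running_speed):
            visual_speed -= step               # judged faster -> slow the scene
        else:
            visual_speed += step               # judged slower -> speed it up
    return visual_speed                        # estimate of the PSE

print(staircase_pse(10.0))
```

With the assumed bias, the procedure settles near 12 km/h for a 10 km/h run: the scene must move faster than the treadmill to feel matched.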
Fukushima, Kikuro; Fukushima, Junko; Warabi, Tateo; Barnes, Graham R.
2013-01-01
Smooth-pursuit eye movements allow primates to track moving objects. Efficient pursuit requires appropriate target selection and predictive compensation for inherent processing delays. Prediction depends on expectation of future object motion, storage of motion information and use of extra-retinal mechanisms in addition to visual feedback. We present behavioral evidence of how cognitive processes are involved in predictive pursuit in normal humans and then describe neuronal responses in monkeys and behavioral responses in patients using a new technique to test these cognitive controls. The new technique examines the neural substrate of working memory and movement preparation for predictive pursuit by using a memory-based task in macaque monkeys trained to pursue (go) or not pursue (no-go) according to a go/no-go cue, in a direction based on memory of a previously presented visual motion display. Single-unit task-related neuronal activity was examined in medial superior temporal cortex (MST), supplementary eye fields (SEF), caudal frontal eye fields (FEF), cerebellar dorsal vermis lobules VI–VII, caudal fastigial nuclei (cFN), and floccular region. Neuronal activity reflecting working memory of visual motion direction and go/no-go selection was found predominantly in SEF, cerebellar dorsal vermis and cFN, whereas movement preparation related signals were found predominantly in caudal FEF and the same cerebellar areas. Chemical inactivation produced effects consistent with differences in signals represented in each area. When applied to patients with Parkinson's disease (PD), the task revealed deficits in movement preparation but not working memory. In contrast, patients with frontal cortical or cerebellar dysfunction had high error rates, suggesting impaired working memory. We show how neuronal activity may be explained by models of retinal and extra-retinal interaction in target selection and predictive control and thus aid understanding of underlying pathophysiology. PMID:23515488
Mental imagery of gravitational motion.
Gravano, Silvio; Zago, Myrka; Lacquaniti, Francesco
2017-10-01
There is considerable evidence that gravitational acceleration is taken into account in the interaction with falling targets through an internal model of Earth gravity. Here we asked whether this internal model is accessed also when target motion is imagined rather than real. In the main experiments, naïve participants grasped an imaginary ball, threw it against the ceiling, and caught it on rebound. In different blocks of trials, they had to imagine that the ball moved under terrestrial gravity (1g condition) or under microgravity (0g) as during a space flight. We measured the speed and timing of the throwing and catching actions, and plotted ball flight duration versus throwing speed. Best-fitting duration-speed curves estimate the laws of ball motion implicit in the participant's performance. Surprisingly, we found duration-speed curves compatible with 0g for both the imaginary 0g condition and the imaginary 1g condition, despite the familiarity with Earth gravity effects and the added realism of performing the throwing and catching actions. In a control experiment, naïve participants were asked to throw the imaginary ball vertically upwards at different heights, without hitting the ceiling, and to catch it on its way down. All participants overestimated ball flight durations relative to the durations predicted by the effects of Earth gravity. Overall, the results indicate that mental imagery of motion does not have access to the internal model of Earth gravity, but resorts to a simulation of visual motion. Because visual processing of accelerating/decelerating motion is poor, visual imagery of motion at constant speed or slowly varying speed appears to be the preferred mode to perform the tasks. Copyright © 2017 Elsevier Ltd. All rights reserved.
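The duration-speed analysis above has a simple closed form under idealized assumptions (vertical motion over a ceiling distance h, elastic rebound, no air drag; h and the notation are illustrative, not taken from the paper), which shows why the 0g and 1g regimes predict distinguishable curves:

```latex
% 0g: the imagined ball moves at constant speed v over the 2h up-and-down path.
T_{0g}(v) = \frac{2h}{v}
% 1g: the ball decelerates on the way up, rebounds (assumed elastically) at
% speed v_c = sqrt(v^2 - 2gh), then accelerates back down, giving
T_{1g}(v) = \frac{2\left(v - \sqrt{v^{2} - 2gh}\right)}{g},
\qquad v \ge \sqrt{2gh}
% As g -> 0 the 1g expression reduces to 2h/v; for a given v the 1g flight is
% always longer than the 0g one, so fitted duration-speed curves can
% discriminate the two regimes.
```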
A neural model of visual figure-ground segregation from kinetic occlusion.
Barnes, Timothy; Mingolla, Ennio
2013-01-01
Freezing is an effective defense strategy for some prey, because their predators rely on visual motion to distinguish objects from their surroundings. An object moving over a background progressively covers (deletes) and uncovers (accretes) background texture while simultaneously producing discontinuities in the optic flow field. These events unambiguously specify kinetic occlusion and can produce a crisp edge, depth perception, and figure-ground segmentation between identically textured surfaces--percepts which all disappear without motion. Given two abutting regions of uniform random texture with different motion velocities, one region appears to be situated farther away and behind the other (i.e., the ground) if its texture is accreted or deleted at the boundary between the regions, irrespective of region and boundary velocities. Consequently, a region with moving texture appears farther away than a stationary region if the boundary is stationary, but it appears closer (i.e., the figure) if the boundary is moving coherently with the moving texture. A computational model of visual areas V1 and V2 shows how interactions between orientation- and direction-selective cells first create a motion-defined boundary and then signal kinetic occlusion at that boundary. Activation of model occlusion detectors tuned to a particular velocity results in the model assigning the adjacent surface with a matching velocity to the far depth. A weak speed-depth bias brings faster-moving texture regions forward in depth in the absence of occlusion (shearing motion). These processes together reproduce human psychophysical reports of depth ordering for key cases of kinetic occlusion displays. Copyright © 2012 Elsevier Ltd. All rights reserved.
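The figure-ground rule stated in the abstract reduces to a comparison of texture and boundary velocities: a region whose texture moves with the shared boundary owns that boundary (figure), while a region whose texture is accreted or deleted at the boundary lies behind it (ground). A schematic one-dimensional sketch of that rule (scalar velocities are an illustrative simplification; the model itself operates on V1/V2-like maps):

```python
# Schematic depth-ordering rule for a textured region abutting a shared
# boundary, following the kinetic-occlusion logic described in the abstract.
def depth_order(texture_v, boundary_v, tol=1e-6):
    """Return 'figure' or 'ground' for the region whose texture moves at
    `texture_v`, given the velocity of the shared boundary."""
    if abs(texture_v - boundary_v) < tol:
        return "figure"   # boundary owned by this region: it occludes
    return "ground"       # texture accreted/deleted at boundary: it is behind

# Moving texture, stationary boundary: the moving region appears farther away.
print(depth_order(texture_v=2.0, boundary_v=0.0))  # ground
# Boundary moves coherently with the moving texture: that region is the figure.
print(depth_order(texture_v=2.0, boundary_v=2.0))  # figure
```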
Trillenberg, Peter; Sprenger, Andreas; Talamo, Silke; Herold, Kirsten; Helmchen, Christoph; Verleger, Rolf; Lencer, Rebekka
2017-04-01
Despite many reports on visual processing deficits in psychotic disorders, studies are needed on the integration of visual and non-visual components of eye movement control to improve the understanding of sensorimotor information processing in these disorders. Non-visual inputs to eye movement control include prediction of future target velocity from extrapolation of past visual target movement and anticipation of future target movements. It is unclear whether non-visual input is impaired in patients with schizophrenia. We recorded smooth pursuit eye movements in 21 patients with schizophrenia spectrum disorder, 22 patients with bipolar disorder, and 24 controls. In a foveo-fugal ramp task, the target was either continuously visible or was blanked during movement. We determined peak gain (measuring overall performance), initial eye acceleration (measuring visually driven pursuit), deceleration after target extinction (measuring prediction), eye velocity drifts before onset of target visibility (measuring anticipation), and residual gain during blanking intervals (measuring anticipation and prediction). In both patient groups, initial eye acceleration was decreased and the ability to adjust eye acceleration to increasing target acceleration was impaired. In contrast, neither deceleration nor eye drift velocity was reduced in patients, implying unimpaired non-visual contributions to pursuit drive. Disturbances of eye movement control in psychotic disorders appear to be a consequence of deficits in sensorimotor transformation rather than a pure failure in adding cognitive contributions to pursuit drive in higher-order cortical circuits. More generally, this deficit might reflect a fundamental imbalance between processing external input and acting according to internal preferences.
NASA Astrophysics Data System (ADS)
Duong, Tuan A.; Duong, Nghi; Le, Duong
2017-01-01
In this paper, we present an integration technique using a bio-inspired, control-based visual and olfactory receptor system to search for elusive targets in practical environments where the targets cannot be clearly perceived through either sensory modality alone. The bio-inspired visual system is based on a model of the extended visual pathway, which combines saccadic eye movements with the visual pathway proper (vertebrate retina, lateral geniculate nucleus, and visual cortex) to enable powerful target detection from noisy, partial, and incomplete visual data. The olfactory receptor algorithm, namely spatial invariant independent component analysis, which was developed from olfactory receptor-electronic nose (enose) data at Caltech, is adopted to enable odorant target detection in an unknown environment. The integration of the two systems is a vital approach and sets a cornerstone for effective, low-cost miniaturized UAVs or fly robots for future DOD and NASA missions, as well as for security systems in Internet of Things environments.
Multiple targets detection method in detection of UWB through-wall radar
NASA Astrophysics Data System (ADS)
Yang, Xiuwei; Yang, Chuanfa; Zhao, Xingwen; Tian, Xianzhong
2017-11-01
In this paper, the problems and difficulties encountered in the detection of multiple moving targets by UWB radar are analyzed, and an experimental environment and penetrating radar system are established. An adaptive threshold method based on local area is proposed to effectively filter out clutter interference. The returns of the moving targets are then analyzed, and false targets are further filtered out by extracting target features. Based on the correlation between targets, a target matching algorithm is proposed to improve detection accuracy. Finally, the effectiveness of the above methods is verified by practical experiments.
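The paper's "adaptive threshold method based on local area" is described only at a high level; one common realization is a sliding-window rule in the spirit of cell-averaging CFAR, flagging samples that exceed the local mean by a multiple of the local spread. A sketch under that assumption (window, guard, and scale parameters are illustrative, not the paper's values):

```python
# Local-area adaptive threshold for clutter suppression, CFAR-style.
# Window size, guard cells, and scale factor k are illustrative assumptions.
import numpy as np

def local_adaptive_threshold(x, window=31, guard=4, k=3.0):
    """Flag samples exceeding (local mean + k * local std), where the local
    statistics exclude a guard region around the cell under test."""
    n = len(x)
    detections = np.zeros(n, dtype=bool)
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        ref = np.r_[x[lo:max(lo, i - guard)], x[min(hi, i + guard + 1):hi]]
        if ref.size and x[i] > ref.mean() + k * ref.std():
            detections[i] = True
    return detections

# Example: two target echoes buried in clutter-like noise.
rng = np.random.default_rng(0)
profile = rng.rayleigh(1.0, 500)
profile[120] += 8.0
profile[340] += 8.0
print(np.flatnonzero(local_adaptive_threshold(profile)))
```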
NASA Technical Reports Server (NTRS)
Lackner, J. R.; Levine, M. S.
1979-01-01
Human experiments are carried out which support the observation of Goodwin (1973) and Goodwin et al. (1972) that vibration of skeletal muscles can elicit illusory limb motion. These experiments extend the class of possible myesthetic illusions by showing that vibration of the appropriate muscles can produce illusory body motion in nearly any desired direction. Such illusory changes in posture occur only when visual information about body orientation is absent; these changes in apparent posture are sometimes accompanied by a slow-phase nystagmus that compensates for the direction of apparent body motion. During illusory body motion a stationary target light that is fixated will appear to move with the body at the same apparent velocity. However, this pattern of apparent body motion and conjoint visual motion (defined as the propriogyral illusion) is suppressed if the subject is in a fully illuminated environment providing cues about true body orientation. Persuasive evidence is thus provided for the contribution of both muscle afferent and touch-pressure information to the supraspinal mechanisms that determine apparent orientation on the basis of ongoing patterns of interoceptive and exteroceptive activity.
Detection of Fast Moving and Accelerating Targets Compensating Range and Doppler Migration
2014-06-01
Radon-Fourier transform has been introduced to realize long-term coherent integration of the moving targets with range migration [8, 9]. 8. ... (2010) Long-time coherent integration for radar target detection based on Radon-Fourier transform, in Proceedings of the IEEE Radar Conference, pp. 432–436. 9. Xu, J., Yu, J., Peng, Y. & Xia, X. (2011) Radon-Fourier transform for radar target detection, I: Generalized Doppler filter bank, IEEE...
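For context, the Radon-Fourier transform couples a Radon-style search along hypothesized range walks with Fourier integration of the Doppler phase, so the target's energy is accumulated along its own range-migration trajectory. A standard continuous-time form (notation assumed here, following common conventions in the cited literature; sign conventions vary):

```latex
% s_c(t, r): range-compressed echo at slow time t and range r
% (r_0, v):  hypothesized initial range and radial velocity
% lambda:    radar wavelength, T: coherent integration time
G(r_0, v) = \int_{-T/2}^{T/2} s_c\!\left(t,\; r_0 + v\,t\right)
            \exp\!\left(j\,\frac{4\pi v}{\lambda}\,t\right) dt
```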
Limits to Clutter Cancellation in Multi-Aperture GMTI Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin W.; Bickel, Douglas L.
2015-03-01
Multi-aperture or multi-subaperture antennas are fundamental to Ground Moving Target Indicator (GMTI) radar systems in order to detect slow-moving targets with Doppler characteristics similar to clutter. Herein we examine several subaperture architectures for their clutter-cancelling performance. Significantly, more antenna phase centers are not always better, and in fact are sometimes worse, for detecting targets.
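For background on why the number and spacing of phase centers matter, the classic two-phase-center displaced phase center antenna (DPCA) canceller is the simplest reference point: when the platform advances exactly one phase-center spacing per pulse, the aft channel re-observes the stationary scene and subtraction removes clutter while a mover's extra Doppler phase survives. A minimal idealized sketch (assumed parameters, perfectly matched channels; this is textbook background, not the report's own analysis):

```python
# Idealized two-phase-center DPCA clutter cancellation. Assumes the platform
# moves exactly one phase-center spacing per pulse and perfectly matched
# channels; all numbers are illustrative.
import numpy as np

prf, n_pulses, lam = 1000.0, 64, 0.03       # Hz, pulses, m (assumed values)
t = np.arange(n_pulses) / prf

clutter = np.ones(n_pulses, dtype=complex)  # stationary scene: zero Doppler
v = 1.5                                      # slow mover, m/s radial
f_d = 2 * v / lam                            # target Doppler frequency, Hz
target = 0.1 * np.exp(1j * 2 * np.pi * f_d * t)

# DPCA condition: the aft phase center at pulse k sits where the fore one was
# at pulse k-1, so it sees the stationary scene identically, while the mover
# carries one pulse interval less of accumulated Doppler phase.
fore = clutter + target
aft = clutter + target * np.exp(-1j * 2 * np.pi * f_d / prf)

residue = fore - aft                         # clutter cancels exactly
print(np.abs(residue).mean())                # nonzero residue: mover survives
```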
Head-bobbing behavior in foraging Whooping Cranes
Cronin, T.; Kinloch, M.; Olsen, Glenn H.
2006-01-01
Many species of cursorial birds 'head-bob', that is, they alternately thrust the head forward, then hold it still as they walk. Such a motion stabilizes visual fields intermittently and could be critical for visual search; yet the time available for stabilization vs. forward thrust varies with walking speed. Whooping Cranes (Grus americana) are extremely tall birds that visually search the ground for seeds, berries, and small prey. We examined head movements in unrestrained Whooping Cranes using digital video subsequently analyzed with a computer graphical overlay. When foraging, the cranes walk at speeds that allow the head to be held still for at least 50% of the time. This behavior is thought to balance the two needs for covering as much ground as possible and for maximizing the time for visual fixation of the ground in the search for prey. Our results strongly suggest that in cranes, and probably many other bird species, visual fixation of the ground is required for object detection and identification. The thrust phase of the head-bobbing cycle is probably also important for vision. As the head moves forward, the movement generates visual flow and motion parallax, providing visual cues for distances and the relative locations of objects. The eyes commonly change their point of fixation when the head is moving too, suggesting that they remain visually competent throughout the entire cycle of thrust and stabilization.
Ma, Hui-Ing; Hwang, Wen-Juh; Wang, Ching-Yi; Fang, Jing-Jing; Leong, Iat-Fai; Wang, Tsui-Ying
2012-10-01
We used a trunk-assisted prehension task to examine the effect of task (reaching for stationary vs. moving targets) and environmental constraints (virtual reality [VR] vs. physical reality) on the temporal control of trunk and arm motions in people with Parkinson's disease (PD). Twenty-four participants with PD and 24 age-matched controls reached for and grasped a ball that was either stationary or moving along a ramp 120% of arm length away. In a similar VR task, participants reached for a virtual ball that was either stationary or moving. Movement speed was measured as trunk and arm movement times (MTs); trunk-arm coordination was measured as the onset and offset intervals between trunk and arm motions, as well as a summary index, the desynchrony score. In both VR and physical reality, the PD group had longer trunk and arm MTs than the control group when reaching for stationary balls (p<.001). When reaching for moving balls in VR and physical reality, however, the PD group had lower trunk and arm MTs, onset intervals, and desynchrony scores (p<.001). For the PD group, VR induced shorter trunk MTs, shorter offset intervals, and lower desynchrony scores than did physical reality when reaching for moving balls (p<.001). These findings suggest that using real moving targets in trunk-assisted prehension tasks improves the speed and synchronization of trunk and arm motions in people with PD, and that using virtual moving targets may induce a movement termination strategy different from that used in physical reality. Copyright © 2012 Elsevier B.V. All rights reserved.
Effects of aging on pointing movements under restricted visual feedback conditions.
Zhang, Liancun; Yang, Jiajia; Inai, Yoshinobu; Huang, Qiang; Wu, Jinglong
2015-04-01
The goal of this study was to investigate the effects of aging on pointing movements under restricted visual feedback of hand movement and target location. Fifteen young subjects and fifteen elderly subjects performed pointing movements under four visual feedback conditions: full visual feedback of hand movement and target location (FV), no visual feedback of hand movement or target location (NV), no visual feedback of hand movement (NM), and no visual feedback of target location (NT). This study suggested that Fitts' law applies to the pointing movements of elderly adults under the different visual restriction conditions. Moreover, a significant main effect of age on movement time was found in all four tasks. Peripheral and central changes may be the key factors underlying these age differences. Furthermore, no significant main effect of age on mean accuracy rate was found under the restricted visual feedback conditions. The present study suggested that the elderly subjects made use of the available sensory information in a manner very similar to the young subjects under restricted visual feedback conditions. In addition, during the pointing movement, information about the hand's movement was more useful than information about the target location for both young and elderly subjects. Copyright © 2014 Elsevier B.V. All rights reserved.
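For reference, the Fitts' law relation being tested is conventionally written as below; this is the classic 2D/W formulation, and the abstract does not state which variant the study fitted:

```latex
% Classic Fitts' law. MT = movement time, D = movement amplitude (distance
% to target), W = target width; a and b are empirically fitted constants.
MT = a + b \,\log_2\!\left(\frac{2D}{W}\right),
\qquad \mathrm{ID} = \log_2\!\left(\frac{2D}{W}\right)
```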
Riga, Maurizio S; Lladó-Pelfort, Laia; Artigas, Francesc; Celada, Pau
2017-12-06
5-MeO-DMT is a natural hallucinogen acting as a serotonin 5-HT1A/5-HT2A receptor agonist. Its ability to evoke hallucinations could be used to study the neurobiology of psychotic symptoms and to identify new treatment targets. Moreover, recent studies revealed the therapeutic potential of serotonin hallucinogens in treating mood and anxiety disorders. Our previous results in anesthetized animals show that 5-MeO-DMT alters cortical activity via 5-HT1A and 5-HT2A receptors. Here, we examined 5-MeO-DMT effects on oscillatory activity in prefrontal (PFC) and visual (V1) cortices, and in mediodorsal thalamus (MD) of freely-moving wild-type (WT) and 5-HT2A-R knockout (KO2A) mice. We performed local field potential multi-recordings evaluating the power at different frequency bands and coherence between areas. We also examined the prevention of 5-MeO-DMT effects by the 5-HT1A-R antagonist WAY-100635. 5-MeO-DMT affected oscillatory activity more in cortical than in thalamic areas. More marked effects were observed in delta power in V1 of KO2A mice. 5-MeO-DMT increased beta band coherence between all examined areas. In KO2A mice, WAY-100635 prevented most of the 5-MeO-DMT effects on oscillatory activity. The present results indicate that the hallucinatory activity of 5-MeO-DMT is likely mediated by simultaneous alteration of prefrontal and visual activities. The prevention of these effects by WAY-100635 in KO2A mice supports the potential usefulness of 5-HT1A receptor antagonists to treat visual hallucinations. 5-MeO-DMT effects on PFC theta activity and cortico-thalamic coherence may be related to its antidepressant activity. Copyright © 2017. Published by Elsevier Ltd.
Plug-and-play web-based visualization of mobile air monitoring data
The collection of air measurements in real-time on moving platforms, such as wearable, bicycle-mounted, or vehicle-mounted air sensors, is becoming an increasingly common method to investigate local air quality. However, visualizing and analyzing geospatial air monitoring data r...
Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik
2017-01-01
This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684
Clustering analysis of moving target signatures
NASA Astrophysics Data System (ADS)
Martone, Anthony; Ranney, Kenneth; Innocenti, Roberto
2010-04-01
Previously, we developed a moving target indication (MTI) processing approach to detect and track slow-moving targets inside buildings, which successfully detected moving targets (MTs) from data collected by a low-frequency, ultra-wideband radar. Our MTI algorithms include change detection, automatic target detection (ATD), clustering, and tracking. The MTI algorithms can be implemented in a real-time or near-real-time system; however, a person-in-the-loop is needed to select input parameters for the clustering algorithm. Specifically, the number of clusters to input into the cluster algorithm is unknown and requires manual selection. A critical need exists to automate all aspects of the MTI processing formulation. In this paper, we investigate two techniques that automatically determine the number of clusters: the adaptive knee-point (KP) algorithm and the recursive pixel finding (RPF) algorithm. The KP algorithm is based on a well-known heuristic approach for determining the number of clusters. The RPF algorithm is analogous to the image processing, pixel labeling procedure. Both algorithms are used to analyze the false alarm and detection rates of three operational scenarios of personnel walking inside wood and cinderblock buildings.
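The adaptive knee-point idea referenced above is, in a common form, the elbow heuristic: sweep the number of clusters, and pick the point on the error curve farthest from the chord joining its endpoints. A sketch under that assumption (the authors' exact KP formulation may differ), using k-means as a stand-in clusterer:

```python
# Knee-point selection of the number of clusters: sweep k, then pick the k
# whose error lies farthest from the chord joining the curve's endpoints.
# A common heuristic; the paper's adaptive KP rule may differ in detail.
import numpy as np
from sklearn.cluster import KMeans

def knee_point_k(points, k_max=10):
    ks = np.arange(1, k_max + 1)
    sse = np.array([KMeans(n_clusters=k, n_init=10).fit(points).inertia_
                    for k in ks])
    # Normalize both axes so the knee geometry is scale-free.
    x = (ks - ks[0]) / (ks[-1] - ks[0])
    y = (sse - sse[-1]) / (sse[0] - sse[-1])
    # After normalization the chord is the line x + y = 1; the knee is the
    # curve point farthest from it.
    return int(ks[np.argmax(np.abs(1.0 - x - y))])

# Example: three well-separated detection clusters.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(c, 0.3, size=(40, 2))
                 for c in [(0, 0), (4, 0), (2, 3)]])
print(knee_point_k(pts))   # typically 3
```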
Keshner, E A; Kenyon, R V
2000-01-01
We examined the effect of a 3-dimensional stereoscopic scene on segmental stabilization. Eight subjects participated in static sway and locomotion experiments with a visual scene that moved sinusoidally or at constant velocity about the pitch or roll axes. Segmental displacements, Fast Fourier Transforms, and Root Mean Square values were calculated. In both pitch and roll, subjects exhibited greater magnitudes of motion in head and trunk than ankle. Smaller amplitudes and frequent phase reversals suggested control of the ankle by segmental proprioceptive inputs and ground reaction forces rather than by the visual-vestibular signals. Postural controllers may set limits of motion at each body segment rather than be governed solely by a perception of the visual vertical. Two locomotor strategies were also exhibited, implying that some subjects could override the effect of the roll axis optic flow field. Our results demonstrate task dependent differences that argue against using static postural responses to moving visual fields when assessing more dynamic tasks.
Sounds Activate Visual Cortex and Improve Visual Discrimination
Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.
2014-01-01
A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419
The reference frame for encoding and retention of motion depends on stimulus set size.
Huynh, Duong; Tripathy, Srimant P; Bedell, Harold E; Öğmen, Haluk
2017-04-01
The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information.
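The vector-decomposition technique rests on a kinematic identity: during pursuit, retinal (retinotopic) motion equals world (spatiotopic) motion minus eye motion, so a reported direction can be scored against both predictions. A minimal sketch of that decomposition, with assumed 2-D velocity vectors (the study's actual scoring details may differ):

```python
# Decompose a perceived motion direction into spatiotopic (world) and
# retinotopic components during smooth pursuit. Velocities are 2-D vectors
# in deg/s; all numbers are illustrative.
import numpy as np

def components(report_dir_deg, world_v, eye_v):
    """Project a unit vector along the reported direction onto the
    spatiotopic (world) and retinotopic (world - eye) motion directions;
    returns the two cosine similarities."""
    report = np.array([np.cos(np.radians(report_dir_deg)),
                       np.sin(np.radians(report_dir_deg))])
    spatio = world_v / np.linalg.norm(world_v)
    retino_v = world_v - eye_v               # kinematic identity
    retino = retino_v / np.linalg.norm(retino_v)
    return report @ spatio, report @ retino

# Target drifting upward while the eye pursues rightward at 8 deg/s:
print(components(report_dir_deg=100.0,
                 world_v=np.array([0.0, 5.0]),
                 eye_v=np.array([8.0, 0.0])))
```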
Keshner, E A; Dhaher, Y
2008-07-01
Multiplanar environmental motion could generate head instability, particularly if the visual surround moves in planes orthogonal to a physical disturbance. We combined sagittal plane surface translations with visual field disturbances in 12 healthy (29-31 years) and 3 visually sensitive (27-57 years) adults. Center of pressure (COP), peak head angles, and RMS values of head motion were calculated and a three-dimensional model of joint motion was developed to examine gross head motion in three planes. We found that subjects standing quietly in front of a visual scene translating in the sagittal plane produced significantly greater (p<0.003) head motion in yaw than when on a translating platform. However, when the platform was translated in the dark or with a visual scene rotating in roll, head motion orthogonal to the plane of platform motion significantly increased (p<0.02). Visually sensitive subjects having no history of vestibular disorder produced large, delayed compensatory head motion. Orthogonal head motions were significantly greater in visually sensitive than in healthy subjects in the dark (p<0.05) and with a stationary scene (p<0.01). We concluded that motion of the visual field could modify compensatory response kinematics of a freely moving head in planes orthogonal to the direction of a physical perturbation. These results suggest that the mechanisms controlling head orientation in space are distinct from those that control trunk orientation in space. These behaviors would have been missed if only COP data were considered. Data suggest that rehabilitation training can be enhanced by combining visual and mechanical perturbation paradigms.
Dividing time: concurrent timing of auditory and visual events by young and elderly adults.
McAuley, J Devin; Miller, Jonathan P; Wang, Mo; Pang, Kevin C H
2010-07-01
This article examines age differences in individuals' ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory targets. Age-related impairments were associated with a decrease in working memory span; moreover, the relationship between working memory and timing performance was largest for visual targets in divided attention conditions.
Effects of Visual Speech on Early Auditory Evoked Fields - From the Viewpoint of Individual Variance
Yahata, Izumi; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2017-01-01
The effects of visual speech (the moving image of the speaker’s face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions, the moving image of the same speaker’s face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from speaker’s face using a strong Gaussian filter: control condition). On average, the latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between the right and left hemispheres. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information. PMID:28141836
Nakamura, S; Shimojo, S
1998-10-01
The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli when stimulus sizes were the same. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.
An Efficient Moving Target Detection Algorithm Based on Sparsity-Aware Spectrum Estimation
Shen, Mingwei; Wang, Jie; Wu, Di; Zhu, Daiyin
2014-01-01
In this paper, an efficient direct data domain space-time adaptive processing (STAP) algorithm for moving targets detection is proposed, which is achieved based on the distinct spectrum features of clutter and target signals in the angle-Doppler domain. To reduce the computational complexity, the high-resolution angle-Doppler spectrum is obtained by finding the sparsest coefficients in the angle domain using the reduced-dimension data within each Doppler bin. Moreover, we then present a knowledge-aided block-size detection algorithm that can discriminate between the moving targets and the clutter based on the extracted spectrum features. The feasibility and effectiveness of the proposed method are validated through both numerical simulations and raw data processing results. PMID:25222035
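The "sparsest coefficients in the angle domain" step can be illustrated with a generic greedy solver. The sketch below uses orthogonal matching pursuit over a uniform-linear-array steering dictionary; the array geometry, angle grid, and choice of solver are assumptions for illustration, not the paper's exact formulation.

```python
# Sparse angle spectrum inside one Doppler bin via orthogonal matching
# pursuit (OMP) over a ULA steering dictionary. Geometry, grid, and solver
# are illustrative assumptions.
import numpy as np

def steering_matrix(n_elem, angles_deg, d_over_lambda=0.5):
    ang = np.radians(angles_deg)
    n = np.arange(n_elem)[:, None]
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(ang)[None, :])

def omp(A, y, k):
    """Greedy recovery of k nonzero coefficients with A x ~= y."""
    residual, support = y.astype(complex), []
    coef = np.zeros(0, dtype=complex)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

n_elem, grid = 16, np.linspace(-60, 60, 121)
A = steering_matrix(n_elem, grid)
# One clutter component at -20 deg plus a target at +10 deg in this bin.
y = A[:, np.argmin(np.abs(grid + 20))] + 0.5 * A[:, np.argmin(np.abs(grid - 10))]
x = omp(A, y, k=2)
print(grid[np.abs(x) > 1e-6])   # recovered angles: about [-20, 10]
```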
Lee, Daniel J; Recabal, Pedro; Sjoberg, Daniel D; Thong, Alan; Lee, Justin K; Eastham, James A; Scardino, Peter T; Vargas, Hebert Alberto; Coleman, Jonathan; Ehdaie, Behfar
2016-09-01
We compared the diagnostic outcomes of magnetic resonance-ultrasound fusion and visually targeted biopsy for targeting regions of interest on prostate multiparametric magnetic resonance imaging. Patients presenting for prostate biopsy with regions of interest on multiparametric magnetic resonance imaging underwent magnetic resonance imaging targeted biopsy. For each region of interest 2 visually targeted cores were obtained, followed by 2 cores using a magnetic resonance-ultrasound fusion device. Our primary end point was the difference in the detection of high grade (Gleason 7 or greater) and any grade cancer between visually targeted and magnetic resonance-ultrasound fusion, investigated using McNemar's method. Secondary end points were the difference in detection rate by biopsy location using a logistic regression model and the difference in median cancer length using the Wilcoxon signed rank test. We identified 396 regions of interest in 286 men. The difference in the detection of high grade cancer between magnetic resonance-ultrasound fusion biopsy and visually targeted biopsy was -1.4% (95% CI -6.4 to 3.6, p=0.6) and for any grade cancer the difference was 3.5% (95% CI -1.9 to 8.9, p=0.2). Median cancer length detected by magnetic resonance-ultrasound fusion and visually targeted biopsy was 5.5 vs 5.8 mm, respectively (p=0.8). Magnetic resonance-ultrasound fusion biopsy detected 15% more cancers in the transition zone (p=0.046) and visually targeted biopsy detected 11% more high grade cancer at the prostate base (p=0.005). Only 52% of all high grade cancers were detected by both techniques. We found no evidence of a significant difference in the detection of high grade or any grade cancer between visually targeted and magnetic resonance-ultrasound fusion biopsy. However, the performance of each technique varied in specific biopsy locations and the outcomes of both techniques were complementary. Combining visually targeted biopsy and magnetic resonance-ultrasound fusion biopsy may optimize the detection of prostate cancer. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
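Because both techniques sample the same regions of interest, detection outcomes are paired, which is the setting for the McNemar's method named above; only the discordant pairs (detected by one technique but not the other) inform the test. A generic sketch with invented counts (not the study's data), using statsmodels:

```python
# McNemar's test for paired detection outcomes (fusion vs visually targeted
# biopsy of the same regions of interest). The 2x2 counts are made up for
# illustration; only the discordant cells drive the test.
from statsmodels.stats.contingency_tables import mcnemar

#                  visual+  visual-
table = [[40,   9],    # fusion+: both detect / fusion only
         [12, 150]]    # fusion-: visual only / neither
result = mcnemar(table, exact=True)   # exact binomial test on the 9 vs 12
print(result.statistic, result.pvalue)
```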