Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing
2013-01-01
Efficient target tracking algorithms have become a focus of current research on intelligent robots. The main problem in target tracking for mobile robots is environmental uncertainty: target states are difficult to estimate, and illumination changes, target shape changes, occlusion, complex backgrounds, and other factors all degrade tracking robustness. To further improve tracking accuracy and reliability, we present a novel target tracking algorithm that uses visual saliency and an adaptive support vector machine (ASVM). The algorithm is based on a mixture saliency of image features, including color, brightness, and motion. During execution, these visual saliency features are combined and their common characteristics are expressed as the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm in video sequences where the target objects undergo large changes in pose, scale, and illumination.
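The abstract gives no implementation details, but the mixture of color, brightness, and motion saliency it describes can be illustrated with a minimal sketch. The code below is an assumption-laden stand-in (OpenCV-based, with arbitrary weights), not the authors' method:

```python
import cv2
import numpy as np

def mixture_saliency(prev_gray, frame_bgr, weights=(0.4, 0.3, 0.3)):
    """Combine color, brightness, and motion cues into one saliency map.

    A hypothetical sketch of feature-mixture saliency; the paper's exact
    features and weighting scheme are not specified in the abstract.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    # Color saliency: distance of each pixel's chroma from the image mean.
    chroma = lab[..., 1:]
    color_sal = np.linalg.norm(chroma - chroma.mean(axis=(0, 1)), axis=2)

    # Brightness saliency: deviation from mean luminance.
    bright_sal = np.abs(gray.astype(np.float32) - gray.mean())

    # Motion saliency: magnitude of dense optical flow between frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion_sal = np.linalg.norm(flow, axis=2)

    def norm(m):  # rescale each cue to [0, 1] before mixing
        return (m - m.min()) / (np.ptp(m) + 1e-9)

    w_c, w_b, w_m = weights
    return w_c * norm(color_sal) + w_b * norm(bright_sal) + w_m * norm(motion_sal)
```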
Huang, Chien-Ting; Hwang, Ing-Shiou
2012-01-01
Visual feedback and non-visual information play different roles in tracking of an external target. This study explored the respective roles of the visual and non-visual information in eleven healthy volunteers who coupled the manual cursor to a rhythmically moving target of 0.5 Hz under three sensorimotor conditions: eye-alone tracking (EA), eye-hand tracking with visual feedback of manual outputs (EH tracking), and the same tracking without such feedback (EHM tracking). Tracking error, kinematic variables, and movement intermittency (saccade and speed pulse) were contrasted among tracking conditions. The results showed that EHM tracking exhibited larger pursuit gain, less tracking error, and less movement intermittency for the ocular plant than EA tracking. With the vision of manual cursor, EH tracking achieved superior tracking congruency of the ocular and manual effectors with smaller movement intermittency than EHM tracking, except that the rate precision of manual action was similar for both types of tracking. The present study demonstrated that visibility of manual consequences altered mutual relationships between movement intermittency and tracking error. The speed pulse metrics of manual output were linked to ocular tracking error, and saccade events were time-locked to the positional error of manual tracking during EH tracking. In conclusion, peripheral non-visual information is critical to smooth pursuit characteristics and rate control of rhythmic manual tracking. Visual information adds to eye-hand synchrony, underlying improved amplitude control and elaborate error interpretation during oculo-manual tracking. PMID:23236498
The role of visual attention in multiple object tracking: evidence from ERPs.
Doran, Matthew M; Hoffman, James E
2010-01-01
We examined the role of visual attention in the multiple object tracking (MOT) task by measuring the amplitude of the N1 component of the event-related potential (ERP) to probe flashes presented on targets, distractors, or empty background areas. We found evidence that visual attention enhances targets and suppresses distractors (Experiments 1 and 3). However, we also found that when tracking load was light (two targets and two distractors), accurate tracking could be carried out without any apparent contribution from the visual attention system (Experiment 2). Our results suggest that attentional selection during MOT is flexibly determined by task demands as well as tracking load and that visual attention may not always be necessary for accurate tracking.
Störmer, Viola S; Alvarez, George A; Cavanagh, Patrick
2014-08-27
It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. PMID:25164651
Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K
2013-03-20
Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.
Improvement of Hand Movement on Visual Target Tracking by Assistant Force of Model-Based Compensator
NASA Astrophysics Data System (ADS)
Ide, Junko; Sugi, Takenao; Nakamura, Masatoshi; Shibasaki, Hiroshi
Human motor control is achieved by appropriate motor commands generated by the central nervous system. A visual target tracking test is one effective method for analyzing human motor function. We previously examined, in a simulation study, the possibility of improving hand movement on visual target tracking with an additional assistant force. In this study, a method for compensating human hand movement on visual target tracking by adding an assistant force was proposed. The effectiveness of the compensation method was investigated in experiments on four healthy adults. The proposed compensator precisely improved the reaction time, position error, and velocity variability of the human hand. The model-based compensator proposed in this study is constructed from measurement data on visual target tracking for each subject, so the hand-movement properties of different subjects can be reflected in the structure of the compensator. Therefore, the proposed method can potentially be adjusted to the individual properties of patients with various movement disorders caused by brain dysfunction.
Visual Target Tracking in the Presence of Unknown Observer Motion
NASA Technical Reports Server (NTRS)
Williams, Stephen; Lu, Thomas
2009-01-01
Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
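The abstract names image registration plus Kalman filtering for observer-motion estimation. As a generic illustration of the filtering step only (the NASA implementation details are not given), a minimal constant-velocity Kalman filter over registration-derived inter-frame displacements might look like this, with all noise parameters assumed:

```python
import numpy as np

class KalmanCV:
    """Smooth the observer-motion estimate produced by image registration."""
    def __init__(self, q=1e-2, r=1.0):
        self.x = np.zeros(4)                     # state: [dx, dy, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0  # dt = 1 frame
        self.H = np.eye(2, 4)                    # we measure displacement only
        self.Q = q * np.eye(4)                   # assumed process noise
        self.R = r * np.eye(2)                   # assumed measurement noise

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the registration-derived inter-frame displacement z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x += K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                        # smoothed displacement
```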
Dissociable Frontal Controls during Visible and Memory-guided Eye-Tracking of Moving Targets
Ding, Jinhong; Powell, David; Jiang, Yang
2009-01-01
When tracking visible or occluded moving targets, several frontal regions, including the frontal eye fields (FEF), dorsolateral prefrontal cortex (DLPFC), and anterior cingulate cortex (ACC), are involved in smooth pursuit eye movements (SPEM). To investigate how these areas play different roles in predicting future locations of moving targets, twelve healthy college students participated in a smooth pursuit task with visible and occluded targets. Their eye movements and brain responses, measured by event-related functional MRI, were recorded simultaneously. Our results show that different visual cues resulted in time discrepancies between physical and estimated pursuit time only when the moving dot was occluded. Velocity gain was higher during the visible phase than during the occlusion phase. We found bilateral FEF activation associated with eye movements whether moving targets were visible or occluded. However, the DLPFC and ACC showed increased activity when tracking and predicting locations of occluded moving targets, and were suppressed during smooth pursuit of visible targets. When visual cues were increasingly available, less activation in the DLPFC and the ACC was observed. Additionally, there was a significant hemisphere effect in DLPFC, where the right DLPFC showed significantly greater responses than the left when pursuing occluded moving targets. Correlation results revealed that the DLPFC, the right DLPFC in particular, communicates more with FEF during tracking of occluded moving targets (from memory), whereas the ACC modulates FEF more during tracking of visible targets (likely related to visual attention). Our results suggest that the DLPFC and ACC modulate FEF and cortical networks differentially during visible and memory-guided eye tracking of moving targets. PMID:19434603
The semantic category-based grouping in the Multiple Identity Tracking task.
Wei, Liuqing; Zhang, Xuemin; Li, Zhen; Liu, Jingyao
2018-01-01
In the Multiple Identity Tracking (MIT) task, categorical distinctions between targets and distractors have been found to facilitate tracking (Wei, Zhang, Lyu, & Li in Frontiers in Psychology, 7, 589, 2016). The purpose of this study was to further investigate the reasons for the facilitation effect, through six experiments. The results of Experiments 1-3 excluded the potential explanations of visual distinctiveness, attentional distribution strategy, and a working memory mechanism, respectively. When objects' visual information was preserved and categorical information was removed, the facilitation effect disappeared, suggesting that the visual distinctiveness between targets and distractors was not the main reason for the facilitation effect. Moreover, the facilitation effect was not the result of strategically shifting the attentional distribution, because the targets received more attention than the distractors in all conditions. Additionally, the facilitation effect did not come about because the identities of targets were encoded and stored in visual working memory to assist in the recovery from tracking errors; when working memory was disturbed by the object identities changing during tracking, the facilitation effect still existed. Experiments 4 and 5 showed that observers grouped targets together and segregated them from distractors on the basis of their categorical information. By doing this, observers could largely avoid distractor interference with tracking and improve tracking performance. Finally, Experiment 6 indicated that category-based grouping is not an automatic, but a goal-directed and effortful, strategy. In summary, the present findings show that a semantic category-based target-grouping mechanism exists in the MIT task, which is likely to be the major reason for the tracking facilitation effect.
Hue distinctiveness overrides category in determining performance in multiple object tracking.
Sun, Mengdan; Zhang, Xuemin; Fan, Lingxia; Hu, Luming
2018-02-01
The visual distinctiveness between targets and distractors can significantly facilitate performance in multiple object tracking (MOT), in which color is a feature that has been commonly used. However, the processing of color can be more than "visual." Color is continuous in chromaticity, while it is commonly grouped into discrete categories (e.g., red, green). Evidence from color perception suggested that color categories may have a unique role in visual tasks independent of its chromatic appearance. Previous MOT studies have not examined the effect of chromatic and categorical distinctiveness on tracking separately. The current study aimed to reveal how chromatic (hue) and categorical distinctiveness of color between the targets and distractors affects tracking performance. With four experiments, we showed that tracking performance was largely facilitated by the increasing hue distance between the target set and the distractor set, suggesting that perceptual grouping was formed based on hue distinctiveness to aid tracking. However, we found no color categorical effect, because tracking performance was not significantly different when the targets and distractors were from the same or different categories. It was concluded that the chromatic distinctiveness of color overrides category in determining tracking performance, suggesting a dominant role of perceptual feature in MOT.
KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery
NASA Astrophysics Data System (ADS)
Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan
2013-05-01
KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high throughput wide format video also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms are available to assist the analyst and increase human effectiveness.
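As a rough illustration of the "temporal cache of tiled pyramid cached data structures" idea (a toy sketch, not KOLAM's actual architecture), a dual cache keyed by frame, pyramid level, and tile coordinates could look like:

```python
from collections import OrderedDict

class SpatioTemporalTileCache:
    """Toy dual-cache for tiled pyramid video (assumed design, not KOLAM's).

    Tiles are keyed by (frame, level, row, col); a spatial LRU cache serves
    the current frame while a temporal LRU cache keeps recently used frames
    warm for smooth animation and scrubbing.
    """
    def __init__(self, spatial_capacity=1024, temporal_capacity=64):
        self.spatial = OrderedDict()   # (frame, level, row, col) -> tile
        self.temporal = OrderedDict()  # frame -> {(level, row, col) -> tile}
        self.spatial_capacity = spatial_capacity
        self.temporal_capacity = temporal_capacity

    def get(self, key, load_tile):
        frame = key[0]
        if key in self.spatial:
            self.spatial.move_to_end(key)          # refresh LRU order
            return self.spatial[key]
        tile = self.temporal.get(frame, {}).get(key[1:]) or load_tile(key)
        self.spatial[key] = tile
        if len(self.spatial) > self.spatial_capacity:
            self.spatial.popitem(last=False)       # evict least recently used
        self.temporal.setdefault(frame, {})[key[1:]] = tile
        self.temporal.move_to_end(frame)
        if len(self.temporal) > self.temporal_capacity:
            self.temporal.popitem(last=False)
        return tile
```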
Visual Target Tracking on the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Kim, Won S.; Biesiadecki, Jeffrey J.; Ali, Khaled S.
2008-01-01
Visual Target Tracking (VTT) has been implemented in the new Mars Exploration Rover (MER) Flight Software (FSW) R9.2 release, which is now running on both Spirit and Opportunity rovers. Applying the normalized cross-correlation (NCC) algorithm with template image magnification and roll compensation on MER Navcam images, VTT tracks the target and enables the rover to approach the target within a few cm over a 10 m traverse. Each VTT update takes 1/2 to 1 minute on the rovers, 2-3 times faster than one Visual Odometry (Visodom) update. VTT is a key element to achieve a target approach and instrument placement over a 10-m run in a single sol in contrast to the original baseline of 3 sols. VTT has been integrated into the MER FSW so that it can operate with any combination of blind driving, Autonomous Navigation (Autonav) with hazard avoidance, and Visodom. VTT can either guide the rover towards the target or simply image the target as the rover drives by. Three recent VTT operational checkouts on Opportunity were all successful, tracking the selected target reliably within a few pixels.
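The NCC step itself is a standard primitive. Below is a minimal sketch using OpenCV's normalized cross-correlation; the flight software's template magnification and roll compensation are not reproduced, and window sizes are assumptions:

```python
import cv2

def ncc_track(image, template, search_center, search_radius):
    """Locate a template in a search window by normalized cross-correlation.

    A minimal sketch of the NCC step only; not the MER flight software.
    Returns the template's new top-left corner and the peak NCC score.
    """
    x, y = search_center
    th, tw = template.shape[:2]
    x0 = max(x - search_radius, 0)
    y0 = max(y - search_radius, 0)
    window = image[y0:y0 + 2 * search_radius + th,
                   x0:x0 + 2 * search_radius + tw]
    scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, peak, _, loc = cv2.minMaxLoc(scores)
    # Convert the peak location back to full-image coordinates.
    return (x0 + loc[0], y0 + loc[1]), peak
```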
Attentional enhancement during multiple-object tracking.
Drew, Trafton; McCollough, Andrew W; Horowitz, Todd S; Vogel, Edward K
2009-04-01
What is the role of attention in multiple-object tracking? Does attention enhance target representations, suppress distractor representations, or both? It is difficult to ask this question in a purely behavioral paradigm without altering the very attentional allocation one is trying to measure. In the present study, we used event-related potentials to examine the early visual evoked responses to task-irrelevant probes without requiring an additional detection task. Subjects tracked two targets among four moving distractors and four stationary distractors. Brief probes were flashed on targets, moving distractors, stationary distractors, or empty space. We obtained a significant enhancement of the visually evoked P1 and N1 components (approximately 100-150 msec) for probes on targets, relative to distractors. Furthermore, good trackers showed larger differences between target and distractor probes than did poor trackers. These results provide evidence of early attentional enhancement of tracked target items and also provide a novel approach to measuring attentional allocation during tracking.
Visual Target Tracking on the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Kim, Won; Biesiadecki, Jeffrey; Ali, Khaled
2008-01-01
Visual target tracking (VTT) software has been incorporated into Release 9.2 of the Mars Exploration Rover (MER) flight software, now running aboard the rovers Spirit and Opportunity. In the VTT operation (see figure), the rover is driven in short steps between stops and, at each stop, still images are acquired by actively aimed navigation cameras (navcams) on a mast on the rover (see artistic rendition). The VTT software processes the digitized navcam images so as to track a target reliably and to make it possible to approach the target accurately to within a few centimeters over a 10-m traverse.
NASA Astrophysics Data System (ADS)
Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji
A prediction mechanism is necessary in human visuomotor control to compensate for delays in the sensory-motor system. In a previous study, "proactive control" was discussed as one example of the predictive function of human beings, in which hand motion preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit was segmented into target-visible and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain, acquired from environmental stimuli, was shortened by more than 10%. This shortening accelerates the hand motion as soon as the visual information is cut off, causing the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target motion on average when the predictive mechanism dominates the error-corrective mechanism.
Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar
2018-01-01
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and is extracted based on frame differencing with the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent the target appearance details in candidate moving regions; SLIC superpixel segmentation, color, and moment features are used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods. PMID:29438421
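The temporal-saliency step (frame differencing binarized by Sauvola's local adaptive threshold) and the SLIC superpixel stage can be sketched with scikit-image; parameter values below are assumptions, not the paper's settings:

```python
import numpy as np
from skimage.filters import threshold_sauvola
from skimage.segmentation import slic

def temporal_saliency(prev_gray, gray, window_size=25):
    """Frame difference binarized with Sauvola's local adaptive threshold.

    A sketch of the temporal-saliency step described in the abstract;
    the window size is an assumed value.
    """
    diff = np.abs(gray.astype(np.float64) - prev_gray.astype(np.float64))
    thresh = threshold_sauvola(diff, window_size=window_size)
    return diff > thresh

def spatial_superpixels(frame_rgb, n_segments=300):
    """SLIC superpixels over which per-segment color and moment features
    would be computed to score uniqueness and spatial compactness."""
    return slic(frame_rgb, n_segments=n_segments, compactness=10)
```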
Normal aging delays and compromises early multifocal visual attention during object tracking.
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2013-02-01
Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.
Fusion-based multi-target tracking and localization for intelligent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2008-04-01
In this paper, we present two approaches addressing visual target tracking and localization in complex urban environments: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOIs) from the background was achieved by a Gaussian Mixture Model. Foreground segmentation, on the other hand, was achieved by the Connected Components Analysis (CCA) technique. The track of each individual target was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image-pixel-to-world-coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated by a nearest 4-neighbor method; in fine estimation, Euclidean interpolation is used to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in complex urban environments.
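A minimal sketch of the motion-segmentation and appearance-association pipeline (GMM background subtraction, connected components, RGB histogram descriptors) is shown below using OpenCV; the area gate and histogram bin counts are assumptions:

```python
import cv2
import numpy as np

backsub = cv2.createBackgroundSubtractorMOG2()  # GMM background model

def detect_and_describe(frame):
    """Segment moving blobs (GMM + connected components) and attach an
    RGB histogram descriptor to each blob for track association."""
    fg = backsub.apply(frame)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg)
    detections = []
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < 50:                          # assumed minimum blob size
            continue
        patch = frame[y:y + h, x:x + w]
        hist = cv2.calcHist([patch], [0, 1, 2], None,
                            [8, 8, 8], [0, 256] * 3)
        cv2.normalize(hist, hist)
        detections.append({"centroid": tuple(centroids[i]), "hist": hist})
    return detections

def association_score(track_hist, det_hist):
    # Higher histogram correlation -> more likely the same target.
    return cv2.compareHist(track_hist, det_hist, cv2.HISTCMP_CORREL)
```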
Gregori Grgič, Regina; Calore, Enrico; de'Sperati, Claudio
2016-01-01
Whereas overt visuospatial attention is customarily measured with eye tracking, covert attention is assessed by various methods. Here we exploited Steady-State Visual Evoked Potentials (SSVEPs) - the oscillatory responses of the visual cortex to incoming flickering stimuli - to record the movements of covert visuospatial attention in a way operatively similar to eye tracking (attention tracking), which allowed us to compare motion observation and motion extrapolation with and without eye movements. Observers fixated a central dot and covertly tracked a target oscillating horizontally and sinusoidally. In the background, the left and the right halves of the screen flickered at two different frequencies, generating two SSVEPs in occipital regions whose size varied reciprocally as observers attended to the moving target. The two signals were combined into a single quantity that was modulated at the target frequency in a quasi-sinusoidal way, often clearly visible in single trials. The modulation continued almost unchanged when the target was switched off and observers mentally extrapolated its motion in imagery, and also when observers pointed their finger at the moving target during covert tracking, or imagined doing so. The amplitude of modulation during covert tracking was ∼25-30% of that measured when observers followed the target with their eyes. We used 4 electrodes in parieto-occipital areas, but similar results were achieved with a single electrode in Oz. In a second experiment we tested ramp and step motion. During overt tracking, SSVEPs were remarkably accurate, showing both saccadic-like and smooth pursuit-like modulations of cortical responsiveness, although during covert tracking the modulation deteriorated. Covert tracking was better with sinusoidal motion than ramp motion, and better with moving targets than stationary ones. The clear modulation of cortical responsiveness recorded during both overt and covert tracking, identical for motion observation and motion extrapolation, suggests that covert attention movements should be included in enactive theories of mental imagery.
Global Positioning System Synchronized Active Light Autonomous Docking System
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor); Bell, Joseph L. (Inventor)
1996-01-01
A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprising at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.
Global Positioning System Synchronized Active Light Autonomous Docking System
NASA Technical Reports Server (NTRS)
Howard, Richard (Inventor)
1994-01-01
A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprises at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.
Rover-based visual target tracking validation and mission infusion
NASA Technical Reports Server (NTRS)
Kim, Won S.; Steele, Robert D.; Ansar, Adnan I.; Ali, Khaled; Nesnas, Issa
2005-01-01
The Mars Exploration Rovers (MER'03), Spirit and Opportunity, represent the state of the art in rover operations on Mars. This paper presents validation experiments of different visual tracking algorithms using the rover's navigation camera.
Infrared dim and small target detecting and tracking method inspired by Human Visual System
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian
2014-01-01
Detecting and tracking dim and small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and infrared imaging precise guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, HVS involves at least three mechanisms: contrast, visual attention, and eye movement. However, most existing algorithms simulate only one of the HVS mechanisms, resulting in various drawbacks. A novel method which combines the three mechanisms of HVS is proposed in this paper. First, a group of Difference of Gaussians (DoG) filters, which simulate the contrast mechanism, are used to filter the input image. Second, visual attention, simulated by a Gaussian window, is added at a point near the target in order to further enhance the dim small target; this point is named the attention point. Finally, a Proportional-Integral-Derivative (PID) algorithm is introduced, for the first time, to predict the attention point in the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim and small targets.
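The three HVS mechanisms map naturally onto three small components. The sketch below illustrates DoG contrast enhancement, a Gaussian attention window, and a PID predictor for the attention point; the sigmas and PID gains are assumed values, not the paper's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_enhance(image, sigma_small=1.0, sigma_large=3.0):
    """Difference-of-Gaussians enhancement (the 'contrast mechanism')."""
    img = image.astype(np.float64)
    return gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)

def attention_window(shape, center, sigma=10.0):
    """Gaussian window centered on the predicted attention point."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    cy, cx = center
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

class PIDPredictor:
    """PID update of the attention point between frames ('eye movement')."""
    def __init__(self, kp=0.6, ki=0.05, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = np.zeros(2)
        self.prev_err = np.zeros(2)

    def predict(self, attention_pt, measured_pt):
        err = np.asarray(measured_pt) - np.asarray(attention_pt)
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        return attention_pt + self.kp * err + self.ki * self.integral + self.kd * deriv
```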
A unified dynamic neural field model of goal directed eye movements
NASA Astrophysics Data System (ADS)
Quinton, J. C.; Goffart, L.
2018-01-01
Primates rely heavily on their visual system, which exploits signals of graded precision based on the eccentricity of the target in the visual field. Interactions with the environment involve actively selecting and focusing on visual targets or regions of interest, instead of contemplating an omnidirectional visual flow. Eye movements specifically allow foveating targets and tracking their motion. Once a target is brought within the central visual field, eye movements are usually classified into catch-up saccades (jumping from one orientation or fixation to another) and smooth pursuit (continuously tracking a target with low velocity). Building on existing dynamic neural field equations, we introduce a novel model that incorporates internal projections to better estimate the current target location (associated with a peak of activity). This estimate is then used to trigger an eye movement, leading to qualitatively different behaviours depending on the dynamics of the whole oculomotor system: (1) fixational eye movements due to small variations in the weights of projections when the target is stationary; (2) interceptive and catch-up saccades when peaks build and relax on the neural field; (3) smooth pursuit when the peak stabilises near the centre of the field, the system reaching a fixed-point attractor. Learning is nevertheless required for tracking a rapidly moving target, and the proposed model thus replicates recent results in the monkey, in which repeated exercise permits maintenance of the target within the central visual field at its current (here-and-now) location, despite the delays involved in transmitting retinal signals to the oculomotor neurons.
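For reference, a standard Amari-type dynamic neural field equation of the kind the model builds on reads as follows (notation assumed; the paper's contribution adds internal projections on top of this):

```latex
\tau\,\frac{\partial u(x,t)}{\partial t} = -u(x,t)
  + \int w(x - x')\, f\!\bigl(u(x',t)\bigr)\,\mathrm{d}x'
  + I(x,t) + h
```

Here u(x,t) is the field activation over visual space, w a lateral-interaction kernel (local excitation, surround inhibition), f a sigmoidal firing-rate function, I(x,t) the visual input, h the resting level, and τ the time constant; a self-sustained peak of u encodes the estimated target location.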
Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features
NASA Astrophysics Data System (ADS)
Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique
2011-12-01
We propose a new multi-target tracking approach, which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model, and visual reliability measures of its attributes. These reliability measures allow the contributions of noisy, erroneous or false data to be properly weighted in order to better maintain the integrity of the object dynamics model. A new multi-target tracking algorithm then uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated on publicly accessible video surveillance benchmarks. The obtained performance is real-time and the results are competitive with other tracking algorithms, with minimal (or no) reconfiguration effort between different videos.
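The abstract does not give the weighting formula; one plausible form of a reliability-weighted fusion of a 2D and a 3D estimate of the same attribute would be:

```latex
\hat{a} \;=\; \frac{R_{2D}\,a_{2D} + R_{3D}\,a_{3D}}{R_{2D} + R_{3D}},
\qquad 0 \le R_{2D},\, R_{3D} \le 1
```

where a_2D and a_3D are the 2D and 3D estimates of an attribute and R_2D, R_3D their reliability measures; low-reliability (noisy or erroneous) evidence then contributes proportionally less to the fused dynamics model.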
Drew, Trafton; Horowitz, Todd S.; Wolfe, Jeremy M.; Vogel, Edward K.
2015-01-01
In the attentive tracking task, observers track multiple objects as they move independently and unpredictably among visually identical distractors. Although a number of models of attentive tracking implicate visual working memory as the mechanism responsible for representing target locations, no study has ever directly compared the neural mechanisms of the two tasks. In the current set of experiments, we used electrophysiological recordings to delineate similarities and differences between the neural processing involved in working memory and attentive tracking. We found that the contralateral electrophysiological response to the two tasks was similarly sensitive to the number of items attended in both tasks but that there was also a unique contralateral negativity related to the process of monitoring target position during tracking. This signal was absent for periods of time during tracking tasks when objects briefly stopped moving. These results provide evidence that, during attentive tracking, the process of tracking target locations elicits an electrophysiological response that is distinct and dissociable from neural measures of the number of items being attended. PMID:21228175
The effects of tDCS upon sustained visual attention are dependent on cognitive load.
Roe, James M; Nesheim, Mathias; Mathiesen, Nina C; Moberget, Torgeir; Alnæs, Dag; Sneve, Markus H
2016-01-08
Transcranial Direct Current Stimulation (tDCS) modulates the excitability of neuronal responses and consequently can affect performance on a variety of cognitive tasks. However, the interaction between cognitive load and the effects of tDCS is currently not well-understood. We recorded the performance accuracy of participants on a bilateral multiple object tracking task while undergoing bilateral stimulation assumed to enhance (anodal) and decrease (cathodal) neuronal excitability. Stimulation was applied to the posterior parietal cortex (PPC), a region inferred to be at the centre of an attentional tracking network that shows load-dependent activation. 34 participants underwent three separate stimulation conditions across three days. Each subject received (1) left cathodal / right anodal PPC tDCS, (2) left anodal / right cathodal PPC tDCS, and (3) sham tDCS. The number of targets-to-be-tracked was also manipulated, giving a low (one target per visual field), medium (two targets per visual field) or high (three targets per visual field) tracking load condition. It was found that tracking performance at high attentional loads was significantly reduced in both stimulation conditions relative to sham, and this was apparent in both visual fields, regardless of the direction of polarity upon the brain's hemispheres. We interpret this as an interaction between cognitive load and tDCS, and suggest that tDCS may degrade attentional performance when cognitive networks become overtaxed and unable to compensate as a result. Systematically varying cognitive load may therefore be a fruitful direction to elucidate the effects of tDCS upon cognitive functions.
NASA Technical Reports Server (NTRS)
Lewis, Steven J.; Palacios, David M.
2013-01-01
This software can track multiple moving objects within a video stream simultaneously, use visual features to aid in the tracking, and initiate tracks based on object detection in a subregion. A simple programmatic interface allows plugging into larger image chain modeling suites. It extracts unique visual features for aid in tracking and later analysis, and includes sub-functionality for extracting visual features about an object identified within an image frame. Tracker Toolkit utilizes a feature extraction algorithm to tag each object with metadata features about its size, shape, color, and movement. Its functionality is independent of the scale of objects within a scene. The only assumption made on the tracked objects is that they move. There are no constraints on size within the scene, shape, or type of movement. The Tracker Toolkit is also capable of following an arbitrary number of objects in the same scene, identifying and propagating the track of each object from frame to frame. Target objects may be specified for tracking beforehand, or may be dynamically discovered within a tripwire region. Initialization of the Tracker Toolkit algorithm includes two steps: Initializing the data structures for tracked target objects, including targets preselected for tracking; and initializing the tripwire region. If no tripwire region is desired, this step is skipped. The tripwire region is an area within the frames that is always checked for new objects, and all new objects discovered within the region will be tracked until lost (by leaving the frame, stopping, or blending in to the background).
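A hypothetical sketch of the tripwire-initiation logic described above is shown below; the names, structures, and gating distance are assumptions, not the Tracker Toolkit API:

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: int
    positions: list = field(default_factory=list)   # per-frame centroids
    features: dict = field(default_factory=dict)    # size, shape, color, motion

class TripwireTracker:
    def __init__(self, tripwire_box):
        self.tripwire = tripwire_box  # (x0, y0, x1, y1) region to watch
        self.tracks = []
        self.next_id = 0

    def update(self, detections):
        """Start a track for every new detection inside the tripwire;
        propagation of existing tracks is not shown."""
        x0, y0, x1, y1 = self.tripwire
        for det in detections:
            cx, cy = det["centroid"]
            if x0 <= cx <= x1 and y0 <= cy <= y1 and not self._known(det):
                t = Track(self.next_id)
                t.positions.append((cx, cy))
                t.features.update(det.get("features", {}))
                self.tracks.append(t)
                self.next_id += 1

    def _known(self, det, gate=20.0):
        # A detection near an existing track head is not a new object.
        cx, cy = det["centroid"]
        return any(t.positions and
                   (t.positions[-1][0] - cx) ** 2 +
                   (t.positions[-1][1] - cy) ** 2 < gate ** 2
                   for t in self.tracks)
```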
NASA Astrophysics Data System (ADS)
Yang, Hua; Zhong, Donghong; Liu, Chenyi; Song, Kaiyou; Yin, Zhouping
2018-03-01
Object tracking is still a challenging problem in computer vision, as it entails learning an effective model to account for appearance changes caused by occlusion, out-of-view motion, in-plane rotation, scale change, and background clutter. This paper proposes a robust visual tracking algorithm, termed DCNNCT, to simultaneously address these challenges. The proposed DCNNCT algorithm utilizes a deep convolutional neural network (DCNN) to extract image features of the tracked target, and the full range of information regarding each convolutional layer is used to express these features. Subsequently, kernelized correlation filters (CFs) in each convolutional layer are adaptively learned, and their correlation response maps are combined to estimate the location of the tracked target. To handle tracking failure, an online random ferns classifier is employed to redetect the tracked target, and a dual-threshold scheme is used to obtain the final target location by comparing the tracking result with the detection result. Finally, the change in scale of the target is determined by building scale pyramids and training a CF. Extensive experiments demonstrate that the proposed algorithm is effective at tracking, especially when evaluated using an index called the overlap rate. The DCNNCT algorithm is also highly competitive in terms of robustness with respect to state-of-the-art trackers in various challenging scenarios.
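As an illustration of per-layer correlation responses fused into one location estimate, here is a linear (MOSSE-style) stand-in for the paper's kernelized filters, with assumed layer weights and with all layer maps presumed resized to a common spatial size:

```python
import numpy as np

def cf_response(feature, filt_fft):
    """Correlation response of one convolutional-layer feature map with a
    learned filter, computed in the Fourier domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(feature) * np.conj(filt_fft)))

def fused_location(features, filters, weights):
    """Weight and sum the per-layer response maps, then take the peak.

    A linear stand-in for kernelized correlation filters; assumes every
    layer's map has been resized to the same spatial shape.
    """
    maps = [w * cf_response(f, h) for f, h, w in zip(features, filters, weights)]
    fused = np.sum(maps, axis=0)
    return np.unravel_index(np.argmax(fused), fused.shape)
```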
B-spline based image tracking by detection
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman
2016-05-01
Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-spline based image tracker is implemented. The novel method models the background and foreground using B-splines, followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.
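One way to realize a B-spline background model (a sketch under an assumed knot density and residual threshold, not the paper's formulation) is to fit a smooth bivariate spline to each frame and flag large residuals as foreground:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def bspline_background(gray, knots_per_axis=10):
    """Fit a smooth bivariate spline surface to the image as a background
    model; the knot density is an assumed value."""
    h, w = gray.shape
    ys = np.linspace(0, h - 1, knots_per_axis).astype(int)
    xs = np.linspace(0, w - 1, knots_per_axis).astype(int)
    coarse = gray[np.ix_(ys, xs)].astype(np.float64)
    spline = RectBivariateSpline(ys, xs, coarse, kx=3, ky=3)
    return spline(np.arange(h), np.arange(w))

def foreground_mask(gray, threshold=25.0):
    # Pixels far from the smooth background surface are candidate targets.
    bg = bspline_background(gray)
    return np.abs(gray.astype(np.float64) - bg) > threshold
```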
Li, Yuankun; Xu, Tingfa; Deng, Honggao; Shi, Guokai; Guo, Jie
2018-02-23
Although correlation filter (CF)-based visual tracking algorithms have achieved appealing results, there are still some problems to be solved. When the target object goes through long-term occlusions or scale variation, the correlation model used in existing CF-based algorithms will inevitably learn some non-target information or partial-target information. In order to avoid model contamination and enhance the adaptability of model updating, we introduce the keypoints matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN) to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker has achieved satisfactory performance in a wide range of challenging tracking scenarios.
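The learning-rate adjustment can be illustrated in a few lines; the constants and the linear score-to-rate mapping below are assumptions, not the paper's scheme:

```python
def adaptive_learning_rate(match_score, eta_max=0.025, score_floor=0.2):
    """Scale the correlation-filter model learning rate by the keypoint
    matching score: freeze the model when matching fails (likely occlusion),
    update at full rate when matching is confident. Constants are assumed."""
    if match_score < score_floor:
        return 0.0                     # likely occluded: do not update
    return eta_max * match_score       # confident match: faster adaptation

def update_model(model, new_observation, match_score):
    # Exponential moving-average model update, gated by match confidence.
    eta = adaptive_learning_rate(match_score)
    return (1.0 - eta) * model + eta * new_observation
```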
Temporal Restricted Visual Tracking Via Reverse-Low-Rank Sparse Learning.
Yang, Yehui; Hu, Wenrui; Xie, Yuan; Zhang, Wensheng; Zhang, Tianzhu
2017-02-01
An effective representation model, which aims to mine the most meaningful information in the data, plays an important role in visual tracking. Some recent particle-filter-based trackers achieve promising results by introducing the low-rank assumption into the representation model. However, their assumed low-rank structure of candidates limits robustness when facing severe challenges such as abrupt motion. To avoid this limitation, we propose a temporally restricted reverse-low-rank learning algorithm for visual tracking with the following advantages: 1) the reverse-low-rank model jointly represents target and background templates via candidates, which exploits the low-rank structure among consecutive target observations and enforces the temporal consistency of the target at a global level; 2) appearance consistency may be broken when the target suffers sudden changes, so we propose a local constraint via an ℓ1,2 mixed norm, which not only ensures local consistency of target appearance but also tolerates sudden changes between two adjacent frames; and 3) to alleviate the interference of unreasonable representation values due to outlier candidates, an adaptive weighting scheme is designed to improve the robustness of the tracker. In evaluations on 26 challenging video sequences, the experiments show the effectiveness and favorable performance of the proposed algorithm against 12 state-of-the-art visual trackers.
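Schematically, and with notation assumed rather than taken from the paper, a reverse-low-rank representation with a temporal low-rank term and an ℓ1,2 local constraint might be posed as:

```latex
\min_{Z}\;\|Z\|_{*} \;+\; \lambda_{1}\,\|Z\|_{1,2}
  \;+\; \lambda_{2}\,\|T - XZ\|_{F}^{2}
```

where T stacks the target and background templates, X the candidates, and Z the representation coefficients ("reverse" because templates are coded over candidates rather than the other way around); the nuclear norm enforces temporal consistency globally while the ℓ1,2 term acts locally.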
Cross-Modal Attention Effects in the Vestibular Cortex during Attentive Tracking of Moving Objects.
Frank, Sebastian M; Sun, Liwei; Forster, Lisa; Tse, Peter U; Greenlee, Mark W
2016-12-14
The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking. In this study we investigate cross-modal attention effects in the human vestibular cortex. We applied the visual multiple-object tracking task because it is known to evoke attentional load effects on neural activity in visual motion-processing and attention-processing areas. Here we demonstrate a load-dependent effect of attention on the activation in the vestibular cortex, despite constant visual motion stimulation. We find that activity in the parietoinsular vestibular cortex is more strongly suppressed the greater the attentional load on the visual tracking task. These findings suggest cross-modal attentional modulation in the vestibular cortex.
3D Visual Tracking of an Articulated Robot in Precision Automated Tasks
Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.
2017-01-01
The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to achieve both requirements together by implementing an accurate and time-efficient tracking system. In this paper, an eye-to-hand visual system that has the ability to automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on the use of a colour segmentation method (Delta E) with the CHT algorithm to find the proper colour of the tracked target. The target was attached to the end-effector of a six degree-of-freedom (DOF) robot performing a pick-and-place task. Two cooperating eye-to-hand cameras, each with an image averaging filter, are used to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot with an overall tracking error of 0.25 mm, and the effectiveness of the CRCHT technique in saving up to 60% of the overall time required for image processing. PMID:28067860
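The CRCHT idea of running the Circular Hough Transform only inside a controllable search window can be sketched with OpenCV; the Hough parameter values below are assumptions, not the paper's settings:

```python
import cv2

def detect_sphere(frame_bgr, roi, dp=1.2, min_dist=50):
    """Detect the spherical target inside a controllable region of interest
    using the Circular Hough Transform.

    roi: (x, y, w, h) search window around the last known target position.
    Returns (cx, cy, r) in full-frame coordinates, or None if no circle.
    """
    x, y, w, h = roi
    gray = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)         # suppress speckle before CHT
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp, min_dist,
                               param1=100, param2=30,
                               minRadius=5, maxRadius=60)
    if circles is None:
        return None
    cx, cy, r = circles[0][0]              # strongest circle in the ROI
    return (x + cx, y + cy, r)
```

Restricting the transform to the ROI is what yields the reported speedup: the Hough accumulator is built over a small window instead of the full frame.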
2011-01-01
Background: Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods: Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis, and fourteen non-impaired healthy control participants tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results: Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions: Visual distraction decreased participants' effort during a standard robot-assisted movement training task. This effect was greater for the hemiparetic arm, suggesting that the increased demands associated with controlling an affected arm make the motor system more prone to slack when distracted. Providing an alternate sensory channel for feedback, i.e., auditory feedback of tracking error, enabled the participants to simultaneously perform the tracking task and distracter task effectively. Thus, incorporating real-time auditory feedback of performance errors might improve clinical outcomes of robotic therapy systems. PMID:21513561
High-performance object tracking and fixation with an online neural estimator.
Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian
2007-02-01
Vision-based target tracking and fixation to keep objects that move in three dimensions in view is important for many tasks in several fields, including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. Accordingly, this paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimum sensory information. The procedure allows the designer to take the physical (Lagrangian dynamics) properties of the vision system into account in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system's dynamics into account, yet needs no knowledge of dynamic nonlinearities or joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee the uniform ultimate bounds of the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation tests in the presence of severe target motion changes.
Visual attention is required for multiple object tracking.
Tran, Annie; Hoffman, James E
2016-12-01
In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects.
Ocular dynamics and visual tracking performance after Q-switched laser exposure
NASA Astrophysics Data System (ADS)
Zwick, Harry; Stuck, Bruce E.; Lund, David J.; Nawim, Maqsood
2001-05-01
In previous investigations of q-switched laser retinal exposure in awake, task-oriented non-human primates (NHPs), the threshold for retinal damage occurred well below the threshold for permanent visual function loss. Visual function measures used in those studies involved visual acuity and contrast sensitivity. In the present study, we examine the same relationship for q-switched laser exposure using a visual performance task whose dependency involves more parafoveal than foveal retina. NHPs were trained on a visual pursuit motor tracking performance task that required maintaining a small HeNe laser spot (0.3 degrees) centered in a slowly moving (0.5 deg/s) annulus. When NHPs reliably produced visual target tracking efficiencies > 80%, single q-switched laser exposures (7 nsec) were made coaxially with the line of sight of the moving target. An infrared camera imaged the pupil during exposure to obtain the pupillary response to the laser flash. Retinal images were obtained with a scanning laser ophthalmoscope 3 days post exposure under ketamine and Nembutal anesthesia. Q-switched visible laser exposures at twice the damage threshold produced small (about 50 μm) retinal lesions temporal to the fovea; deficits in NHP visual pursuit tracking were transient, demonstrating full recovery to baseline within a single tracking session. Post-exposure analysis of the pupillary response demonstrated that the exposure flash entered the pupil, followed by a 90-msec refractory period and then a 12% pupillary contraction within 1.5 s of the onset of laser exposure. At 6 times the morphological damage threshold for 532 nm q-switched exposure, longer-term losses in NHP pursuit tracking performance were observed. In summary, q-switched laser exposure appears to have a higher threshold for permanent visual performance loss than for retinal injury. Mechanisms of neural plasticity within the retina and at higher visual brain centers may mediate this recovery.
Holcombe, Alex O; Chen, Wei-Ying
2013-01-09
Overall performance when tracking moving targets is known to be poorer for larger numbers of targets, but the specific effect on tracking's temporal resolution has never been investigated. We document a broad range of display parameters for which visual tracking is limited by temporal frequency (the interval between when a target is at each location and a distracter moves in and replaces it) rather than by object speed. We tested tracking of one, two, and three moving targets while the eyes remained fixed. Variation of the number of distracters and their speed revealed both speed limits and temporal frequency limits on tracking. The temporal frequency limit fell from 7 Hz with one target to 4 Hz with two targets and 2.6 Hz with three targets. The large size of this performance decrease implies that in the two-target condition participants would have done better by tracking only one of the two targets and ignoring the other. These effects are predicted by serial models involving a single tracking focus that must switch among the targets, sampling the position of only one target at a time. If parallel processing theories are to explain why dividing the tracking resource reduces temporal resolution so markedly, supplemental assumptions will be required.
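In displays of this kind the temporal frequency is set jointly by object speed and spacing. Assuming identical objects equally spaced along a circular trajectory (an illustrative configuration, not necessarily the exact displays used), the rate at which a distracter replaces the target at a given location is:

```python
def temporal_frequency(rev_per_sec, n_objects_on_ring):
    """Objects equally spaced on a ring: each location sees a new object
    n_objects times per revolution, so temporal frequency scales with both
    rotation speed and the number of objects sharing the trajectory."""
    return rev_per_sec * n_objects_on_ring

# e.g. a ring of 5 objects rotating at 1.4 rev/s yields the reported 7 Hz
# one-target limit; the three-target limit of 2.6 Hz would be reached on
# the same ring at only ~0.5 rev/s.
print(temporal_frequency(1.4, 5))  # -> 7.0
```

This is why the same display can be speed-limited or temporal-frequency-limited depending on how many distracters share the trajectory.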
Reading color barcodes using visual snakes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaub, Hanspeter
2004-05-01
Statistical pressure snakes are used to track a mono-color target in an unstructured environment using a video camera. The report discusses an algorithm to extract a bar code signal that is embedded within the target. The target is assumed to be rectangular in shape, with the bar code printed in a slightly different saturation and value in HSV color space. Thus, the visual snake, which primarily weighs hue tracking errors, will not be deterred by the presence of the color bar codes in the target. The bar code is generated with the standard 3 of 9 method. Using this method, the numeric bar codes reveal if the target is right-side-up or up-side-down.
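A minimal sketch of the underlying idea, that hue is shared by the target and the bars while saturation/value carry the bar code, assuming OpenCV; the hue band and the single-scanline readout are illustrative, not the report's algorithm:

```python
import cv2
import numpy as np

frame = cv2.imread("target.png")               # hypothetical input image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# Track on hue alone: the bars share the target's hue, so a hue mask
# segments the whole rectangle, bars included, without distracting the snake.
target_mask = cv2.inRange(h, 55, 65)           # illustrative hue band

# Read the bars from the value channel along a horizontal scanline
# through the target, where the slightly darker stripes show up.
ys, xs = np.nonzero(target_mask)
row = int(ys.mean())
scan = v[row, xs.min():xs.max()]
bars = scan < scan.mean()                      # True over dark bar pixels
```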
Exploring What’s Missing: What Do Target Absent Trials Reveal About Autism Search Superiority?
Keehn, Brandon; Joseph, Robert M.
2016-01-01
We used eye-tracking to investigate the roles of enhanced discrimination and peripheral selection in superior visual search in autism spectrum disorder (ASD). Children with ASD were faster at visual search than their typically developing peers. However, group differences in performance and eye-movements did not vary with the level of difficulty of discrimination or selection. Rather, consistent with prior ASD research, group differences were mainly the effect of faster performance on target-absent trials. Eye-tracking revealed a lack of left-visual-field search asymmetry in ASD, which may confer an additional advantage when the target is absent. Lastly, ASD symptomatology was positively associated with search superiority, the mechanisms of which may shed light on the atypical brain organization that underlies social-communicative impairment in ASD. PMID:26762114
Sarlegna, Fabrice R; Baud-Bovy, Gabriel; Danion, Frédéric
2010-08-01
When we manipulate an object, grip force is adjusted in anticipation of the mechanical consequences of hand motion (i.e., load force) to prevent the object from slipping. This predictive behavior is assumed to rely on an internal representation of the object dynamic properties, which would be elaborated via visual information before the object is grasped and via somatosensory feedback once the object is grasped. Here we examined this view by investigating the effect of delayed visual feedback during dextrous object manipulation. Adult participants manually tracked a sinusoidal target by oscillating a handheld object whose current position was displayed as a cursor on a screen along with the visual target. A delay was introduced between actual object displacement and cursor motion. This delay was linearly increased (from 0 to 300 ms) and decreased within 2-min trials. As previously reported, delayed visual feedback altered performance in manual tracking. Importantly, although the physical properties of the object remained unchanged, delayed visual feedback altered the timing of grip force relative to load force by about 50 ms. Additional experiments showed that this effect was not due to task complexity nor to manual tracking. A model inspired by the behavior of mass-spring systems suggests that delayed visual feedback may have biased the representation of object dynamics. Overall, our findings support the idea that visual feedback of object motion can influence the predictive control of grip force even when the object is grasped.
Tracking, aiming, and hitting the UAV with ordinary assault rifle
NASA Astrophysics Data System (ADS)
Racek, František; Baláž, Teodor; Krejčí, Jaroslav; Procházka, Stanislav; Macko, Martin
2017-10-01
The usage of small unmanned aerial vehicles (UAVs) is increasing significantly nowadays. They are being used as carriers of military spy and reconnaissance devices (taking photos, live video streaming and so on), or as carriers of potentially dangerous cargo (intended for destruction and killing). Both ways of utilizing the UAV create the necessity to disable it. From the military point of view, disabling the UAV means bringing it down with the weapon of an ordinary soldier, that is, the assault rifle. This task can be challenging for the soldier because he needs to visually detect and identify the target, track the target visually and aim at the target. The final success of the soldier's mission depends not only on these visual tasks, but also on the properties of the weapon and ammunition. The paper deals with possible methods of predicting the probability of hitting UAV targets.
A comparison study of visually stimulated brain-computer and eye-tracking interfaces
NASA Astrophysics Data System (ADS)
Suefusa, Kaori; Tanaka, Toshihisa
2017-06-01
Objective. Brain-computer interfacing (BCI) based on visual stimuli detects the target on a screen on which a user is focusing. The detection of the gazing target can be achieved by tracking gaze positions with a video camera, which is called eye-tracking or eye-tracking interfaces (ETIs). The two types of interface have been developed in different communities. Thus, little work on a comprehensive comparison between these two types of interface has been reported. This paper quantitatively compares the performance of these two interfaces on the same experimental platform. Specifically, our study is focused on two major paradigms of BCI and ETI: steady-state visual evoked potential-based BCIs and dwelling-based ETIs. Approach. Recognition accuracy and the information transfer rate were measured by giving subjects the task of selecting one of four targets by gazing at it. The targets were displayed in three different sizes (with sides 20, 40 and 60 mm long) to evaluate performance with respect to the target size. Main results. The experimental results showed that the BCI was comparable to the ETI in terms of accuracy and the information transfer rate. In particular, when the size of a target was relatively small, the BCI had significantly better performance than the ETI. Significance. The results on which of the two interfaces works better in different situations would not only enable us to improve the design of the interfaces but would also allow for the appropriate choice of interface based on the situation. Specifically, one can choose an interface based on the size of the screen that displays the targets.
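The information transfer rate used to compare such interfaces is conventionally computed with Wolpaw's formula from the number of selectable targets N and the selection accuracy P. A minimal sketch; the 4-second selection time in the example is an assumed value, not from the study:

```python
from math import log2

def itr_bits_per_selection(n_targets, p_correct):
    """Wolpaw information transfer rate in bits per selection."""
    n, p = n_targets, p_correct
    if p <= 1.0 / n:
        return 0.0          # at or below chance, treat as zero information
    bits = log2(n) + p * log2(p)
    if p < 1.0:
        bits += (1 - p) * log2((1 - p) / (n - 1))
    return bits

# Four targets at 90% accuracy, one selection every 4 s (assumed timing):
bits = itr_bits_per_selection(4, 0.90)
print(bits * 60 / 4.0, "bits/min")
```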
Biases in rhythmic sensorimotor coordination: effects of modality and intentionality.
Debats, Nienke B; Ridderikhoff, Arne; de Boer, Betteco J; Peper, C Lieke E
2013-08-01
Sensorimotor biases were examined for intentional (tracking task) and unintentional (distractor task) rhythmic coordination. The tracking task involved unimanual tracking of either an oscillating visual signal or the passive movements of the contralateral hand (proprioceptive signal). In both conditions the required coordination patterns (isodirectional and mirror-symmetric) were defined relative to the body midline and the hands were not visible. For proprioceptive tracking the two patterns did not differ in stability, whereas for visual tracking the isodirectional pattern was performed more stably than the mirror-symmetric pattern. However, when visual feedback about the unimanual hand movements was provided during visual tracking, the isodirectional pattern ceased to be dominant. Together these results indicated that the stability of the coordination patterns did not depend on the modality of the target signal per se, but on the combination of sensory signals that needed to be processed (unimodal vs. cross-modal). The distractor task entailed rhythmic unimanual movements during which a rhythmic visual or proprioceptive distractor signal had to be ignored. The observed biases were similar to those for intentional coordination, suggesting that intentionality did not affect the underlying sensorimotor processes qualitatively. Intentional tracking was characterized by active sensory pursuit, through muscle activity in the passively moved arm (proprioceptive tracking task) and rhythmic eye movements (visual tracking task). Presumably this pursuit afforded predictive information serving the coordination process.
Accounting for direction and speed of eye motion in planning visually guided manual tracking.
Leclercq, Guillaume; Blohm, Gunnar; Lefèvre, Philippe
2013-10-01
Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account for the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.
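The underlying geometry is additive: the target's spatial velocity equals its retinal velocity plus the eye's velocity, and the reported 75-102% range can be read as a gain on the eye-velocity term. A minimal sketch with assumed 2-D velocity vectors:

```python
import numpy as np

def planned_direction(retinal_vel, eye_vel, gain=1.0):
    """Reconstruct spatial target motion from retinal motion plus a
    (possibly under-weighted) extraretinal eye-velocity signal, and
    return the implied movement direction in degrees."""
    spatial_vel = np.asarray(retinal_vel) + gain * np.asarray(eye_vel)
    return np.degrees(np.arctan2(spatial_vel[1], spatial_vel[0]))

# Target drifting rightward on the retina while the eye pursues upward:
print(planned_direction([5.0, 0.0], [0.0, 10.0], gain=1.0))  # spatially correct plan
print(planned_direction([5.0, 0.0], [0.0, 10.0], gain=0.0))  # purely retinal plan
```

The difference between the two calls is exactly the discrepancy the initial arm-movement direction was used to measure.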
Modulation of high-frequency vestibuloocular reflex during visual tracking in humans
NASA Technical Reports Server (NTRS)
Das, V. E.; Leigh, R. J.; Thomas, C. W.; Averbuch-Heller, L.; Zivotofsky, A. Z.; Discenna, A. O.; Dell'Osso, L. F.
1995-01-01
1. Humans may visually track a moving object either when they are stationary or in motion. To investigate visual-vestibular interaction during both conditions, we compared horizontal smooth pursuit (SP) and active combined eye-head tracking (CEHT) of a target moving sinusoidally at 0.4 Hz in four normal subjects while the subjects were either stationary or vibrated in yaw at 2.8 Hz. We also measured the visually enhanced vestibuloocular reflex (VVOR) during vibration in yaw at 2.8 Hz over a peak head velocity range of 5-40 degrees/s. 2. We found that the gain of the VVOR at 2.8 Hz increased in all four subjects as peak head velocity increased (P < 0.001), with minimal phase changes, such that mean retinal image slip was held below 5 degrees/s. However, no corresponding modulation in vestibuloocular reflex gain occurred with increasing peak head velocity during a control condition when subjects were rotated in darkness. 3. During both horizontal SP and CEHT, tracking gains were similar, and the mean slip speed of the target's image on the retina was held below 5.5 degrees/s whether subjects were stationary or being vibrated at 2.8 Hz. During both horizontal SP and CEHT of target motion at 0.4 Hz, while subjects were vibrated in yaw, VVOR gain for the 2.8-Hz head rotations was similar to or higher than that achieved during fixation of a stationary target. This is in contrast to the decrease of VVOR gain that is reported while stationary subjects perform CEHT.(ABSTRACT TRUNCATED AT 250 WORDS).
Gundogdu, Erhan; Ozkan, Huseyin; Alatan, A Aydin
2017-11-01
Correlation filters have been successfully used in visual tracking due to their modeling power and computational efficiency. However, state-of-the-art correlation filter-based (CFB) tracking algorithms tend to quickly discard the previous poses of the target, since they consider only a single filter in their models. On the contrary, our approach is to register multiple CFB trackers for previous poses and exploit the registered knowledge when an appearance change occurs. To this end, we propose a novel tracking algorithm [of complexity O(D)] based on a large ensemble of CFB trackers. The ensemble [of size O(2^D)] is organized over a binary tree (depth D), and learns the target appearance subspaces such that each constituent tracker becomes an expert on a certain appearance. During tracking, the proposed algorithm combines only the appearance-aware relevant experts to produce boosted tracking decisions. Additionally, we propose a versatile spatial windowing technique to enhance the individual expert trackers. For this purpose, spatial windows are learned for target objects as well as the correlation filters, and the windowed regions are then processed for more robust correlations. In our extensive experiments on benchmark datasets, we achieve a substantial performance increase by using the proposed tracking algorithm together with spatial windowing.
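The building block each expert represents is a correlation filter learned in the Fourier domain so that correlation with the target patch produces a sharp peak. A minimal single-filter, MOSSE-style sketch; the preprocessing, ensemble, binary tree, and learned windows of the paper are omitted:

```python
import numpy as np

def learn_filter(patch, sigma=2.0, lam=1e-2):
    """Learn a correlation filter whose response to `patch` is a Gaussian
    peak centred on the target (single-sample, MOSSE-style closed form)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))
    F = np.fft.fft2(patch)
    G = np.fft.fft2(np.fft.ifftshift(g))   # peak at origin = zero displacement
    return G * np.conj(F) / (F * np.conj(F) + lam)   # H* in the Fourier domain

def respond(H_conj, patch):
    """Correlate a new patch with the learned filter; the argmax of the
    response map gives the target displacement (modulo FFT wrap-around)."""
    R = np.real(np.fft.ifft2(np.fft.fft2(patch) * H_conj))
    return np.unravel_index(np.argmax(R), R.shape)
```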
Target Selection by the Frontal Cortex during Coordinated Saccadic and Smooth Pursuit Eye Movements
ERIC Educational Resources Information Center
Srihasam, Krishna; Bullock, Daniel; Grossberg, Stephen
2009-01-01
Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth-pursuit eye movements. In particular, the saccadic and smooth-pursuit systems interact to often choose the same target, and to maximize its visibility through time. How do…
NASA Technical Reports Server (NTRS)
Grant, Michael P.; Leigh, R. John; Seidman, Scott H.; Riley, David E.; Hanna, Joseph P.
1992-01-01
We compared the ability of eight normal subjects and 15 patients with brainstem or cerebellar disease to follow a moving visual stimulus smoothly with either the eyes alone or with combined eye-head tracking. The visual stimulus was either a laser spot (horizontal and vertical planes) or a large rotating disc (torsional plane), which moved at one sinusoidal frequency for each subject. The visually enhanced Vestibulo-Ocular Reflex (VOR) was also measured in each plane. In the horizontal and vertical planes, we found that if tracking gain (gaze velocity/target velocity) for smooth pursuit was close to 1, the gain of combined eye-head tracking was similar. If the tracking gain during smooth pursuit was less than about 0.7, combined eye-head tracking was usually superior. Most patients, irrespective of diagnosis, showed combined eye-head tracking that was superior to smooth pursuit; only two patients showed the converse. In the torsional plane, in which optokinetic responses were weak, combined eye-head tracking was much superior, and this was the case in both subjects and patients. We found that a linear model, in which an internal ocular tracking signal cancelled the VOR, could account for our findings in most normal subjects in the horizontal and vertical planes, but not in the torsional plane. The model failed to account for tracking behaviour in most patients in any plane, and suggested that the brain may use additional mechanisms to reduce the internal gain of the VOR during combined eye-head tracking. Our results confirm that certain patients who show impairment of smooth-pursuit eye movements preserve their ability to smoothly track a moving target with combined eye-head tracking.
Real-time tracking of visually attended objects in virtual environments and its application to LOD.
Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon
2009-01-01
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots
Pan, Shaowu; Shi, Liwei; Guo, Shuxiang
2015-01-01
A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331
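The second-order motion model used to steer the CT tracker can be written as a constant-acceleration Kalman filter; a minimal per-axis sketch with an assumed frame interval and noise covariances:

```python
import numpy as np

dt = 1.0 / 30.0                      # assumed frame interval
# State per axis: [position, velocity, acceleration]
F = np.array([[1, dt, 0.5 * dt**2],
              [0, 1,  dt],
              [0, 0,  1]])
H = np.array([[1.0, 0.0, 0.0]])      # only position is measured
Q = 1e-2 * np.eye(3)                 # assumed process noise
R = np.array([[1.0]])                # assumed measurement noise

def predict(x, P):
    """Predict where the target will be next; candidate patches for the
    CT tracker are then sampled around this predicted position."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the state with the tracker's measured target position."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    return x, (np.eye(3) - K @ H) @ P
```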
Visual object tracking by correlation filters and online learning
NASA Astrophysics Data System (ADS)
Zhang, Xin; Xia, Gui-Song; Lu, Qikai; Shen, Weiming; Zhang, Liangpei
2018-06-01
Due to the complexity of background scenarios and the variation of target appearance, it is difficult to achieve high accuracy and fast speed in object tracking. Currently, correlation filter-based trackers (CFTs) show promising performance in object tracking. CFTs estimate the target's position using correlation filters with different kinds of features. However, most CFTs can hardly re-detect the target in the case of long-term tracking drift. In this paper, a feature-integration object tracker named correlation filters and online learning (CFOL) is proposed. CFOL estimates the target's position and its corresponding correlation score using the same discriminative correlation filter with multiple features. To reduce tracking drift, a new sampling and updating strategy for online learning is proposed. Experiments conducted on 51 image sequences demonstrate that the proposed algorithm is superior to state-of-the-art approaches.
Domkin, Dmitry; Forsman, Mikael; Richter, Hans O
2016-06-01
Previous studies have shown an association between visual demands during near work and increased activity of the trapezius muscle. Those studies were conducted under stationary postural conditions with fixed gaze and artificial visual load. The present study investigated the relationship between ciliary muscle contraction force and trapezius muscle activity across individuals during performance of a natural dynamic motor task under free gaze conditions. Participants (N=11) tracked a moving visual target with a digital pen on a computer screen. Tracking performance, eye refraction and trapezius muscle activity were continuously measured. Ciliary muscle contraction force was computed from the eye's accommodative response. There was a significant Pearson correlation between ciliary muscle contraction force and trapezius muscle activity on the tracking side (0.78, p<0.01) and passive side (0.64, p<0.05). The study supports the hypothesis that high visual demands, leading to increased ciliary muscle contraction during continuous eye-hand coordination, may increase trapezius muscle tension and thus contribute to the development of musculoskeletal complaints in the neck-shoulder area. Further experimental studies are required to clarify whether the relationship is valid within each individual or represents a general personal trait, whereby individuals with a higher eye accommodative response tend to have higher trapezius muscle activity.
ERIC Educational Resources Information Center
Ferrara, Katrina; Hoffman, James E.; O'Hearn, Kirsten; Landau, Barbara
2016-01-01
The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical…
Interactive target tracking for persistent wide-area surveillance
NASA Astrophysics Data System (ADS)
Ersoy, Ilker; Palaniappan, Kannappan; Seetharaman, Guna S.; Rao, Raghuveer M.
2012-06-01
Persistent aerial surveillance is an emerging technology that can provide continuous, wide-area coverage from an aircraft-based multiple-camera system. Tracking targets in these data sets is challenging for vision algorithms due to the large volume of data (several terabytes), very low frame rate, changing viewpoint, strong parallax, and other imperfections arising from registration and projection. Providing an interactive system for automated target tracking also poses additional challenges, requiring online algorithms that are seamlessly integrated with interactive visualization tools to assist the user. We developed an algorithm that overcomes these challenges and demonstrated it on data obtained from a wide-area imaging platform.
What triggers catch-up saccades during visual tracking?
de Brouwer, Sophie; Yuksel, Demet; Blohm, Gunnar; Missal, Marcus; Lefèvre, Philippe
2002-03-01
When tracking moving visual stimuli, primates orient their visual axis by combining two kinds of eye movements, smooth pursuit and saccades, that have very different dynamics. Yet, the mechanisms that govern the decision to switch from one type of eye movement to the other are still poorly understood, even though they could bring a significant contribution to the understanding of how the CNS combines different kinds of control strategies to achieve a common motor and sensory goal. In this study, we investigated the oculomotor responses to a large range of different combinations of position error and velocity error during visual tracking of moving stimuli in humans. We found that the oculomotor system uses a prediction of the time at which the eye trajectory will cross the target, defined as the "eye crossing time" (T(XE)). The eye crossing time, which depends on both position error and velocity error, is the criterion used to switch between smooth and saccadic pursuit, i.e., to trigger catch-up saccades. On average, for T(XE) between 40 and 180 ms, no saccade is triggered and target tracking remains purely smooth. Conversely, when T(XE) becomes smaller than 40 ms or larger than 180 ms, a saccade is triggered after a short latency (around 125 ms).
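The switching criterion can be stated compactly: the eye crossing time is the position error divided by the retinal slip (velocity error), and a catch-up saccade is triggered when it leaves the smooth zone. A minimal sketch; the guard against zero slip is our addition:

```python
def eye_crossing_time_ms(position_error_deg, retinal_slip_deg_per_s):
    """Predicted time (ms) until the eye trajectory crosses the target:
    position error divided by velocity error (retinal slip)."""
    if retinal_slip_deg_per_s == 0:
        return float("inf")          # no relative motion: trajectories never cross
    return 1000.0 * position_error_deg / retinal_slip_deg_per_s

def saccade_triggered(t_xe_ms, low=40.0, high=180.0):
    """Pursuit remains purely smooth only while T_XE stays in the smooth zone."""
    return not (low <= t_xe_ms <= high)
```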
Online Multi-Modal Robust Non-Negative Dictionary Learning for Visual Tracking
Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang
2015-01-01
Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking in both quantity and quality. PMID:25961715
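The multiplicative update rules mentioned here are the standard device for keeping an NMF-style factorization non-negative. A minimal sketch for a single modality under a squared loss; the paper's M-estimation loss, multi-modal coupling, and incremental particle-wise updates are not reproduced:

```python
import numpy as np

def nmf_multiplicative(V, k, n_iter=200, eps=1e-9):
    """Factorize non-negative V (features x samples) as W @ H using
    multiplicative updates, which preserve non-negativity by construction."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, k)) + eps     # dictionary (template) atoms
    H = rng.random((k, n)) + eps     # representation coefficients
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because every factor in the update ratios is non-negative, W and H can never acquire negative entries, which is exactly the constraint the abstract refers to.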
Correlation Filter Learning Toward Peak Strength for Visual Tracking.
Sui, Yao; Wang, Guanghui; Zhang, Li
2018-04-01
This paper presents a novel visual tracking approach to correlation filter learning toward peak strength of correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, like those from occlusion and local deformation, resulting in unstable tracking performance. This paper aims at solving this issue and proposes a novel algorithm to learn the correlation filter. The proposed approach, by imposing an elastic net constraint on the filter, can adaptively eliminate those distractive features in the correlation filtering. A new peak strength metric is proposed to measure the discriminative capability of the learned correlation filter. It is demonstrated that the proposed approach effectively strengthens the peak of the correlation response, leading to more discriminative performance than previous methods. Extensive experiments on a challenging visual tracking benchmark demonstrate that the proposed tracker outperforms most state-of-the-art methods.
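Schematically, and in our notation rather than the paper's exact formulation, the learning problem augments the usual ridge-regularized correlation objective with an l1 term, so that distractive feature channels can be driven exactly to zero:

```latex
\min_{h}\; \lVert y - X h \rVert_2^2 \;+\; \lambda_1 \lVert h \rVert_1 \;+\; \lambda_2 \lVert h \rVert_2^2
```

Here y is the desired correlation response, X the (circulant) feature data, and h the filter; the mix of l1 and l2 penalties is what makes the constraint an elastic net.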
Robust visual tracking via multiscale deep sparse networks
NASA Astrophysics Data System (ADS)
Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo
2017-04-01
In visual tracking, deep learning with offline pretraining can extract intrinsic and robust features, and has had significant success in overcoming tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without the offline pretraining process, and effectively exploits robust and powerful features through online training on limited labeled data alone. Meanwhile, the tracker builds four deep sparse networks of different scales, according to the target's profile type. During tracking, the tracker adaptively selects the matching tracking network in accordance with the initial target's profile type. It preserves the inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.
Failures of Perception in the Low-Prevalence Effect: Evidence From Active and Passive Visual Search
Hout, Michael C.; Walenchok, Stephen C.; Goldinger, Stephen D.; Wolfe, Jeremy M.
2017-01-01
In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%–34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled. PMID:25915073
A model for combined targeting and tracking tasks in computer applications.
Senanayake, Ransalu; Hoffmann, Errol R; Goonetilleke, Ravindra S
2013-11-01
Current models for targeted tracking are discussed and shown to be inadequate as a means of understanding the combined task of tracking, as in Drury's paradigm, and having a final target to be aimed at, as in Fitts' paradigm. It is shown that the task has to be split into components that are, in general, performed sequentially, each with a movement time component dependent on the difficulty of that component of the task. In some cases, the task time may be controlled by Fitts' task difficulty, and in others it may be dominated by Drury's task difficulty. Based on an experiment that captured movement time in combinations of visually controlled and ballistic movements, a model for movement time in targeted tracking was developed.
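A minimal sketch of a sequential two-component movement-time model in this spirit, using a steering-law (Drury) term for the tracking segment and a Fitts term for the terminal aiming segment; the coefficients are illustrative placeholders, not the paper's fitted values:

```python
from math import log2

def movement_time(track_len, tunnel_width, aim_dist, target_width,
                  a_d=0.1, b_d=0.05, a_f=0.1, b_f=0.15):
    """Total time = Drury (steering) component + Fitts (aiming) component,
    assuming the two segments are performed sequentially."""
    drury_id = track_len / tunnel_width           # steering-law index of difficulty
    fitts_id = log2(2 * aim_dist / target_width)  # Fitts index of difficulty
    return (a_d + b_d * drury_id) + (a_f + b_f * fitts_id)
```

Depending on the geometry, either term can dominate the total, which mirrors the paper's observation that some conditions are Fitts-controlled and others Drury-controlled.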
Scene-Aware Adaptive Updating for Visual Tracking via Correlation Filters
Zhang, Sirou; Qiao, Xiaoya
2017-01-01
In recent years, visual object tracking has been widely used in military guidance, human-computer interaction, road traffic, scene monitoring and many other fields. Tracking algorithms based on correlation filters have shown good performance in terms of accuracy and tracking speed. However, their performance is not satisfactory in scenes with scale variation, deformation, and occlusion. In this paper, we propose a scene-aware adaptive updating mechanism for visual tracking via a kernel correlation filter (KCF). First, a low-complexity scale estimation method is presented, in which the corresponding weights at five scales are employed to determine the final target scale. Then, an adaptive updating mechanism based on scene classification is presented. We classify video scenes into four categories by video content analysis. According to the target scene, we exploit the adaptive updating mechanism to update the kernel correlation filter to improve the robustness of the tracker, especially in scenes with scale variation, deformation, and occlusion. We evaluate our tracker on the CVPR2013 benchmark. The results obtained with the proposed algorithm are improved by 33.3%, 15%, 6%, 21.9% and 19.8% over those of the KCF tracker on scenes with scale variation, partial or long-term large-area occlusion, deformation, fast motion and out-of-view targets, respectively. PMID:29140311
Mark Tracking: Position/orientation measurements using 4-circle mark and its tracking experiments
NASA Technical Reports Server (NTRS)
Kanda, Shinji; Okabayashi, Keijyu; Maruyama, Tsugito; Uchiyama, Takashi
1994-01-01
Future space robots require position and orientation tracking with visual feedback control to track and capture floating objects and satellites. We developed a four-circle mark that is useful for this purpose. With this mark, four geometric center positions as feature points can be extracted from the mark by simple image processing. We also developed a position and orientation measurement method that uses the four feature points in our mark. The mark gave good enough image measurement accuracy to let space robots approach and contact objects. A visual feedback control system using this mark enabled a robot arm to track a target object accurately. The control system was able to tolerate a time delay of 2 seconds.
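Recovering position and orientation from four coplanar feature points is a standard perspective-n-point problem; a minimal sketch with OpenCV's solvePnP, in which the mark geometry, camera intrinsics, and detected pixel coordinates are all illustrative assumptions:

```python
import cv2
import numpy as np

# Four circle centres of the mark in its own frame (metres, coplanar).
object_pts = np.array([[-0.05, -0.05, 0.0], [0.05, -0.05, 0.0],
                       [0.05,  0.05, 0.0], [-0.05,  0.05, 0.0]])

# Their pixel coordinates from image processing (illustrative values).
image_pts = np.array([[310.0, 250.0], [410.0, 255.0],
                      [405.0, 350.0], [305.0, 345.0]])

K = np.array([[800.0, 0.0, 320.0],   # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
# tvec: mark position in the camera frame; rvec: orientation, which
# cv2.Rodrigues expands into a full rotation matrix.
R, _ = cv2.Rodrigues(rvec)
```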
Visual Tracking Based on Extreme Learning Machine and Sparse Representation
Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen
2015-01-01
The existing sparse representation-based visual trackers mostly suffer from being time-consuming and from poor robustness. To address these issues, a novel tracking method is presented that combines sparse representation with an emerging learning technique, namely the extreme learning machine (ELM). Specifically, visual tracking is divided into two consecutive processes. First, ELM is utilized to find the optimal separating hyperplane between target observations and background ones. The trained ELM classification function is thus able to efficiently remove most candidate samples belonging to the background, thereby reducing the total computational cost of the subsequent sparse representation. Second, to further combine ELM and sparse representation, the resultant confidence values (i.e., probabilities of being the target) of samples under the ELM classification function are used to construct a new manifold learning constraint term for the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used to derive the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix-form solution allows the candidate samples to be calculated in parallel, leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
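The ELM stage is cheap because the hidden layer is random and fixed, and only the output weights are fit, in closed form via a pseudo-inverse. A minimal binary-classification sketch; the layer size and sigmoid activation are illustrative choices:

```python
import numpy as np

def elm_train(X, y, n_hidden=100, seed=0):
    """Random, fixed hidden layer + least-squares output weights.
    X: samples x features; y: +1 (target) / -1 (background) labels."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ y             # closed-form output weights
    return W, b, beta

def elm_score(X, W, b, beta):
    """Confidence of being the target; low-scoring candidates can be
    discarded before the (expensive) sparse-representation stage."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```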
Alvarez, George A; Gill, Jonathan; Cavanagh, Patrick
2012-01-01
Previous studies have shown independent attentional selection of targets in the left and right visual hemifields during attentional tracking (Alvarez & Cavanagh, 2005) but not during a visual search (Luck, Hillyard, Mangun, & Gazzaniga, 1989). Here we tested whether multifocal spatial attention is the critical process that operates independently in the two hemifields. It is explicitly required in tracking (attend to a subset of object locations, suppress the others) but not in the standard visual search task (where all items are potential targets). We used a modified visual search task in which observers searched for a target within a subset of display items, where the subset was selected based on location (Experiments 1 and 3A) or based on a salient feature difference (Experiments 2 and 3B). The results show hemifield independence in this subset visual search task with location-based selection but not with feature-based selection; this effect cannot be explained by general difficulty (Experiment 4). Combined, these findings suggest that hemifield independence is a signature of multifocal spatial attention and highlight the need for cognitive and neural theories of attention to account for anatomical constraints on selection mechanisms. PMID:22637710
NASA Astrophysics Data System (ADS)
Bo, Nyan Bo; Deboeverie, Francis; Veelaert, Peter; Philips, Wilfried
2017-09-01
Occlusion is one of the most difficult challenges in visual tracking. We propose an occlusion handling framework to improve the performance of local tracking in a smart camera view in a multicamera network. We formulate an extensible energy function to quantify the quality of a camera's observation of a particular target by taking into account both person-person and object-person occlusion. Using this energy function, a smart camera assesses the quality of observations over all targets being tracked. When it cannot adequately observe a target, a smart camera estimates the quality of observation of the target from the viewpoints of other assisting cameras. If a camera with a better observation of the target is found, the tracking task for that target is carried out with the assistance of that camera. In our framework, only the positions of persons being tracked are exchanged between smart cameras, so the communication bandwidth requirement is very low. Performance evaluation of our method on challenging video sequences with frequent and severe occlusions shows that the accuracy of a baseline tracker is considerably improved. We also report a performance comparison to state-of-the-art trackers, which our method outperforms.
King, Adam C; Newell, Karl M
2015-10-01
The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks, to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed-gain condition or with selective enhancement, in the visual feedback display, of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with error lowest in the intermediate scaling condition, followed by the high scaling and fixed-gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the time scales in the learning and transfer of isometric force output frequency structures, consistent with 1/f process models of the time scales of motor output variability.
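Selective enhancement of a feedback frequency band can be sketched as band-pass filtering the force signal and adding the amplified component back into the displayed trace; the sample rate, filter order, and gain below are assumed values:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def scale_band(force, fs=100.0, band=(4.0, 8.0), gain=3.0):
    """Amplify only the chosen band (e.g. 4-8 Hz) of the displayed
    force trace, leaving the rest of the spectrum unchanged."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    component = filtfilt(b, a, force)        # zero-phase band-pass
    return force + (gain - 1.0) * component  # original + extra band energy
```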
Real-time classification of vehicles by type within infrared imagery
NASA Astrophysics Data System (ADS)
Kundegorski, Mikolaj E.; Akçay, Samet; Payen de La Garanderie, Grégoire; Breckon, Toby P.
2016-10-01
Real-time classification of vehicles into sub-category types poses a significant challenge within infra-red imagery due to the high levels of intra-class variation in thermal vehicle signatures caused by aspects of design, current operating duration and ambient thermal conditions. Despite these challenges, infra-red sensing offers significant generalized target object detection advantages in terms of all-weather operation and invariance to visual camouflage techniques. This work investigates the accuracy of a number of real-time object classification approaches for this task within the wider context of an existing initial object detection and tracking framework. Specifically we evaluate the use of traditional feature-driven bag of visual words and histogram of oriented gradient classification approaches against modern convolutional neural network architectures. Furthermore, we use classical photogrammetry, within the context of current target detection and classification techniques, as a means of approximating 3D target position within the scene based on this vehicle type classification. Based on photogrammetric estimation of target position, we then illustrate the use of regular Kalman filter based tracking operating on actual 3D vehicle trajectories. Results are presented using a conventional thermal-band infra-red (IR) sensor arrangement where targets are tracked over a range of evaluation scenarios.
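The photogrammetric position estimate reduces to the pinhole relation between a known real-world target dimension and its size in pixels; a minimal sketch in which the focal length and per-class height priors are illustrative assumptions:

```python
# Pinhole model: bbox_height_px / focal_px = real_height_m / range_m.
ASSUMED_HEIGHT_M = {"car": 1.5, "van": 2.0, "truck": 3.5}  # per-class priors

def estimate_range_m(vehicle_class, bbox_height_px, focal_px=1000.0):
    """Approximate target range from its classified type and the
    height of its detection bounding box."""
    return ASSUMED_HEIGHT_M[vehicle_class] * focal_px / bbox_height_px

print(estimate_range_m("car", bbox_height_px=30))  # -> 50.0 m
```

Ranges estimated this way per frame provide the 3D trajectories on which the Kalman filter based tracking then operates.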
NASA Astrophysics Data System (ADS)
Linte, Cristian A.; Rettmann, Maryam E.; Dilger, Ben; Gunawan, Mia S.; Arunachalam, Shivaram P.; Holmes, David R., III; Packer, Douglas L.; Robb, Richard A.
2012-02-01
The novel prototype system for advanced visualization for image-guided left atrial ablation therapy developed in our laboratory permits ready integration of multiple imaging modalities, surgical instrument tracking, interventional devices and electro-physiologic data. This technology allows subject-specific procedure planning and guidance using 3D dynamic, patient-specific models of the patient's heart, augmented with real-time intracardiac echocardiography (ICE). In order for the 2D ICE images to provide intuitive visualization for accurate catheter-to-target navigation, the transducer must be tracked, so that the acquired images can be appropriately presented with respect to the patient-specific anatomy. Here we present the implementation of a previously developed ultrasound calibration technique for a magnetically tracked ICE transducer, along with a series of evaluation methods to ensure accurate imaging and faithful representation of the imaged structures. Using an engineering-designed phantom, target localization accuracy is assessed by comparing known target locations with their transformed locations inferred from the tracked US images. In addition, the 3D volume reconstruction accuracy is estimated by comparing a truth volume to that reconstructed from sequential 2D US images. Validation studies emulating the clinical setting are conducted using a patient-specific left atrial phantom, in which target localization error was assessed for clinically relevant surgical targets represented by nylon fiducials implanted within the endocardial wall of the phantom. Our studies have demonstrated 2.4 +/- 0.8 mm target localization error in the engineering-designed evaluation phantoms, 94.8 +/- 4.6 % volume reconstruction accuracy, and 3.1 +/- 1.2 mm target localization error in the left atrial-mimicking phantom. These results are consistent with those disseminated in the literature and also with the accuracy constraints imposed by the employed technology and the clinical application.
Spatial attention during saccade decisions.
Jonikaitis, Donatas; Klapetek, Anna; Deubel, Heiner
2017-07-01
Behavioral measures of decision making are usually limited to observations of decision outcomes. In the present study, we made use of the fact that oculomotor and sensory selection are closely linked to track oculomotor decision making before oculomotor responses are made. We asked participants to make a saccadic eye movement to one of two memorized target locations and observed that visual sensitivity increased at both the chosen and the nonchosen saccade target locations, with a clear bias toward the chosen target. The time course of changes in visual sensitivity was related to saccadic latency, with the competition between the chosen and nonchosen targets resolved faster before short-latency saccades. On error trials, we observed an increased competition between the chosen and nonchosen targets. Moreover, oculomotor selection and visual sensitivity were influenced by top-down and bottom-up factors as well as by selection history and predicted the direction of saccades. Our findings demonstrate that saccade decisions have direct visual consequences and show that decision making can be traced in the human oculomotor system well before choices are made. Our results also indicate a strong association between decision making, saccade target selection, and visual sensitivity. NEW & NOTEWORTHY We show that saccadic decisions can be tracked by measuring spatial attention. Spatial attention is allocated in parallel to the two competing saccade targets, and the time course of spatial attention differs for fast-slow and for correct-erroneous decisions. Saccade decisions take the form of a competition between potential saccade goals, which is associated with spatial attention allocation to those locations.
Action-Driven Visual Object Tracking With Deep Reinforcement Learning.
Yun, Sangdoo; Choi, Jongwon; Yoo, Youngjoon; Yun, Kimin; Choi, Jin Young
2018-06-01
In this paper, we propose an efficient visual tracker, which directly captures a bounding box containing the target object in a video by means of sequential actions learned using deep neural networks. The proposed deep neural network to control tracking actions is pretrained using various training video sequences and fine-tuned during actual tracking for online adaptation to a change of target and background. The pretraining is done by utilizing deep reinforcement learning (RL) as well as supervised learning. The use of RL enables even partially labeled data to be successfully utilized for semisupervised learning. Through the evaluation of the object tracking benchmark data set, the proposed tracker is validated to achieve a competitive performance at three times the speed of existing deep network-based trackers. The fast version of the proposed method, which operates in real time on graphics processing unit, outperforms the state-of-the-art real-time trackers with an accuracy improvement of more than 8%.
Enhanced compressed sensing for visual target tracking in wireless visual sensor networks
NASA Astrophysics Data System (ADS)
Qiang, Guo
2017-11-01
Moving object tracking in wireless sensor networks (WSNs) has been widely applied in various fields. Designing low-power WSNs under the sensor's limited resources, such as energy and bandwidth constraints, is a high priority. However, most existing works focus on only a single optimization criterion among these conflicting goals. An efficient compressive sensing technique based on a customized memory gradient pursuit algorithm with early termination in WSNs is presented, which strikes compelling trade-offs among energy dissipation for wireless transmission, bandwidth usage, and storage. The proposed approach then adopts an unscented particle filter to predict the location of the target. The experimental results, together with a theoretical analysis, demonstrate the substantially superior effectiveness of the proposed model and framework in regard to energy and speed under the resource limitations of a visual sensor node.
Real-time visual target tracking: two implementations of velocity-based smooth pursuit
NASA Astrophysics Data System (ADS)
Etienne-Cummings, Ralph; Longo, Paul; Van der Spiegel, Jan; Mueller, Paul
1995-06-01
Two systems for velocity-based visual target tracking are presented. The first two computational layers of both implementations are composed of VLSI photoreceptors (logarithmic compression) and edge detection (difference-of-Gaussians) arrays that mimic the outer-plexiform layer of mammalian retinas. The subsequent processing layers for measuring target velocity and realizing smooth pursuit tracking are implemented in software and at the focal plane in the two versions, respectively. One implementation uses a hybrid of a PC and a silicon retina (39 × 38 pixels) operating at 333 frames/second. The software implementation of a real-time optical flow measurement algorithm is used to determine the target velocity, and a closed-loop control system zeroes the relative velocity of the target and retina. The second implementation is a single VLSI chip, which contains a linear array of photoreceptors, edge detectors, and motion detectors at the focal plane. The closed-loop control system is also included on chip. This chip realizes all the computational properties of the hybrid system. The effects of background motion, target occlusion, and disappearance are studied as a function of retinal size and spatial distribution of the measured motion vectors (i.e., foveal/peripheral and diverging/converging measurement schemes). The hybrid system, which tested successfully, tracks targets moving as fast as 3 m/s at 1.3 meters from the camera and can compensate for arbitrary external movements of its mounting platform. The single chip version, whose circuits tested successfully, can handle targets moving at 10 m/s.
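The closed-loop principle, zeroing the relative velocity of target and retina, amounts to a simple velocity servo; the gain and the measurement and camera interfaces below are assumptions for illustration.

```python
# Sketch of velocity-based smooth pursuit: drive camera velocity so the
# measured retinal slip (target velocity relative to the sensor) -> 0.
def pursuit_step(measure_slip, camera, k_p=0.8):
    """One control update; the hybrid system runs this at 333 frames/s."""
    slip = measure_slip()            # relative velocity, e.g. pixels/s
    camera.velocity += k_p * slip    # proportional correction
    return slip
```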
The functional consequences of social distraction: Attention and memory for complex scenes.
Doherty, Brianna Ruth; Patai, Eva Zita; Duta, Mihaela; Nobre, Anna Christina; Scerif, Gaia
2017-01-01
Cognitive scientists have long proposed that social stimuli attract visual attention even when task irrelevant, but the consequences of this privileged status for memory are unknown. To address this, we combined computational approaches, eye-tracking methodology, and individual-differences measures. Participants searched for targets in scenes containing social or non-social distractors equated for low-level visual salience. Subsequent memory precision for target locations was tested. Individual differences in autistic traits and social anxiety were also measured. Eye-tracking revealed significantly more attentional capture by social compared to non-social distractors. Critically, memory precision for target locations was poorer for social scenes. This effect was moderated by social anxiety, with anxious individuals remembering target locations better under conditions of social distraction. These findings shed further light on the privileged attentional status of social stimuli and its functional consequences for memory across individuals. Copyright © 2016. Published by Elsevier B.V.
Multiple Object Tracking Reveals Object-Based Grouping Interference in Children with ASD
ERIC Educational Resources Information Center
Van der Hallen, Ruth; Evers, Kris; de-Wit, Lee; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2018-01-01
The multiple object tracking (MOT) paradigm has proven its value in targeting a number of aspects of visual cognition. This study used MOT to investigate the effect of object-based grouping, both in children with and without autism spectrum disorder (ASD). A modified MOT task was administered to both groups, who had to track and distinguish four…
Exhausting Attentional Tracking Resources with a Single Fast-Moving Object
ERIC Educational Resources Information Center
Holcombe, Alex O.; Chen, Wei-Ying
2012-01-01
Driving on a busy road, eluding a group of predators, or playing a team sport involves keeping track of multiple moving objects. In typical laboratory tasks, the number of visual targets that humans can track is about four. Three types of theories have been advanced to explain this limit. The fixed-limit theory posits a set number of attentional…
A framework for small infrared target real-time visual enhancement
NASA Astrophysics Data System (ADS)
Sun, Xiaoliang; Long, Gucan; Shang, Yang; Liu, Xiaolin
2015-03-01
This paper proposes a framework for real-time visual enhancement of small infrared targets. The framework consists of three parts: energy accumulation for small infrared target enhancement, noise suppression, and weighted fusion. A dynamic-programming-based track-before-detect algorithm is adopted in the energy accumulation step to detect the target accurately and enhance its intensity notably. In the noise suppression step, the target region is weighted by a Gaussian mask according to the target's Gaussian shape. To fuse the processed target region and the unprocessed background smoothly, the intensity in the target region is used as the weight in the fusion. Experiments on real small infrared target images indicate that the proposed framework enhances the small infrared target markedly and improves the image's visual quality notably. The proposed framework outperforms traditional algorithms in enhancing small infrared targets, especially for images in which the target is barely visible.
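A minimal sketch of the noise-suppression and fusion steps, assuming a square target patch and illustrative mask parameters:

```python
# Weight the enhanced target region by a Gaussian mask (matching the
# target's Gaussian shape) and blend it back into the raw image.
import numpy as np

def gaussian_mask(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

def fuse_region(image, enhanced, top_left, sigma=2.0):
    """Blend a square enhanced target patch into the background image."""
    r, c = top_left
    s = enhanced.shape[0]
    w = gaussian_mask(s, sigma)        # intensity-shaped fusion weight
    out = image.astype(float).copy()
    out[r:r + s, c:c + s] = w * enhanced + (1 - w) * out[r:r + s, c:c + s]
    return out
```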
Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model
Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal
2016-01-01
In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomously tracking and chasing a moving target. The first main component of this algorithm is a combined global matching and local tracking approach: an improved binary descriptor is developed for global feature matching, and an iterative Lucas–Kanade optical flow algorithm is employed for local feature tracking. The second main module is an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third module is a heuristic local outlier factor (to the best of our knowledge, utilized for the first time to deal with outlier features in a visual tracking application), which further improves the representation of the target object by formulating outlier feature detection as a binary classification problem over the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor at 640 × 512 image resolution and outperforms the most popular state-of-the-art trackers in terms of robustness, efficiency, and accuracy. PMID:27589769
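A sketch of the local tracking step with a forward-backward consistency check, in the spirit of the LGF's forward-backward dissimilarity test, using OpenCV's pyramidal Lucas-Kanade; the rejection threshold is an illustrative assumption.

```python
# Track features forward, re-track them backward, and keep only points
# that return close to where they started.
import numpy as np
import cv2

def track_points(prev_gray, curr_gray, pts, fb_thresh=1.0):
    """pts: (N, 1, 2) float32 corner locations in prev_gray."""
    fwd, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    bwd, st2, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, fwd, None)
    fb_err = np.linalg.norm(pts - bwd, axis=2).ravel()
    keep = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_err < fb_thresh)
    return fwd[keep], keep
```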
Contrast and assimilation in motion perception and smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2007-09-01
The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
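Written out directly, the two computations the study contrasts are as below; equal weighting in the average is a simplification, since the abstract specifies an average without giving weights.

```python
def perceived_velocity(target_v, context_v):
    return target_v - context_v        # motion contrast (perception)

def pursuit_velocity(target_v, context_v):
    return (target_v + context_v) / 2  # motion assimilation (pursuit)
```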
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Bai, Shengjian; Xu, Wanying
2014-07-01
Infrared moving target detection is an important part of infrared technology. We introduce a novel method for detecting small moving infrared targets against complicated backgrounds, based on tracking interest points. First, difference-of-Gaussians (DOG) filters are used to detect a group of interest points (including the moving targets). Second, a small-target tracking method inspired by the human visual system (HVS) is used to track these interest points over several frames, yielding the correlations between interest points in the first frame and the last frame. Finally, a new clustering method, named R-means, is proposed to divide these interest points into two groups according to these correlations: target points and background points. The target-to-clutter ratio (TCR) and receiver operating characteristic (ROC) curves are computed experimentally to compare the performance of the proposed method with five other established methods. The results show that the proposed method discriminates targets from clutter better and has a lower false-alarm rate than existing moving target detection methods.
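The first stage, DOG filtering followed by local-maximum selection, might look like the sketch below; the sigmas, neighborhood size, and threshold are illustrative choices.

```python
# Difference-of-Gaussians interest-point nomination.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_interest_points(frame, s1=1.0, s2=2.0, thresh=10.0):
    f = frame.astype(float)
    dog = gaussian_filter(f, s1) - gaussian_filter(f, s2)
    peaks = (dog == maximum_filter(dog, size=5)) & (dog > thresh)
    return np.argwhere(peaks)          # (row, col) candidate points
```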
Delle Monache, Sergio; Lacquaniti, Francesco; Bosco, Gianfranco
2015-02-01
Manual interceptions are known to depend critically on integration of visual feedback information and experience-based predictions of the interceptive event. Within this framework, coupling between gaze and limb movements might also contribute to the interceptive outcome, since eye movements afford acquisition of high-resolution visual information. We investigated this issue by analyzing subjects' head-fixed oculomotor behavior during manual interceptions. Subjects moved a mouse cursor to intercept computer-generated ballistic trajectories either congruent with Earth's gravity or perturbed with weightlessness (0 g) or hypergravity (2 g) effects. In separate sessions, trajectories were either fully visible or occluded before interception to enforce visual prediction. Subjects' oculomotor behavior was classified in terms of the amount of time they gazed at different visual targets and the overall number of saccades. Then, by way of multivariate analyses, we assessed the following: (1) whether eye movement patterns depended on targets' laws of motion and occlusions; and (2) whether interceptive performance was related to the oculomotor behavior. First, we found that eye movement patterns depended significantly on targets' laws of motion and occlusion, suggesting predictive mechanisms. Second, subjects coupled oculomotor and interceptive behavior differently depending on whether targets were visible or occluded. With visible targets, subjects made smaller interceptive errors if they gazed longer at the mouse cursor. Instead, with occluded targets, they achieved better performance by increasing target-tracking accuracy and by avoiding gaze shifts near interception, suggesting that precise ocular tracking provided better trajectory predictions for the interceptive response.
Binocular Vision-Based Position and Pose of Hand Detection and Tracking in Space
NASA Astrophysics Data System (ADS)
Jun, Chen; Wenjun, Hou; Qing, Sheng
Building on studies of image segmentation, the CamShift target tracking algorithm, and stereo vision models, an improved algorithm based on frame differencing and a new spatial point positioning model are proposed, and a binocular visual motion tracking system was constructed to verify the improved algorithm and the new model. The system addresses the detection and tracking of the spatial position and pose of the hand.
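For a rectified binocular rig, the spatial point positioning step reduces to the standard disparity relation; the focal length, baseline, and principal point below are assumed calibration parameters.

```python
# Recover a 3-D point from matched left/right image coordinates.
def triangulate(xl, yl, xr, f, b, cx, cy):
    """f in pixels, baseline b in metres; returns (X, Y, Z) in metres."""
    disparity = xl - xr                # larger disparity = nearer point
    Z = f * b / disparity
    X = (xl - cx) * Z / f
    Y = (yl - cy) * Z / f
    return X, Y, Z
```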
Wästlund, Erik; Shams, Poja; Otterbring, Tobias
2018-01-01
In visual marketing, the truism that "unseen is unsold" means that products that are not noticed will not be sold. This truism rests on the idea that the consumer choice process is heavily influenced by visual search. However, given that the majority of available products are not seen by consumers, this article examines the role of peripheral vision in guiding attention during the consumer choice process. In two eye-tracking studies, one conducted in a lab facility and the other conducted in a supermarket, the authors investigate the role and limitations of peripheral vision. The results show that peripheral vision is used to direct visual attention when discriminating between target and non-target objects in an eye-tracking laboratory. Target and non-target similarity, as well as visual saliency of non-targets, constitute the boundary conditions for this effect, which generalizes from instruction-based laboratory tasks to preference-based choice tasks in a real supermarket setting. Thus, peripheral vision helps customers to devote a larger share of attention to relevant products during the consumer choice process. Taken together, the results show how the creation of consideration sets (sets of possible choice options) relies on both goal-directed attention and peripheral vision. These results could explain how visually similar packaging positively influences market leaders, while making novel brands almost invisible on supermarket shelves. The findings show that even though unsold products might be unseen, in the sense that they have not been directly observed, they might still have been evaluated and excluded by means of peripheral vision. This article is based on controlled lab experiments as well as a field study conducted in a complex retail environment. Thus, the findings are valid both under controlled and ecologically valid conditions. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Tracked 3D ultrasound in radio-frequency liver ablation
NASA Astrophysics Data System (ADS)
Boctor, Emad M.; Fichtinger, Gabor; Taylor, Russell H.; Choti, Michael A.
2003-05-01
Recent studies have shown that radio frequency (RF) ablation is a simple, safe, and potentially effective treatment for selected patients with liver metastases. Despite all recent therapeutic advancements, however, intra-procedural target localization and precise, consistent placement of the tissue ablator device are still unsolved problems. Various imaging modalities, including ultrasound (US) and computed tomography (CT), have been tried as guidance modalities. Transcutaneous US imaging, due to its real-time nature, may be beneficial, but it fails to adequately visualize the tumor in many cases. Intraoperative or laparoscopic US, on the other hand, provides improved visualization and target imaging. This paper describes a system for computer-assisted RF ablation of liver tumors, combining navigational tracking of a conventional imaging ultrasound probe to produce 3D ultrasound imaging with a tracked RF ablation device supported by a passive mechanical arm and spatially registered to the ultrasound volume.
Filling in the gaps: Anticipatory control of eye movements in chronic mild traumatic brain injury.
Diwakar, Mithun; Harrington, Deborah L; Maruta, Jun; Ghajar, Jamshid; El-Gabalawy, Fady; Muzzatti, Laura; Corbetta, Maurizio; Huang, Ming-Xiong; Lee, Roland R
2015-01-01
A barrier in the diagnosis of mild traumatic brain injury (mTBI) stems from the lack of measures that are adequately sensitive in detecting mild head injuries. MRI and CT are typically negative in mTBI patients with persistent symptoms of post-concussive syndrome (PCS), and characteristic difficulties in sustaining attention often go undetected on neuropsychological testing, which can be insensitive to momentary lapses in concentration. Conversely, visual tracking strongly depends on sustained attention over time and is impaired in chronic mTBI patients, especially when tracking an occluded target. This finding suggests deficient internal anticipatory control in mTBI, the neural underpinnings of which are poorly understood. The present study investigated the neuronal bases for deficient anticipatory control during visual tracking in 25 chronic mTBI patients with persistent PCS symptoms and 25 healthy control subjects. The task was performed while undergoing magnetoencephalography (MEG), which allowed us to examine whether neural dysfunction associated with anticipatory control deficits was due to altered alpha, beta, and/or gamma activity. Neuropsychological examinations characterized cognition in both groups. During MEG recordings, subjects tracked a predictably moving target that was either continuously visible or randomly occluded (gap condition). MEG source-imaging analyses tested for group differences in alpha, beta, and gamma frequency bands. The results showed executive functioning, information processing speed, and verbal memory deficits in the mTBI group. Visual tracking was impaired in the mTBI group only in the gap condition. Patients showed greater error than controls before and during target occlusion, and were slower to resynchronize with the target when it reappeared. Impaired tracking concurred with abnormal beta activity, which was suppressed in the parietal cortex, especially the right hemisphere, and enhanced in left caudate and frontal-temporal areas. Regional beta-amplitude demonstrated high classification accuracy (92%) compared to eye-tracking (65%) and neuropsychological variables (80%). These findings show that deficient internal anticipatory control in mTBI is associated with altered beta activity, which is remarkably sensitive given the heterogeneity of injuries.
Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.
Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong
2016-08-01
The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.
NASA Technical Reports Server (NTRS)
Oliver, B. M.; Gower, J. F. R.
1977-01-01
A data acquisition system using a Litton LTN-51 inertial navigation unit (INU) was tested and used for aircraft track recovery and for locating and tracking targets at sea from the air. The characteristic position drift of the INU is compensated for by sighting landmarks of accurately known position at discrete time intervals, using a visual sighting system in the transparent nose of the Beechcraft 18 aircraft used. For an aircraft altitude of about 300 m, theoretical and experimental tests indicate that calculated aircraft and/or target positions obtained from the interpolated INU drift curve will be accurate to within 10 m for landmark sightings spaced approximately 15 minutes apart. For applications in coastal oceanography, such as surface current mapping by tracking artificial targets, the system allows a broad area to be covered without the use of high altitude photography and its attendant needs for large targets and clear weather.
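The drift compensation can be sketched as a per-axis linear interpolation of the landmark-measured INU error; the two-axis layout and increasing fix times are assumptions of this sketch.

```python
# Correct INU positions by interpolating the drift measured at landmark
# sightings (t_fix must be increasing for numpy.interp).
import numpy as np

def correct_positions(t, pos, t_fix, drift_at_fix):
    """pos: (N, 2) INU positions; drift_at_fix: (M, 2) errors at t_fix."""
    drift = np.column_stack([np.interp(t, t_fix, drift_at_fix[:, i])
                             for i in range(2)])
    return pos - drift
```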
Seo, Joonho; Koizumi, Norihiro; Funamoto, Takakazu; Sugita, Naohiko; Yoshinaka, Kiyoshi; Nomiya, Akira; Homma, Yukio; Matsumoto, Yoichiro; Mitsuishi, Mamoru
2011-06-01
Applying ultrasound (US)-guided high-intensity focused ultrasound (HIFU) therapy to kidney tumours is currently very difficult, owing to the poorly visualized tumour area and the renal motion induced by human respiration. In this research, we propose new methods to track the indistinct tumour area and to compensate for the respiratory tumour motion during US-guided HIFU treatment. To track indistinct tumour areas, we detect the US speckle change created by HIFU irradiation: HIFU thermal ablation coagulates tissue in the tumour area, and the intraoperatively created coagulated lesion (CL) serves as a spatial landmark for US visual tracking. Specifically, the condensation algorithm was applied for robust, real-time tracking of the CL speckle pattern in the sequence of US images. Moreover, biplanar US imaging was used to locate the three-dimensional position of the CL, and a three-actuator system drives the end-effector to compensate for the motion. Finally, we tested the proposed method using a newly devised phantom model that enables both visual tracking and a thermal response to HIFU irradiation. In the experiment, after generation of the CL in the phantom kidney, the end-effector successfully synchronized with the phantom motion, which was modelled on captured motion data for the human kidney. The accuracy of the motion compensation was evaluated by the error between the end-effector and the respiratory motion, whose RMS value was approximately 2 mm. This research shows that a HIFU-induced CL provides a very good landmark for target motion tracking. By using the CL tracking method, target motion compensation can be realized in the US-guided robotic HIFU system. Copyright © 2011 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as with other tracking methods such as radio frequency tags.
The contributions of visual and central attention to visual working memory.
Souza, Alessandra S; Oberauer, Klaus
2017-10-01
We investigated the role of two kinds of attention, visual and central, in the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention, visual or central, is involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to the maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.
NASA Technical Reports Server (NTRS)
Krauzlis, Rich; Stone, Leland; Null, Cynthia H. (Technical Monitor)
1998-01-01
When viewing objects, primates use a combination of saccadic and pursuit eye movements to stabilize the retinal image of the object of regard within the high-acuity region near the fovea. Although these movements involve widespread regions of the nervous system, they mix seamlessly in normal behavior. Saccades are discrete movements that quickly direct the eyes toward a visual target, thereby translating the image of the target from an eccentric retinal location to the fovea. In contrast, pursuit is a continuous movement that slowly rotates the eyes to compensate for the motion of the visual target, minimizing the blur that can compromise visual acuity. While other mammalian species can generate smooth optokinetic eye movements - which track the motion of the entire visual surround - only primates can smoothly pursue a single small element within a complex visual scene, regardless of the motion elsewhere on the retina. This ability likely reflects the greater ability of primates to segment the visual scene, to identify individual visual objects, and to select a target of interest.
Shape and texture fused recognition of flying targets
NASA Astrophysics Data System (ADS)
Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás
2011-06-01
This paper presents visual detection and recognition of flying targets (e.g., planes, missiles) based on automatically extracted shape and object texture information, for application areas such as alerting, recognition, and tracking. Targets are extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done by comparison with shape- and texture-based query results on a previously gathered real-life object data set. Application areas include passive defense scenarios with automatic object detection and tracking using cheap commodity hardware components (CPU, camera, and GPS).
Improvement in Visual Target Tracking for a Mobile Robot
NASA Technical Reports Server (NTRS)
Kim, Won; Ansar, Adnan; Madison, Richard
2006-01-01
In an improvement of the visual-target-tracking software used aboard a mobile robot (rover) of the type used to explore the Martian surface, an affine-matching algorithm has been replaced by a combination of a normalized-cross-correlation (NCC) algorithm and a template-image-magnification algorithm. Although neither NCC nor template-image magnification is new, using both of them to increase the reliability of feature matching is new. In operation, a template image of a target is obtained from a previous rover position; the template image is then magnified based on the estimated change in target distance from the previous rover position to the current one. For this purpose, the target distance at the previous rover position is determined by stereoscopy, while the target distance at the current rover position is calculated from an estimate of the current pose of the rover. Magnifying the template image by the amount corresponding to the estimated change in distance yields the best template image to match against the image acquired at the current rover position.
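The magnify-then-match procedure can be sketched with OpenCV's normalized cross-correlation; the distance estimates are inputs here, as in the text above.

```python
# Scale the stored template by the ratio of estimated target distances,
# then localize it by normalized cross-correlation.
import cv2

def match_with_magnification(template, image, d_prev, d_curr):
    scale = d_prev / d_curr            # nearer target -> larger template
    t = cv2.resize(template, None, fx=scale, fy=scale,
                   interpolation=cv2.INTER_LINEAR)
    result = cv2.matchTemplate(image, t, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)   # best NCC peak
    return top_left, score
```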
Multiple feature fusion via covariance matrix for visual tracking
NASA Astrophysics Data System (ADS)
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui
2018-04-01
To handle complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of tracking. Within the framework of a quantum genetic algorithm, the region covariance descriptor is used to fuse color, edge, and texture features, and a fast covariance intersection algorithm is used to update the model. The low dimensionality of the region covariance descriptor, the fast convergence and strong global optimization ability of the quantum genetic algorithm, and the speed of the fast covariance intersection algorithm together improve the computational efficiency of the fusion, matching, and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. Experiments show that the proposed algorithm achieves fast, robust tracking and effectively handles interference from occlusion, rotation, deformation, motion blur, and the like.
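A region covariance descriptor over per-pixel features might be computed as below; intensity and gradient channels stand in for the paper's color, edge, and texture features.

```python
# Region covariance descriptor: covariance of per-pixel feature vectors
# over a patch; low-dimensional and independent of patch size.
import numpy as np

def region_covariance(patch):
    """patch: 2-D grayscale region -> d x d covariance descriptor."""
    p = patch.astype(float)
    gy, gx = np.gradient(p)
    feats = np.stack([p.ravel(), gx.ravel(), gy.ravel(),
                      (np.abs(gx) + np.abs(gy)).ravel()])
    return np.cov(feats)
```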
Man-in-the-loop study of filtering in airborne head tracking tasks
NASA Technical Reports Server (NTRS)
Lifshitz, S.; Merhav, S. J.
1992-01-01
A human-factors study is conducted of problems due to vibrations during the use of a helmet-mounted display (HMD) in tracking tasks whose major factors are target motion and head vibration. A method is proposed for improving aiming accuracy in such tracking tasks on the basis of (1) head-motion measurement and (2) the shifting of the reticle in the HMD in ways that inhibit much of the involuntary apparent motion of the reticle, relative to the target, and the nonvoluntary motion of the teleoperated device. The HMD inherently furnishes the visual feedback required by this scheme.
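The reticle-shifting idea can be reduced to a one-line compensation law; the unity gain and the availability of a head-vibration measurement are assumptions of this sketch.

```python
# Counter-shift the HMD reticle by the measured involuntary head motion
# so vibration does not appear as reticle motion relative to the target.
def reticle_position(commanded_pos, measured_vibration, k=1.0):
    return commanded_pos - k * measured_vibration
```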
Motivation and short-term memory in visual search: Attention's accelerator revisited.
Schneider, Daniel; Bonmassar, Claudia; Hickey, Clayton
2018-05-01
A cue indicating the possibility of cash reward will cause participants to perform memory-based visual search more efficiently. A recent study has suggested that this performance benefit might reflect the use of multiple memory systems: when needed, participants may maintain the to-be-remembered object in both long-term and short-term visual memory, with this redundancy benefitting target identification during search (Reinhart, McClenahan & Woodman, 2016). Here we test this compelling hypothesis. We had participants complete a memory-based visual search task involving a reward cue that either preceded presentation of the to-be-remembered target (pre-cue) or followed it (retro-cue). Following earlier work, we tracked memory representation using two components of the event-related potential (ERP): the contralateral delay activity (CDA), reflecting short-term visual memory, and the anterior P170, reflecting long-term storage. We additionally tracked attentional preparation and deployment in the contingent negative variation (CNV) and N2pc, respectively. Results show that only the reward pre-cue impacted our ERP indices of memory. However, both types of cue elicited a robust CNV, reflecting an influence on task preparation, both had equivalent impact on deployment of attention to the target, as indexed in the N2pc, and both had equivalent impact on visual search behavior. Reward prospect thus has an influence on memory-guided visual search, but this does not appear to be necessarily mediated by a change in the visual memory representations indexed by CDA. Our results demonstrate that the impact of motivation on search is not a simple product of improved memory for target templates. Copyright © 2017 Elsevier Ltd. All rights reserved.
Particle Filtering with Region-based Matching for Tracking of Partially Occluded and Scaled Targets*
Nakhmani, Arie; Tannenbaum, Allen
2012-01-01
Visual tracking of arbitrary targets in clutter is important for a wide range of military and civilian applications. We propose a general framework for the tracking of scaled and partially occluded targets, which do not necessarily have prominent features. The algorithm proposed in the present paper utilizes a modified normalized cross-correlation as the likelihood for a particle filter. The algorithm divides the template, selected by the user in the first video frame, into numerous patches. The matching process of these patches by particle filtering allows one to handle the target’s occlusions and scaling. Experimental results with fixed rectangular templates show that the method is reliable for videos with nonstationary, noisy, and cluttered background, and provides accurate trajectories in cases of target translation, scaling, and occlusion. PMID:22506088
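The core update, a modified NCC score serving as the particle likelihood, can be sketched as follows; the `ncc` scoring callable and the motion-noise scale are assumptions.

```python
# One predict-weight-resample step of a particle filter whose weights
# come from normalized cross-correlation against the template patches.
import numpy as np

def pf_step(particles, image, template, ncc, noise=3.0):
    """particles: (N, 2) candidate target centers for one frame."""
    n = len(particles)
    particles = particles + np.random.normal(0.0, noise, particles.shape)
    scores = np.array([ncc(image, template, p) for p in particles])
    weights = np.maximum(scores, 1e-6)   # NCC as (nonnegative) likelihood
    weights /= weights.sum()
    keep = np.random.choice(n, size=n, p=weights)   # resample
    return particles[keep]
```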
Attention Modulates Spatial Precision in Multiple-Object Tracking.
Srivastava, Nisheeth; Vul, Ed
2016-01-01
We present a computational model of multiple-object tracking that makes trial-level predictions about the allocation of visual attention and the effect of this allocation on observers' ability to track multiple objects simultaneously. This model follows the intuition that increased attention to a location increases the spatial resolution of its internal representation. Using a combination of empirical and computational experiments, we demonstrate the existence of a tight coupling between cognitive and perceptual resources in this task: low-level tracking of objects generates bottom-up predictions of error likelihood, and high-level attention allocation selectively reduces error probabilities at attended locations while increasing them at non-attended locations. Whereas earlier models of multiple-object tracking have predicted the big-picture relationship between stimulus complexity and response accuracy, our approach makes accurate predictions of both the macro-scale effect of target number and velocity on tracking difficulty and micro-scale variations in difficulty across individual trials and targets arising from the idiosyncratic within-trial interactions of targets and distractors. Copyright © 2016 Cognitive Science Society, Inc.
Effects of reward on the accuracy and dynamics of smooth pursuit eye movements.
Brielmann, Aenne A; Spering, Miriam
2015-08-01
Reward modulates behavioral choices and biases goal-oriented behavior, such as eye or hand movements, toward locations or stimuli associated with higher rewards. We investigated reward effects on the accuracy and timing of smooth pursuit eye movements in 4 experiments. Eye movements were recorded in participants tracking a moving visual target on a computer monitor. Before target motion onset, a monetary reward cue indicated whether participants could earn money by tracking accurately, or whether the trial was unrewarded (Experiments 1 and 2, n = 11 each). Reward significantly improved eye-movement accuracy across different levels of task difficulty. Improvements were seen even in the earliest phase of the eye movement, within 70 ms of tracking onset, indicating that reward impacts visual-motor processing at an early level. We obtained similar findings when reward was not precued but explicitly associated with the pursuit target (Experiment 3, n = 16); critically, these results were not driven by stimulus prevalence or other factors such as preparation or motivation. Numerical cues (Experiment 4, n = 9) were not effective. (c) 2015 APA, all rights reserved.
Visual tracking for multi-modality computer-assisted image guidance
NASA Astrophysics Data System (ADS)
Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp
2017-03-01
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support the placement of imaging probes and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
Influence of social presence on eye movements in visual search tasks.
Liu, Na; Yu, Ruifeng
2017-12-01
This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, participants made fewer and shorter fixations, larger saccades, and shorter scan paths in simple search tasks, and more and longer fixations, smaller saccades, and longer scan paths in complex search tasks when working with an audience. Saccade velocity and pupil diameter in the audience-present condition were larger than those in the working-alone condition. No significant change in target fixation number was observed between the two social presence conditions. Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. The results clarify how social presence alters the characteristics of oculomotor scanning in visual search.
EEG and Eye Tracking Signatures of Target Encoding during Structured Visual Search
Brouwer, Anne-Marie; Hogervorst, Maarten A.; Oudejans, Bob; Ries, Anthony J.; Touryan, Jonathan
2017-01-01
EEG and eye tracking variables are potential sources of information about the underlying processes of target detection and storage during visual search. Fixation duration, pupil size and event related potentials (ERPs) locked to the onset of fixation or saccade (saccade-related potentials, SRPs) have been reported to differ dependent on whether a target or a non-target is currently fixated. Here we focus on the question of whether these variables also differ between targets that are subsequently reported (hits) and targets that are not (misses). Observers were asked to scan 15 locations that were consecutively highlighted for 1 s in pseudo-random order. Highlighted locations displayed either a target or a non-target stimulus with two, three or four targets per trial. After scanning, participants indicated which locations had displayed a target. To induce memory encoding failures, participants concurrently performed an aurally presented math task (high load condition). In a low load condition, participants ignored the math task. As expected, more targets were missed in the high compared with the low load condition. For both conditions, eye tracking features distinguished better between hits and misses than between targets and non-targets (with larger pupil size and shorter fixations for missed compared with correctly encoded targets). In contrast, SRP features distinguished better between targets and non-targets than between hits and misses (with average SRPs showing larger P300 waveforms for targets than for non-targets). Single trial classification results were consistent with these averages. This work suggests complementary contributions of eye and EEG measures in potential applications to support search and detect tasks. SRPs may be useful to monitor what objects are relevant to an observer, and eye variables may indicate whether the observer should be reminded of them later. PMID:28559807
A neurocomputational model of figure-ground discrimination and target tracking.
Sun, H; Liu, L; Guo, A
1999-01-01
A neurocomputational model is presented for figure-ground discrimination and target tracking. The model comprises correlation-type elementary motion detectors, computational modules for saccadic and smooth-pursuit eye movements, an oscillatory neural-network motion perception module, and a selective attention module. It is shown that through oscillatory amplitude and frequency encoding, and selective synchronization of phase oscillators, the figure and the ground can be successfully discriminated from each other. The receptive fields developed by hidden units of the networks were surprisingly similar to the actual receptive fields and columnar organization found in the primate visual cortex. It is suggested that equivalent mechanisms may exist in the primate visual cortex to discriminate figure from ground in both the temporal and spatial domains.
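A correlation-type elementary motion detector (the Reichardt model) can be sketched in a few lines; the unit delay and discrete sampling are illustrative.

```python
# Opponent Reichardt detector: delay each receptor signal, correlate it
# with its neighbor, and subtract the mirror-image pair.
import numpy as np

def reichardt(left, right, delay=1):
    """left, right: 1-D luminance time series from adjacent receptors."""
    l_d, r_d = np.roll(left, delay), np.roll(right, delay)
    l_d[:delay] = r_d[:delay] = 0.0    # discard wrap-around samples
    return l_d * right - r_d * left    # signed direction of motion
```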
Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.
2011-01-01
Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia patients (N=24), psychotic bipolar disorder patients (N=13), and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Deficient transfer of visual motion information to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035
Fixed-base simulator study of the effect of time delays in visual cues on pilot tracking performance
NASA Technical Reports Server (NTRS)
Queijo, M. J.; Riley, D. R.
1975-01-01
Factors were examined which determine the amount of time delay acceptable in the visual feedback loop in flight simulators. Acceptable time delays are defined as delays which significantly affect neither the results nor the manner in which the subject 'flies' the simulator. The subject tracked a target aircraft as it oscillated sinusoidally in a vertical plane only. The pursuing aircraft was permitted five degrees of freedom. Time delays of from 0.047 to 0.297 second were inserted in the visual feedback loop. A side task was employed to maintain the workload constant and to insure that the pilot was fully occupied during the experiment. Tracking results were obtained for 17 aircraft configurations having different longitudinal short-period characteristics. Results show a positive correlation between improved handling qualities and a longer acceptable time delay.
Context effects on smooth pursuit and manual interception of a disappearing target.
Kreyenmeier, Philipp; Fooken, Jolande; Spering, Miriam
2017-07-01
In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object ("ball") with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated "hit zone." In two experiments (n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points. NEW & NOTEWORTHY In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared. Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points. Copyright © 2017 the American Physiological Society.
Carrara, Verena I; Darakomon, Mue Chae; Thin, Nant War War; Paw, Naw Ta Kaw; Wah, Naw; Wah, Hser Gay; Helen, Naw; Keereecharoen, Suporn; Paw, Naw Ta Mlar; Jittamala, Podjanee; Nosten, François H; Ricci, Daniela; McGready, Rose
2016-01-01
Neurological examination, including visual fixation and tracking of a target, is routinely performed in the Shoklo Malaria Research Unit postnatal care units on the Thailand-Myanmar border. We aimed to evaluate a simple newborn visual test developed in Italy and performed by non-specialized personnel working in neonatal care units. Local health staff in Thailand were intensively trained before performing assessments at 24, 48 and 72 hours of life in healthy, low-risk term singletons. The 48- and 72-hour results were then compared with values obtained in Italy. Parents and staff administering the test reported on acceptability. One hundred seventy-nine newborns participated in the study between June 2011 and October 2012. The test was completed rapidly if the infant remained in an optimal behavioral state (7 ± 2 minutes), but the test duration increased significantly (12 ± 4 minutes, p < 0.001) if the infant's behavioral state changed. Infants were able to fixate a target and to discriminate a colored face at 24 hours of life. Horizontal tracking of a target was achieved by 96% (152/159) of the infants at 48 hours. Circular tracking, stripe discrimination and attention to distance significantly improved between each 24-hour test period. The test was easily performed by non-specialized local staff and well accepted by the parents. Healthy term singletons in this limited-resource setting have a visual response similar to that of gestational-age-matched newborns in Italy. These results can serve as a set of reference values for visual assessment of Karen and Burmese infants in the first 72 hours of life. The utility of the 24-hour test should be pursued.
NASA Technical Reports Server (NTRS)
Huebner, W. P.; Leigh, R. J.; Seidman, S. H.; Thomas, C. W.; Billian, C.; DiScenna, A. O.; Dell'Osso, L. F.
1992-01-01
1. We used a modeling approach to test the hypothesis that, in humans, the smooth pursuit (SP) system provides the primary signal for cancelling the vestibuloocular reflex (VOR) during combined eye-head tracking (CEHT) of a target moving smoothly in the horizontal plane. Separate models for SP and the VOR were developed. The optimal values of parameters of the two models were calculated using measured responses of four subjects to trials of SP and the visually enhanced VOR. After optimal parameter values were specified, each model generated waveforms that accurately reflected the subjects' responses to SP and vestibular stimuli. The models were then combined into a CEHT model wherein the final eye movement command signal was generated as the linear summation of the signals from the SP and VOR pathways. 2. The SP-VOR superposition hypothesis was tested using two types of CEHT stimuli, both of which involved passive rotation of subjects in a vestibular chair. The first stimulus consisted of a "chair brake" or sudden stop of the subject's head during CEHT; the visual target continued to move. The second stimulus consisted of a sudden change from the visually enhanced VOR to CEHT ("delayed target onset" paradigm); as the vestibular chair rotated past the angular position of the stationary visual stimulus, the latter started to move in synchrony with the chair. Data collected during experiments that employed these stimuli were compared quantitatively with predictions made by the CEHT model. 3. During CEHT, when the chair was suddenly and unexpectedly stopped, the eye promptly began to move in the orbit to track the moving target. Initially, gaze velocity did not completely match target velocity, however; this finally occurred approximately 100 ms after the brake onset. The model did predict the prompt onset of eye-in-orbit motion after the brake, but it did not predict that gaze velocity would initially be only approximately 70% of target velocity. One possible explanation for this discrepancy is that VOR gain can be dynamically modulated and, during sustained CEHT, it may assume a lower value. Consequently, during CEHT, a smaller-amplitude SP signal would be needed to cancel the lower-gain VOR. This reduction of the SP signal could account for the attenuated tracking response observed immediately after the brake. We found evidence for the dynamic modulation of VOR gain by noting differences in responses to the onset and offset of head rotation in trials of the visually enhanced VOR.(ABSTRACT TRUNCATED AT 400 WORDS).
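The superposition hypothesis itself fits in one line, with the abstract's suggestion that VOR gain may be dynamically lowered during sustained CEHT entering as a parameter (the 0.7 default echoes the roughly 70% gaze velocity observed after the brake).

```python
# Linear-summation model of combined eye-head tracking: pursuit command
# plus a gain-scaled vestibular term opposing head velocity.
def eye_velocity_command(sp_signal, head_velocity, vor_gain=0.7):
    return sp_signal - vor_gain * head_velocity
```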
Development of internal models and predictive abilities for visual tracking during childhood.
Ego, Caroline; Yüksel, Demet; Orban de Xivry, Jean-Jacques; Lefèvre, Philippe
2016-01-01
The prediction of the consequences of our own actions through internal models is an essential component of motor control. Previous studies showed improvement of anticipatory behaviors with age for grasping, drawing, and postural control. Since these actions require visual and proprioceptive feedback, these improvements might reflect both the development of internal models and the feedback control. In contrast, visual tracking of a temporarily invisible target gives specific markers of prediction and internal models for eye movements. Therefore, we recorded eye movements in 50 children (aged 5-19 yr) and in 10 adults, who were asked to pursue a visual target that is temporarily blanked. Results show that the youngest children (5-7 yr) have a general oculomotor behavior in this task, qualitatively similar to the one observed in adults. However, the overall performance of older subjects in terms of accuracy at target reappearance and variability in their behavior was much better than the youngest children. This late maturation of predictive mechanisms with age was reflected into the development of the accuracy of the internal models governing the synergy between the saccadic and pursuit systems with age. Altogether, we hypothesize that the maturation of the interaction between smooth pursuit and saccades that relies on internal models of the eye and target displacement is related to the continuous maturation of the cerebellum. Copyright © 2016 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Anderson, Monica; David, Phillip
2007-04-01
Implementation of an intelligent, automated target acquisition and tracking system alleviates the need for operators to monitor video continuously. Such a system could identify situations that fatigued operators could easily miss. If an automated acquisition and tracking system plans motions to maximize a coverage metric, how does the performance of that system change when the user intervenes and manually moves the camera? How can the operator give input to the system about what is important and understand how that relates to the overall task balance between surveillance and coverage? In this paper, we address these issues by introducing a new formulation of the average linear uncovered length (ALUL) metric, specially designed for use in surveilling urban environments. This metric coordinates the often competing goals of acquiring new targets and tracking existing targets. In addition, it provides feedback on current system performance to users in terms of the system's theoretical maximum and minimum performance. We show the successful integration of the algorithm via simulation.
Top-down contextual knowledge guides visual attention in infancy.
Tummeltshammer, Kristen; Amso, Dima
2017-10-26
The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search. © 2017 John Wiley & Sons Ltd.
Oculometric Assessment of Dynamic Visual Processing
NASA Technical Reports Server (NTRS)
Liston, Dorion Bryce; Stone, Lee
2014-01-01
Eye movements are the most frequent (3 per second), shortest-latency (150-250 ms), and biomechanically simplest (1 joint, no inertial complexities) voluntary motor behavior in primates, providing a model system to assess sensorimotor disturbances arising from trauma, fatigue, aging, or disease states (e.g., Diefendorf and Dodge, 1908). We developed a 15-minute behavioral tracking protocol consisting of randomized step-ramp radial target motion to assess several aspects of the behavioral response to dynamic visual motion, including pursuit initiation, steady-state tracking, direction-tuning, and speed-tuning thresholds. This set of oculomotor metrics provides valid and reliable measures of dynamic visual performance (Stone and Krauzlis, 2003; Krukowski and Stone, 2005; Stone et al., 2009; Liston and Stone, 2014), and may prove to be a useful assessment tool for functional impairments of dynamic visual processing.
2D/3D Visual Tracker for Rover Mast
NASA Technical Reports Server (NTRS)
Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria
2006-01-01
A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field-of-view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match. The program could serve as a core for building application programs for systems that require coordination of vision and robotic motion.
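A minimal Python/OpenCV sketch of the depth-scaled template matching described above, simplified to a single depth estimate per match rather than per-pixel depths; the function name and inputs are illustrative assumptions, not the flight software's interface.

import cv2

def depth_scaled_match(image, template, depth_template, depth_current):
    # Scale the template by the depth ratio: a feature appears larger
    # in the image when the rover has moved closer to it.
    scale = depth_template / depth_current
    h, w = template.shape[:2]
    scaled = cv2.resize(template, (max(1, int(w * scale)), max(1, int(h * scale))))
    # Normalized cross-correlation over the search image.
    result = cv2.matchTemplate(image, scaled, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(result)
    return best_loc, best_score  # top-left corner of the best match, its score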
Perceptual training yields rapid improvements in visually impaired youth.
Nyquist, Jeffrey B; Lappin, Joseph S; Zhang, Ruyuan; Tadin, Duje
2016-11-30
Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in the absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be underutilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggests that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events.
Bodala, Indu P; Abbasi, Nida I; Yu Sun; Bezerianos, Anastasios; Al-Nashash, Hasan; Thakor, Nitish V
2017-07-01
Eye tracking offers a practical solution for monitoring cognitive performance in real-world tasks. However, eye tracking in dynamic environments is difficult due to the high spatial and temporal variation of stimuli, and needs further thorough investigation. In this paper, we study the possibility of developing a novel computer-vision-assisted eye tracking analysis using fixations. Eye movement data were obtained from a long-duration naturalistic driving experiment. The scale-invariant feature transform (SIFT) algorithm was implemented using the VLFeat toolbox to identify multiple areas of interest (AOIs). A new measure called 'fixation score' was defined to capture the dynamics of fixation position between the target AOI and the non-target AOIs. The fixation score is maximal when the subjects focus on the target AOI and diminishes when they gaze at the non-target AOIs. A statistically significant negative correlation was found between fixation score and reaction time data (r = -0.2253, p < 0.05). This implies that with vigilance decrement, the fixation score decreases as visual attention shifts away from the target objects, resulting in an increase in reaction time.
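The abstract defines the fixation score only qualitatively. One plausible formalization, sketched below, scores each fixation by AOI membership; the box representation of AOIs and the intermediate 0.5 value for gaze on neither AOI class are assumptions for illustration.

import numpy as np

def fixation_scores(fixations, target_aoi, non_target_aois):
    # Each AOI is a box (x0, y0, x1, y1); each fixation is a point (x, y).
    def inside(pt, box):
        return box[0] <= pt[0] <= box[2] and box[1] <= pt[1] <= box[3]

    scores = []
    for pt in fixations:
        if inside(pt, target_aoi):
            scores.append(1.0)            # gaze on the target AOI
        elif any(inside(pt, a) for a in non_target_aois):
            scores.append(0.0)            # gaze on a non-target AOI
        else:
            scores.append(0.5)            # gaze on neither class of AOI
    return np.array(scores)

# Correlating mean score per trial with reaction times (e.g. with
# scipy.stats.pearsonr) would mirror the reported r = -0.2253 analysis.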
Learning the trajectory of a moving visual target and evolution of its tracking in the monkey
Bourrelly, Clara; Quinet, Julie; Cavanagh, Patrick
2016-01-01
An object moving in the visual field triggers a saccade that brings its image onto the fovea. This saccade is followed by a combination of slow eye movements and catch-up saccades that try to keep the target image on the fovea as long as possible. The accuracy of this ability to track the “here-and-now” location of a visual target contrasts with the spatiotemporally distributed nature of its encoding in the brain. We show in six experimentally naive monkeys how this performance is acquired and gradually evolves during successive daily sessions. During early exposure, the tracking is mostly saltatory, made of relatively large saccades separated by low eye velocity episodes, demonstrating that accurate (here-and-now) pursuit is not spontaneous and that gaze direction lags behind the target location most of the time. Over the sessions, while pursuit velocity is enhanced, the gaze is more frequently directed toward the current target location as a consequence of a 25% reduction in the number of catch-up saccades and a 37% reduction in their size (for the first saccade). This smoothing is observed at several scales: during the course of single trials, across the set of trials within a session, and over successive sessions. We explain the neurophysiological processes responsible for this combined evolution of saccades and pursuit in the absence of stringent training constraints. More generally, our study shows that the oculomotor system can be used to discover the neural mechanisms underlying the ability to synchronize a motor effector with a dynamic external event. PMID:27683886
Proof-of-concept of a laser mounted endoscope for touch-less navigated procedures
Kral, Florian; Gueler, Oezguer; Perwoeg, Martina; Bardosi, Zoltan; Puschban, Elisabeth J; Riechelmann, Herbert; Freysinger, Wolfgang
2013-01-01
Background and Objectives: During navigated procedures, a tracked pointing device is used to define target structures in the patient and visualize their position in a registered radiologic data set. When working with endoscopes in minimally invasive procedures, the target region is often difficult to reach, and changing instruments is disruptive at a challenging, crucial moment of the procedure. We developed a device for touchless navigation during navigated endoscopic procedures. Materials and Methods: A laser beam is delivered to the tip of a tracked endoscope, angled to its axis. Thereby the position of the laser spot in the video-endoscopic images changes according to the distance between the tip of the endoscope and the target structure. A mathematical function is defined by a calibration process and is used to calculate the distance between the tip of the endoscope and the target. The tracked tip of the endoscope and the calculated distance are used to visualize the laser spot in the registered radiologic data set. Results: In comparison to the tracked instrument, touchless target definition with the laser spot yielded an additional error of 0.12 mm. The overall application error in this experimental setup with a plastic head was 0.61 ± 0.97 mm (95% CI −1.3 to +2.5 mm). Conclusion: Integrating a laser in an endoscope and calculating the distance to a target structure by image processing of the video-endoscopic images is accurate. This technology eliminates the need for tracked probes intraoperatively and therefore allows navigation to be integrated seamlessly into clinical routine. However, it is an additional link in the chain of computer-assisted surgery and thus influences the application error. Lasers Surg. Med. 45:377–382, 2013. © 2013 Wiley Periodicals, Inc. PMID:23737122
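A minimal sketch of the calibration idea: fit a function mapping the laser-spot displacement in the image to the tip-to-target distance from known calibration pairs. The data points and the quadratic form are illustrative assumptions; the paper does not state the functional form it uses.

import numpy as np

# Hypothetical calibration pairs: laser-spot offset from the image
# center (pixels) versus measured endoscope-to-target distance (mm).
pixel_offset = np.array([12.0, 20.0, 31.0, 45.0, 62.0])
distance_mm = np.array([40.0, 30.0, 22.0, 15.0, 10.0])

# Fit a low-order polynomial as the calibration function.
calib = np.poly1d(np.polyfit(pixel_offset, distance_mm, deg=2))

# Estimated distance for a newly observed spot offset of 25 px:
print(round(float(calib(25.0)), 1))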
Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2016-05-01
Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive, and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as prior work on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch tracking in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.
The effect of visual-motion time delays on pilot performance in a pursuit tracking task
NASA Technical Reports Server (NTRS)
Miller, G. K., Jr.; Riley, D. R.
1976-01-01
A study has been made to determine the effect of visual-motion time delays on pilot performance of a simulated pursuit tracking task. Three interrelated major effects have been identified: task difficulty, motion cues, and time delays. As task difficulty, as determined by airplane handling qualities or target frequency, increases, the amount of acceptable time delay decreases. However, when relatively complete motion cues are included in the simulation, the pilot can maintain his performance for considerably longer time delays. In addition, the number of degrees of freedom of motion employed is a significant factor.
Exploring What's Missing: What Do Target Absent Trials Reveal about Autism Search Superiority?
ERIC Educational Resources Information Center
Keehn, Brandon; Joseph, Robert M.
2016-01-01
We used eye-tracking to investigate the roles of enhanced discrimination and peripheral selection in superior visual search in autism spectrum disorder (ASD). Children with ASD were faster at visual search than their typically developing peers. However, group differences in performance and eye-movements did not vary with the level of difficulty of…
Eye Tracking of Occluded Self-Moved Targets: Role of Haptic Feedback and Hand-Target Dynamics.
Danion, Frederic; Mathew, James; Flanagan, J Randall
2017-01-01
Previous studies on smooth pursuit eye movements have shown that humans can continue to track the position of their hand, or a target controlled by the hand, after it is occluded, thereby demonstrating that arm motor commands contribute to the prediction of target motion driving pursuit eye movements. Here, we investigated this predictive mechanism by manipulating both the complexity of the hand-target mapping and the provision of haptic feedback. Two hand-target mappings were used, either a rigid (simple) one in which hand and target motion matched perfectly or a nonrigid (complex) one in which the target behaved as a mass attached to the hand by means of a spring. Target animation was obtained by asking participants to oscillate a lightweight robotic device that provided (or not) haptic feedback consistent with the target dynamics. Results showed that as long as 7 s after target occlusion, smooth pursuit continued to be the main contributor to total eye displacement (∼60%). However, the accuracy of eye-tracking varied substantially across experimental conditions. In general, eye-tracking was less accurate under the nonrigid mapping, as reflected by higher positional and velocity errors. Interestingly, haptic feedback helped to reduce the detrimental effects of target occlusion when participants used the nonrigid mapping, but not when they used the rigid one. Overall, we conclude that the ability to maintain smooth pursuit in the absence of visual information can extend to complex hand-target mappings, but the provision of haptic feedback is critical for the maintenance of accurate eye-tracking performance.
Towards accurate localization: long- and short-term correlation filters for tracking
NASA Astrophysics Data System (ADS)
Li, Minglangjun; Tian, Chunna
2018-04-01
Visual tracking is a challenging problem, especially using a single model. In this paper, we propose a discriminative correlation filter (DCF) based tracking approach that exploits both the long-term and short-term information of the target, named LSTDCF, to improve tracking performance. In addition to a long-term filter learned through the whole sequence, a short-term filter is trained using only features extracted from the most recent frames. The long-term filter tends to capture more semantics of the target as more frames are used for training. However, since the target may undergo large appearance changes, features extracted around the target in non-recent frames prevent the long-term filter from locating the target in the current frame accurately. In contrast, the short-term filter learns more spatial details of the target from recent frames but overfits easily. Thus the short-term filter is less robust to cluttered backgrounds and prone to drift. We take advantage of both filters and fuse their response maps to make the final estimation. We evaluate our approach on a widely used benchmark with 100 image sequences and achieve state-of-the-art results.
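A minimal sketch of the fusion step, assuming the two trained filters already yield response maps of equal size; the convex weighting is an illustrative choice, as the abstract does not specify the fusion rule.

import numpy as np

def fuse_responses(r_long, r_short, w=0.5):
    # Convex combination of long-term and short-term response maps.
    fused = w * r_long + (1.0 - w) * r_short
    # The peak of the fused map gives the estimated target position.
    row, col = np.unravel_index(np.argmax(fused), fused.shape)
    return (row, col), float(fused[row, col])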
NASA Astrophysics Data System (ADS)
Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao
2018-01-01
Saliency detection has been applied to the target acquisition case. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradients descriptor is used to extract features. After encoding the image with a learned dictionary, the 2D Viterbi algorithm is applied to infer the saliency map. The model can predict fixation on the targets and further creates robust and effective depictions of the targets' changes in posture and viewpoint. To validate the model against the human visual search mechanism, two eye-tracking experiments are employed to train our model directly from eye movement data. The results show that our model achieves better performance than visual attention models. Moreover, they indicate the plausibility of utilizing eye-tracking data to identify targets.
Real-time visual tracking of less textured three-dimensional objects on mobile platforms
NASA Astrophysics Data System (ADS)
Seo, Byung-Kuk; Park, Jungsik; Park, Hanhoon; Park, Jong-Il
2012-12-01
Natural feature-based approaches are still challenging for mobile applications (e.g., mobile augmented reality), because they are feasible only in limited environments such as highly textured and planar scenes/objects, and they need powerful mobile hardware for fast and reliable tracking. In many cases where conventional approaches are not effective, three-dimensional (3-D) knowledge of target scenes would be beneficial. We present a well-established framework for real-time visual tracking of less textured 3-D objects on mobile platforms. Our framework is based on model-based tracking that efficiently exploits partially known 3-D scene knowledge such as object models and a background's distinctive geometric or photometric knowledge. Moreover, we elaborate on implementation in order to make it suitable for real-time vision processing on mobile hardware. The performance of the framework is tested and evaluated on recent commercially available smartphones, and its feasibility is shown by real-time demonstrations.
Trillenberg, Peter; Sprenger, Andreas; Talamo, Silke; Herold, Kirsten; Helmchen, Christoph; Verleger, Rolf; Lencer, Rebekka
2017-04-01
Despite many reports on visual processing deficits in psychotic disorders, studies are needed on the integration of visual and non-visual components of eye movement control to improve the understanding of sensorimotor information processing in these disorders. Non-visual inputs to eye movement control include prediction of future target velocity from extrapolation of past visual target movement and anticipation of future target movements. It is unclear whether non-visual input is impaired in patients with schizophrenia. We recorded smooth pursuit eye movements in 21 patients with schizophrenia spectrum disorder, 22 patients with bipolar disorder, and 24 controls. In a foveo-fugal ramp task, the target was either continuously visible or was blanked during movement. We determined peak gain (measuring overall performance), initial eye acceleration (measuring visually driven pursuit), deceleration after target extinction (measuring prediction), eye velocity drifts before onset of target visibility (measuring anticipation), and residual gain during blanking intervals (measuring anticipation and prediction). In both patient groups, initial eye acceleration was decreased and the ability to adjust eye acceleration to increasing target acceleration was impaired. In contrast, neither deceleration nor eye drift velocity was reduced in patients, implying unimpaired non-visual contributions to pursuit drive. Disturbances of eye movement control in psychotic disorders appear to be a consequence of deficits in sensorimotor transformation rather than a pure failure in adding cognitive contributions to pursuit drive in higher-order cortical circuits. More generally, this deficit might reflect a fundamental imbalance between processing external input and acting according to internal preferences.
Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel
2016-01-01
When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
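A minimal sketch of the model's two core operations as stated above: top-down, target-specific gain followed by divisive normalization over a local neighborhood. The window size and semisaturation constant are illustrative assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def priority_map(features, target_gain, sigma=1.0, size=9):
    # Top-down, target-specific modulation of a retinotopic feature map.
    modulated = features * target_gain
    # Divisive inhibition: each location is normalized by pooled local
    # activity, preventing salient features from monopolizing attention.
    pooled = uniform_filter(modulated, size=size)
    return modulated / (sigma + pooled)

# The locus of attention is the maximum of the priority map:
# attend_at = np.unravel_index(priority_map(f, g).argmax(), f.shape)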
Synchronizing the tracking eye movements with the motion of a visual target: Basic neural processes.
Goffart, Laurent; Bourrelly, Clara; Quinet, Julie
2017-01-01
In primates, the appearance of an object moving in the peripheral visual field elicits an interceptive saccade that brings the target image onto the foveae. This foveation is then maintained more or less efficiently by slow pursuit eye movements and subsequent catch-up saccades. Sometimes, the tracking is such that the gaze direction looks spatiotemporally locked onto the moving object. Such a spatial synchronism is quite spectacular when one considers that the target-related signals are transmitted to the motor neurons through multiple parallel channels connecting separate neural populations with different conduction speeds and delays. Because of the delays between the changes of retinal activity and the changes of extraocular muscle tension, the maintenance of the target image onto the fovea cannot be driven by the current retinal signals as they correspond to past positions of the target. Yet, the spatiotemporal coincidence observed during pursuit suggests that the oculomotor system is driven by a command estimating continuously the current location of the target, i.e., where it is here and now. This inference is also supported by experimental perturbation studies: when the trajectory of an interceptive saccade is experimentally perturbed, a correction saccade is produced in flight or after a short delay, and brings the gaze next to the location where unperturbed saccades would have landed at about the same time, in the absence of visual feedback. In this chapter, we explain how such correction can be supported by previous visual signals without assuming "predictive" signals encoding future target locations. We also describe the basic neural processes which gradually yield the synchronization of eye movements with the target motion. When the process fails, the gaze is driven by signals related to past locations of the target, not by estimates to its upcoming locations, and a catch-up is made to reinitiate the synchronization. © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Tonkay, Gregory
1990-01-01
The following separate topics are addressed: (1) improving a robotic tracking system; and (2) providing insights into orbiter position calibration for radiator inspection. The objective of the tracking system project was to provide the capability to track moving targets more accurately by adjusting parameters in the control system and implementing a predictive algorithm. A computer model was developed to emulate the tracking system. Using this model as a test bed, a self-tuning algorithm was developed to tune the system gains. The model yielded important findings concerning factors that affect the gains. The self-tuning algorithms will provide the concepts to write a program to automatically tune the gains in the real system. The section concerning orbiter position calibration provides a comparison to previous work that had been performed for plant growth. It provided the conceptualized routines required to visually determine the orbiter position and orientation. Furthermore, it identified the types of information which are required to flow between the robot controller and the vision system.
Long-term object tracking combined offline with online learning
NASA Astrophysics Data System (ADS)
Hu, Mengjie; Wei, Zhenzhong; Zhang, Guangjun
2016-04-01
We propose a simple yet effective method for long-term object tracking. Different from traditional visual tracking methods, which mainly depend on frame-to-frame correspondence, we combine high-level semantic information with low-level correspondences. Our framework is formulated as a confidence selection framework, which allows our system to recover from drift and partly deal with occlusion. To summarize, our algorithm can be roughly decomposed into an initialization stage and a tracking stage. In the initialization stage, an offline detector is trained to get the object appearance information at the category level, which is used for detecting the potential target and initializing the tracking stage. The tracking stage consists of three modules: the online tracking module, the detection module, and the decision module. A pretrained detector is used for correcting drift of the online tracker, while the online tracker is used for filtering out false positive detections. A confidence selection mechanism is proposed to optimize the object location based on the online tracker and detection. If the target is lost, the pretrained detector is utilized to reinitialize the whole algorithm once the target is relocated. In experiments, we evaluate our method on several challenging video sequences, and it demonstrates substantial improvement compared with detection-only and online-tracking-only baselines.
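A minimal sketch of the decision module under the scheme described above; the confidence scale and the lost-target threshold are illustrative assumptions.

def select_state(track_box, track_conf, det_box, det_conf, lost_thresh=0.3):
    # Prefer whichever module is more confident about the target location.
    if max(track_conf, det_conf) < lost_thresh:
        return None  # target lost: fall back to the offline detector
    return track_box if track_conf >= det_conf else det_box

# A None result would trigger re-detection with the pretrained,
# category-level detector and re-initialization of the online tracker.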
Colomb, Julien; Reiter, Lutz; Blaszkiewicz, Jedrzej; Wessnitzer, Jan; Brembs, Bjoern
2012-01-01
Insects have been among the most widely used model systems for studying the control of locomotion by nervous systems. In Drosophila, we implemented a simple test for locomotion: in Buridan's paradigm, flies walk back and forth between two inaccessible visual targets [1]. Until today, the lack of easily accessible tools for tracking the fly position and analyzing its trajectory has probably contributed to the slow acceptance of Buridan's paradigm. We present here a package of open source software designed to track a single animal walking in a homogenous environment (Buritrack) and to analyze its trajectory. The Centroid Trajectory Analysis (CeTrAn) software is coded in the open source statistics project R. It extracts eleven metrics and includes correlation analyses and a Principal Components Analysis (PCA). It was designed to be easily customized to personal requirements. In combination with inexpensive hardware, these tools can readily be used for teaching and research purposes. We demonstrate the capabilities of our package by measuring the locomotor behavior of adult Drosophila melanogaster (whose wings were clipped), either in the presence or in the absence of visual targets, and comparing the latter to different computer-generated data. The analysis of the trajectories confirms that flies are centrophobic and shows that inaccessible visual targets can alter the orientation of the flies without changing their overall patterns of activity. Using computer generated data, the analysis software was tested, and chance values for some metrics (as well as chance value for their correlation) were set. Our results prompt the hypothesis that fixation behavior is observed only if negative phototaxis can overcome the propensity of the flies to avoid the center of the platform. Together with our companion paper, we provide new tools to promote Open Science as well as the collection and analysis of digital behavioral data.
Receptive fields for smooth pursuit eye movements and motion perception.
Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R
2010-12-01
Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). Copyright © 2010 Elsevier Ltd. All rights reserved.
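Assuming a Gaussian tuning profile, the reported full width at half height converts to a standard deviation via sigma = FWHM / (2 * sqrt(2 * ln 2)); a one-line check:

import math

fwhm_deg = 26.0  # direction bandwidth, full width at half height
sigma = fwhm_deg / (2.0 * math.sqrt(2.0 * math.log(2.0)))
print(round(sigma, 1))  # ~11.0 angular degrees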
Improving visual search in instruction manuals using pictograms.
Kovačević, Dorotea; Brozović, Maja; Možina, Klementina
2016-11-01
Instruction manuals provide important messages about the proper use of a product. They should communicate in such a way that they facilitate users' searches for specific information. Despite the increasing research interest in visual search, there is a lack of empirical knowledge concerning the role of pictograms in search performance during the browsing of a manual's pages. This study investigates how the inclusion of pictograms improves the search for the target information. Furthermore, it examines whether this search process is influenced by the visual similarity between the pictograms and the searched for information. On the basis of eye-tracking measurements, as objective indicators of the participants' visual attention, it was found that pictograms can be a useful element of search strategy. Another interesting finding was that boldface highlighting is a more effective method for improving user experience in information seeking, rather than the similarity between the pictorial and adjacent textual information. Implications for designing effective user manuals are discussed. Practitioner Summary: Users often view instruction manuals with the aim of finding specific information. We used eye-tracking technology to examine different manual pages in order to improve the user's visual search for target information. The results indicate that the use of pictograms and bold highlighting of relevant information facilitate the search process.
Visual tracking using objectness-bounding box regression and correlation filters
NASA Astrophysics Data System (ADS)
Mbelwa, Jimmy T.; Zhao, Qingjie; Lu, Yao; Wang, Fasheng; Mbise, Mercy
2018-03-01
Visual tracking is a fundamental problem in computer vision, with extensive application domains in surveillance and intelligent systems. Recently, correlation filter-based tracking methods have shown great achievements in terms of robustness, accuracy, and speed. However, such methods have difficulty dealing with fast motion (FM), motion blur (MB), illumination variation (IV), and drifting caused by occlusion (OCC). To solve this problem, a tracking method is proposed that integrates an objectness-bounding box regression (O-BBR) model and a scheme based on the kernelized correlation filter (KCF). The KCF-based scheme is used to improve tracking performance under FM and MB. For handling the drift problem caused by OCC and IV, we propose objectness proposals trained with bounding box regression as prior knowledge to provide candidates and background suppression. Finally, the KCF-based scheme as a base tracker and O-BBR are fused to obtain the state of the target object. Extensive experimental comparisons of the developed tracking method with other state-of-the-art trackers are performed on some challenging video sequences. The comparison results show that our proposed tracking method outperforms other state-of-the-art tracking methods in terms of effectiveness, accuracy, and robustness.
ERIC Educational Resources Information Center
Mishra, Ramesh Kumar; Singh, Niharika
2014-01-01
Previous psycholinguistic studies have shown that bilinguals activate lexical items of both languages during auditory and visual word processing. In this study we examined whether Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non-target language while listening to words either…
An improved KCF tracking algorithm based on multi-feature and multi-scale
NASA Astrophysics Data System (ADS)
Wu, Wei; Wang, Ding; Luo, Xin; Su, Yang; Tian, Weiye
2018-02-01
The purpose of visual tracking is to locate the target object across consecutive video frames. In recent years, methods based on the kernelized correlation filter (KCF) have become a research hotspot. However, such algorithms still face problems such as fast jitter of the video capture equipment and changes in target scale. In order to improve the handling of scale changes and the quality of the feature description, this paper presents an improved algorithm based on multi-feature fusion and multi-scale transformation. The experimental results show that our method handles updating the target model when the target is occluded or its scale changes. In one-pass evaluation (OPE), the precision is 77.0% and 75.4% and the success rate is 69.7% and 66.4% on the VOT and OTB datasets, respectively. Compared with the best of the existing tracking algorithms, the precision is improved by 6.7% and 6.3%, and the success rates are improved by 13.7% and 14.2%, respectively.
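A minimal sketch of the multi-scale search step common to KCF-style trackers such as the one above: evaluate the trained filter's response on patches at several candidate scales and keep the best. The function 'kcf_respond' stands in for the trained filter, and the scale factors are assumptions; the window is assumed to stay inside the frame.

import cv2
import numpy as np

def multiscale_search(frame, kcf_respond, center, base_size,
                      scales=(0.95, 1.0, 1.05)):
    # base_size = (width, height) of the filter template.
    best_peak, best_scale, best_loc = -np.inf, None, None
    for s in scales:
        w, h = int(base_size[0] * s), int(base_size[1] * s)
        x, y = int(center[0] - w / 2), int(center[1] - h / 2)
        patch = cv2.resize(frame[y:y + h, x:x + w], base_size)
        resp = kcf_respond(patch)  # response map of the trained filter
        if resp.max() > best_peak:
            best_peak = float(resp.max())
            best_scale = s
            best_loc = np.unravel_index(resp.argmax(), resp.shape)
    return best_peak, best_scale, best_loc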
Robust visual tracking using a contextual boosting approach
NASA Astrophysics Data System (ADS)
Jiang, Wanyue; Wang, Yin; Wang, Daobo
2018-03-01
In recent years, detection-based image trackers have been gaining ground rapidly, thanks to their capacity to incorporate a variety of image features. Nevertheless, tracking performance might be compromised if background regions are mislabeled as foreground in the training process. To resolve this problem, we propose an online visual tracking algorithm designed to improve training label accuracy in the learning phase. In the proposed method, superpixels are used as samples, and their ambiguous labels are reassigned in accordance with both prior estimation and contextual information. The location and scale of the target are usually determined by a confidence map, which tends to shrink since background regions are always incorporated into the bounding box. To address this dilemma, we propose a cross projection scheme that projects the confidence map for target detection. Moreover, the performance of the proposed tracker can be further improved by adding rigid-structure information. The proposed method is evaluated on the OTB benchmark and the VOT2016 benchmark. Compared with other trackers, the results appear to be competitive.
Dual linear structured support vector machine tracking method via scale correlation filter
NASA Astrophysics Data System (ADS)
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprised of a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
The effect of visual-motion time-delays on pilot performance in a simulated pursuit tracking task
NASA Technical Reports Server (NTRS)
Miller, G. K., Jr.; Riley, D. R.
1977-01-01
An experimental study was made to determine the effect on pilot performance of time delays in the visual and motion feedback loops of a simulated pursuit tracking task. Three major interrelated factors were identified: task difficulty either in the form of airplane handling qualities or target frequency, the amount and type of motion cues, and time delay itself. In general, the greater the task difficulty, the smaller the time delay that could exist without degrading pilot performance. Conversely, the greater the motion fidelity, the greater the time delay that could be tolerated. The effect of motion was, however, pilot dependent.
Position estimation and driving of an autonomous vehicle by monocular vision
NASA Astrophysics Data System (ADS)
Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.
2007-04-01
Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.
Tracking the impact of depression in a perspective-taking task.
Ferguson, Heather J; Cane, James
2017-11-01
Research has identified impairments in Theory of Mind (ToM) abilities in depressed patients, particularly in relation to tasks involving empathetic responses and belief reasoning. We aimed to build on this research by exploring the relationship between depressed mood and cognitive ToM, specifically visual perspective-taking ability. High and low depressed participants were eye-tracked as they completed a perspective-taking task, in which they followed the instructions of a 'director' to move target objects (e.g. a "teapot with spots on") around a grid, in the presence of a temporarily-ambiguous competitor object (e.g. a "teapot with stars on"). Importantly, some of the objects in the grid were occluded from the director's (but not the participant's) view. Results revealed no group-based difference in participants' ability to use perspective cues to identify the target object. All participants were faster to select the target object when the competitor was only available to the participant, compared to when the competitor was mutually available to the participant and director. Eye-tracking measures supported this pattern, revealing that perspective directed participants' visual search immediately upon hearing the ambiguous object's name (e.g. "teapot"). We discuss how these results fit with previous studies that have shown a negative relationship between depression and ToM.
Siamese convolutional networks for tracking the spine motion
NASA Astrophysics Data System (ADS)
Liu, Yuan; Sui, Xiubao; Sun, Yicheng; Liu, Chengwei; Hu, Yong
2017-09-01
Deep learning models have demonstrated great success in various computer vision tasks such as image classification and object tracking. However, tracking the lumbar spine by digitalized video fluoroscopic imaging (DVFI), which can quantitatively analyze the motion of the spine to diagnose lumbar instability, has not yet been well developed due to the lack of a steady and robust tracking method. In this paper, we propose a novel visual tracking algorithm for lumbar vertebra motion based on a Siamese convolutional neural network (CNN) model. We train a fully convolutional neural network offline to learn generic image features. The network is trained to learn a similarity function that compares the labeled target in the first frame with candidate patches in the current frame. The similarity function returns a high score if the two images depict the same object. Once learned, the similarity function is used to track a previously unseen object without any online adaptation. In the current frame, our tracker evaluates candidate rotated patches sampled around the previous frame's target position and presents a rotated bounding box to locate the predicted target precisely. Results indicate that the proposed tracking method can detect the lumbar vertebra steadily and robustly. Especially for images with low contrast and cluttered backgrounds, the presented tracker still achieves good tracking performance. Further, the proposed algorithm operates at high speed for real-time tracking.
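A minimal sketch of the Siamese similarity evaluation: cross-correlate the exemplar's feature map over a larger search-region feature map, summing over channels, so the peak marks the most likely target location. The feature shapes and the plain-NumPy correlation are illustrative assumptions, not the paper's network.

import numpy as np
from scipy.signal import correlate2d

def similarity_map(search_feat, exemplar_feat):
    # Feature maps have shape (channels, H, W); the exemplar must be
    # no larger than the search region in each spatial dimension.
    c, eh, ew = exemplar_feat.shape
    out_h = search_feat.shape[1] - eh + 1
    out_w = search_feat.shape[2] - ew + 1
    score = np.zeros((out_h, out_w))
    for ch in range(c):  # sum channel-wise cross-correlations
        score += correlate2d(search_feat[ch], exemplar_feat[ch], mode="valid")
    # The peak of the returned map marks the most likely target location.
    return score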
Tracking the Sensory Environment: An ERP Study of Probability and Context Updating in ASD
ERIC Educational Resources Information Center
Westerfield, Marissa A.; Zinni, Marla; Vo, Khang; Townsend, Jeanne
2015-01-01
We recorded visual event-related brain potentials from 32 adult male participants (16 high-functioning participants diagnosed with autism spectrum disorder (ASD) and 16 control participants, ranging in age from 18 to 53 years) during a three-stimulus oddball paradigm. Target and non-target stimulus probability was varied across three probability…
Brain activation of semantic category-based grouping in multiple identity tracking task
Wei, Liuqing; Lyu, Chuang; Hu, Siyuan; Li, Zhen
2017-01-01
Using the Multiple Identity Tracking task and functional magnetic resonance imaging (fMRI), the present study aimed to isolate and visualize the functional anatomy of the neural systems involved in semantic category-based grouping. Three experimental conditions were compared: the category-based target grouping (TG) condition, the targets-distractors grouping (TDG) condition, and the homogeneous condition. In the TG condition, observers could utilize the categorical distinction between targets and distractors to construct a uniform representation of the targets, that is, to form a group of the targets to facilitate tracking. In the TDG condition, half the targets and half the distractors belonged to the same category, and observers had to inhibit grouping of targets and distractors of one category to complete tracking. In the homogeneous condition, where targets and distractors consisted of the same objects, no grouping could be formed. The “TG-homogeneous” contrast (p < 0.01) revealed activation of the left fusiform gyrus and the pars triangularis of the inferior frontal gyrus (IFG). The “TG-TDG” contrast revealed only activation of the left anterior cingulate gyrus (ACC). The fusiform gyrus and IFG pars triangularis might participate in the representation of semantic knowledge, with the IFG pars triangularis closely related to the classification of semantic categories. The ACC might be responsible for the initiation and maintenance of the grouping representation. PMID:28505166
Fast object reconstruction in block-based compressive low-light-level imaging
NASA Astrophysics Data System (ADS)
Ke, Jun; Sui, Dong; Wei, Ping
2014-11-01
In this paper we propose a simple yet effective and efficient method for long-term object tracking. Different from traditional visual tracking methods, which mainly depend on frame-to-frame correspondence, we combine high-level semantic information with low-level correspondences. Our framework is formulated as a confidence selection framework, which allows our system to recover from drift and partly deal with the occlusion problem. To summarize, our algorithm can be roughly decomposed into an initialization stage and a tracking stage. In the initialization stage, an offline classifier is trained to capture object appearance information at the category level. When the video stream arrives, the pre-trained offline classifier is used for detecting the potential target and initializing the tracking stage. The tracking stage consists of three parts: an online tracking part, an offline detection part, and a confidence judgment part. The online tracking part captures target-specific appearance information, while the detection part localizes the object based on the pre-trained offline classifier. Since there is no data dependence between online tracking and offline detection, these two parts run in parallel to significantly improve processing speed. A confidence selection mechanism is proposed to optimize the object location. Besides, we propose a simple mechanism to judge the absence of the object. If the target is lost, the pre-trained offline classifier is utilized to re-initialize the whole algorithm once the target is re-located. In experiments, we evaluate our method on several challenging video sequences and demonstrate competitive results.
NASA Astrophysics Data System (ADS)
Oku, H.; Ogawa, N.; Ishikawa, M.; Hashimoto, K.
2005-03-01
In this article, a micro-organism tracking system using a high-speed vision system is reported. This system tracks a freely swimming micro-organism in two dimensions within the field of an optical microscope by moving a chamber of target micro-organisms based on high-speed visual feedback. The system we developed could track a paramecium using various imaging techniques, including bright-field illumination, dark-field illumination, and differential interference contrast, at magnifications of 5 and 20 times. A maximum tracking duration of 300 s was demonstrated. Also, the system could track an object with a velocity of up to 35,000 μm/s (175 diameters/s), which is significantly faster than swimming micro-organisms.
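A minimal sketch of one step of such a visual-feedback loop: measure the organism's centroid offset from the optical axis and command a compensating stage translation. The gain and micron-per-pixel factor are illustrative values, not those of the actual system.

def stage_correction(centroid, image_center, gain=0.8, um_per_px=2.0):
    # Proportional correction: move the chamber so the tracked organism
    # is driven back toward the center of the field of view.
    dx = (centroid[0] - image_center[0]) * um_per_px * gain
    dy = (centroid[1] - image_center[1]) * um_per_px * gain
    return dx, dy  # stage translation (micrometers) for this frame

# Run once per high-speed camera frame; frequent small corrections
# keep a fast-swimming organism near the optical axis.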
FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.
Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu
2017-07-18
Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
Measuring and tracking eye movements of a behaving archer fish by real-time stereo vision.
Ben-Simon, Avi; Ben-Shahar, Ohad; Segev, Ronen
2009-11-15
The archer fish (Toxotes chatareus) exhibits unique visual behavior in that it is able to aim at insects resting on the foliage above water level, shoot them down with a squirt of water, and then feed on them. This extreme behavior requires excellent visual acuity, learning, and tight synchronization between the visual system and body motion. This behavior also raises many important questions, such as the fish's ability to compensate for air-water refraction and the neural mechanisms underlying target acquisition. While many such questions remain open, significant insights towards solving them can be obtained by tracking the eye and body movements of freely behaving fish. Unfortunately, existing tracking methods suffer from either a high level of invasiveness or low resolution. Here, we present a video-based eye tracking method for accurately and remotely measuring the eye and body movements of a freely moving, behaving fish. Based on a stereo vision system and a unique triangulation method that corrects for air-glass-water refraction, we are able to measure the full three-dimensional pose of the fish eye and body with high temporal and spatial resolution. Our method, being generic, can be applied to studying the behavior of marine animals in general. We demonstrate how data collected by our method may be used to show that the hunting behavior of the archer fish is composed of surfacing concomitant with rotating the body around the direction of the fish's fixed gaze towards the target, until the snout reaches the correct shooting position at water level.
Improvement of Hungarian Joint Terminal Attack Program
2013-06-13
LST: Laser Spot Tracker; NVG: Night Vision Goggle; ROMAD: Radio Operator Maintainer and Driver; ROVER: Remotely Operated Video Enhanced Receiver; TACP: Tactical Air Control Party. …visual target designation. The other component consists of a laser spot tracker (LST), which identifies targets by tracking laser energy reflecting… capability for every type of night-time mission, laser spot tracker for laser spot search missions, remotely operated video enhanced receiver…
Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking
NASA Technical Reports Server (NTRS)
Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.
2005-01-01
This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.
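For intuition, the idealized pan/tilt pointing solution for a gimbal whose axes intersect at the masthead frame origin reduces to two arctangents. The paper's exact closed-form solutions additionally model non-parallel camera frames and an off-center optical axis, which this sketch omits.

import math

def pan_tilt_to_target(x, y, z):
    # Target position (x, y, z) expressed in the masthead base frame.
    pan = math.atan2(y, x)                  # rotation about the vertical axis
    tilt = math.atan2(z, math.hypot(x, y))  # elevation toward the target
    return pan, tilt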
Real time tracking by LOPF algorithm with mixture model
NASA Astrophysics Data System (ADS)
Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo
2007-11-01
A new particle filter, the Local Optimum Particle Filter (LOPF), is presented for tracking objects accurately and steadily in visual sequences in real time, which is a challenging task in the computer vision field. In order to use the particles efficiently, we first use the Sobel algorithm to extract the profile of the object. Then, we employ a new local optimum algorithm to auto-initialize a certain number of particles from these edge points as particle centers. The main advantage of doing this, instead of selecting particles randomly as in the conventional particle filter, is that we can pay more attention to the more important optimum candidates and reduce unnecessary calculation on negligible ones; in addition, we can partly overcome the conventional degeneracy phenomenon and decrease the computational costs. Moreover, the threshold is a key factor that strongly affects the results, so we adopt an adaptive threshold selection method to get the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. Here, we use both the contour cue to select the particles and the color cue to describe the targets as a mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm when compared with that of the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.
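A minimal sketch of the histogram-comparison metric named above: the Bhattacharyya coefficient between two normalized histograms and the distance derived from it (smaller means the candidate better matches the target model).

import numpy as np

def bhattacharyya_distance(p, q):
    # p, q: color histograms of the target model and a candidate region.
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient in [0, 1]
    return np.sqrt(max(0.0, 1.0 - bc))   # d = sqrt(1 - BC)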
Khan, Zulfiqar Hasan; Gu, Irene Yu-Hua
2013-12-01
This paper proposes a novel Bayesian online learning and tracking scheme for video objects on Grassmann manifolds. Although manifold visual object tracking is promising, large and fast nonplanar (or out-of-plane) pose changes and long-term partial occlusions of deformable objects in video remain a challenge that limits the tracking performance. The proposed method tackles these problems with the main novelties on: 1) online estimation of object appearances on Grassmann manifolds; 2) optimal criterion-based occlusion handling for online updating of object appearances; 3) a nonlinear dynamic model for both the appearance basis matrix and its velocity; and 4) Bayesian formulations, separately for the tracking process and the online learning process, that are realized by employing two particle filters: one is on the manifold for generating appearance particles and another on the linear space for generating affine box particles. Tracking and online updating are performed in an alternating fashion to mitigate the tracking drift. Experiments using the proposed tracker on videos captured by a single dynamic/static camera have shown robust tracking performance, particularly for scenarios when target objects contain significant nonplanar pose changes and long-term partial occlusions. Comparisons with eight existing state-of-the-art/most relevant manifold/nonmanifold trackers with evaluations have provided further support to the proposed scheme.
2014-09-01
the feature-space used to represent the target. Sometimes we trade off keeping information about one domain of the target in exchange for robustness... Kullback-Leibler distance), can be used as a similarity function between a candidate target and a template. This approach is invariant to changes in scale... basis vectors to adapt to appearance change and learns the visual information that the set of targets have in common, which is used to reduce the
A Novel Ship-Tracking Method for GF-4 Satellite Sequential Images.
Yao, Libo; Liu, Yong; He, You
2018-06-22
The geostationary remote sensing satellite has the capability of wide scanning, persistent observation and operational response, and has tremendous potential for maritime target surveillance. The GF-4 satellite is the first geostationary orbit (GEO) optical remote sensing satellite with medium resolution in China. In this paper, a novel ship-tracking method for GF-4 satellite sequential imagery is proposed. The algorithm has three stages. First, a local visual saliency map based on local peak signal-to-noise ratio (PSNR) is used to detect ships in a single frame of GF-4 satellite sequential images. Second, accurate positioning of each potential target is achieved by a dynamic correction using the rational polynomial coefficients (RPCs) and automatic identification system (AIS) data of ships. Finally, an improved multiple hypothesis tracking (MHT) algorithm with amplitude information is used to track ships by further removing false targets, and to estimate the ships' motion parameters. The algorithm has been tested using GF-4 sequential images and AIS data. The results of the experiment demonstrate that the algorithm achieves good tracking performance on GF-4 satellite sequential images and estimates the motion information of ships accurately.
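As a rough illustration of the first stage, the sketch below builds a local-PSNR style saliency map in Python. This is one plausible reading of the idea rather than the authors' exact formulation; the window size and background estimator are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def local_psnr_saliency(img, win=9):
    # Compare each pixel against a local background estimate; regions that
    # deviate strongly (low local PSNR) are marked as salient.
    img = img.astype(float)
    bg = uniform_filter(img, size=win)               # local background estimate
    mse = uniform_filter((img - bg) ** 2, size=win)  # local mean squared error
    psnr = 10.0 * np.log10((img.max() ** 2) / (mse + 1e-9))
    return psnr.max() - psnr                         # high where PSNR is low

# Example: a bright ship-like blob on a noisy sea background
sea = np.random.normal(20, 2, (64, 64))
sea[30:33, 40:43] += 60.0
sal = local_psnr_saliency(sea)
print(np.unravel_index(sal.argmax(), sal.shape))     # near the blob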
Nonlinear dynamics support a linear population code in a retinal target-tracking circuit.
Leonardo, Anthony; Meister, Markus
2013-10-23
A basic task faced by the visual system of many organisms is to accurately track the position of moving prey. The retina is the first stage in the processing of such stimuli; the nature of the transformation here, from photons to spike trains, constrains not only the ultimate fidelity of the tracking signal but also the ease with which it can be extracted by other brain regions. Here we demonstrate that a population of fast-OFF ganglion cells in the salamander retina, whose dynamics are governed by a nonlinear circuit, serves to compute the future position of the target over hundreds of milliseconds. The extrapolated position of the target is not found by stimulus reconstruction but is instead computed by a weighted sum of ganglion cell outputs, the population vector average (PVA). The magnitude of PVA extrapolation varies systematically with target size, speed, and acceleration, such that large targets are tracked most accurately at high speeds, and small targets at low speeds, just as is seen in the motion of real prey. Tracking precision reaches the resolution of single photoreceptors, and the PVA algorithm performs more robustly than several alternative algorithms. If the salamander brain uses the fast-OFF cell circuit for target extrapolation as we suggest, the circuit dynamics should leave a microstructure on the behavior that may be measured in future experiments. Our analysis highlights the utility of simple computations that, while not globally optimal, are efficiently implemented and have close to optimal performance over a limited but ethologically relevant range of stimuli.
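The population vector average itself is a one-line computation: a firing-rate-weighted mean of the cells' preferred positions. The Python sketch below shows it with hypothetical rates and preferred positions; only the weighting scheme reflects the paper.

import numpy as np

def population_vector_average(rates, preferred_positions):
    # Target position estimate: rate-weighted mean of preferred positions.
    rates = np.asarray(rates, dtype=float)
    preferred_positions = np.asarray(preferred_positions, dtype=float)
    return np.sum(rates * preferred_positions) / np.sum(rates)

# Example: five cells with receptive fields along one axis (hypothetical);
# activity skewed ahead of the true position yields a forward-shifted estimate
positions = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])    # degrees
rates = np.array([2.0, 5.0, 12.0, 9.0, 3.0])         # spikes/s
print(population_vector_average(rates, positions))   # ~0.19 deg, ahead of 0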
Image navigation as a means to expand the boundaries of fluorescence-guided surgery
NASA Astrophysics Data System (ADS)
Brouwer, Oscar R.; Buckle, Tessa; Bunschoten, Anton; Kuil, Joeri; Vahrmeijer, Alexander L.; Wendler, Thomas; Valdés-Olmos, Renato A.; van der Poel, Henk G.; van Leeuwen, Fijs W. B.
2012-05-01
Hybrid tracers that are both radioactive and fluorescent help extend the use of fluorescence-guided surgery to deeper structures. Such hybrid tracers facilitate preoperative surgical planning using (3D) scintigraphic images and enable synchronous intraoperative radio- and fluorescence guidance. Nevertheless, we previously found that improved orientation during laparoscopic surgery remains desirable. Here we illustrate how intraoperative navigation based on optical tracking of a fluorescence endoscope may help further improve the accuracy of hybrid surgical guidance. After feeding SPECT/CT images with an optical fiducial as a reference target to the navigation system, optical tracking could be used to position the tip of the fluorescence endoscope relative to the preoperative 3D imaging data. This hybrid navigation approach allowed us to accurately identify marker seeds in a phantom setup. The multispectral nature of the fluorescence endoscope enabled stepwise visualization of the two clinically approved fluorescent dyes, fluorescein and indocyanine green. In addition, the approach was used to navigate toward the prostate in a patient undergoing robot-assisted prostatectomy. Navigation of the tracked fluorescence endoscope toward the target identified on SPECT/CT resulted in real-time gradual visualization of the fluorescent signal in the prostate, thus providing an intraoperative confirmation of the navigation accuracy.
Lukasova, Katerina; Silva, Isadora P.; Macedo, Elizeu C.
2016-01-01
Analysis of eye movement patterns during tracking tasks represents a potential way to identify differences in the cognitive processing and motor mechanisms underlying reading in dyslexic children before the occurrence of school failure. The current study aimed to evaluate the pattern of eye movements in antisaccades, predictive saccades and visually guided saccades in typical readers and readers with developmental dyslexia. The study included 30 children (age M = 11; SD = 1.67), 15 diagnosed with developmental dyslexia (DG) and 15 regular readers (CG), matched by age, gender and school grade. Cognitive assessment was performed prior to the eye-tracking task during which both eyes were registered using the Tobii® 1750 eye-tracking device. The results demonstrated a lower correct antisaccades rate in dyslexic children compared to the controls (p < 0.001, DG = 25%, CG = 37%). Dyslexic children also made fewer saccades in predictive latency (p < 0.001, DG = 34%, CG = 46%, predictive latency within −300–120 ms with target as 0 point). No between-group difference was found for visually guided saccades. In this task, both groups showed shorter latency for right-side targets. The results indicated altered oculomotor behavior in dyslexic children, which has been reported in previous studies. We extend these findings by demonstrating impaired implicit learning of the target's time/position patterns in dyslexic children. PMID:27445945
The Deployment of Visual Attention
2006-03-01
targets: Evidence for memory-based control of attention. Psychonomic Bulletin & Review, 11(1), 71-76. Torralba, A. (2003). Modeling global scene... S., Fencsik, D. E., Tran, L., & Wolfe, J. M. (in press). How do we track invisible objects? Psychonomic Bulletin & Review. *Horowitz, T. S. (in press
Effects of sport expertise on representational momentum during timing control.
Nakamoto, Hiroki; Mori, Shiro; Ikudome, Sachi; Unenaka, Satoshi; Imanaka, Kuniyasu
2015-04-01
Sports involving fast visual perception require players to compensate for delays in neural processing of visual information. Memory for the final position of a moving object is distorted forward along its path of motion (i.e., "representational momentum," RM). This cognitive extrapolation of visual perception might compensate for the neural delay in interacting appropriately with a moving object. The present study examined whether experienced batters cognitively extrapolate the location of a fast-moving object and whether this extrapolation is associated with coincident timing control. Nine expert and nine novice baseball players performed a prediction motion task in which a target moved from one end of a straight 400-cm track at a constant velocity. In half of the trials, vision was suddenly occluded when the target reached the 200-cm point (occlusion condition). Participants had to press a button concurrently with the target arrival at the end of the track and verbally report their subjective assessment of the first target-occluded position. Experts showed larger RM magnitude (cognitive extrapolation) than did novices in the occlusion condition. RM magnitude and timing errors were strongly correlated in the fast velocity condition in both experts and novices, whereas in the slow velocity condition, a significant correlation appeared only in experts. This suggests that experts can cognitively extrapolate the location of a moving object according to their anticipation and, as a result, potentially circumvent neural processing delays. This process might be used to control response timing when interacting with moving objects.
NASA Astrophysics Data System (ADS)
Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian
2018-06-01
Infrared (IR) small target enhancement plays a significant role in modern infrared search and track (IRST) systems and is a basic technique for target detection and tracking. In this paper, a coarse-to-fine grey level mapping method using an improved sigmoid transformation and a saliency histogram is designed to enhance IR small targets under different backgrounds. In the rough-enhancement stage, the intensity histogram is modified via an improved sigmoid function so as to narrow the regular intensity range of the background as much as possible. In the fine-enhancement stage, a linear transformation is performed based on a saliency histogram constructed by averaging the cumulative saliency values provided by a saliency map. Compared with other typical methods, the presented method achieves better results in both visual quality and quantitative evaluation.
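The rough-enhancement stage can be pictured as a sigmoid grey-level remapping that compresses the background range while stretching intensities above it. The Python sketch below is a simplified stand-in; the gain and center choices are illustrative, not the paper's improved transformation.

import numpy as np

def sigmoid_grey_mapping(img, center=None, gain=0.05):
    # Remap grey levels through a sigmoid so the broad background range is
    # compressed and intensities above it (small bright targets) stretched.
    img = img.astype(float)
    if center is None:
        center = img.mean() + 2.0 * img.std()   # assumes background dominates
    mapped = 1.0 / (1.0 + np.exp(-gain * (img - center)))
    span = np.ptp(mapped)
    return (255.0 * (mapped - mapped.min()) / (span + 1e-9)).astype(np.uint8)

# Example: dim IR background with one small bright target
frame = np.random.normal(80, 5, (128, 128))
frame[60:62, 60:62] = 160.0
enhanced = sigmoid_grey_mapping(frame)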
Chiu, Hsiu-Ching; Halaki, Mark; O'Dwyer, Nicholas
2013-04-30
Most previous studies of associated reactions (ARs) in people with cerebral palsy have used observation scales, such as recording the degree of movement through observation. A sensitive quantitative method can detect ARs that are not readily visible. The aim of this study was to provide quantitative measures of ARs during a visual pursuit position tracking task. Twenty-three participants with hemiplegia (H) (mean +/- SD: 21y 8m +/- 11y 10m), twelve with quadriplegia (Q) (21y 5m +/- 10y 3m) and twenty-two with normal development (N) (21y 2m +/- 10y 10m) took part in the study. An upper limb visual pursuit tracking task was used to study ARs. The participants were required to follow a moving target with a response cursor via elbow flexion and extension movements. The occurrence of ARs was quantified by the overall coherence between the movements of the tracking and non-tracking limbs, and the amount of movement due to ARs was quantified by the amplitude of movement of the non-tracking limb. The amplitude of movement of the non-tracking limb indicated that the amount of ARs was larger in the Q group than in the H and N groups, with no significant differences between the H and N groups. The amplitude of movement of the non-tracking limb was larger during non-dominant than dominant tracking in all three groups. Some movements in the non-tracking limb were correlated with the tracking limb (correlated ARs) and some were not (uncorrelated ARs). The correlated ARs comprised less than 40% of the total ARs for all three groups. Correlated ARs were negatively associated with clinical evaluations, but uncorrelated ARs were not. The correlated and uncorrelated ARs appear to have different relationships with clinical evaluations, implying that the effect of ARs on upper limb activities could vary.
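Both measures described above are straightforward to compute. A minimal Python sketch using SciPy, with synthetic limb-position signals standing in for the recordings (the original analysis details may differ):

import numpy as np
from scipy.signal import coherence

fs = 100.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)
tracking = np.sin(2 * np.pi * 0.4 * t) + 0.1 * np.random.randn(t.size)
nontracking = 0.3 * np.sin(2 * np.pi * 0.4 * t) + 0.3 * np.random.randn(t.size)

# Occurrence of ARs: coherence between tracking and non-tracking limbs
f, coh = coherence(tracking, nontracking, fs=fs, nperseg=1024)
band = (f > 0.2) & (f < 0.6)      # around the 0.4 Hz tracking frequency
print("coherence near tracking frequency:", coh[band].mean())

# Amount of ARs: amplitude of the non-tracking limb, e.g. its RMS excursion
print("AR amplitude (RMS):", np.sqrt(np.mean(nontracking ** 2)))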
NASA Technical Reports Server (NTRS)
Leigh, R. J.; Thurston, S. E.; Sharpe, J. A.; Ranalli, P. J.; Hamid, M. A.
1987-01-01
The effects of deficient labyrinthine function on smooth visual tracking with the eyes and head were investigated, using ten patients with bilateral peripheral vestibular disease and ten normal controls. Active, combined eye-head tracking (EHT) was significantly better in patients than smooth pursuit with the eyes alone, whereas normal subjects pursued equally well in both cases. Compensatory eye movements during active head rotation in darkness were always less in patients than in normal subjects. These data were used to examine current hypotheses that postulate central cancellation of the vestibulo-ocular reflex (VOR) during EHT. A model that proposes summation of an integral smooth pursuit command and VOR/compensatory eye movements is consistent with the findings. Observation of passive EHT (visual fixation of a head-fixed target during en bloc rotation) appears to indicate that in this mode parametric gain changes contribute to modulation of the VOR.
Xie, Xilei; Tang, Fuyan; Shangguan, Xiaoyan; Che, Shiyi; Niu, Jinye; Xiao, Yongsheng; Wang, Xu; Tang, Bo
2017-06-13
Lyso-TPFP presents lysosomal targetability and an acidic pH-activatable response toward formaldehyde. Thus, it exclusively visualizes lysosomal formaldehyde and remains unresponsive to formaldehyde in the neutral cytosol and other organelles. In addition, two-photon fluorescence imaging endows Lyso-TPFP with the capability of tracking formaldehyde in situ in live cells and animals.
The Perception of the Higher Derivatives of Visual Motion.
1986-06-24
uniform velocity in one run, with a target moving with either an accelerating or decelerating motion on another run, and had to decide on which of... the two runs the motion was uniform. It was found that sensitivity to acceleration (as indicated by proportion of correct discriminations) decreased... 20 subjects had 8 tracking runs with each of the three types of moving target. In an experiment by Runeson (1975), one target (the stan-... The third
Orthographic versus semantic matching in visual search for words within lists.
Léger, Laure; Rouet, Jean-François; Ros, Christine; Vibert, Nicolas
2012-03-01
An eye-tracking experiment was performed to assess the influence of orthographic and semantic distractor words on visual search for words within lists. The target word (e.g., "raven") was either shown to participants before the search (literal search) or defined by its semantic category (e.g., "bird", categorical search). In both cases, the type of words included in the list affected visual search times and eye movement patterns. In the literal condition, the presence of orthographic distractors sharing initial and final letters with the target word strongly increased search times. Indeed, the orthographic distractors attracted participants' gaze and were fixated for longer times than other words in the list. The presence of semantic distractors related to the target word also increased search times, which suggests that significant automatic semantic processing of nontarget words took place. In the categorical condition, semantic distractors were expected to have a greater impact on the search task. As expected, the presence in the list of semantic associates of the target word led to target selection errors. However, semantic distractors did not significantly increase search times any more, whereas orthographic distractors still did. Hence, the visual characteristics of nontarget words can be strong predictors of the efficiency of visual search even when the exact target word is unknown. The respective impacts of orthographic and semantic distractors depended more on the characteristics of lists than on the nature of the search task.
Multiple-object tracking as a tool for parametrically modulating memory reactivation
Poppenk, J.; Norman, K.A.
2017-01-01
Converging evidence supports the “non-monotonic plasticity” hypothesis that although complete retrieval may strengthen memories, partial retrieval weakens them. Yet, the classic experimental paradigms used to study effects of partial retrieval are not ideally suited to doing so, because they lack the parametric control needed to ensure that the memory is activated to the appropriate degree (i.e., that there is some retrieval, but not enough to cause memory strengthening). Here we present a novel procedure designed to accommodate this need. After participants learned a list of word-scene associates, they completed a cued mental visualization task that was combined with a multiple-object tracking (MOT) procedure, which we selected for its ability to interfere with mental visualization in a parametrically adjustable way (by varying the number of MOT targets). We also used fMRI data to successfully train an “associative recall” classifier for use in this task: this classifier revealed greater memory reactivation during trials in which associative memories were cued while participants tracked one, rather than five MOT targets. However, the classifier was insensitive to task difficulty when recall was not taking place, suggesting it had indeed tracked memory reactivation rather than task difficulty per se. Consistent with the classifier findings, participants’ introspective ratings of visualization vividness were modulated by MOT task difficulty. In addition, we observed reduced classifier output and slowing of responses in a post-reactivation memory test, consistent with the hypothesis that partial reactivation, induced by MOT, weakened memory. These results serve as a “proof of concept” that MOT can be used to parametrically modulate memory retrieval – a property that may prove useful in future investigation of partial retrieval effects, e.g., in closed-loop experiments. PMID:28387587
Game theory-based visual tracking approach focusing on color and texture features.
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Chen, Chuanhua; Wang, Xin
2017-07-20
It is difficult for a single-feature tracking algorithm to achieve strong robustness in a complex environment. To solve this problem, we propose a multifeature fusion tracking algorithm based on game theory. Treating the color and texture features as two players, the algorithm accomplishes tracking by using a mean shift iterative formula to search for the Nash equilibrium of the game. The contributions of the different features are always kept in an optimal balance, so that the algorithm can take full advantage of feature fusion. According to the experimental results, the algorithm achieves good performance, especially under scene variation, target occlusion, and similar interference.
Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking
Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua
2014-01-01
To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative, and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also retrained to adapt to target appearance variations in a timely manner. Qualitative and quantitative evaluations on challenging image sequences compared with state-of-the-art algorithms demonstrate that the proposed tracking algorithm achieves more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252
NASA Technical Reports Server (NTRS)
Riley, D. R.; Miller, G. K., Jr.
1978-01-01
The effect of time delay in the visual and motion cues of a flight simulator on pilot performance was determined for the task of tracking a target aircraft that was oscillating sinusoidally in altitude only. An audio side task was used to ensure the subject was fully occupied at all times. The results indicate that, within the test grid employed, about the same acceptable time delay (250 msec) was obtained for a single aircraft (fighter type) by each of two subjects for both fixed-base and motion-base conditions. Acceptable time delay is defined as the largest amount of delay that can be inserted simultaneously into the visual and motion cues before performance degradation occurs. A statistical analysis of the data was made to establish this value of time delay. The audio side task provided quantitative data that documented the subject's work level.
Linte, Cristian A; White, James; Eagleson, Roy; Guiraudon, Gérard M; Peters, Terry M
2010-01-01
Virtual and augmented reality environments have been adopted in medicine as a means to enhance the clinician's view of the anatomy and facilitate the performance of minimally invasive procedures. Their value is truly appreciated during interventions where the surgeon cannot directly visualize the targets to be treated, such as during cardiac procedures performed on the beating heart. These environments must accurately represent the real surgical field and require seamless integration of pre- and intra-operative imaging, surgical tracking, and visualization technology in a common framework centered around the patient. This review begins with an overview of minimally invasive cardiac interventions, describes the architecture of a typical surgical guidance platform including imaging, tracking, registration and visualization, highlights both clinical and engineering accuracy limitations in cardiac image guidance, and discusses the translation of the work from the laboratory into the operating room together with typically encountered challenges.
Tracking the allocation of attention using human pupillary oscillations
Naber, Marnix; Alvarez, George A.; Nakayama, Ken
2013-01-01
The muscles that control the pupil are richly innervated by the autonomic nervous system. While there are central pathways that drive pupil dilations in relation to arousal, there is no anatomical evidence that cortical centers involved with visual selective attention innervate the pupil. In this study, we show that such connections must exist. Specifically, we demonstrate a novel Pupil Frequency Tagging (PFT) method, where oscillatory changes in stimulus brightness over time are mirrored by pupil constrictions and dilations. We find that the luminance-induced pupil oscillations are enhanced when covert attention is directed to the flicker stimulus and when targets are correctly detected in an attentional tracking task. These results suggest that the amplitudes of pupil responses closely follow the allocation of focal visual attention and the encoding of stimuli. PFT provides a new opportunity to study top-down visual attention itself, as well as to identify the pathways and mechanisms that support this unexpected phenomenon. PMID:24368904
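A minimal readout for PFT is the FFT amplitude of the pupil trace at the flicker ("tag") frequency. The Python sketch below illustrates this with synthetic data; the sampling rate and tag frequency are assumptions, not the study's parameters.

import numpy as np

def tagged_amplitude(pupil_trace, fs, tag_freq):
    # Amplitude of the pupil oscillation at the tagged flicker frequency.
    trace = pupil_trace - pupil_trace.mean()    # remove slow baseline
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - tag_freq))     # bin closest to the tag
    return 2.0 * np.abs(spectrum[k]) / trace.size

# Example: 60 s of pupil area at 60 Hz with a 1.2 Hz tagged flicker
fs, tag = 60.0, 1.2
t = np.arange(0, 60, 1 / fs)
pupil = 0.05 * np.sin(2 * np.pi * tag * t) + 0.02 * np.random.randn(t.size)
print(tagged_amplitude(pupil, fs, tag))         # ~0.05, larger when attended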
Robust Visual Tracking Revisited: From Correlation Filter to Template Matching.
Liu, Fanghui; Gong, Chen; Huang, Xiaolin; Zhou, Tao; Yang, Jie; Tao, Dacheng
2018-06-01
In this paper, we propose a novel matching-based tracker by investigating the relationship between template matching and the recently popular correlation filter based trackers (CFTs). Compared to the correlation operation in CFTs, a sophisticated similarity metric termed mutual buddies similarity is proposed to exploit the relationship of multiple reciprocal nearest neighbors for target matching. By doing so, our tracker obtains powerful discriminative ability in distinguishing target from background, as demonstrated by both empirical and theoretical analyses. Besides, instead of utilizing a single template with the improper updating scheme of CFTs, we design a novel online template updating strategy named memory, which aims to select a certain number of representative and reliable tracking results in history to construct the current stable and expressive template set. This scheme helps the proposed tracker to comprehensively understand target appearance variations and to recall stable results. Both qualitative and quantitative evaluations on two benchmarks suggest that the proposed tracking method performs favorably against some recently developed CFTs and other competitive trackers.
Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters.
Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng
2016-09-07
Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter methods perform favorably in robustness, accuracy, and speed. However, they have shortcomings when dealing with pervasive target scale variation, motion blur, and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) that significantly improves performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also handles the estimation of scale variation in many scenarios. We theoretically analyze the problem that motion poses for correlation filters and utilize the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF in addition to its ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs competitively against the top-ranked trackers.
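The paper's exact point sharpness function is not reproduced here, but a gradient-energy score captures the idea: motion blur smears edges and lowers the score between frames. A minimal sketch under that assumption:

import numpy as np

def patch_sharpness(patch):
    # Mean gradient energy of the target patch; drops when the patch is
    # blurred, so a sudden decrease can flag motion blur or fast motion.
    gy, gx = np.gradient(patch.astype(float))
    return np.mean(gx ** 2 + gy ** 2)

# Example: a crisp step edge versus a horizontally blurred copy
patch = np.zeros((32, 32))
patch[:, 16:] = 255.0
kernel = np.ones(7) / 7.0
blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, patch)
print(patch_sharpness(patch) > patch_sharpness(blurred))   # True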
A new method for tracking organ motion on diagnostic ultrasound images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kubota, Yoshiki, E-mail: y-kubota@gunma-u.ac.jp; Matsumura, Akihiko, E-mail: matchan.akihiko@gunma-u.ac.jp; Fukahori, Mai, E-mail: fukahori@nirs.go.jp
2014-09-15
Purpose: Respiratory-gated irradiation is effective in reducing the margins of a target in the case of abdominal organs, such as the liver, that change their position as a result of respiratory motion. However, existing technologies are incapable of directly measuring organ motion in real-time during radiation beam delivery. Hence, the authors proposed a novel quantitative organ motion tracking method involving the use of diagnostic ultrasound images; it is noninvasive and does not entail radiation exposure. In the present study, the authors have prospectively evaluated this proposed method. Methods: The method involved real-time processing of clinical ultrasound imaging data rather than organ monitoring; it comprised a three-dimensional ultrasound device, a respiratory sensing system, and two PCs for data storage and analysis. The study was designed to evaluate the effectiveness of the proposed method by tracking the gallbladder in one subject and a liver vein in another subject. To track a moving target organ, the method involved the control of a region of interest (ROI) that delineated the target. A tracking algorithm was used to control the ROI, and a large number of feature points and an error correction algorithm were used to achieve long-term tracking of the target. Tracking accuracy was assessed in terms of how well the ROI matched the center of the target. Results: The effectiveness of using a large number of feature points and the error correction algorithm in the proposed method was verified by comparing it with two simple tracking methods. The ROI could capture the center of the target for about 5 min in a cross-sectional image with changing position. Indeed, using the proposed method, it was possible to accurately track a target with a center deviation of 1.54 ± 0.9 mm. The computing time for one frame image using our proposed method was 8 ms. It is expected that it would be possible to track any soft-tissue organ or tumor with large deformations and changing cross-sectional position using this method. Conclusions: The proposed method achieved real-time processing and continuous tracking of the target organ for about 5 min. It is expected that our method will enable more accurate radiation treatment than is the case using indirect observational methods, such as the respiratory sensor method, because of direct visualization of the tumor. Results show that this tracking system facilitates safe treatment in clinical practice.
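A minimal version of the many-feature-point tracking idea (without the paper's error-correction algorithm) can be assembled from standard OpenCV calls, as sketched below; all parameters are illustrative.

import cv2
import numpy as np

def track_roi(prev_gray, next_gray, roi):
    # Track an ROI between two ultrasound frames: detect Shi-Tomasi corners
    # inside the ROI, follow them with pyramidal Lucas-Kanade optical flow,
    # and move the ROI by the median displacement of the surviving points.
    x, y, w, h = roi
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=5, mask=mask)
    if pts is None:
        return roi
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return roi
    dx, dy = np.median((new_pts[good] - pts[good]).reshape(-1, 2), axis=0)
    return (int(round(x + dx)), int(round(y + dy)), w, h)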
A comparison of visual and kinesthetic-tactual displays for compensatory tracking
NASA Technical Reports Server (NTRS)
Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.
1983-01-01
Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under certain conditions it can be an effective alternative or supplement to visual displays. In order to understand better how KT tracking compares with visual tracking, both a critical tracking task and stationary single-axis tracking tasks were conducted with and without velocity quickening. In the critical tracking task, the visual displays were superior; however, the quickened KT display was approximately equal to the unquickened visual display. In stationary tracking tasks, subjects adopted lag equalization with the quickened KT and visual displays, and mean-squared error scores were approximately equal. With the unquickened displays, subjects adopted lag-lead equalization, and the visual displays were superior. This superiority was partly due to the servomotor lag in the implementation of the KT display and partly due to modality differences.
A comparison of tracking with visual and kinesthetic-tactual displays
NASA Technical Reports Server (NTRS)
Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.
1981-01-01
Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under appropriate conditions it may be an effective means of providing visual workload relief. In order to better understand how KT tracking differs from visual tracking, both a critical tracking task and stationary single-axis tracking tasks were conducted with and without velocity quickening. On the critical tracking task, the visual displays were superior; however, the KT quickened display was approximately equal to the visual unquickened display. Mean squared error scores in the stationary tracking tasks for the visual and KT displays were approximately equal in the quickened conditions, and the describing functions were very similar. In the unquickened conditions, the visual display was superior. Subjects using the unquickened KT display exhibited a low frequency lead-lag that may be related to sensory adaptation.
Autonomous Visual Tracking of Stationary Targets Using Small Unmanned Aerial Vehicles
2004-06-01
[Figure-list fragment] Figure 43: Commanded and Actual Yaw Rates during Simulation. Figure 44: Setup for Hardware In Loop Simulation with AVDS and PerceptiVU. [Test conditions fragment] Simulations were conducted for the...
Eye Tracking Dysfunction in Schizophrenia: Characterization and Pathophysiology
Sereno, Anne B.; Gooding, Diane C.; O'Driscoll, Gillian A.
2011-01-01
Eye tracking dysfunction (ETD) is one of the most widely replicated behavioral deficits in schizophrenia and is over-represented in clinically unaffected first-degree relatives of schizophrenia patients. Here, we provide an overview of research relevant to the characterization and pathophysiology of this impairment. Deficits are most robust in the maintenance phase of pursuit, particularly during the tracking of predictable target movement. Impairments are also found in pursuit initiation and correlate with performance on tests of motion processing, implicating early sensory processing of motion signals. Taken together, the evidence suggests that ETD involves higher-order structures, including the frontal eye fields, which adjust the gain of the pursuit response to visual and anticipated target movement, as well as early parts of the pursuit pathway, including motion areas (the middle temporal area and the adjacent medial superior temporal area). Broader application of localizing behavioral paradigms in patient and family studies would be advantageous for refining the eye tracking phenotype for genetic studies. PMID:21312405
Construction and testing of a Scanning Laser Radar (SLR), phase 2
NASA Technical Reports Server (NTRS)
Flom, T.; Coombes, H. D.
1971-01-01
The scanning laser radar overall system is described. Block diagrams and photographs of the hardware are included with the system description. Detailed descriptions of all the subsystems that make up the scanning laser radar system are included. Block diagrams, photographs, and detailed optical and electronic schematics are used to help describe such subsystem hardware as the laser, beam steerer, receiver optics and detector, control and processing electronics, visual data displays, and the equipment used on the target. Tests were performed on the scanning laser radar to determine its acquisition and tracking performance and to determine its range and angle accuracies while tracking a moving target. The tests and test results are described.
A Near Term Approach to Embedded Training: Battle Command Visualization 101
2006-05-01
[Figure-list fragment] Example of Task Guidance area at the end of Module 1, Exercise 1 (Level 1). Figure 9: Example of Robo-Scout information, accessed through the... unmanned ground vehicle (UGV) sensor, the Robo-Scout. They will be equipped with both passive and active systems to provide target acquisition, real... [remainder of fragment is an unrecoverable table of PIR/track-update requirements]
NASA Astrophysics Data System (ADS)
Bagheri, Zahra M.; Cazzolato, Benjamin S.; Grainger, Steven; O'Carroll, David C.; Wiederman, Steven D.
2017-08-01
Objective. Many computer vision and robotic applications require the implementation of robust and efficient target-tracking algorithms on a moving platform. However, deployment of a real-time system is challenging, even with the computational power of modern hardware. Lightweight and low-powered flying insects, such as dragonflies, track prey or conspecifics within cluttered natural environments, illustrating an efficient biological solution to the target-tracking problem. Approach. We used our recent recordings from ‘small target motion detector’ neurons in the dragonfly brain to inspire the development of a closed-loop target detection and tracking algorithm. This model exploits facilitation, a slow build-up of response to targets which move along long, continuous trajectories, as seen in our electrophysiological data. To test performance in real-world conditions, we implemented this model on a robotic platform that uses active pursuit strategies based on insect behaviour. Main results. Our robot performs robustly in closed-loop pursuit of targets, despite a range of challenging conditions used in our experiments; low contrast targets, heavily cluttered environments and the presence of distracters. We show that the facilitation stage boosts responses to targets moving along continuous trajectories, improving contrast sensitivity and detection of small moving targets against textured backgrounds. Moreover, the temporal properties of facilitation play a useful role in handling vibration of the robotic platform. We also show that the adoption of feed-forward models which predict the sensory consequences of self-movement can significantly improve target detection during saccadic movements. Significance. Our results provide insight into the neuronal mechanisms that underlie biological target detection and selection (from a moving platform), as well as highlight the effectiveness of our bio-inspired algorithm in an artificial visual system.
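The facilitation mechanism can be caricatured as a slow leaky trace that multiplies the instantaneous response, so targets moving along continuous trajectories are boosted relative to transient clutter. A toy Python sketch under that assumption (the constants are illustrative, and the published model also spreads facilitation spatially):

import numpy as np

def facilitated_response(frame_responses, gain=0.3, decay=0.9):
    # Slow build-up at persistently responding locations; the trace then
    # boosts the instantaneous response at those locations.
    trace = np.zeros_like(frame_responses[0])
    out = []
    for r in frame_responses:
        trace = decay * trace + gain * r     # leaky accumulation
        out.append(r * (1.0 + trace))        # boost persistent targets
    return out

# A target persisting at one pixel ends up boosted; one-frame noise does not
frames = [np.zeros((5, 5)) for _ in range(10)]
for f in frames:
    f[2, 2] = 1.0                            # persistent small target
frames[4][0, 0] = 1.0                        # transient distracter
boosted = facilitated_response(frames)
print(boosted[-1][2, 2], boosted[-1][0, 0])  # target boosted, distracter not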
Robust Visual Tracking via Online Discriminative and Low-Rank Dictionary Learning.
Zhou, Tao; Liu, Fanghui; Bhaskar, Harish; Yang, Jie
2017-09-12
In this paper, we propose a novel and robust tracking framework based on online discriminative and low-rank dictionary learning. The primary aim of this paper is to obtain compact and low-rank dictionaries that can provide good discriminative representations of both target and background. We accomplish this by exploiting the recovery ability of low-rank matrices. That is, if we assume that the data from the same class are linearly correlated, then the corresponding basis vectors learned from the training set of each class render the dictionary approximately low-rank. The proposed dictionary learning technique incorporates a reconstruction error that improves the reliability of classification. Also, a multiconstraint objective function is designed to enable active learning of a discriminative and robust dictionary. Further, an optimal solution is obtained by iteratively computing the dictionary and coefficients, and by simultaneously learning the classifier parameters. Finally, a simple yet effective likelihood function is implemented to estimate the optimal state of the target during tracking. Moreover, to make the dictionary adaptive to the variations of the target and background during tracking, an online update criterion is employed while learning the new dictionary. Experimental results on a publicly available benchmark dataset have demonstrated that the proposed tracking algorithm performs better than other state-of-the-art trackers.
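The workhorse behind most low-rank dictionary updates is singular value thresholding, the proximal step for the nuclear norm. The generic Python sketch below shows that step in isolation; it is not the paper's full multiconstraint optimization.

import numpy as np

def singular_value_threshold(X, tau):
    # Shrink singular values toward zero; the result is the minimizer of the
    # nuclear-norm proximal objective, hence approximately low-rank.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

# Example: a rank-2 template matrix corrupted by noise is cleaned up
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
noisy = low_rank + 0.1 * rng.standard_normal((40, 30))
denoised = singular_value_threshold(noisy, tau=1.5)
print(np.linalg.matrix_rank(denoised, tol=1e-6))     # back to rank 2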
OpenCV and TYZX : video surveillance for tracking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Jim; Spencer, Andrew; Chu, Eric
2008-08-01
As part of the National Security Engineering Institute (NSEI) project, several sensors were developed in conjunction with an assessment algorithm. A camera system was developed in-house to track the locations of personnel within a secure room. In addition, a commercial, off-the-shelf (COTS) tracking system developed by TYZX was examined. TYZX is a Bay Area start-up that has developed its own tracking hardware and software which we use as COTS support for robust tracking. This report discusses the pros and cons of each camera system, how they work, a proposed data fusion method, and some visual results. Distributed, embedded image processing solutions show the most promise in their ability to track multiple targets in complex environments and in real-time. Future work on the camera system may include three-dimensional volumetric tracking by using multiple simple cameras, Kalman or particle filtering, automated camera calibration and registration, and gesture or path recognition.
Predictive encoding of moving target trajectory by neurons in the parabigeminal nucleus
Ma, Rui; Cui, He; Lee, Sang-Hun; Anastasio, Thomas J.
2013-01-01
Intercepting momentarily invisible moving objects requires internally generated estimations of target trajectory. We demonstrate here that the parabigeminal nucleus (PBN) encodes such estimations, combining sensory representations of target location, extrapolated positions of briefly obscured targets, and eye position information. Cui and Malpeli (Cui H, Malpeli JG. J Neurophysiol 89: 3128–3142, 2003) reported that PBN activity for continuously visible tracked targets is determined by retinotopic target position. Here we show that when cats tracked moving, blinking targets the relationship between activity and target position was similar for ON and OFF phases (400 ms for each phase). The dynamic range of activity evoked by virtual targets was 94% of that of real targets for the first 200 ms after target offset and 64% for the next 200 ms. Activity peaked at about the same best target position for both real and virtual targets. PBN encoding of target position takes into account changes in eye position resulting from saccades, even without visual feedback. Since PBN response fields are retinotopically organized, our results suggest that activity foci associated with real and virtual targets at a given target position lie in the same physical location in the PBN, i.e., a retinotopic as well as a rate encoding of virtual-target position. We also confirm that PBN activity is specific to the intended target of a saccade and is predictive of which target will be chosen if two are offered. A Bayesian predictor-corrector model is presented that conceptually explains the differences in the dynamic ranges of PBN neuronal activity evoked during tracking of real and virtual targets. PMID:23365185
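As a simple stand-in for the internal extrapolation described above, one can fit a velocity to the last visible samples and coast forward through the OFF phase. The Python sketch below uses a constant-velocity assumption, which is far cruder than the paper's Bayesian predictor-corrector model:

import numpy as np

def extrapolate_during_blink(samples, dt, steps):
    # Fit a straight line (constant velocity) to the visible samples and
    # extend it across the target-OFF phase.
    t = np.arange(len(samples)) * dt
    slope, intercept = np.polyfit(t, samples, 1)     # velocity estimate
    t_future = t[-1] + dt * np.arange(1, steps + 1)
    return slope * t_future + intercept

# Example: target at 10 deg/s, visible 400 ms, then occluded 400 ms
dt = 0.02                                            # 50 Hz samples (assumed)
visible = 10.0 * np.arange(0, 0.4, dt)               # positions in degrees
predicted = extrapolate_during_blink(visible, dt, steps=20)
print(predicted[-1])                                 # ~7.8 deg at ~800 ms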
Guidance for Development of a Flight Simulator Specification
2007-05-01
the simulated line of sight to the moon is less than one degree, and that the moon appears to move smoothly across the visual scene. The phase of the... Agencies have adopted the definition used by Optics Companies (this definition has also been adopted in this revision of the Air Force Guide... simulators that require tracking the target as it slews across the displayed scene, such as with air-to-ground or air-to-air combat tasks. Visual systems
Visual pop-out in barn owls: Human-like behavior in the avian brain.
Orlowski, Julius; Beissel, Christian; Rohn, Friederike; Adato, Yair; Wagner, Hermann; Ben-Shahar, Ohad
2015-01-01
Visual pop-out is a phenomenon by which the latency to detect a target in a scene is independent of the number of other elements, the distractors. Pop-out is an effective visual-search guidance that occurs typically when the target is distinct in one feature from the distractors, thus facilitating fast detection of predators or prey. However, apart from studies on primates, pop-out has been examined in few species and demonstrated thus far in rats, archer fish, and pigeons only. To fill this gap, here we study pop-out in barn owls. These birds are a unique model system for such exploration because their lack of eye movements dictates visual behavior dominated by head movements. Head saccades and interspersed fixation periods can therefore be tracked and analyzed with a head-mounted wireless microcamera--the OwlCam. Using this methodology we confronted two owls with scenes containing search arrays of one target among varying numbers (15-63) of similar looking distractors. We tested targets distinct either by orientation (Experiment 1) or luminance contrast (Experiment 2). Search time and the number of saccades until the target was fixated remained largely independent of the number of distractors in both experiments. This suggests that barn owls can exhibit pop-out during visual search, thus expanding the group of species and brain structures that can cope with this fundamental visual behavior. The utility of our automatic analysis method is further discussed for other species and scientific questions.
Quétard, Boris; Quinton, Jean-Charles; Colomb, Michèle; Pezzulo, Giovanni; Barca, Laura; Izaute, Marie; Appadoo, Owen Kevin; Mermillod, Martial
2015-09-01
Detecting a pedestrian while driving in the fog is one situation where the prior expectation about the target presence is integrated with the noisy visual input. We focus on how these sources of information influence the oculomotor behavior and are integrated within an underlying decision-making process. The participants had to judge whether high-/low-density fog scenes displayed on a computer screen contained a pedestrian or a deer by executing a mouse movement toward the response button (mouse-tracking). A variable road sign was added on the scene to manipulate expectations about target identity. We then analyzed the timing and amplitude of the deviation of mouse trajectories toward the incorrect response and, using an eye tracker, the detection time (before fixating the target) and the identification time (fixations on the target). Results revealed that expectation of the correct target results in earlier decisions with less deviation toward the alternative response, this effect being partially explained by the facilitation of target identification.
Automated 3D trajectory measuring of large numbers of moving particles.
Wu, Hai Shan; Zhao, Qi; Zou, Danping; Chen, Yan Qiu
2011-04-11
Complex dynamics of natural particle systems, such as insect swarms, bird flocks, and fish schools, have attracted great attention from scientists for years. Measuring the 3D trajectory of each individual in a group is vital for quantitative study of their dynamic properties, yet such empirical data are rare, mainly due to the challenges of maintaining the identities of large numbers of individuals with similar visual features and frequent occlusions. We here present an automatic and efficient algorithm to track the 3D motion trajectories of large numbers of moving particles using two video cameras. Our method solves this problem by formulating it as three linear assignment problems (LAPs). For each video sequence, the first LAP obtains 2D tracks of moving targets and is able to maintain target identities in the presence of occlusions; the second matches the visually similar targets across the two views via a novel technique named maximum epipolar co-motion length (MECL), which not only effectively reduces matching ambiguity but also further diminishes the influence of frequent occlusions; the last links 3D track segments into complete trajectories by computing a globally optimal assignment based on temporal and kinematic cues. Experimental results on simulated particle swarms with various particle densities validated the accuracy and robustness of the proposed method. As a real-world case, our method successfully acquired the 3D flight paths of a fruit fly (Drosophila melanogaster) group comprising hundreds of freely flying individuals. © 2011 Optical Society of America
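The LAP backbone of such a pipeline is easy to demonstrate with SciPy's assignment solver: build a cost matrix of candidate links and take the globally optimal matching. The sketch below covers only frame-to-frame 2D linking; the cross-view MECL matching and kinematic cues sit on top of it.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical detections in frame t and frame t+1
prev_pts = np.array([[10.0, 12.0], [40.0, 41.0], [70.0, 15.0]])
next_pts = np.array([[41.0, 43.0], [11.0, 13.0], [69.0, 17.0]])

# Cost of each candidate link = displacement between detections
cost = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)     # globally optimal assignment
for r, c in zip(rows, cols):
    print(f"track {r} -> detection {c} (cost {cost[r, c]:.2f})")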
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. High visual quality and precise localization of the tracked target are highly desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise; the reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequence, done by cropping several frames or all of the frames. The second step is tracking on the super-resolved images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution ones. In this research, a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation time. The method used for tracking is Camshift, whose advantage is a simple calculation based on the HSV color histogram, which copes well when the color of the object varies. The computational complexity and large memory requirements for the implementation of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and in good lighting conditions.
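The Camshift stage can be sketched with standard OpenCV calls: build a hue histogram of the target once, then back-project it into each (possibly super-resolved) frame and let CamShift shift and rescale the search window. Names and parameter values below are illustrative.

import cv2
import numpy as np

def make_hue_model(frame_bgr, window):
    # One-time target model: normalized hue histogram of the initial window
    x, y, w, h = window
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])
    return cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

def camshift_step(frame_bgr, hist, window):
    # Per-frame update: back-project the hue model and run CamShift
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    rot_box, window = cv2.CamShift(backproj, window, criteria)
    return rot_box, window   # the rotated box also carries scale/orientation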
Predicting 2D target velocity cannot help 2D motion integration for smooth pursuit initiation.
Montagnini, Anna; Spering, Miriam; Masson, Guillaume S
2006-12-01
Smooth pursuit eye movements reflect the temporal dynamics of bidimensional (2D) visual motion integration. When tracking a single, tilted line, initial pursuit direction is biased toward unidimensional (1D) edge motion signals, which are orthogonal to the line orientation. Over 200 ms, tracking direction is slowly corrected to finally match the 2D object motion during steady-state pursuit. We now show that repetition of line orientation and/or motion direction does not eliminate the transient tracking direction error nor change the time course of pursuit correction. Nonetheless, multiple successive presentations of a single orientation/direction condition elicit robust anticipatory pursuit eye movements that always go in the 2D object motion direction not the 1D edge motion direction. These results demonstrate that predictive signals about target motion cannot be used for an efficient integration of ambiguous velocity signals at pursuit initiation.
Visual tracking based on the sparse representation of the PCA subspace
NASA Astrophysics Data System (ADS)
Chen, Dian-bing; Zhu, Ming; Wang, Hui-li
2017-09-01
We construct a collaborative model of the sparse representation and the subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace, and then we employ an L1 regularization to restrict the sparsity of the residual term, an L2 regularization term to restrict the sparsity of the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. We then implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to get the global minimum of the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift caused by inaccurate updates. In the experiments, we test the algorithm on 9 sequences and compare the results with 5 state-of-the-art methods. According to the results, we can conclude that our algorithm is more robust than the other methods.
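With an orthonormal PCA basis, the model described above admits a simple alternating solver: a ridge step for the coefficients and a soft-thresholding step for the sparse residual. The Python sketch below is a minimal version under those assumptions; the regularization weights are illustrative.

import numpy as np

def soft(v, tau):
    # Soft-thresholding, the proximal operator of the L1 norm
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def pca_sparse_represent(x, U, lam1=0.1, lam2=0.05, iters=20):
    # Represent target x in PCA subspace U (orthonormal columns) with an
    # L2-regularized coefficient vector c and an L1-sparse residual e.
    e = np.zeros_like(x)
    for _ in range(iters):
        c = U.T @ (x - e) / (1.0 + lam2)    # ridge step (U orthonormal)
        e = soft(x - U @ c, lam1 / 2.0)     # sparse residual step
    recon_error = np.linalg.norm(x - U @ c - e) ** 2
    return c, e, recon_error                # small error => good candidate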
Objective Methods to Test Visual Dysfunction in the Presence of Cognitive Impairment
2015-12-01
the eye and 3) purposeful eye movements to track targets that are resolved. Major Findings: Three major objective tests of vision were successfully... developed and optimized to detect disease. These were 1) the pupil light reflex (either comparing the two eyes or independently evaluating each eye... separately for retina or optic nerve damage, 2) eye movement based analysis of target acquisition, fixation, and eccentric viewing as a means of
Brockhoff, Alisa; Huff, Markus
2016-10-01
Multiple object tracking (MOT) plays a fundamental role in processing and interpreting dynamic environments. Regarding the type of information utilized by the observer, recent studies reported evidence for the use of object features in an automatic, low-level manner. By introducing a novel paradigm that allowed us to combine tracking with a noninterfering top-down task, we tested whether a voluntary component can regulate the deployment of attention to task-relevant features in a selective manner. In four experiments we found conclusive evidence for a task-driven selection mechanism that guides attention during tracking: The observers were able to ignore or prioritize distinct objects. They marked the distinct (cued) object (target/distractor) more or less often than other objects of the same type (targets/distractors), but only when they had received an identification task that required them to actively process object features (cues) during tracking. These effects are discussed with regard to existing theoretical approaches to attentive tracking, gaze-cue usability as well as attentional readiness, a term that originally stems from research on attention capture and visual search. Our findings indicate that existing theories of MOT need to be adjusted to allow for flexible top-down, voluntary processing during tracking.
Object acquisition and tracking for space-based surveillance
NASA Astrophysics Data System (ADS)
1991-11-01
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase 1) and N00014-89-C-0015 (Phase 2). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time dependent, object dependent, and data dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
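A toy version of track-before-detect makes the contrast with detect-then-track concrete: integrate pixel intensities along candidate constant-velocity trajectories across the whole frame stack, so a dim mover accumulates energy that no single frame would reveal. A hedged Python sketch (wrap-around shifts and a tiny velocity set keep it minimal):

import numpy as np

def track_before_detect(frames, velocities):
    # For each velocity hypothesis, shift every frame back along the
    # hypothesized motion and sum; a consistent mover piles up energy at
    # its starting pixel. Keep the best score per pixel across hypotheses.
    n = len(frames)
    best = np.full(frames[0].shape, -np.inf)
    for vy, vx in velocities:
        acc = np.zeros(frames[0].shape)
        for t, frame in enumerate(frames):
            acc += np.roll(frame, shift=(-t * vy, -t * vx), axis=(0, 1))
        best = np.maximum(best, acc / n)
    return best

# Example: a target at roughly unit SNR drifting right, invisible per frame
rng = np.random.default_rng(1)
frames = [rng.normal(0, 1, (32, 32)) for _ in range(8)]
for t in range(8):
    frames[t][5, 3 + t] += 1.0
hyp = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]
score = track_before_detect(frames, hyp)
print(np.unravel_index(score.argmax(), score.shape))   # likely near (5, 3)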
Object acquisition and tracking for space-based surveillance. Final report, Dec 88-May 90
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-11-27
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase I) and N00014-89-C-0015 (Phase II). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time dependent, object-dependent, and data-dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
STS-47 Payload Specialist Mohri conducts visual stability experiment in SLJ
1992-09-20
STS047-204-006 (12 - 20 Sept 1992) --- Dr. Mamoru Mohri, payload specialist representing Japan's National Space Development Agency (NASDA), participates in an experiment designed to learn more about Space Adaptation Syndrome (SAS). The experiment is titled, "Comparative Measurement of Visual Stability in Earth and Cosmic Space." During the experiment, Dr. Mohri tracked a flickering light target while eye movements and neck muscle tension were measured. This 45-degree angle position was one of four studied during the eight-day Spacelab-J mission.
Grapheme-color synesthesia influences overt visual attention.
Carriere, Jonathan S A; Eaton, Daniel; Reynolds, Michael G; Dixon, Mike J; Smilek, Daniel
2009-02-01
For individuals with grapheme-color synesthesia, achromatic letters and digits elicit vivid perceptual experiences of color. We report two experiments that evaluate whether synesthesia influences overt visual attention. In these experiments, two grapheme-color synesthetes viewed colored letters while their eye movements were monitored. Letters were presented in colors that were either congruent or incongruent with the synesthetes' colors. Eye tracking analysis showed that synesthetes exhibited a color congruity bias-a propensity to fixate congruently colored letters more often and for longer durations than incongruently colored letters-in a naturalistic free-viewing task. In a more structured visual search task, this congruity bias caused synesthetes to rapidly fixate and identify congruently colored target letters, but led to problems in identifying incongruently colored target letters. The results are discussed in terms of their implications for perception in synesthesia.
A visual tracking method based on improved online multiple instance learning
NASA Astrophysics Data System (ADS)
He, Xianhui; Wei, Yuxing
2016-09-01
Visual tracking is an active research topic in the field of computer vision and has been well studied in the last decades. The method based on multiple instance learning (MIL) was recently introduced into the tracking task and can effectively alleviate the problem of template drift. However, the MIL method has relatively poor running efficiency and accuracy, because its strong-classifier update strategy is complicated and the classifier update rate does not always match the change in the target's appearance. In this paper, we present a novel online effective MIL (EMIL) tracker. A new update strategy for the strong classifier is proposed to improve the running efficiency of the MIL method. In addition, to improve the tracking accuracy and stability of the MIL method, a new dynamic mechanism for renewing the classifier's learning rate and a variable search window are proposed. Experimental results show that our method performs well in complex scenes, with strong stability and high efficiency.
NASA Astrophysics Data System (ADS)
Potter, Michael; Bensch, Alexander; Dawson-Elli, Alexander; Linte, Cristian A.
2015-03-01
In minimally invasive surgical interventions direct visualization of the target area is often not available. Instead, clinicians rely on images from various sources, along with surgical navigation systems for guidance. These spatial localization and tracking systems function much like the Global Positioning Systems (GPS) that we are all well familiar with. In this work we demonstrate how the video feed from a typical camera, which could mimic a laparoscopic or endoscopic camera used during an interventional procedure, can be used to identify the pose of the camera with respect to the viewed scene and augment the video feed with computer-generated information, such as rendering of internal anatomy not visible beyond the imaged surface, resulting in a simple augmented reality environment. This paper describes the software and hardware environment and methodology for augmenting the real world with virtual models extracted from medical images to provide enhanced visualization beyond the surface view achieved using traditional imaging. Following intrinsic and extrinsic camera calibration, the technique was implemented and demonstrated using a LEGO structure phantom, as well as a 3D-printed patient-specific left atrial phantom. We assessed the quality of the overlay according to fiducial localization, fiducial registration, and target registration errors, as well as the overlay offset error. Using the software extensions we developed in conjunction with common webcams it is possible to achieve tracking accuracy comparable to that seen with significantly more expensive hardware, leading to target registration errors on the order of 2 mm.
Keep your eyes on the ball: smooth pursuit eye movements enhance prediction of visual motion.
Spering, Miriam; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R
2011-04-01
Success of motor behavior often depends on the ability to predict the path of moving objects. Here we asked whether tracking a visual object with smooth pursuit eye movements helps to predict its motion direction. We developed a paradigm, "eye soccer," in which observers had to either track or fixate a visual target (ball) and judge whether it would have hit or missed a stationary vertical line segment (goal). Ball and goal were presented briefly for 100-500 ms and disappeared from the screen together before the perceptual judgment was prompted. In pursuit conditions, the ball moved towards the goal; in fixation conditions, the goal moved towards the stationary ball, resulting in similar retinal stimulation during pursuit and fixation. We also tested the condition in which the goal was fixated and the ball moved. Motion direction prediction was significantly better in pursuit than in fixation trials, regardless of whether ball or goal served as fixation target. In both fixation and pursuit trials, prediction performance was better when eye movements were accurate. Performance also increased with shorter ball-goal distance and longer presentation duration. A longer trajectory did not affect performance. During pursuit, an efference copy signal might provide additional motion information, leading to the advantage in motion prediction.
NASA Technical Reports Server (NTRS)
Jagacinski, R. J.; Miller, D. P.; Gilson, R. D.
1979-01-01
The feasibility of using the critical tracking task to evaluate kinesthetic-tactual displays was examined. The test subjects were asked to control a first-order unstable system with a continuously decreasing time constant by using either visual or tactual unidimensional displays. The results indicate that the critical tracking task is both a feasible and a reliable methodology for assessing tactual tracking. Further, the approximately equal effects of quickening for the tactual and visual displays demonstrate that the critical tracking methodology is as sensitive and valid a measure for tactual tracking as it is for visual tracking.
Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters
Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng
2016-01-01
Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter (CF) methods perform favorably in robustness, accuracy and speed. However, they also have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also solves the estimation of scale variation in many scenarios. We theoretically analyze the problems that motion poses for CFs and utilize the point-sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF in addition to its ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs advantageously compared with the top-ranked trackers. PMID:27618046
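The abstract does not specify the authors' point-sharpness function; as a rough illustration, a variance-of-Laplacian score is a common stand-in for judging motion blur in a target patch. A minimal sketch (Python/OpenCV; the blur threshold is a made-up placeholder, not the paper's value):

```python
import cv2

def patch_sharpness(patch_bgr):
    """Variance of the Laplacian as a simple sharpness score:
    low values suggest motion blur in the target patch."""
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def motion_state(patch_bgr, blur_threshold=100.0):
    """Classify the target's motion state from patch sharpness.
    The threshold is scene-dependent and would need tuning."""
    return "blurred/fast" if patch_sharpness(patch_bgr) < blur_threshold else "sharp/slow"
```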
Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.
Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing
2017-08-23
Object tracking is a crucial research subfield in computer vision and it has wide applications in navigation, robotics and military applications and so on. In this paper, the real-time visualization of 3D point clouds data based on the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved, and on the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation for outliers, View Feature Histogram (VFH) feature extraction, establishing object models and searching matching a moving spherical target, the Kalman filter and adaptive particle filter are used to estimate in real-time the position of a moving spherical target. The experimental results show that the Kalman filter has the advantages of high efficiency while adaptive particle filter has the advantages of high robustness and high precision when tested and validated on three kinds of scenes under the condition of target partial occlusion and interference, different moving speed and different trajectories. The research can be applied in the natural environment of fruit identification and tracking, robot navigation and control and other fields.
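As an illustration of the Kalman-filter stage described above, a minimal 3D constant-velocity filter for the sphere's centroid might look like the following sketch (the noise parameters are placeholders, not the paper's values):

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 3D constant-velocity Kalman filter.
    State: [x, y, z, vx, vy, vz]; measurement: [x, y, z]."""
    def __init__(self, dt, process_var=1.0, meas_var=0.05):
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)           # position integrates velocity
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = process_var * np.eye(6)
        self.R = meas_var * np.eye(3)
        self.x = np.zeros(6)
        self.P = np.eye(6)

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured sphere centroid z = [x, y, z]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]                          # filtered position
```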
Tracking Honey Bees Using LIDAR (Light Detection and Ranging) Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
BENDER, SUSAN FAE ANN; RODACY, PHILIP J.; SCHMITT, RANDAL L.
The Defense Advanced Research Projects Agency (DARPA) has recognized that biological and chemical toxins are a real and growing threat to troops, civilians, and the ecosystem. The Explosives Components Facility at Sandia National Laboratories (SNL) has been working with the University of Montana, the Southwest Research Institute, and other agencies to evaluate the feasibility of directing honeybees to specific targets, and for environmental sampling of biological and chemical "agents of harm". Recent work has focused on finding and locating buried landmines and unexploded ordnance (UXO). Tests have demonstrated that honeybees can be trained to efficiently and accurately locate explosive signatures in the environment. However, it is difficult to visually track the bees and determine precisely where the targets are located. Video equipment is not practical due to its limited resolution and range. In addition, it is often unsafe to install such equipment in a field. A technology is needed to provide investigators with the standoff capability to track bees and accurately map the location of the suspected targets. This report documents Light Detection and Ranging (LIDAR) tests that were performed by SNL. These tests have shown that a LIDAR system can be used to track honeybees. The LIDAR system can provide both the range and coordinates of the target so that the location of buried munitions can be accurately mapped for subsequent removal.
A Novel Hybrid Mental Spelling Application Based on Eye Tracking and SSVEP-Based BCI
Stawicki, Piotr; Gembler, Felix; Rezeika, Aya; Volosyak, Ivan
2017-01-01
Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs), as well as eye-tracking devices, provide a pathway for re-establishing communication for people with severe disabilities. We fused these control techniques into a novel eye-tracking/SSVEP hybrid system, which utilizes eye tracking for initial rough selection and SSVEP technology for fine target activation. Based on our previous studies, only four stimuli were used for the SSVEP aspect, granting sufficient control for most BCI users. As eye-tracking data are not used for activation of letters, false positives due to inappropriate dwell times are avoided. This novel approach combines the high speed of eye-tracking systems and the high classification accuracies of low-target SSVEP-based BCIs, leading to an optimal combination of both methods. We evaluated the accuracy and speed of the proposed hybrid system with a 30-target spelling application implementing all three control approaches (pure eye tracking, SSVEP and the hybrid system) with 32 participants. Although the highest information transfer rates (ITRs) were achieved with pure eye tracking, a considerable number of participants were not able to gain sufficient control over the stand-alone eye-tracking device or the pure SSVEP system (78.13% and 75% of the participants reached reliable control, respectively). In this respect, the proposed hybrid was the most universal (over 90% of users achieved reliable control) and outperformed the pure SSVEP system in terms of speed and user friendliness. The presented hybrid system might offer communication to a wider range of users in comparison to the standard techniques. PMID:28379187
Saiki, Jun
2002-01-01
Research on change blindness and transsaccadic memory revealed that a limited amount of information is retained across visual disruptions in visual working memory. It has been proposed that visual working memory can hold four to five coherent object representations. To investigate their maintenance and transformation in dynamic situations, I devised an experimental paradigm called multiple-object permanence tracking (MOPT) that measures memory for multiple feature-location bindings in dynamic situations. Observers were asked to detect any color switch in the middle of a regular rotation of a pattern with multiple colored disks behind an occluder. The color-switch detection performance dramatically declined as the pattern rotation velocity increased, and this effect of object motion was independent of the number of targets. The MOPT task with various shapes and colors showed that color-shape conjunctions are not available in the MOPT task. These results suggest that even completely predictable motion severely reduces our capacity of object representations, from four to only one or two.
Using an auditory sensory substitution device to augment vision: evidence from eye movements.
Wright, Thomas D; Margolis, Aaron; Ward, Jamie
2015-03-01
Sensory substitution devices convert information normally associated with one sense into another sense (e.g. converting vision into sound). This is often done to compensate for an impaired sense. The present research uses a multimodal approach in which both natural vision and sound-from-vision ('soundscapes') are simultaneously presented. Although there is a systematic correspondence between what is seen and what is heard, we introduce a local discrepancy between the signals (the presence of a target object that is heard but not seen) that the participant is required to locate. In addition to behavioural responses, the participants' gaze is monitored with eye-tracking. Although the target object is only presented in the auditory channel, behavioural performance is enhanced when visual information relating to the non-target background is presented. In this instance, vision may be used to generate predictions about the soundscape that enhances the ability to detect the hidden auditory object. The eye-tracking data reveal that participants look for longer in the quadrant containing the auditory target even when they subsequently judge it to be located elsewhere. As such, eye movements generated by soundscapes reveal the knowledge of the target location that does not necessarily correspond to the actual judgment made. The results provide a proof of principle that multimodal sensory substitution may be of benefit to visually impaired people with some residual vision and, in normally sighted participants, for guiding search within complex scenes.
A ground moving target emergency tracking method for catastrophe rescue
NASA Astrophysics Data System (ADS)
Zhou, X.; Li, D.; Li, G.
2014-11-01
In recent years, great disasters have occurred frequently, and disaster management tests the emergency response capability of governments and societies all over the world. Immediately after the occurrence of a great disaster (e.g., an earthquake), a massive nationwide rescue and relief operation must be launched instantly. In order to improve the efficiency of emergency rescue organization, the organizers need to manage information about the rescue teams, including their real-time locations, the equipment they carry, the technical skills of the rescuers, and so on. One of the key factors for the success of emergency operations is knowing the real-time locations of the rescuers. Real-time tracking methods are currently used to track professional rescue teams, but volunteer participation plays an increasingly important role in great disasters. However, continuous real-time tracking of volunteers raises many problems, e.g., privacy leakage and expensive data consumption, which may reduce volunteers' enthusiasm for participating in catastrophe rescue. In fact, a great disaster is a small-probability event, so it is not necessary to track the volunteers (or even the rescue teams) all the time. To solve this problem, a ground moving target emergency tracking method for catastrophe rescue is presented in this paper. In this method, handheld devices that use GPS technology to provide the user's location (e.g., smartphones) serve as the positioning equipment. An emergency tracking information database, including the ID of each ground moving target (rescue teams and volunteers), the communication number of the handheld device associated with the target, the usual living region, etc., is built in advance by registration. When a catastrophe happens, the ground moving targets living close to the disaster area are filtered by their usual living regions, and an activation short message is sent to the selected targets through the communication numbers of their handheld devices. The handheld devices receive and identify the activation short message and send their current location information to the server, triggering the emergency tracking mode. The real-time locations of the filtered targets can be shown on the organizer's screen, and the organizer can assign rescue tasks to the rescue teams and volunteers based on their real-time locations. A prototype of the ground moving target emergency tracking system was implemented using Oracle 11g, Visual Studio 2010 C#, Android, SMS Modem, and the Google Maps API.
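A minimal sketch of the filter-and-activate workflow described above, assuming a hypothetical registry schema and SMS-sending callback (none of these names come from the paper):

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def activate_nearby_targets(registry, disaster_latlon, radius_km, send_sms):
    """Filter registered targets by their usual living region and
    send the activation short message to the selected ones.
    `registry` entries and `send_sms` are hypothetical stand-ins."""
    lat0, lon0 = disaster_latlon
    selected = [t for t in registry
                if haversine_km(t["home_lat"], t["home_lon"], lat0, lon0) <= radius_km]
    for target in selected:
        send_sms(target["phone"], "EMERGENCY: report your current GPS location")
    return [t["id"] for t in selected]
```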
Brocher, Andreas; Harbecke, Raphael; Graf, Tim; Memmert, Daniel; Hüttermann, Stefanie
2018-03-07
We tested the link between pupil size and the task effort involved in covert shifts of visual attention. The goal of this study was to establish pupil size as a marker of attentional shifting in the absence of luminance manipulations. In three experiments, participants evaluated two stimuli that were presented peripherally, appearing equidistant from and on opposite sides of eye fixation. The angle between eye fixation and the peripherally presented target stimuli varied from 12.5° to 42.5°. The evaluation of more distant stimuli led to poorer performance than did the evaluation of more proximal stimuli throughout our study, confirming that the former required more effort than the latter. In addition, in Experiment 1 we found that pupil size increased with increasing angle and that this effect could not be reduced to the operation of low-level visual processes in the task. In Experiment 2 the pupil dilated more strongly overall when participants evaluated the target stimuli, which required shifts of attention, than when they merely reported on the target's presence versus absence. Both conditions yielded larger pupils for more distant than for more proximal stimuli, however. In Experiment 3, we manipulated task difficulty more directly, by changing the contrast at which the target stimuli were presented. We replicated the results from Experiment 1 only with the high-contrast stimuli. With stimuli of low contrast, ceiling effects in pupil size were observed. Our data show that the link between task effort and pupil size can be used to track the degree to which an observer covertly shifts attention to or detects stimuli in peripheral vision.
The effects of control-display gain on performance of race car drivers in an isometric braking task.
de Winter, J C F; de Groot, S
2012-12-01
To minimise lap times during car racing, it is important to build up brake forces rapidly and maintain precise control. We examined the effect of the amplification factor (gain) between brake pedal force and a visually represented output value on a driver's ability to track a target value. The test setup was a formula racing car cockpit fitted with an isometric brake pedal. Thirteen racing drivers performed tracking tasks with four control-display gains and two target functions: a step function (35 trials per gain) and a multisine function (15 trials per gain). The control-display gain had only minor effects on root mean-squared error between output value and target value, but it had large effects on build-up speed, overshoot, within-participants variability, and self-reported physical load. The results confirm the hypothesis that choosing an optimum gain involves balancing stability against physical effort.
Suppression of biodynamic interference in head-tracked teleoperation
NASA Technical Reports Server (NTRS)
Lifshitz, S.; Merhav, S. J.; Grunwald, A. J.; Tucker, G. E.; Tischler, M. B.
1991-01-01
The utility of helmet-tracked sights to provide pointing commands for teleoperation of cameras, lasers, or antennas in aircraft is degraded by the presence of uncommanded, involuntary head motion, referred to as biodynamic interference. This interference limits the achievable precision required in pointing tasks. The noise contributions due to biodynamic interference consist of an additive component which is correlated with aircraft vibration and an uncorrelated, nonadditive component, referred to as remnant. An experimental simulation study is described which investigated the improvements achievable in pointing and tracking precision using dynamic display shifting in the helmet-mounted display. The experiment was conducted in a six degree of freedom motion base simulator with an emulated helmet-mounted display. Highly experienced pilot subjects performed precision head-pointing tasks while manually flying a visual flight-path tracking task. Four schemes using adaptive and low-pass filtering of the head motion were evaluated to determine their effects on task performance and pilot workload in the presence of whole-body vibration characteristic of helicopter flight. The results indicate that, for tracking tasks involving continuously moving targets, improvements of up to 70 percent can be achieved in percent on-target dwelling time and of up to 35 percent in rms tracking error, with the adaptive plus low-pass filter configuration. The results with the same filter configuration for the task of capturing randomly-positioned, stationary targets show an increase of up to 340 percent in the number of targets captured and an improvement of up to 24 percent in the average capture time. The adaptive plus low-pass filter combination was considered to exhibit the best overall display dynamics by each of the subjects.
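The abstract does not detail the adaptive-filter design; a generic least-mean-squares (LMS) noise canceller, which removes the vibration-correlated component of measured head motion using an accelerometer reference signal, conveys the idea. A sketch under those assumptions (step size and tap count are illustrative):

```python
import numpy as np

def lms_cancel(head_pos, vibration_ref, mu=0.01, n_taps=16):
    """Adaptive (LMS) cancellation of the vibration-correlated component
    of measured head position, using a vibration reference signal.
    Returns the residual, i.e. the estimated voluntary head motion."""
    w = np.zeros(n_taps)
    out = np.zeros_like(head_pos)
    for n in range(n_taps, len(head_pos)):
        x = vibration_ref[n - n_taps:n][::-1]   # most recent reference samples
        e = head_pos[n] - w @ x                 # error = head motion minus estimate
        w += 2 * mu * e * x                     # LMS weight update
        out[n] = e
    return out
```

A low-pass stage, as in the best-performing configuration above, could then smooth the residual before it drives the display.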
Active Guidance of a Handheld Micromanipulator using Visual Servoing.
Becker, Brian C; Voros, Sandrine; Maclachlan, Robert A; Hager, Gregory D; Riviere, Cameron N
2009-05-12
In microsurgery, a surgeon often deals with anatomical structures of sizes that are close to the limit of the human hand accuracy. Robotic assistants can help to push beyond the current state of practice by integrating imaging and robot-assisted tools. This paper demonstrates control of a handheld tremor reduction micromanipulator with visual servo techniques, aiding the operator by providing three behaviors: snap-to, motion-scaling, and standoff-regulation. A stereo camera setup viewing the workspace under high magnification tracks the tip of the micromanipulator and the desired target object being manipulated. Individual behaviors activate in task-specific situations when the micromanipulator tip is in the vicinity of the target. We show that the snap-to behavior can reach and maintain a position at a target with an accuracy of 17.5 ± 0.4μm Root Mean Squared Error (RMSE) distance between the tip and target. Scaling the operator's motions and preventing unwanted contact with non-target objects also provides a larger margin of safety.
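As a sketch of the snap-to and standoff-regulation behaviors, a simple proportional visual-servo law might be written as follows (gains, units, and limits are assumptions, not the authors' values):

```python
import numpy as np

def snap_to_velocity(tip_xyz, target_xyz, gain=2.0, standoff_um=0.0, v_max=500.0):
    """Proportional visual-servo law: command a tip velocity toward the
    target, stopping at an optional standoff distance.
    Units assumed: micrometers and micrometers/second."""
    err = np.asarray(target_xyz) - np.asarray(tip_xyz)
    dist = np.linalg.norm(err)
    if dist <= standoff_um or dist == 0.0:
        return np.zeros(3)                       # standoff regulation: hold position
    v = gain * (dist - standoff_um) * (err / dist)
    n = np.linalg.norm(v)
    return v if n <= v_max else v * (v_max / n)  # saturate commanded speed
```

Motion scaling, the third behavior, would multiply the operator's measured hand velocity by a factor below one instead of commanding motion toward a tracked target.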
NASA Astrophysics Data System (ADS)
Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.
2007-03-01
We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
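The camera-pose-estimation step can be illustrated with OpenCV's standard PnP solver; the function and data layout below are an assumption about how such a step might be coded, not the authors' MITK implementation:

```python
import cv2
import numpy as np

def estimate_camera_pose(aid_points_3d, aid_points_2d, K, dist_coeffs):
    """Estimate endoscope-camera pose from segmented navigation aids.
    aid_points_3d: Nx3 aid positions in the TRUS/planning frame;
    aid_points_2d: Nx2 detected aid points in the video image;
    K: 3x3 camera matrix from intrinsic calibration."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(aid_points_3d, dtype=np.float64),
        np.asarray(aid_points_2d, dtype=np.float64),
        K, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)    # rotation matrix from Rodrigues vector
    return R, tvec                # camera pose: x_cam = R @ x_world + tvec
```

With the pose in hand, planning-frame anatomy can be projected through K into the video image to produce the Augmented Reality overlay.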
360-Degree Visual Detection and Target Tracking on an Autonomous Surface Vehicle
NASA Technical Reports Server (NTRS)
Wolf, Michael T; Assad, Christopher; Kuwata, Yoshiaki; Howard, Andrew; Aghazarian, Hrand; Zhu, David; Lu, Thomas; Trebi-Ollennu, Ashitey; Huntsberger, Terry
2010-01-01
This paper describes perception and planning systems of an autonomous sea surface vehicle (ASV) whose goal is to detect and track other vessels at medium to long ranges and execute responses to determine whether the vessel is adversarial. The Jet Propulsion Laboratory (JPL) has developed a tightly integrated system called CARACaS (Control Architecture for Robotic Agent Command and Sensing) that blends the sensing, planning, and behavior autonomy necessary for such missions. Two patrol scenarios are addressed here: one in which the ASV patrols a large harbor region and checks for vessels near a fixed asset on each pass and one in which the ASV circles a fixed asset and intercepts approaching vessels. This paper focuses on the ASV's central perception and situation awareness system, dubbed Surface Autonomous Visual Analysis and Tracking (SAVAnT), which receives images from an omnidirectional camera head, identifies objects of interest in these images, and probabilistically tracks the objects' presence over time, even as they may exist outside of the vehicle's sensor range. The integrated CARACaS/SAVAnT system has been implemented on U.S. Navy experimental ASVs and tested in on-water field demonstrations.
Top-down knowledge modulates onset capture in a feedforward manner.
Becker, Stefanie I; Lewis, Amanda J; Axtens, Jenna E
2017-04-01
How do we select behaviourally important information from cluttered visual environments? Previous research has shown that both top-down, goal-driven factors and bottom-up, stimulus-driven factors determine which stimuli are selected. However, it is still debated when top-down processes modulate visual selection. According to a feedforward account, top-down processes modulate visual processing even before the appearance of any stimuli, whereas others claim that top-down processes modulate visual selection only at a late stage, via feedback processing. In line with such a dual stage account, some studies found that eye movements to an irrelevant onset distractor are not modulated by its similarity to the target stimulus, especially when eye movements are launched early (within 150 ms post-stimulus onset). However, in these studies the target transiently changed colour due to a colour after-effect that occurred during premasking, and the time course analyses were incomplete. The present study tested the feedforward account against the dual stage account in two eye tracking experiments, with and without colour after-effects (Exp. 1), as well as when the target colour varied randomly and observers were informed of the target colour with a word cue (Exp. 2). The results showed that top-down processes modulated the earliest eye movements to the onset distractors (latencies under 150 ms), without incurring any costs for selection of target-matching distractors. These results unambiguously support a feedforward account of top-down modulation.
A model for the pilot's use of motion cues in roll-axis tracking tasks
NASA Technical Reports Server (NTRS)
Levison, W. H.; Junker, A. M.
1977-01-01
Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration-rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.
Control of articulated snake robot under dynamic active constraints.
Kwok, Ka-Wai; Vitiello, Valentina; Yang, Guang-Zhong
2010-01-01
Flexible, ergonomically enhanced surgical robots have important applications to transluminal endoscopic surgery, for which path-following and dynamic shape conformance are essential. In this paper, kinematic control of a snake robot for motion stabilisation under dynamic active constraints is addressed. The main objective is to enable the robot to track the visual target accurately and steadily on deforming tissue whilst conforming to pre-defined anatomical constraints. The motion tracking can also be augmented with manual control. By taking into account the physical limits in terms of maximum frequency response of the system (manifested as a delay between the input of the manipulator and the movement of the end-effector), we show the importance of visual-motor synchronisation for performing accurate smooth pursuit movements. Detailed user experiments are performed to demonstrate the practical value of the proposed control mechanism.
Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud
2017-01-01
In this paper, we present a real-time approach that allows tracking of deformable structures in 3D ultrasound sequences. Our method consists in obtaining the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation in spite of different ultrasound shortcomings, including speckle noise, large shadows and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.
Coding visual features extracted from video sequences.
Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2014-05-01
Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
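The rate-distortion mode decision can be sketched as a Lagrangian cost comparison between intra- and inter-frame coding of a descriptor; the codec objects below are hypothetical stand-ins, not part of the proposed architecture:

```python
def choose_coding_mode(descriptor, intra_codec, inter_codec, lam):
    """Rate-distortion mode decision: pick the mode minimizing
    J = D + lambda * R. Codec objects here are hypothetical stand-ins
    exposing encode(descriptor) -> (bits, distortion)."""
    candidates = {"intra": intra_codec, "inter": inter_codec}
    best_mode, best_cost = None, float("inf")
    for mode, codec in candidates.items():
        bits, distortion = codec.encode(descriptor)
        cost = distortion + lam * bits           # Lagrangian RD cost
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```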
Scale-adaptive compressive tracking with feature integration
NASA Astrophysics Data System (ADS)
Liu, Wei; Li, Jicheng; Chen, Xiao; Li, Shuxin
2016-05-01
Numerous tracking-by-detection methods have been proposed for robust visual tracking, among which compressive tracking (CT) has obtained some promising results. A scale-adaptive CT method based on multifeature integration is presented to improve the robustness and accuracy of CT. We introduce a keypoint-based model to achieve the accurate scale estimation, which can additionally give a prior location of the target. Furthermore, by the high efficiency of data-independent random projection matrix, multiple features are integrated into an effective appearance model to construct the naïve Bayes classifier. At last, an adaptive update scheme is proposed to update the classifier conservatively. Experiments on various challenging sequences demonstrate substantial improvements by our proposed tracker over CT and other state-of-the-art trackers in terms of dealing with scale variation, abrupt motion, deformation, and illumination changes.
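The compressive-tracking core, a random projection followed by a naive Bayes likelihood-ratio test under per-feature Gaussian models, can be sketched as follows (array shapes and parameters are illustrative, not the paper's configuration):

```python
import numpy as np

def compressive_features(x, R):
    """Project a high-dimensional image feature vector x into a
    low-dimensional space with a sparse random matrix R."""
    return R @ x

def naive_bayes_score(v, mu_pos, sig_pos, mu_neg, sig_neg):
    """Sum of per-feature log-likelihood ratios under Gaussian models,
    as in compressive tracking's naive Bayes classifier; the candidate
    window with the highest score is taken as the target."""
    def log_gauss(v, mu, sig):
        return -0.5 * np.log(2 * np.pi * sig ** 2) - (v - mu) ** 2 / (2 * sig ** 2)
    return np.sum(log_gauss(v, mu_pos, sig_pos) - log_gauss(v, mu_neg, sig_neg))
```

A conservative update, as proposed above, would blend new estimates of the Gaussian parameters into the old ones with a small learning rate rather than replacing them outright.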
Nummenmaa, Lauri; Hietanen, Jari K.; Calvo, Manuel G.; Hyönä, Jukka
2011-01-01
An organism's survival depends crucially on its ability to detect and acquire nutriment. Attention circuits interact with cognitive and motivational systems to facilitate detection of salient sensory events in the environment. Here we show that the human attentional system is tuned to detect food targets among nonfood items. In two visual search experiments participants searched for discrepant food targets embedded in an array of nonfood distracters or vice versa. Detection times were faster when targets were food rather than nonfood items, and the detection advantage for food items showed a significant negative correlation with Body Mass Index (BMI). Also, eye tracking during searching within arrays of visually homogenous food and nonfood targets demonstrated that the BMI-contingent attentional bias was due to rapid capturing of the eyes by food items in individuals with low BMI. However, BMI was not associated with decision times after the discrepant food item was fixated. The results suggest that visual attention is biased towards foods, and that individual differences in energy consumption, as indexed by BMI, are associated with differential attentional effects related to foods. We speculate that such differences may constitute an important risk factor for gaining weight. PMID:21603657
Comparison of cap lamp and laser illumination for detecting visual escape cues in smoke
Lutz, T.J.; Sammarco, J.J.; Srednicki, J.R.; Gallagher, S.
2015-01-01
The Illuminating Engineering Society of North America reports that an underground mine is the most difficult environment to illuminate (Rea, 2000). Researchers at the U.S. National Institute for Occupational Safety and Health (NIOSH) Office of Mine Safety and Health Research (OMSHR) are conducting ongoing studies designed to explore different lighting technologies for improving mine safety. Underground miners use different visual cues to escape from a smoke-filled environment. Primary and secondary escapeways are marked with reflective ceiling tags of various colors. Miners also look for mine rail tracks. The main objective of this paper is to compare different lighting types and ceiling tag colors to differentiate what works best in a smoke-filled environment. Various cap lamps (LED and incandescent) and lasers (red, blue, green) were compared to see which options resulted in the longest detection distances for red, green and blue reflective markers and a section of mine rail track. All targets advanced toward the human subject inside of a smoke-filled room to simulate the subject walking in a mine environment. Detection distances were recorded and analyzed to find the best cap lamp, laser color and target color in a smoke environment. Results show that cap lamp, laser color and target color do make a difference in detection distances and are perceived differently based on subject age. Cap lamps were superior to lasers in all circumstances of ceiling tag detection, with the exception of the green laser. The incandescent cap lamp worked best in the simulated smoke compared to the LED cap lamps. The green laser was the best color for detecting the tags and track compared to the red and blue lasers. The green tags were the easiest color to detect on the ceiling. On average, the track was easier for the subjects to detect than the ceiling tags. PMID:26236146
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ipsen, S; Bruder, R; Schweikard, A
Purpose: While MLC tracking has been successfully used for motion compensation of moving targets, current real-time target localization methods rely on correlation models with x-ray imaging or implanted electromagnetic transponders rather than direct target visualization. In contrast, ultrasound imaging yields volumetric data in real-time (4D) without ionizing radiation. We report the first results of online 4D ultrasound-guided MLC tracking in a phantom. Methods: A real-time tracking framework was installed on a 4D ultrasound station (Vivid7 dimension, GE) and used to detect a 2mm spherical lead marker inside a water tank. The volumetric frame rate was 21.3Hz (47ms). The marker was rigidly attached to a motion stage programmed to reproduce nine tumor trajectories (five prostate, four lung). The 3D marker position from ultrasound was used for real-time MLC aperture adaption. The tracking system latency was measured and compensated by prediction for lung trajectories. To measure geometric accuracy, anterior and lateral conformal fields with 10cm circular aperture were delivered for each trajectory. The tracking error was measured as the difference between marker position and MLC aperture in continuous portal imaging. For dosimetric evaluation, 358° VMAT fields were delivered to a biplanar diode array dosimeter using the same trajectories. Dose measurements with and without MLC tracking were compared to a static reference dose using a 3%/3 mm γ-test. Results: The tracking system latency was 170ms. The mean root-mean-square tracking error was 1.01mm (0.75mm prostate, 1.33mm lung). Tracking reduced the mean γ-failure rate from 13.9% to 4.6% for prostate and from 21.8% to 0.6% for lung with high-modulation VMAT plans and from 5% (prostate) and 18% (lung) to 0% with low modulation. Conclusion: Real-time ultrasound tracking was successfully integrated with MLC tracking for the first time and showed similar accuracy and latency as other methods while holding the potential to measure target motion non-invasively. SI was supported by the Graduate School for Computing in Medicine and Life Science, German Excellence Initiative [grant DFG GSC 235/1].
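Latency compensation by prediction, as applied to the lung trajectories above, can be illustrated by polynomial extrapolation of recent marker positions. The 170 ms horizon matches the measured latency; the fit window and polynomial order are assumptions:

```python
import numpy as np

def predict_ahead(times_s, positions_mm, horizon_s=0.170, order=2):
    """Compensate system latency by extrapolating the target trajectory:
    fit a low-order polynomial to recent samples and evaluate it
    `horizon_s` seconds ahead (170 ms measured latency above)."""
    coeffs = np.polyfit(times_s, positions_mm, order)
    return np.polyval(coeffs, times_s[-1] + horizon_s)

# Usage: feed the last ~10 ultrasound-derived marker positions per axis.
# pred_x = predict_ahead(t[-10:], x[-10:])
```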
Sinusoidal visuomotor tracking: intermittent servo-control or coupled oscillations?
Russell, D M; Sternad, D
2001-12-01
In visuomotor tasks that involve accuracy demands, small directional changes in the trajectories have been taken as evidence of feedback-based error corrections. In the present study variability, or intermittency, in visuomanual tracking of sinusoidal targets was investigated. Two lines of analyses were pursued: First, the hypothesis that humans fundamentally act as intermittent servo-controllers was re-examined, probing the question of whether discontinuities in the movement trajectory directly imply intermittent control. Second, an alternative hypothesis was evaluated: that rhythmic tracking movements are generated by entrainment between the oscillations of the target and the actor, such that intermittency expresses the degree of stability. In 2 experiments, participants (N = 6 in each experiment) swung 1 of 2 different hand-held pendulums, tracking a rhythmic target that oscillated at different frequencies with a constant amplitude. In 1 line of analyses, the authors tested the intermittency hypothesis by using the typical kinematic error measures and spectral analysis. In a 2nd line, they examined relative phase and its variability, following analyses of rhythmic interlimb coordination. The results showed that visually guided corrective processes play a role, especially for slow movements. Intermittency, assessed as frequency and power components of the movement trajectory, was found to change as a function of both target frequency and the manipulandum's inertia. Support for entrainment was found in conditions in which task frequency was identical to or higher than the effector's eigenfrequency. The results suggest that it is the symmetry between task and effector that determines which behavioral regime is dominant.
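The relative-phase analysis borrowed from interlimb-coordination research can be sketched via the Hilbert transform, a standard estimator for continuous relative phase (the abstract does not state the authors' exact method):

```python
import numpy as np
from scipy.signal import hilbert

def relative_phase(target, movement):
    """Continuous relative phase between target and tracking movement,
    from the analytic signal of each (mean-removed) trajectory.
    Returns the mean relative phase and its variability."""
    phi_t = np.unwrap(np.angle(hilbert(target - np.mean(target))))
    phi_m = np.unwrap(np.angle(hilbert(movement - np.mean(movement))))
    rel = phi_t - phi_m
    return np.mean(rel), np.std(rel)
```

Low variability of the relative phase would indicate stable entrainment between target and actor, while intermittency would show up as increased variability.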
A software module for implementing auditory and visual feedback on a video-based eye tracking system
NASA Astrophysics Data System (ADS)
Rosanlall, Bharat; Gertner, Izidor; Geri, George A.; Arrington, Karl F.
2016-05-01
We describe here the design and implementation of a software module that provides both auditory and visual feedback of the eye position measured by a commercially available eye tracking system. The present audio-visual feedback module (AVFM) serves as an extension to the Arrington Research ViewPoint EyeTracker, but it can be easily modified for use with other similar systems. Two modes of audio feedback and one mode of visual feedback are provided in reference to a circular area-of-interest (AOI). Auditory feedback can be either a click tone emitted when the user's gaze point enters or leaves the AOI, or a sinusoidal waveform with frequency inversely proportional to the distance from the gaze point to the center of the AOI. Visual feedback is in the form of a small circular light patch that is presented whenever the gaze-point is within the AOI. The AVFM processes data that are sent to a dynamic-link library by the EyeTracker. The AVFM's multithreaded implementation also allows real-time data collection (1 kHz sampling rate) and graphics processing that allow display of the current/past gaze-points as well as the AOI. The feedback provided by the AVFM described here has applications in military target acquisition and personnel training, as well as in visual experimentation, clinical research, marketing research, and sports training.
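The two auditory feedback mappings described above are simple to express; the constants below are placeholders rather than the AVFM's actual settings:

```python
import math

def feedback_tone_hz(gaze_xy, aoi_center, aoi_radius,
                     f_max=1000.0, f_min=200.0):
    """Sinusoidal-feedback mode: tone frequency inversely proportional
    to the gaze-to-AOI-center distance (constants are illustrative)."""
    d = math.hypot(gaze_xy[0] - aoi_center[0], gaze_xy[1] - aoi_center[1])
    return max(f_min, f_max / (1.0 + d / aoi_radius))

def aoi_crossing(prev_inside, gaze_xy, aoi_center, aoi_radius):
    """Click-tone mode: report whether the gaze crossed the AOI boundary."""
    inside = math.hypot(gaze_xy[0] - aoi_center[0],
                        gaze_xy[1] - aoi_center[1]) <= aoi_radius
    return inside != prev_inside, inside
```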
Harris, Joseph A; Donohue, Sarah E; Schoenfeld, Mircea A; Hopf, Jens-Max; Heinze, Hans-Jochen; Woldorff, Marty G
2016-08-15
Reward-associated visual features have been shown to capture visual attention, evidenced in faster and more accurate behavioral performance, as well as in neural responses reflecting lateralized shifts of visual attention to those features. Specifically, the contralateral N2pc event-related-potential (ERP) component that reflects attentional shifting exhibits increased amplitude in response to task-relevant targets containing a reward-associated feature. In the present study, we examined the automaticity of such reward-association effects using object-substitution masking (OSM) in conjunction with MEG measures of visual attentional shifts. In OSM, a visual-search array is presented, with the target item to be detected indicated by a surrounding mask (here, four surrounding squares). Delaying the offset of the target-surrounding four-dot mask relative to the offset of the rest of the target/distracter array disrupts the viewer's awareness of the target (masked condition), whereas simultaneous offsets do not (unmasked condition). Here we manipulated whether the color of the OSM target was or was not of a previously reward-associated color. By tracking reward-associated enhancements of behavior and the N2pc in response to masked targets containing a previously rewarded or unrewarded feature, the automaticity of attentional capture by reward could be probed. We found an enhanced N2pc response to targets containing a previously reward-associated color feature. Moreover, this enhancement of the N2pc by reward did not differ between masking conditions, nor did it differ as a function of the apparent visibility of the target within the masked condition. Overall, these results underscore the automaticity of attentional capture by reward-associated features, and demonstrate the ability of feature-based reward associations to shape attentional capture and allocation outside of perceptual awareness. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Uhlemann, H.; Geiser, G.
1975-01-01
Multivariable manual compensatory tracking experiments were carried out in order to determine typical strategies of the human operator, and the conditions for improvement of his performance, when one of the visual displays of the tracking errors is supplemented by auditory feedback. Because the tracking error of the system displayed only visually was found to decrease, but not, in general, that of the auditorily supported system, it was concluded that the auditory feedback unloads the operator's visual system, which can then concentrate on the remaining exclusively visual displays.
Visual Detection and Tracking System for a Spherical Amphibious Robot
Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun
2017-01-01
With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation. PMID:28420134
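The Gaussian-mixture detection stage can be illustrated with OpenCV's stock MOG2 background subtractor; the parameters and morphological post-processing below are assumptions, not the authors' configuration:

```python
import cv2

# Gaussian-mixture background subtraction for moving-target detection,
# roughly in the spirit of the robot's detection stage described above.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

def detect_moving_targets(frame, min_area=200):
    """Return bounding boxes of sufficiently large foreground blobs."""
    mask = subtractor.apply(frame)                       # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]           # candidate target boxes
```

A detected box would then seed the compressive tracker, with the Kalman prediction supplying the search location in the next frame.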
NASA Technical Reports Server (NTRS)
Miller, G. K., Jr.; Riley, D. R.
1978-01-01
The effect of secondary tasks in determining permissible time delays in visual-motion simulation of a pursuit tracking task was examined. A single subject, a single set of aircraft handling qualities, and a single motion condition in tracking a target aircraft that oscillates sinusoidally in altitude were used. In addition to the basic simulator delays, the results indicate that the permissible time delay is about 250 msec for either a tapping task, an adding task, or an audio task, which is approximately 125 msec less than when no secondary task is involved. The magnitudes of the primary-task performance measures, however, differ only for the tapping task. A power spectral-density analysis basically confirms the result obtained by comparing the root-mean-square performance measures. For all three secondary tasks, the total pilot workload was quite high.
A low-cost test-bed for real-time landmark tracking
NASA Astrophysics Data System (ADS)
Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher
2007-04-01
A low-cost vehicle test-bed system was developed to iteratively test, refine and demonstrate navigation algorithms before attempting to transfer the algorithms to more advanced rover prototypes. The platform used here was a modified radio-controlled (RC) car. A microcontroller board and onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars yard. The algorithms tracked rocks as waypoints, generating coordinates for calculating relative motion and visually servoing to science targets. A limitation of the current system is serial computation: each additional landmark is tracked in order. However, since each landmark is tracked independently, adding targets would not significantly diminish system speed if the processing were transferred to appropriate parallel hardware.
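A minimal dead-reckoning update from the wheel encoders and gyroscope mentioned above might look like the following sketch (the choice to trust the gyro for rotation and the encoders for translation is an assumption, not the test-bed's documented fusion scheme):

```python
import math

def dead_reckon(x, y, heading_rad, d_left_m, d_right_m, gyro_dtheta_rad):
    """One dead-reckoning update from wheel-encoder distances (meters
    traveled by each wheel since the last update) and the gyroscope's
    heading increment over the same interval."""
    d = 0.5 * (d_left_m + d_right_m)          # mean forward travel
    heading_rad += gyro_dtheta_rad            # trust the gyro for rotation
    x += d * math.cos(heading_rad)
    y += d * math.sin(heading_rad)
    return x, y, heading_rad
```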
ERIC Educational Resources Information Center
van der Gijp, A.; Ravesloot, C. J.; Jarodzka, H.; van der Schaaf, M. F.; van der Schaaf, I. C.; van Schaik, J. P.; ten Cate, Th. J.
2017-01-01
Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology…
Nicotinic Receptor Gene CHRNA4 Interacts with Processing Load in Attention
Espeseth, Thomas; Sneve, Markus Handal; Rootwelt, Helge; Laeng, Bruno
2010-01-01
Background Pharmacological studies suggest that cholinergic neurotransmission mediates increases in attentional effort in response to high processing load during attention demanding tasks [1]. Methodology/Principal Findings In the present study we tested whether individual variation in CHRNA4, a gene coding for a subcomponent in α4β2 nicotinic receptors in the human brain, interacted with processing load in multiple-object tracking (MOT) and visual search (VS). We hypothesized that the impact of genotype would increase with greater processing load in the MOT task. Similarly, we predicted that genotype would influence performance under high but not low load in the VS task. Two hundred and two healthy persons (age range = 39–77, Mean = 57.5, SD = 9.4) performed the MOT task in which twelve identical circular objects moved about the display in an independent and unpredictable manner. Two to six objects were designated as targets and the remaining objects were distracters. The same observers also performed a visual search for a target letter (i.e. X or Z) presented together with five non-targets while ignoring centrally presented distracters (i.e. X, Z, or L). Targets differed from non-targets by a unique feature in the low load condition, whereas they shared features in the high load condition. CHRNA4 genotype interacted with processing load in both tasks. Homozygotes for the T allele (N = 62) had better tracking capacity in the MOT task and identified targets faster in the high load trials of the VS task. Conclusion The results support the hypothesis that the cholinergic system modulates attentional effort, and that common genetic variation can be used to study the molecular biology of cognition. PMID:21203548
Multiple Concurrent Visual-Motor Mappings: Implications for Models of Adaptation
NASA Technical Reports Server (NTRS)
Cunningham, H. A.; Welch, Robert B.
1994-01-01
Previous research on adaptation to visual-motor rearrangement suggests that the central nervous system represents accurately only 1 visual-motor mapping at a time. This idea was examined in 3 experiments where subjects tracked a moving target under repeated alternations between 2 initially interfering mappings (the 'normal' mapping characteristic of computer input devices and a 108° rotation of the normal mapping). Alternation between the 2 mappings led to significant reduction in error under the rotated mapping and significant reduction in the adaptation aftereffect ordinarily caused by switching between mappings. Color as a discriminative cue, interference versus decay in adaptation aftereffect, and intermanual transfer were also examined. The results reveal a capacity for multiple concurrent visual-motor mappings, possibly controlled by a parametric process near the motor output stage of processing.
NASA Astrophysics Data System (ADS)
Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent
2003-10-01
In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system where 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to operate a real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to excitation and suppression that have been documented in electrophysiology, psychophysics, and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighted based upon the history of each camera. A camera agent that has a history of seeing more salient targets is more likely to obtain computational resources. The system demonstrates the viability of biologically inspired systems in real-time tracking. In future work we plan to implement additional biological mechanisms for cooperative management of both the sensor and processing resources in this system, including top-down biasing for target specificity as well as novelty and the activity of the tracked object in relation to sensitive features of the environment.
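The saliency computation referenced above can be sketched, in heavily simplified form, as centre-surround differences across Gaussian pyramid scales. The sketch below covers only the intensity channel; the full Itti-Koch model adds colour-opponency, orientation, and motion maps, normalization operators, and a winner-take-all stage, and the pyramid levels used here are illustrative.

```python
# Sketch: intensity-channel saliency via centre-surround pyramid differences.
import cv2
import numpy as np

def intensity_saliency(img_bgr, centre_levels=(2, 3), deltas=(3, 4)):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    pyr = [gray]                              # Gaussian pyramid
    for _ in range(8):
        pyr.append(cv2.pyrDown(pyr[-1]))
    h, w = pyr[4].shape                       # common map resolution
    sal = np.zeros((h, w), np.float32)
    for c in centre_levels:
        for d in deltas:
            centre = cv2.resize(pyr[c], (w, h))
            surround = cv2.resize(pyr[c + d], (w, h))
            sal += np.abs(centre - surround)  # centre-surround difference
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)
```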
Visual Attention Patterns of Women with Androphilic and Gynephilic Sexual Attractions.
Dawson, Samantha J; Fretz, Katherine M; Chivers, Meredith L
2017-01-01
Women who report exclusive sexual attractions to men (i.e., androphilia) exhibit gender-nonspecific patterns of sexual response, that is, a similar magnitude of genital response to both male and female targets. Interestingly, women reporting any degree of attraction to women (i.e., gynephilia) show significantly greater sexual responses to stimuli depicting female targets compared to male targets. At present, the mechanism(s) underlying these patterns are unknown. According to the information processing model (IPM), attentional processing of sexual cues initiates sexual responding; thus, attention to sexual cues may be one mechanism to explain the observed within-gender differences in specificity findings among women. The purpose of the present study was to examine patterns of initial and controlled visual attention among women with varying sexual attractions. We used eye tracking to assess visual attention to sexually preferred and nonpreferred cues in a sample of 164 women who differed in their degree of androphilia and gynephilia. We found that both exclusively and predominantly androphilic women showed gender-nonspecific patterns of initial attention. In contrast, ambiphilic (i.e., concurrent androphilia and gynephilia) and predominantly/exclusively gynephilic women oriented more quickly toward female targets. Controlled attention patterns mirrored patterns of self-reported sexual attractions for three of these four groups of women, such that gender-specific patterns of visual attention were found for androphilic and gynephilic women. Ambiphilic women looked significantly longer at female targets compared to male targets. These findings support predictions from the IPM and suggest that both initial and controlled attention to sexual cues may be mechanisms contributing to within-gender variation in sexual responding.
Integrated framework for developing search and discrimination metrics
NASA Astrophysics Data System (ADS)
Copeland, Anthony C.; Trivedi, Mohan M.
1997-06-01
This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while he studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.
Selective attention in an insect visual neuron.
Wiederman, Steven D; O'Carroll, David C
2013-01-21
Animals need attention to focus on one target amid alternative distracters. Dragonflies, for example, capture flies in swarms comprising prey and conspecifics, a feat that requires neurons to select one moving target from competing alternatives. Diverse evidence, from functional imaging and physiology to psychophysics, highlights the importance of such "competitive selection" in attention for vertebrates. Analogous mechanisms have been proposed in artificial intelligence and even in invertebrates, yet direct neural correlates of attention are scarce from all animal groups. Here, we demonstrate responses from an identified dragonfly visual neuron that perfectly match a model for competitive selection within limits of neuronal variability (r² = 0.83). Responses to individual targets moving at different locations within the receptive field differ in both magnitude and time course. However, responses to two simultaneous targets exclusively track those for one target alone rather than any combination of the pair. Irrespective of target size, contrast, or separation, this neuron selects one target from the pair and perfectly preserves the response, regardless of whether the "winner" is the stronger stimulus if presented alone. This neuron is amenable to electrophysiological recordings, providing neuroscientists with a new model system for studying selective attention. Copyright © 2013 Elsevier Ltd. All rights reserved.
Vernat, Jean-Philippe; Gordon, Michael S
2010-02-01
This research examined the acoustic information used to support interceptive actions by the blind. Congenitally blind and severely visually impaired participants (all wearing an opaque, black eye-mask) were asked to listen to a target ball rolling down a track. In response, participants rolled their own ball along a perpendicular path to intercept the target. To better understand what information was used, the echoic conditions and rolling dynamics of the target were varied across test sessions. In addition, the rolling speed of the target and the distance of the participant from the target were varied across trials. Results demonstrated that participants tended to perform most accurately at moderate speeds and distances, overestimating the target's arrival at the fastest speed and underestimating it at the slowest speed. However, changes to the target's dynamics, that is, the amount of deceleration it underwent on approach, did not strongly influence performance. Echoic conditions were found to affect performance, as participants were slightly more accurate in conditions with faster, higher-intensity echoes. Based on these results, blind individuals in this research seemed to be using spatial and temporal cues to coordinate their interceptive actions.
Anticipatory synergy adjustments reflect individual performance of feedforward force control.
Togo, Shunta; Imamizu, Hiroshi
2016-10-06
We grasp and dexterously manipulate an object through multi-digit synergy. In the framework of the uncontrolled manifold (UCM) hypothesis, multi-digit synergy is defined as the coordinated control mechanism of fingers to stabilize a variable important for task success, e.g., total force. Previous studies reported anticipatory synergy adjustments (ASAs) that correspond to a drop of the synergy index before a quick change of the total force. The present study compared ASA properties with individual performance of feedforward force control to investigate the relationship between them. Subjects performed a total finger force production task that consisted of a phase in which they tracked a target line with visual information and a phase in which they produced a total force pulse without visual information. We quantified multi-digit synergy through UCM analysis and observed significant ASAs before the production of the total force pulse. The time of ASA initiation and the magnitude of the drop in the synergy index were significantly correlated with the error of the force pulse, but not with the tracking error. Almost all subjects showed a significant increase in the variance that affected the total force. Our study directly showed that ASA reflects individual performance of feedforward force control independently of target-tracking performance, and suggests that the multi-digit synergy was weakened to adjust the multi-digit movements based on a prediction error so as to reduce the future error. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
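For readers unfamiliar with UCM analysis, a hedged numpy sketch of the variance decomposition behind the synergy index follows: with n fingers and total force as the task variable, trial-to-trial finger-force variance is split into a component that leaves the total unchanged (within the UCM) and a component that changes it. Variable names and the per-DOF normalization follow the common convention, not necessarily the authors' exact code.

```python
# Sketch: UCM variance decomposition and synergy index (delta-V).
import numpy as np

def synergy_index(forces):
    """forces: (trials, n_fingers) finger forces at one time point."""
    n = forces.shape[1]
    dev = forces - forces.mean(axis=0)        # de-meaned finger forces
    e = np.ones(n) / np.sqrt(n)               # direction that changes total force
    ort = dev @ e                             # component affecting the total
    v_ort = np.mean(ort ** 2)                 # variance orthogonal to the UCM
    v_tot = np.mean(np.sum(dev ** 2, axis=1)) # total variance across fingers
    v_ucm = (v_tot - v_ort) / (n - 1)         # per-DOF variance within the UCM
    return (v_ucm - v_ort) / (v_tot / n)      # synergy index; drops during an ASA
```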
78 FR 12825 - Petition for Extension of Waiver of Compliance
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-25
... the frequency of the required visual track inspections. FRA issued the initial waiver that granted.... SEPTA requests an extension of approval to reduce the frequency of required, visual track inspections... with continuous welded rail. SEPTA proposes to conduct one visual track inspection per week, instead of...
Puckett, Yana; Baronia, Benedicto C
2016-09-20
With the recent advances in eye tracking technology, it is now possible to track surgeons' eye movements while engaged in a surgical task or when surgical residents practice their surgical skills. Several studies have compared eye movements of surgical experts and novices and developed techniques to assess surgical skill on the basis of eye movement utilizing simulators and live surgery. None have evaluated simultaneous visual tracking between an expert and a novice during live surgery. Here, we describe a successful simultaneous deployment of visual tracking of an expert and a novice during live laparoscopic cholecystectomy. One expert surgeon and one chief surgical resident at an accredited surgical program in Lubbock, TX, USA performed a live laparoscopic cholecystectomy while simultaneously wearing the visual tracking devices. Their visual attitudes and movements were monitored via video recordings. The recordings were then analyzed for correlation between the expert and the novice. The visual attitudes and movements correlated approximately 85% between the expert surgeon and the chief surgical resident. The surgery was carried out uneventfully, and the data were abstracted with ease. We conclude that simultaneous deployment of visual tracking during live laparoscopic surgery is feasible. More studies and subjects are needed to verify our results and to extend the data analysis.
Optimal Appearance Model for Visual Tracking
Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao
2016-01-01
Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what is the definition of adaptiveness and how to realize it remains an open issue. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and current observation, a set of discrete samples are generated to approximate the foreground and background distribution. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639
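A generic particle-filter tracking skeleton of the kind the optimized appearance model would be embedded in; this is a sketch under the assumption that the appearance model enters only through a user-supplied likelihood function, with a random-walk motion model and illustrative parameters.

```python
# Sketch: one step of a particle filter for visual tracking.
import numpy as np

def particle_filter_step(particles, weights, likelihood, motion_std=5.0,
                         rng=np.random.default_rng()):
    """particles: (N, 2) candidate positions; weights: (N,), summing to 1."""
    n = len(particles)
    # Resample proportionally to the current weights.
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # Diffuse with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Re-weight by the appearance model's likelihood of each candidate.
    weights = np.array([likelihood(p) for p in particles])
    weights = weights / weights.sum()
    # Point estimate: weighted mean of the particle cloud.
    return particles, weights, weights @ particles
```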
Evaluation of kinesthetic-tactual displays using a critical tracking task
NASA Technical Reports Server (NTRS)
Jagacinski, R. J.; Miller, D. P.; Gilson, R. D.; Ault, R. T.
1977-01-01
The study sought to investigate the feasibility of applying the critical tracking task paradigm to the evaluation of kinesthetic-tactual displays. Four subjects attempted to control a first-order unstable system with a continuously decreasing time constant by using either visual or tactual unidimensional displays. Display aiding was introduced in both modalities in the form of velocity quickening. Visual tracking performance was better than tactual tracking, and velocity aiding improved the critical tracking scores for visual and tactual tracking about equally. The results suggest that the critical task methodology holds considerable promise for evaluating kinesthetic-tactual displays.
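The critical tracking paradigm lends itself to a compact simulation: a first-order unstable plant dx/dt = lambda*x + u whose instability parameter lambda grows until a delayed proportional controller (standing in for the operator) loses control. The gains, rates, and delay below are illustrative assumptions, not the study's values.

```python
# Sketch: critical tracking task with a growing instability parameter.
def critical_task(gain=1.5, delay_steps=8, dt=0.01, lam_rate=0.02):
    x, lam = 0.01, 0.5
    buffer = [0.0] * delay_steps            # operator's effective time delay
    while abs(x) < 1.0:                     # control lost at the display edge
        u = buffer.pop(0)
        buffer.append(-gain * x)            # delayed proportional correction
        x += dt * (lam * x + u)
        lam += dt * lam_rate                # time constant keeps shrinking
    return lam                              # critical instability at failure

print(critical_task())                      # larger value = better tracking
```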
Encoding color information for visual tracking: Algorithms and benchmark.
Liang, Pengpeng; Blasch, Erik; Ling, Haibin
2015-12-01
While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
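A hedged sketch of what "encoding a chromatic model" into a tracker can look like in practice: describe the target patch by a histogram in a chosen colour space and score candidates against it. The colour spaces, bin counts, and Bhattacharyya scoring are illustrative choices, not the paper's specific 10 models or 16 trackers.

```python
# Sketch: colour-histogram target descriptor and candidate scoring.
import cv2
import numpy as np

SPACES = {"rgb": None, "hsv": cv2.COLOR_BGR2HSV, "lab": cv2.COLOR_BGR2LAB}

def colour_descriptor(patch_bgr, space="hsv", bins=8):
    img = patch_bgr if SPACES[space] is None else cv2.cvtColor(patch_bgr,
                                                               SPACES[space])
    hist = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, None)

def score(target_hist, candidate_hist):
    # Bhattacharyya distance: 0 means identical colour statistics.
    return cv2.compareHist(target_hist, candidate_hist,
                           cv2.HISTCMP_BHATTACHARYYA)
```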
Optically tracked, single-coil, scanning magnetic induction tomography
NASA Astrophysics Data System (ADS)
Feldkamp, Joe R.; Quirk, Stephen
2017-03-01
Recent work has shown the feasibility of single-coil, magnetic induction tomography, for visualizing a 3D distribution of electrical conductivity in portions of the human body. Loss is measured in a single, planar coil consisting of concentric circular loops while the coil is relocated to various non-redundant positions and orientations in the vicinity of the target. These loss values, together with measured coil position and orientation, are processed by a quantitative mapping equation that enables reconstruction of an electrical conductivity image. Up until now, the position of the coil had to be established by a template, which required assignment of locations for the coil to visit without necessarily giving any prior consideration to target geometry. We have now added optical tracking to our existing single-coil device so that position and orientation are tracked automatically, allowing collection of coil loss data at arbitrary positions or orientations as needed. Optical tracking is accomplished via a set of IR reflective spheres mounted on the same enclosure that supports the coil. Position for a select sphere within the set, together with the four components of the quaternion specifying optical body orientation, is fed to a laptop at the same time coil loss data is streamed to the same laptop via Bluetooth. The coil center can be tracked with sub-millimeter accuracy while orientation angle is known to a fraction of a degree. This work illustrates the use of single-coil MIT in full, position-orientation-tracked scan mode while imaging laboratory phantoms. Phantoms are based upon simple materials having biologic conductivity (< 5 S/m), including a cut of bone-in steak. The goal is not just to reconstruct an image that contains the features of the actual target, but also return correct conductivity values for the various features within the image.
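A small sketch of how the streamed pose might be consumed (assumed interfaces, not the authors' software): the tracked sphere position plus the orientation quaternion give the coil centre and coil axis in room coordinates, with the sphere-to-coil offset as an assumed calibration constant.

```python
# Sketch: quaternion pose -> coil centre and axis in room coordinates.
import numpy as np

def quat_to_rot(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def coil_pose(sphere_pos, quat, coil_offset=np.array([0.0, 0.0, 0.05])):
    R = quat_to_rot(quat)
    centre = np.asarray(sphere_pos) + R @ coil_offset  # coil centre [m]
    axis = R @ np.array([0.0, 0.0, 1.0])               # coil normal direction
    return centre, axis
```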
Metacognition of Multi-Tasking: How Well Do We Predict the Costs of Divided Attention?
Finley, Jason R.; Benjamin, Aaron S.; McCarley, Jason S.
2014-01-01
Risky multi-tasking, such as texting while driving, may occur because people misestimate the costs of divided attention. In two experiments, participants performed a computerized visual-manual tracking task in which they attempted to keep a mouse cursor within a small target that moved erratically around a circular track. They then separately performed an auditory n-back task. After practicing both tasks separately, participants received feedback on their single-task tracking performance and predicted their dual-task tracking performance before finally performing the two tasks simultaneously. Most participants correctly predicted reductions in tracking performance under dual-task conditions, with a majority overestimating the costs of dual-tasking. However, the between-subjects correlation between predicted and actual performance decrements was near zero. This combination of results suggests that people do anticipate costs of multi-tasking, but have little metacognitive insight on the extent to which they are personally vulnerable to the risks of divided attention, relative to other people. PMID:24490818
Position Affects Performance in Multiple-Object Tracking in Rugby Union Players
Martín, Andrés; Sfer, Ana M.; D'Urso Villar, Marcela A.; Barraza, José F.
2017-01-01
We report an experiment that examines the performance of rugby union players and a control group, composed of graduate students with no sport experience, in a multiple-object tracking task. It compares the ability of 86 high-level rugby union players, grouped as Backs and Forwards, and the control group to track a subset of randomly moving targets amongst the same number of distractors. Several difficulty levels were included in the experimental design in order to evaluate possible interactions between the relevant variables. Results show that the performance of the Backs is better than that of the other groups, but the occurrence of interactions precludes an isolated analysis of the groups. We interpret the results within the framework of visual attention and discuss both the implications of our results and the practical consequences. PMID:28951725
Maximizing Impact: Pairing interactive web visualizations with traditional print media
NASA Astrophysics Data System (ADS)
Read, E. K.; Appling, A.; Carr, L.; De Cicco, L.; Read, J. S.; Walker, J. I.; Winslow, L. A.
2016-12-01
Our Nation's rapidly growing store of environmental data makes new demands on researchers: to take on increasingly broad-scale, societally relevant analyses and to rapidly communicate findings to the public. Interactive web-based data visualizations now commonly supplement or comprise journalism, and science journalism has followed suit. To maximize the impact of US Geological Survey (USGS) science, the USGS Office of Water Information Data Science team builds tools and products that combine traditional static research products (e.g., print journal articles) with web-based, interactive data visualizations that target non-scientific audiences. We developed a lightweight, open-source framework for web visualizations to reduce time to production. The framework provides templates for a data visualization workflow and the packaging of text, interactive figures, and images into an appealing web interface with standardized look and feel, usage tracking, and responsiveness. By partnering with subject matter experts to focus on timely, societally relevant issues, we use these tools to produce appealing visual stories targeting specific audiences, including managers, the general public, and scientists, on diverse topics including drought, microplastic pollution, and fisheries response to climate change. We will describe the collaborative and technical methodologies used, present examples of how they have worked, and discuss challenges and opportunities for the future.
Acquiring Semantically Meaningful Models for Robotic Localization, Mapping and Target Recognition
2014-12-21
Topics covered include point feature tracking, recovery of relative motion and visual odometry, loop closure, environment models built from sparse point clouds, and object-level segmentation of objects that co-occur with the object of interest, evaluated with the Jaccard index.
Collaborative real-time motion video analysis by human observer and image exploitation algorithms
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2015-05-01
Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of the human observer's perception with those of the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique for performing selection operations on moving targets in videos in order to initialize an object tracking function.
Decontaminate feature for tracking: adaptive tracking via evolutionary feature subset
NASA Astrophysics Data System (ADS)
Liu, Qiaoyuan; Wang, Yuru; Yin, Minghao; Ren, Jinchang; Li, Ruizhi
2017-11-01
Although various visual tracking algorithms have been proposed in the last 2-3 decades, effective tracking under fast motion, deformation, occlusion, etc., remains a challenging problem. Under complex tracking conditions, most tracking models are not discriminative and adaptive enough. When combined feature vectors are input to the visual models, redundancy may cause low efficiency and ambiguity may cause poor performance. An effective tracking algorithm is proposed to decontaminate features for each video sequence adaptively, where visual modeling is treated as an optimization problem from the perspective of evolution. Each feature vector is treated as a biological individual and then decontaminated via classical evolutionary algorithms. With the optimized subsets of features, the "curse of dimensionality" is avoided while the accuracy of the visual model is improved. The proposed algorithm has been tested on several publicly available datasets with various tracking challenges and benchmarked against a number of state-of-the-art approaches. The comprehensive experiments have demonstrated the efficacy of the proposed methodology.
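A hedged sketch of evolutionary feature decontamination in the spirit described above: each individual is a binary mask over the combined feature vector, and fitness is supplied by the tracker (e.g., the discriminability of the resulting visual model). The operators below are textbook GA choices, not the paper's exact variant.

```python
# Sketch: genetic-algorithm selection of a feature subset for tracking.
import numpy as np

def evolve_subset(n_features, fitness, pop=20, gens=30,
                  rng=np.random.default_rng(0)):
    masks = rng.integers(0, 2, (pop, n_features)).astype(bool)
    for _ in range(gens):
        scores = np.array([fitness(m) for m in masks])
        order = np.argsort(scores)[::-1]
        parents = masks[order[: pop // 2]]             # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)
            child = np.concatenate([a[:cut], b[cut:]]) # one-point crossover
            flip = rng.random(n_features) < 1.0 / n_features
            children.append(child ^ flip)              # bit-flip mutation
        masks = np.vstack([parents] + children)
    scores = np.array([fitness(m) for m in masks])
    return masks[np.argmax(scores)]                    # best feature subset
```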
Evaluation of helmet-mounted display targeting symbology based on eye tracking technology
NASA Astrophysics Data System (ADS)
Wang, Lijing; Wen, Fuzhen; Ma, Caixin; Zhao, Shengchu; Liu, Xiaodong
2014-06-01
The purpose of this paper is to find the Target Locator Lines (TLLs) that perform best through a contrast-and-compare experiment based on three kinds of TLLs for a fighter HMD. Ten university students, male, aged 21-23, with corrected visual acuity of 1.5, participated in the experiment. Head movement data were obtained with TrackIR. The geometric relationship between real-world coordinates and visual display coordinates was obtained by calculating the distance from the viewpoint to the midpoint of both eyes together with the head movement data. A virtual helmet simulation environment was created by drawing the TLLs of the fighter HMD in the flight simulator visual scene. In the experiment, an eye tracker was used to record task time and saccade trajectory, and the results were evaluated by task duration and saccade trajectory length. The results showed that the "locator line with digital vector length indication" symbol cost the most time and produced the longest saccade trajectory; it is the least effective and least acceptable option. The "locator line with extending head vector length" symbol cost less time and produced a shorter saccade trajectory; it is effective and acceptable. The "locator line with reflected vector length" symbol cost the least time and produced the shortest saccade trajectory; it is the most effective and most acceptable option, and thus performs best. The results provide reference value for future research on TLLs.
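The two evaluation measures reduce to simple computations on the eye tracker's gaze samples; a hedged sketch under assumed sample-level data follows (not the authors' analysis pipeline).

```python
# Sketch: task duration and scan-path length from raw gaze samples.
import numpy as np

def scanpath_metrics(t, gx, gy):
    """t: (N,) sample timestamps [s]; gx, gy: (N,) gaze coordinates."""
    duration = t[-1] - t[0]
    steps = np.hypot(np.diff(gx), np.diff(gy))   # sample-to-sample distances
    return duration, steps.sum()   # shorter on both = better TLL symbology
```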
Self-paced model learning for robust visual tracking
NASA Astrophysics Data System (ADS)
Huang, Wenhui; Gu, Jason; Ma, Xin; Li, Yibin
2017-01-01
In visual tracking, learning a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and the frequency of model updating, which contains many details that could affect the tracking results. Self-paced learning (SPL) has recently been attracting considerable interest in the fields of machine learning and computer vision. SPL is inspired by the learning principle underlying the cognitive process of humans, who generally learn from easier samples to more complex aspects of a task. We propose a tracking method that integrates the learning paradigm of SPL into visual tracking, so reliable samples can be automatically selected for model learning. In contrast to many existing model learning strategies in visual tracking, we discover the missing link between sample selection and model learning, which are combined into a single objective function in our approach. Sample weights and model parameters can be learned by minimizing this single objective function. Additionally, to solve the real-valued learning weight of samples, an error-tolerant self-paced function that considers the characteristics of visual tracking is proposed. We demonstrate the robustness and efficiency of our tracker on a recent tracking benchmark data set with 50 video sequences.
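A hedged sketch of the alternating SPL loop described above: fix the model and select currently easy samples, then refit on the selection, while relaxing the age parameter so harder samples enter later. The binary hard-threshold v-step shown is the classic SPL rule; the paper's error-tolerant, real-valued weighting replaces it.

```python
# Sketch: alternating minimization for self-paced model learning.
import numpy as np

def self_paced_fit(samples, fit_model, loss, lam=0.5, growth=1.3, rounds=5):
    model = fit_model(samples, np.ones(len(samples)))  # warm start: all samples
    for _ in range(rounds):
        losses = np.array([loss(model, s) for s in samples])
        v = (losses < lam).astype(float)   # select currently "easy" samples
        if v.sum() == 0:                   # guard: keep at least one sample
            v[np.argmin(losses)] = 1.0
        model = fit_model(samples, v)      # refit on the selected subset
        lam *= growth                      # let harder samples in next round
    return model
```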
Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking
Qu, Shiru
2016-01-01
Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the sparse representation framework tend to overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coding method, which takes the spatial neighborhood information of the image patch and the computational burden into consideration, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained within a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness. PMID:27630710
Applied estimation for hybrid dynamical systems using perceptional information
NASA Astrophysics Data System (ADS)
Plotnik, Aaron M.
This dissertation uses the motivating example of robotic tracking of mobile deep ocean animals to present innovations in robotic perception and estimation for hybrid dynamical systems. An approach to estimation for hybrid systems is presented that utilizes uncertain perceptional information about the system's mode to improve tracking of its mode and continuous states. This results in significant improvements in situations where previously reported methods of estimation for hybrid systems perform poorly due to poor distinguishability of the modes. The specific application that motivates this research is an automatic underwater robotic observation system that follows and films individual deep ocean animals. A first version of such a system has been developed jointly by the Stanford Aerospace Robotics Laboratory and Monterey Bay Aquarium Research Institute (MBARI). This robotic observation system is successfully fielded on MBARI's ROVs, but agile specimens often evade the system. When a human ROV pilot performs this task, one advantage that he has over the robotic observation system in these situations is the ability to use visual perceptional information about the target, immediately recognizing any changes in the specimen's behavior mode. With the approach of the human pilot in mind, a new version of the robotic observation system is proposed which is extended to (a) derive perceptional information (visual cues) about the behavior mode of the tracked specimen, and (b) merge this dissimilar, discrete and uncertain information with more traditional continuous noisy sensor data by extending existing algorithms for hybrid estimation. These performance enhancements are enabled by integrating techniques in hybrid estimation, computer vision and machine learning. First, real-time computer vision and classification algorithms extract a visual observation of the target's behavior mode. Existing hybrid estimation algorithms are extended to admit this uncertain but discrete observation, complementing the information available from more traditional sensors. State tracking is achieved using a new form of Rao-Blackwellized particle filter called the mode-observed Gaussian Particle Filter. Performance is demonstrated using data from simulation and data collected on actual specimens in the ocean. The framework for estimation using both traditional and perceptional information is easily extensible to other stochastic hybrid systems with mode-related perceptional observations available.
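One way to picture the core idea, as a hedged sketch rather than the dissertation's algorithm: an uncertain, discrete visual classification of the behaviour mode enters the hybrid estimator as a Bayes update on the mode probabilities, using the classifier's confusion matrix as its measurement model. The matrices below are illustrative.

```python
# Sketch: fusing a discrete perceptional mode observation via Bayes' rule.
import numpy as np

TRANS = np.array([[0.95, 0.05],     # mode transition model,
                  [0.10, 0.90]])    # e.g., drifting vs. swimming
CONF = np.array([[0.8, 0.2],        # P(classifier says j | true mode i)
                 [0.3, 0.7]])

def update_mode_probs(p, observed_mode):
    p = TRANS.T @ p                          # predict mode distribution
    p = p * CONF[:, observed_mode]           # weight by classifier reliability
    return p / p.sum()                       # posterior over behaviour modes

p = np.array([0.5, 0.5])
p = update_mode_probs(p, observed_mode=1)    # vision says "swimming"
```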
Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians.
Clayton, Kameron K; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D; Kidd, Gerald
2016-01-01
The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, "cocktail-party" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the "cocktail party problem".
ERIC Educational Resources Information Center
Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.
2005-01-01
Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…
Social Cognition as Reinforcement Learning: Feedback Modulates Emotion Inference.
Zaki, Jamil; Kallman, Seth; Wimmer, G Elliott; Ochsner, Kevin; Shohamy, Daphna
2016-09-01
Neuroscientific studies of social cognition typically employ paradigms in which perceivers draw single-shot inferences about the internal states of strangers. Real-world social inference features very different parameters: people often encounter and learn about particular social targets (e.g., friends) over time and receive feedback about whether their inferences are correct or incorrect. Here, we examined this process and, more broadly, the intersection between social cognition and reinforcement learning. Perceivers were scanned using fMRI while repeatedly encountering three social targets who produced conflicting visual and verbal emotional cues. Perceivers guessed how targets felt and received feedback about whether they had guessed correctly. Visual cues reliably predicted one target's emotion, verbal cues predicted a second target's emotion, and neither reliably predicted the third target's emotion. Perceivers successfully used this information to update their judgments over time. Furthermore, trial-by-trial learning signals, estimated using two reinforcement learning models, tracked activity in ventral striatum and ventromedial pFC, structures associated with reinforcement learning, as well as regions associated with updating social impressions, including TPJ. These data suggest that learning about others' emotions, like other forms of feedback learning, relies on domain-general reinforcement mechanisms as well as domain-specific social information processing.
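The kind of trial-by-trial learning signal such models estimate can be sketched with a simple delta rule: the perceiver maintains an expected reliability for each cue-target pairing and updates it by the prediction error after feedback. The learning rate and outcome coding are illustrative assumptions, not the paper's fitted model.

```python
# Sketch: delta-rule update of a cue's expected reliability.
def update_reliability(value, correct, alpha=0.2):
    outcome = 1.0 if correct else 0.0
    prediction_error = outcome - value     # trial-by-trial learning signal
    return value + alpha * prediction_error, prediction_error
```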
Passive tracking scheme for a single stationary observer
NASA Astrophysics Data System (ADS)
Chan, Y. T.; Rea, Terry
2001-08-01
While there are many techniques for Bearings-Only Tracking (BOT) in the ocean environment, they do not apply directly to the land situation. Generally, for tactical reasons, the land observer platform is stationary; but it has two sensors, visual and infrared, for measuring bearings and a laser range finder (LRF) for measuring range. There is a requirement to develop a new BOT data fusion scheme that fuses the two sets of bearing readings and, together with a single LRF measurement, produces a unique track. This paper first develops a parameterized solution for the target speeds, prior to the occurrence of the LRF measurement, when the problem is unobservable. At, and after, the LRF measurement, a BOT formulated as a least squares (LS) estimator then produces a unique LS estimate of the target states. Bearing readings from the other sensor serve as instrumental variables in a data fusion setting to eliminate the bias in the BOT estimator. The result is a recursive, unbiased, and decentralized data fusion scheme. Results from two simulation experiments have corroborated the theoretical development and show that the scheme is optimal.
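The geometry behind the parameterized solution can be sketched directly: each bearing from a stationary observer at the origin to a constant-velocity target yields one homogeneous equation in the state (x0, y0, vx, vy), so bearings alone fix the track only up to scale, and a single LRF range resolves that scale. The sketch below omits noise handling and the instrumental-variable debiasing that the paper uses.

```python
# Sketch: pseudolinear bearings-only track, scaled by one range measurement.
import numpy as np

def bot_with_single_range(t, theta, t_r, r):
    """t, theta: bearing times and angles [rad]; r: range measured at t_r."""
    s, c = np.sin(theta), np.cos(theta)
    # Each bearing gives: sin(th)*x(t) - cos(th)*y(t) = 0.
    A = np.column_stack([s, -c, s * t, -c * t])  # A @ (x0, y0, vx, vy) = 0
    _, _, vh = np.linalg.svd(A)
    s_dir = vh[-1]                               # null-space direction (scale-free)
    pos = s_dir[:2] + s_dir[2:] * t_r            # unscaled position at t_r
    k = r / np.linalg.norm(pos)                  # scale from the single LRF shot
    theta_r = np.interp(t_r, t, theta)           # bearing at the range instant
    if pos @ np.array([np.cos(theta_r), np.sin(theta_r)]) < 0:
        k = -k                                   # resolve the sign ambiguity
    return k * s_dir                             # full state (x0, y0, vx, vy)
```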
Prete, Frederick R; Komito, Justin L; Dominguez, Salina; Svenson, Gavin; López, LeoLin Y; Guillen, Alex; Bogdanivich, Nicole
2011-09-01
We assessed the differences in appetitive responses to visual stimuli by three species of praying mantis (Insecta: Mantodea), Tenodera aridifolia sinensis, Mantis religiosa, and Cilnia humeralis. Tethered, adult females watched computer generated stimuli (erratically moving disks or linearly moving rectangles) that varied along predetermined parameters. Three responses were scored: tracking, approaching, and striking. Threshold stimulus size (diameter) for tracking and striking at disks ranged from 3.5 deg (C. humeralis) to 7.8 deg (M. religiosa), and from 3.3 deg (C. humeralis) to 11.7 deg (M. religiosa), respectively. Unlike the other species, which struck at disks as large as 44 deg, T. a. sinensis displayed a preference for 14 deg disks. Disks moving at 143 deg/s were preferred by all species. M. religiosa exhibited the most approaching behavior and, with T. a. sinensis, distinguished between rectangular stimuli moving parallel versus perpendicular to their long axes. C. humeralis did not make this distinction. Stimulus sizes that elicited the target behaviors were not related to mantis size. However, differences in compound eye morphology may be related to species differences: C. humeralis' eyes are farthest apart, and it has an apparently narrower binocular visual field, which may affect retinal inputs to movement-sensitive visual interneurons.
Kriechbaumer, Thomas; Blackburn, Kim; Breckon, Toby P.; Hamilton, Oliver; Rivas Casado, Monica
2015-01-01
Autonomous survey vessels can increase the efficiency and availability of wide-area river environment surveying as a tool for environment protection and conservation. A key challenge is the accurate localisation of the vessel, where bank-side vegetation or urban settlement preclude the conventional use of line-of-sight global navigation satellite systems (GNSS). In this paper, we evaluate unaided visual odometry, via an on-board stereo camera rig attached to the survey vessel, as a novel, low-cost localisation strategy. Feature-based and appearance-based visual odometry algorithms are implemented on a six degrees of freedom platform operating under guided motion, but stochastic variation in yaw, pitch and roll. Evaluation is based on a 663 m-long trajectory (>15,000 image frames) and statistical error analysis against ground truth position from a target tracking tachymeter integrating electronic distance and angular measurements. The position error of the feature-based technique (mean of ±0.067 m) is three times smaller than that of the appearance-based algorithm. From multi-variable statistical regression, we are able to attribute this error to the depth of tracked features from the camera in the scene and variations in platform yaw. Our findings inform effective strategies to enhance stereo visual localisation for the specific application of river monitoring. PMID:26694411
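A compact skeleton of one feature-based stereo visual-odometry step of the general kind evaluated above (a sketch, not the paper's implementation): 3-D points triangulated from the previous stereo pair are re-detected in the new left image, and relative motion is recovered with PnP plus RANSAC. It assumes prev_pts3d rows are aligned with prev_desc rows and a calibrated, undistorted camera matrix K.

```python
# Sketch: one feature-based visual odometry step with ORB + PnP RANSAC.
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def vo_step(prev_pts3d, prev_desc, new_left_gray, K):
    kps, desc = orb.detectAndCompute(new_left_gray, None)
    matches = matcher.match(prev_desc, desc)
    obj = np.float32([prev_pts3d[m.queryIdx] for m in matches])
    img = np.float32([kps[m.trainIdx].pt for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec     # relative camera motion w.r.t. the previous frame
```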
Paliwal, Bhudatt; Hill, Patrick; Bayouth, John E; Geurts, Mark W; Baschnagel, Andrew M; Bradley, Kristin A; Harari, Paul M; Rosenberg, Stephen; Brower, Jeffrey V; Wojcieszynski, Andrzej P; Hullett, Craig; Bayliss, R A; Labby, Zacariah E; Bassetti, Michael F
2018-01-01
Magnetic resonance-guided radiation therapy (MRgRT) offers advantages for image guidance for radiotherapy treatments as compared to conventional computed tomography (CT)-based modalities. The superior soft tissue contrast of magnetic resonance (MR) enables an improved visualization of the gross tumor and adjacent normal tissues in the treatment of abdominal and thoracic malignancies. Online adaptive capabilities, coupled with advanced motion management of real-time tracking of the tumor, directly allow for high-precision inter-/intrafraction localization. The primary aim of this case series is to describe MR-based interventions for localizing targets not well-visualized with conventional image-guided technologies. The abdominal and thoracic sites of the lung, kidney, liver, and gastric targets are described to illustrate the technological advancement of MR-guidance in radiotherapy. PMID:29872602
The seam visual tracking method for large structures
NASA Astrophysics Data System (ADS)
Bi, Qilin; Jiang, Xiaomin; Liu, Xiaoguang; Cheng, Taobo; Zhu, Yulong
2017-10-01
In this paper, a compact and flexible weld seam visual tracking method is proposed. First, because a fixed tracking height can cause interference between the vision device and the work-piece to be welded, a weld vision system with a compact structure and an adjustable tracking height is developed. Second, by analyzing the relative spatial pose among the camera, the laser, and the work-piece to be welded, and applying the theory of relative geometric imaging, a mathematical model relating image feature parameters to the three-dimensional trajectory of the assembly gap to be welded is established. Third, the visual imaging parameters of the line-structured light are optimized through experiments on the weld structure. Fourth, imaging interference arises because the line-structured light scatters in bright metal regions and because surface scratches appear bright; these disturbances seriously reduce computational efficiency. An algorithm based on the human visual attention mechanism is therefore used to extract weld features efficiently and stably. Finally, experiments verify that the proposed compact and flexible method achieves a tracking accuracy of 0.5 mm on large structural parts, indicating broad prospects for industrial application.
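As a baseline illustration of the imaging step (not the paper's attention-based extractor), the per-column peak of a line-structured-light image gives the stripe centreline, with a simple intensity gate to reject columns dominated by scatter from polished metal or scratches.

```python
# Sketch: per-column laser-stripe centreline extraction with an intensity gate.
import numpy as np

def stripe_centerline(gray, min_peak=120):
    """gray: (H, W) image; returns per-column stripe row, NaN where unsure."""
    rows = np.argmax(gray, axis=0)                 # brightest row per column
    peaks = gray[rows, np.arange(gray.shape[1])]
    centre = rows.astype(float)
    centre[peaks < min_peak] = np.nan              # no confident stripe here
    return centre
```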
Hout, Michael C.; Goldinger, Stephen D.
2011-01-01
When observers search for a target object, they incidentally learn the identities and locations of “background” objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays (Hout & Goldinger, 2010). Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory; Wolfe, 2007). In the current study, we asked two questions: (1) under what conditions does such incidental learning occur, and (2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated non-target objects. Across conditions, the consistency of search sets and spatial layouts was manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent. PMID:21574743
Effects of vision on head-putter coordination in golf.
Gonzalez, David Antonio; Kegel, Stefan; Ishikura, Tadao; Lee, Tim
2012-07-01
Low-skill golfers coordinate the movements of their head and putter with an allocentric, isodirectional coupling, which is opposite to the allocentric, antidirectional coordination pattern used by experts (Lee, Ishikura, Kegel, Gonzalez, & Passmore, 2008). The present study investigated the effects of four vision conditions (full vision, no vision, target focus, and ball focus) on head-putter coupling in low-skill golfers. Performance in the absence of vision resulted in a level of high isodirectional coupling that was similar to the full vision condition. However, when instructed to focus on the target during the putt, or focus on the ball through a restricted viewing angle, low-skill golfers significantly decoupled the head-putter coordination pattern. However, outcome measures demonstrated that target focus resulted in poorer performance compared with the other visual conditions, thereby providing overall support for use of a ball focus strategy to enhance coordination and outcome performance. Focus of attention and reduced visual tracking were hypothesized as potential reasons for the decoupling.
Ebe, Kazuyu; Sugimoto, Satoru; Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi; Court, Laurence; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji
2015-08-01
To develop and evaluate a new video image-based QA system, including in-house software, that can display the tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio-caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient's tumor motion. A substitute target with the patient's tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors' QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients' tumor motions were evaluated as waveforms, and the peak-to-peak distances were also measured to verify their reproducibility. Thirteen of sixteen trajectories (81.3%) were successfully reproduced with the Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were analyzed. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from the 4D modeling function errors and gimbal motion errors in the ExacTrac log analyses (n = 13). The newly developed video image-based QA system, including in-house software, can analyze more than a thousand images (33 frames/s). Positional errors are approximately equivalent to those in ExacTrac log analyses. This system is useful for visually illustrating the progress of the tracking state and for quantifying positional accuracy during dynamic tumor tracking irradiation in the Vero4DRT system.
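The reported statistic reduces to a short computation over the per-frame centre positions; a sketch under assumed inputs follows.

```python
# Sketch: "absolute mean difference + 2 SD" positional-error summary.
import numpy as np

def positional_error(target_y, field_y):
    """target_y, field_y: per-video-frame centre positions [mm]."""
    diff = np.abs(np.asarray(target_y) - np.asarray(field_y))
    return diff.mean() + 2.0 * diff.std()
```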
Eye-Tracking in the Study of Visual Expertise: Methodology and Approaches in Medicine
ERIC Educational Resources Information Center
Fox, Sharon E.; Faulkner-Jones, Beverly E.
2017-01-01
Eye-tracking is the measurement of eye motions and point of gaze of a viewer. Advances in this technology have been essential to our understanding of many forms of visual learning, including the development of visual expertise. In recent years, these studies have been extended to the medical professions, where eye-tracking technology has helped us…
Fast Deep Tracking via Semi-Online Domain Adaptation
NASA Astrophysics Data System (ADS)
Li, Xiaoping; Luo, Wenbing; Zhu, Yi; Li, Hanxi; Wang, Mingwen
2018-04-01
Deep trackers have demonstrated overwhelming superiority over shallow methods. Unfortunately, they also suffer from low frame rates. To alleviate the problem, a number of real-time deep trackers have been proposed that remove the online updating procedure from the CNN model. However, the absence of online updating leads to a significant drop in tracking accuracy. In this work, we propose to perform domain adaptation for visual tracking in two stages, transferring information from the visual tracking domain and the instance domain, respectively. In this way, the proposed visual tracker achieves tracking accuracy comparable to state-of-the-art trackers and runs at real-time speed on an average consumer GPU.
Autonomous Rover Traverse and Precise Arm Placement on Remotely Designated Targets
NASA Technical Reports Server (NTRS)
Nesnas, Issa A.; Pivtoraiko, Mihail N.; Kelly, Alonzo; Fleder, Michael
2012-01-01
This software controls a rover platform to traverse rocky terrain autonomously, plan paths, and avoid obstacles using its stereo hazard and navigation cameras. It does so while continuously tracking a target of interest selected from 10-20 m away. The rover drives and tracks the target until it reaches the vicinity of the target. The rover then positions itself to approach the target, deploys its robotic arm, and places the end-effector instrument on the designated target to within 2-3 cm of the originally selected point. This software features continuous navigation in a fairly rocky field in an outdoor environment and the ability to enable the rover to avoid large rocks and traverse over smaller ones. Using point-and-click mouse commands, a scientist designates targets in the initial imagery acquired from the rover's mast cameras. The navigation software uses stereo imaging, traversability analysis, path planning, trajectory generation, and trajectory execution. It also includes visual tracking of a designated target selected from 10 m away while continuously navigating the rocky terrain. Improvements in this design include steering while driving, which uses continuous-curvature paths. There are also several improvements to the traversability analyzer, including improved data fusion of traversability maps under pose estimation uncertainty, handling of boundary effects to enable tighter maneuvers, and handling of a wider range of obstacles. This work advances what was previously developed and integrated on the Mars Exploration Rovers by using algorithms capable of traversing more rock-dense terrains, enabling tight, thread-the-needle maneuvers. These algorithms were integrated on the newly refurbished Athena Mars research rover and fielded in the JPL Mars Yard. Forty-three runs were conducted with targets at distances ranging from 5 to 15 m, and a success rate of 93% was achieved for placement of the instrument within 2-3 cm of the target.
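The traverse-and-approach sequence described above can be summarized as a control loop. The sketch below is purely conceptual: every method on the hypothetical `rover` object stands in for a subsystem named in the abstract (stereo capture, traversability analysis, path planning, visual target tracking, arm placement), not for the actual flight software API.

```python
def traverse_and_place(rover, target, approach_dist_m=1.0):
    """Conceptual loop: drive toward a remotely designated target while
    tracking it, then approach and place the instrument. All `rover`
    methods are hypothetical stand-ins, not the real interface."""
    while rover.distance_to(target) > approach_dist_m:
        left, right = rover.capture_stereo()              # hazard/nav cameras
        tmap = rover.analyze_traversability(left, right)
        path = rover.plan_path(tmap, target)              # continuous-curvature paths
        rover.drive(path.next_segment())                  # steer while driving
        target = rover.track_target(left, right, target)  # keep visual lock
    rover.position_for_approach(target)
    rover.deploy_arm()
    return rover.place_instrument(target)                 # goal: within 2-3 cm
```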
Lateralization in Alpha-Band Oscillations Predicts the Locus and Spatial Distribution of Attention.
Ikkai, Akiko; Dandekar, Sangita; Curtis, Clayton E
2016-01-01
Attending to a task-relevant location changes how neural activity oscillates in the alpha band (8-13 Hz) in posterior visual cortical areas. However, the relationships between top-down attention, changes in alpha oscillations in visual cortex, and attentional performance remain poorly understood. Here, we tested the degree to which posterior alpha power tracked the locus and distribution of attention, and how well the topography of alpha could predict the locus of attention. We recorded magnetoencephalographic (MEG) data while subjects performed an attention-demanding visual discrimination task that dissociated the direction of attention from the direction of the saccade used to indicate choice. On some trials, an endogenous cue predicted the target's location, while on others it contained no spatial information. When the target's location was cued, alpha power decreased in sensors over occipital cortex contralateral to the attended visual field. When the cue did not predict the target's location, alpha power again decreased in sensors over occipital cortex, but bilaterally, and increased in sensors over frontal cortex. Thus, the distribution and topography of alpha reliably indicated the locus of covert attention. Together, these results suggest that alpha synchronization reflects changes in the excitability of populations of neurons whose receptive fields match the locus of attention. This is consistent with the hypothesis that alpha oscillations reflect the neural mechanisms by which top-down control of attention biases information processing and modulates the activity of neurons in visual cortex.
Code of Federal Regulations, 2011 CFR
2011-10-01
§ 214.509 Required visual illumination and reflective devices for new on-track roadway maintenance machines (Transportation; Railroad Workplace Safety; On-Track Roadway Maintenance Machines and Hi-Rail Vehicles).
Arthofer, Wolfgang; Decristoforo, Clemens; Schlick-Steiner, Birgit C.; Steiner, Florian M.
2016-01-01
Knowledge of a species’ ecology, including its movement in time and space, is key for many questions in biology and conservation. While numerous tools for tracking larger animals are available, millimetre-sized insects are averse to standard tracking and labelling procedures. Here, we evaluated the applicability of ultra-low, permission-exempt activities of the metastable isomer of the radionuclide Technetium-99 for labelling and field detection of the mountain fly Drosophila nigrosparsa. We demonstrate that an activity of less than 10 MBq is sufficient to label dozens of flies and detect single individuals using standard radiation protection monitors. The methodology presented here is applicable to many small-sized, low-mobility animals, and is independent of light and weather conditions and of visual contact with the target organism. PMID:27812000
A Bilateral Advantage for Storage in Visual Working Memory
ERIC Educational Resources Information Center
Umemoto, Akina; Drew, Trafton; Ester, Edward F.; Awh, Edward
2010-01-01
Various studies have demonstrated enhanced visual processing when information is presented across both visual hemifields rather than in a single hemifield (the "bilateral advantage"). For example, Alvarez and Cavanagh (2005) reported that observers were able to track twice as many moving visual stimuli when the tracked items were presented…
Manelis, Anna; Reder, Lynne M
2012-10-16
Using a combination of eye tracking and fMRI in a contextual cueing task, we explored the mechanisms underlying the facilitation of visual search for repeated spatial configurations. When configurations of distractors were repeated, greater activation in the right hippocampus corresponded to greater reductions in the number of saccades to locate the target. A psychophysiological interactions analysis for repeated configurations revealed that a strong functional connectivity between this area in the right hippocampus and the left superior parietal lobule early in learning was significantly reduced toward the end of the task. Practice related changes (which we call "procedural learning") in activation in temporo-occipital and parietal brain regions depended on whether or not spatial context was repeated. We conclude that context repetition facilitates visual search through chunk formation that reduces the number of effective distractors that have to be processed during the search. Context repetition influences procedural learning in a way that allows for continuous and effective chunk updating.
Chen, Yi-Ching; Lin, Yen-Ting; Chang, Gwo-Ching; Hwang, Ing-Shiou
2017-01-01
The detection of error information is an essential prerequisite of feedback-based movement. This study investigated the differential behavioral and neurophysiological mechanisms of a cyclic force-tracking task using error-reducing and error-enhancing feedback. The discharge patterns of a relatively large number of motor units (MUs) were assessed with custom-designed multi-channel surface electromyography following mathematical decomposition of the experimentally measured signals. Force characteristics, the force-discharge relation, and cortical activities in the contralateral motor cortex phase-locked to individual MUs were contrasted among the low (LSF), normal (NSF), and high scaling factor (HSF) conditions, in which online execution errors were displayed at various amplification ratios. Along with a spectral shift of the force output toward a lower band, force output in the LSF condition became less irregular and exhibited a greater phase lead, while tracking accuracy was worse than in the HSF condition. The coherent discharge of high phasic (HP) MUs with the target signal was greater, and inter-spike intervals were larger, in the LSF condition than in the HSF condition. Force-tracking in the LSF condition was manifested with stronger phase-locked EEG activity in the contralateral motor cortex relative to the discharge of the HP MUs (LSF > NSF, HSF). The coherent discharge of the HP MUs during cyclic force-tracking dominated the force-discharge relation, which increased inversely with the error scaling factor. In conclusion, the size of the visualized error gates motor unit discharge, the force-discharge relation, and the relative influences of feedback and feedforward processes on force control. A smaller visualized error size favors voluntary force control using a feedforward process, in relation to a selective central modulation that enhances the coherent discharge of HP MUs. PMID:28348530
Multi-modal information processing for visual workload relief
NASA Technical Reports Server (NTRS)
Burke, M. W.; Gilson, R. D.; Jagacinski, R. J.
1980-01-01
The simultaneous performance of two single-dimensional compensatory tracking tasks, one with the left hand and one with the right hand, is discussed. The tracking performed with the left hand was considered the primary task and was performed with a visual display or a quickened kinesthetic-tactual (KT) display. The right-handed tracking was considered the secondary task and was carried out only with a visual display. Although the two primary task displays had afforded equivalent performance in a critical tracking task performed alone, in the dual-task situation the quickened KT primary display resulted in superior secondary visual task performance. Comparisons of various combinations of primary and secondary visual displays in integrated or separated formats indicate that the superiority of the quickened KT display is not simply due to the elimination of visual scanning. Additional testing indicated that quickening per se also is not the immediate cause of the observed KT superiority.
Airborne target tracking algorithm against oppressive decoys in infrared imagery
NASA Astrophysics Data System (ADS)
Sun, Xiechang; Zhang, Tianxu
2009-10-01
This paper presents an approach for tracking an airborne target against oppressive infrared decoys. An oppressive decoy lures an infrared-guided missile with its high infrared radiation. Traditional tracking algorithms lose stability, or even fail outright, when an airborne target continuously releases decoys. The proposed approach first determines an adaptive tracking window whose center is set at a target position predicted from a uniform motion model. Different strategies are applied to determine the tracking window size according to the target state. The image within the tracking window is segmented, and multiple features of candidate targets are extracted. The most similar candidate is associated with the tracked target using a decision function that calculates a weighted sum of normalized feature differences between the two targets being compared. The integrated intensity ratio of the associated and tracked targets, together with the target centroid, is examined to estimate the target state in the presence of decoys. The tracking ability and robustness of the proposed approach have been validated on available real-world and simulated infrared image sequences containing airborne targets and oppressive decoys.
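A minimal sketch of such a weighted-sum decision function follows; the feature set, weights, and normalization are illustrative assumptions rather than the paper's exact choices.

```python
def decision_score(candidate, track, weights):
    """Weighted sum of normalized feature differences (lower = more
    similar); feature names and weights are illustrative assumptions."""
    return sum(
        w * abs(candidate[k] - track[k]) / (abs(track[k]) + 1e-6)
        for k, w in weights.items()
    )

track = {"area": 120.0, "mean_intensity": 210.0, "aspect": 1.4}
candidates = [
    {"area": 128.0, "mean_intensity": 205.0, "aspect": 1.5},  # likely the aircraft
    {"area": 45.0, "mean_intensity": 255.0, "aspect": 1.0},   # small, very bright decoy
]
weights = {"area": 0.4, "mean_intensity": 0.4, "aspect": 0.2}
best = min(candidates, key=lambda c: decision_score(c, track, weights))
```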
Eye movement assessment of selective attentional capture by emotional pictures.
Nummenmaa, Lauri; Hyönä, Jukka; Calvo, Manuel G
2006-05-01
The eye-tracking method was used to assess attentional orienting to and engagement on emotional visual scenes. In Experiment 1, unpleasant, neutral, or pleasant target pictures were presented simultaneously with neutral control pictures in peripheral vision under instruction to compare the pleasantness of the pictures. The probability of first fixating an emotional picture, and the frequency of subsequent fixations, were greater than those for neutral pictures. In Experiment 2, participants were instructed to avoid looking at the emotional pictures, but these were still more likely to be fixated first and gazed at longer during first-pass viewing than neutral pictures. Low-level visual features cannot explain the results. It is concluded that overt visual attention is captured by both unpleasant and pleasant emotional content.
A helmet mounted display to adapt the telerobotic environment to human vision
NASA Technical Reports Server (NTRS)
Tharp, Gregory; Liu, Andrew; Yamashita, Hitomi; Stark, Lawrence
1990-01-01
A Helmet Mounted Display system has been developed. It provides the capability to display stereo images with the viewpoint tied to the subject's head orientation. This type of display might be useful in a telerobotic environment, provided the correct operating parameters are known. The effects of update frequency were tested using a 3D tracking task. The effects of blur were tested using both tracking and pick-and-place tasks. For both, researchers found that operator performance can be degraded if the correct parameters are not used. Researchers are also using the display to explore the use of head movements as part of gaze as subjects search their visual field for target objects.
Some lessons learned in three years with ADS-33C. [rotorcraft handling qualities specification
NASA Technical Reports Server (NTRS)
Key, David L.; Blanken, Chris L.; Hoh, Roger H.
1993-01-01
Three years of using the U.S. Army's rotorcraft handling qualities specification, Aeronautical Design Standard-33, has shown it to be surprisingly robust. It appears to provide an excellent basis for design and for assessment; however, as its subtleties became better understood, several areas needing refinement became apparent. Three responses to these needs are documented in this paper: (1) the yaw-axis attitude quickness for hover target acquisition and tracking can be relaxed slightly; (2) understanding and application of the criteria for degraded visual environments needed elaboration, which has been documented here along with guidelines for testing to obtain visual cue ratings; and (3) the flight test maneuvers were an innovation that turned out to be very valuable, and their extensive use has made it necessary to tighten definitions and testing guidance, which was accomplished for the good visual environment and is underway for degraded visual environments.
The functional performance of the Argus II retinal prosthesis
Stronks, H Christiaan; Dagnelie, Gislin
2014-01-01
Visual prostheses are devices to treat profound vision loss by stimulating secondary nerve cells anywhere along the visual pathway, typically with electrical pulses. The Argus® II implant, developed by Second Sight Medical Products (SSMP, Sylmar, CA, USA), targets the retina and features 60 electrodes that electrically stimulate the surviving retinal neurons. Of the approximately 20 research groups that are actively developing visual prostheses, SSMP has the longest track record. The Argus II was the first visual prosthesis to become commercially available: It received the CE mark in Europe in 2011 and FDA approval was granted in early 2013 for humanitarian use in the USA. Meanwhile, the Argus II safety/benefit study has been extended for research purposes, and is ongoing. In this review we will discuss the performance of the Argus II in restoring sight to the blind, and we will shed light on its expected developments in the coming years. PMID:24308734
Boy, Nikolas; Heringer, Jana; Haege, Gisela; Glahn, Esther M; Hoffmann, Georg F; Garbade, Sven F; Kölker, Stefan; Burgard, Peter
2015-12-22
Glutaric aciduria type I (GA-I) is an inherited metabolic disease due to deficiency of glutaryl-CoA dehydrogenase (GCDH). Cognitive functions are generally thought to be spared, but have not yet been studied in detail. Thirty patients detected by newborn screening (n = 13), high-risk screening (n = 3) or targeted metabolic testing (n = 14) were studied for simple reaction time (SRT), continuous performance (CP), visual working memory (VWM), visual-motor coordination (Tracking) and visual search (VS). Dystonia (n = 13 patients) was categorized using the Barry-Albright-Dystonia Scale (BADS). Patients were compared with 196 healthy controls. Developmental functions of cognitive performances were analysed using a negative exponential function model. BADS scores correlated with speed tests but not with tests measuring stability or higher cognitive functions without time constraints. Developmental functions of GA-I patients significantly differed from controls for SRT and VS but not for VWM and showed obvious trends for CP and Tracking. Dystonic patients were slower in SRT and CP but reached their asymptote of performance similar to asymptomatic patients and controls in all tests. Asymptomatic patients did not differ from controls, except showing significantly better results in Tracking and a trend for slower reactions in visual search. Data across all age groups of patients and controls fitted well to a model of negative exponential development. Dystonic patients predominantly showed motor speed impairment, whereas performance improved with higher cognitive load. Patients without motor symptoms did not differ from controls. Developmental functions of cognitive performances were similar in patients and controls. Performance in tests with higher cognitive demand might be preserved in GA-I, even in patients with striatal degeneration.
Lansu, Tessa A M; Cillessen, Antonius H N; Karremans, Johan C
2014-01-01
Previous research has shown that adolescents' attention for a peer is determined by the peer's status. This study examined how it is also determined by the status of the perceiving adolescent, and the gender of both parties involved (perceiver and perceived). Participants were 122 early adolescents (M age = 11.0 years) who completed sociometric measures and eye-tracking recordings of visual fixations at pictures of high-status (popular) and low-status (unpopular) classmates. Automatic attention (first-gaze preference) and controlled attention (total gaze time) were measured. Target popularity was associated with both measures of attention. These associations were further moderated by perceiver popularity and perceiver and target gender. Popular adolescents attracted attention especially from other popular adolescents. Popular boys attracted attention especially from girls.
NASA Astrophysics Data System (ADS)
Muggiolu, Giovanna; Pomorski, Michal; Claverie, Gérard; Berthet, Guillaume; Mer-Calfati, Christine; Saada, Samuel; Devès, Guillaume; Simon, Marina; Seznec, Hervé; Barberet, Philippe
2017-01-01
As well as being a significant source of environmental radiation exposure, α-particles are increasingly considered for use in targeted radiation therapy. A better understanding of α-particle-induced damage at the DNA scale can be achieved by following their tracks in real time in targeted living cells. Focused α-particle microbeams can facilitate this but, due to their low energy (up to a few MeV) and limited range, α-particle detection, delivery, and follow-up observation of radiation-induced damage remain difficult. In this study, we developed a thin Boron-doped Nano-Crystalline Diamond membrane that allows reliable detection of single α-particles and single-cell irradiation with negligible beam scattering. The radiation-induced responses to single 3 MeV α-particles delivered with the focused microbeam are visualized in situ over thirty minutes after irradiation by the accumulation of the GFP-tagged RNF8 protein at DNA damage sites.
NASA Astrophysics Data System (ADS)
Sasaki, T.; Azuma, S.; Matsuda, S.; Nagayama, A.; Ogido, M.; Saito, H.; Hanafusa, Y.
2016-12-01
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives a large amount of deep-sea research videos and photos obtained by JAMSTEC's research submersibles and camera-equipped vehicles. The web site "JAMSTEC E-library of Deep-sea Images: J-EDI" (http://www.godac.jamstec.go.jp/jedi/e/) has made these videos and photos available to the public via the Internet since 2011. Users can search for target videos and photos by keywords, easy-to-understand icons, and dive information, because operating staff classify the videos and photos by content, e.g. living organisms and geological environment, and add comments to them. Dive survey data, including videos and photos, are not only academically valuable but also helpful for education and outreach activities. To improve visibility for broader communities, this year we added new functions for 3-dimensional display that synchronize various dive survey data with videos. Users can search for dive survey data on 3D maps with plotted dive points using the WebGL virtual map engine "Cesium". By selecting a dive point, users can watch deep-sea videos and photos together with associated environmental data, e.g. water temperature, salinity, and rock and biological sample photos, obtained during the dive survey. Users can browse a dive track visualized in a 3D virtual space using a WebGL JavaScript library. By synchronizing this virtual dive track with videos, users can watch deep-sea videos recorded at any point on the track. Users can also play an animation in which a submersible-shaped polygon automatically traces the 3D virtual dive track while the displays of dive survey data stay synchronized with the trace. From each page displaying a 3D virtual dive track, users can refer directly to additional information in other JAMSTEC data sites, such as the marine biodiversity database, the marine biological sample database, the rock sample database, and the cruise and dive information database. A 3D visualization of a dive track lets users experience a virtual dive survey. In addition, synchronizing the virtual dive track with videos makes it easy to understand the living organisms and geological environments at a dive point. These functions will therefore visually support understanding of deep-sea environments in lectures and educational activities.
The stage of priming: are intertrial repetition effects attentional or decisional?
Becker, Stefanie I
2008-02-01
In a visual search task, reaction times to a target are shorter when its features are repeated than when they switch. The present study investigated whether these priming effects affect the attentional stage of target selection, as proposed by the priming of pop-out account, or whether they modulate performance at a later, post-selectional stage, as claimed by the episodic retrieval view. Second, to test whether priming affects only the target-defining feature or applies to all target features in a holistic fashion, two presentation conditions were used that promoted either encoding of only the target-defining feature or holistic encoding of all target features. Results from four eye tracking experiments involving a size and colour singleton target showed, first, that priming modulates selectional processes concerned with guiding attention. Second, there were traces of holistic priming effects, which however were modulated not by the displays but by expectation and task difficulty.
Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature
Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat
2014-01-01
Representing the target appearance model for moving object tracking in complex environments is a challenge. This study presents a novel method whose appearance model is described by double templates based on the timed motion history image with HSV color histogram feature (tMHI-HSV). The main components include offline and online template initialization, tMHI-HSV-based calculation of candidate patch feature histograms, double templates matching (DTM) for object location, and template updating. First, we initialize the target object region and calculate its HSV color histogram feature as the offline template and the online template. Second, the tMHI-HSV is used to segment the motion region, and the color histograms of the candidate object patches are calculated to represent their appearance models. Finally, we use the DTM method to track the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle scale variation and pose change of rigid and nonrigid objects, even under illumination change and occlusion. PMID:24592185
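The appearance model and matching step might look like the following OpenCV sketch; the histogram bin counts and the blending weight between offline and online templates are assumptions for illustration, not values from the paper.

```python
import cv2

def hsv_hist(patch_bgr):
    """H-S color histogram as the appearance model of an image patch."""
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def dtm_score(candidate_patch, offline_hist, online_hist, alpha=0.5):
    """Blend similarity to the initial (offline) and continuously updated
    (online) templates; alpha is an assumed weight, not from the paper."""
    h = hsv_hist(candidate_patch)
    s_off = cv2.compareHist(offline_hist, h, cv2.HISTCMP_CORREL)
    s_on = cv2.compareHist(online_hist, h, cv2.HISTCMP_CORREL)
    return alpha * s_off + (1.0 - alpha) * s_on
```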
Prey Capture Behavior Evoked by Simple Visual Stimuli in Larval Zebrafish
Bianco, Isaac H.; Kampff, Adam R.; Engert, Florian
2011-01-01
Understanding how the nervous system recognizes salient stimuli in the environment and selects and executes the appropriate behavioral responses is a fundamental question in systems neuroscience. To facilitate the neuroethological study of visually guided behavior in larval zebrafish, we developed “virtual reality” assays in which precisely controlled visual cues can be presented to larvae whilst their behavior is automatically monitored using machine vision algorithms. Freely swimming larvae responded to moving stimuli in a size-dependent manner: they directed multiple low amplitude orienting turns (∼20°) toward small moving spots (1°) but reacted to larger spots (10°) with high-amplitude aversive turns (∼60°). The tracking of small spots led us to examine how larvae respond to prey during hunting routines. By analyzing movie sequences of larvae hunting paramecia, we discovered that all prey capture routines commence with eye convergence and larvae maintain their eyes in a highly converged position for the duration of the prey-tracking and capture swim phases. We adapted our virtual reality assay to deliver artificial visual cues to partially restrained larvae and found that small moving spots evoked convergent eye movements and J-turns of the tail, which are defining features of natural hunting. We propose that eye convergence represents the engagement of a predatory mode of behavior in larval fish and serves to increase the region of binocular visual space to enable stereoscopic targeting of prey. PMID:22203793
[Visual–manual tracking after long spaceflight].
2016-01-01
This study presents the results of the pre- and postflight clinical and physiological examination (CPE) and the scientific experiment “Sensory Adaptation-2” carried out in the Yu.A. Gagarin Research & Test Cosmonaut Training Center. Fourteen Russian cosmonauts, aged 35–50 years, crewmembers of long-duration international spaceflights (ISS-28/29 to ISS-36/37) who spent 159 to 195 days in microgravity, were examined. Studies were conducted twice before space flight (baseline) and on days R+1(2), R+4(5), and R+8(9) after landing. In the study of visual–manual tracking (VMT), eye movements were recorded by electrooculography (EOG) and hand movements by a joystick with biological visual feedback (the current joystick angle/position was represented on the screen). Examinations were conducted using computerized stimulation programs presented on the screen of the hardware-software complex "Sensomotor". Examinations of VMT took place in dialog mode and included the following sections: a) EOG calibration; b) visual–manual tracking within ±10° on a screen with a blank background (smooth linear and sinusoidal movement of a point target at a frequency of 0.16 Hz in the vertical and horizontal directions). Time, amplitude, and velocity characteristics of visual and manual tracking (VT and MT) were evaluated, including the effectiveness (EC) and gain (GC) coefficients, calculated as the ratios of the amplitude and velocity, respectively, of the tracking response to those of the visual stimulus (target). A study of vestibular function (VF) was performed before and after space flight using videooculography, assessing the static torsion otolith–cervical–ocular reflex, dynamic vestibular–cervical–ocular reactions, vestibular reactivity, and spontaneous eye movements. The study of VF in the first days postflight showed a sharp decrease (up to complete absence) of static vestibular excitability (otolith reflex), accompanied by increased dynamic reactivity of the vestibular system. The study of VMT showed a significant decrease in the gain and effectiveness/amplitude of VT in the first days postflight, as well as correlations between the parameters of VT and MT and between VF and VT, while no correlation was found between the parameters of VF and MT. The conditions of space flight (microgravity) were found to have a greater impact on the accuracy of VT than on the accuracy of MT. Full return of the VMT and VF characteristics to baseline was observed on days R+8(9) after the space flight.
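Taking the abstract's definitions literally, the two coefficients reduce to simple response-to-target ratios; the helper below is only a reading of that description, not the original analysis code.

```python
def tracking_coefficients(resp_amplitude, resp_velocity, target_amplitude, target_velocity):
    """Effectiveness (EC) and gain (GC) coefficients read literally from
    the abstract: response-to-target ratios of amplitude and velocity."""
    ec = resp_amplitude / target_amplitude   # effectiveness coefficient
    gc = resp_velocity / target_velocity     # gain coefficient
    return ec, gc

# e.g. a visual-tracking response of 9 deg amplitude to a +/-10 deg target
print(tracking_coefficients(9.0, 3.0, 10.0, 3.3))
```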
Top-down influences on visual attention during listening are modulated by observer sex.
Shen, John; Itti, Laurent
2012-07-15
In conversation, women have a small advantage in decoding non-verbal communication compared to men. In light of these findings, we sought to determine whether sex differences also existed in visual attention during a related listening task and, if so, whether the differences lay in attention to high-level aspects of the scene or to conspicuous visual features. Using eye-tracking and computational techniques, we present direct evidence that men and women orient attention differently during conversational listening. We tracked the eyes of 15 men and 19 women who watched and listened to 84 clips featuring 12 different speakers in various outdoor settings. At the fixation following each saccadic eye movement, we analyzed the type of object that was fixated. Men gazed more often at the mouth and women at the eyes of the speaker. Women more often exhibited "distracted" saccades directed away from the speaker and towards a background scene element. Examining the multi-scale center-surround variation in low-level visual features (static: color, intensity, orientation; dynamic: motion energy), we found that men consistently selected regions which expressed more variation in dynamic features, which can be attributed to a male preference for motion and a female preference for areas that may contain nonverbal information about the speaker. In sum, significant differences were observed, which we speculate arise from different integration strategies of visual cues in selecting the final target of attention. Our findings have implications for studies of sex in nonverbal communication, as well as for more predictive models of visual attention.
Virtual target tracking (VTT) as applied to mobile satellite communication networks
NASA Astrophysics Data System (ADS)
Amoozegar, Farid
1999-08-01
Traditionally, target tracking has been used for aerospace applications, such as tracking highly maneuvering targets in a cluttered environment for missile-to-target intercept scenarios. Although the speed and maneuvering capability of current aerospace targets demand ever more efficient algorithms, many complex techniques have already been proposed in the literature, primarily covering defense applications of tracking methods. On the other hand, the rapid growth of global communication systems, Global Information Systems (GIS), and Global Positioning Systems (GPS) is creating new and more diverse challenges for multi-target tracking applications. Mobile communication and computing stand to gain from a huge market for Cellular Communication and Tracking Devices (CCTD), which will track networked devices at the cellular level. The objective of this paper is to introduce a new concept, Virtual Target Tracking (VTT), for commercial applications of multi-target tracking algorithms and techniques as applied to mobile satellite communication networks. The paper discusses how Virtual Target Tracking would bring more diversity to target tracking research.
Multisensory brand search: How the meaning of sounds guides consumers' visual attention.
Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles
2016-06-01
Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load.
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2011-02-01
The ability to attend to multiple objects that move in the visual field is important for many aspects of daily functioning. The attentional capacity for such dynamic tracking, however, is highly limited and undergoes age-related decline. Several aspects of the tracking process can influence performance. Here, we investigated the effects of feature-based interference from distractor objects appearing in unattended regions of the visual field using a hemifield-tracking task. Younger and older participants performed an attentional tracking task in one hemifield while distractor objects were concurrently presented in the unattended hemifield. Feature similarity between objects in the attended and unattended hemifields, as well as motion speed and the number of to-be-tracked objects, were parametrically manipulated. The results show that increasing feature overlap leads to greater interference from the unattended visual field. This effect of feature-based interference was only present in the slow-speed condition, indicating that the interference is mainly modulated by perceptual demands. High-performing older adults showed an interference effect similar to that of younger adults, whereas low-performing older adults showed poor tracking performance overall.
Annotated Bibliography of Reports: Supplement No. 7, 1 July 1974 - 30 June 1975,
1975-06-30
Studies have shown that alcohol interferes with visual control of vestibular nystagmus. The present study was designed to assess three partially independent… suppression of vestibular nystagmus; a second involved smooth oculomotor tracking of a moving target; and a third required repetitive rapid voluntary shifts in gaze. Oculomotor control was degraded on the first two tasks, with recovery toward the initial performance level 4 hours after drinking. Performance on…
Human image tracking technique applied to remote collaborative environments
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Suzuki, Gen
1993-10-01
To support various kinds of collaboration over long distances using visual telecommunication, it is necessary to transmit visual information about both the participants and the topical materials. When people collaborate in the same workspace, they use visual cues such as facial expressions and eye movement, so realizing a sense of coexistence in a collaborative workspace requires supporting these visual cues. It is therefore important that the facial images be large enough to be useful. During collaborations, especially dynamic collaborative activities such as equipment operation or lectures, the participants often move within the workspace. When people move frequently or over a wide area, the need for automatic human tracking increases. Based on the movement area of the person and the resolution of the extracted area, we have developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results using a real-time tracking system show that the extracted area follows the movement of the human head fairly closely.
A magnetic tether system to investigate visual and olfactory mediated flight control in Drosophila.
Duistermars, Brian J; Frye, Mark
2008-11-21
It has been clear for many years that insects use visual cues to stabilize their heading in a wind stream, and many animals track odors carried on the wind. As such, visual stabilization of upwind flight directly aids odor tracking. But do olfactory signals directly influence visual tracking behavior independently of wind cues? Also, the recent deluge of research on the neurophysiology and neurobehavioral genetics of olfaction in Drosophila has motivated ever more technically sophisticated and quantitative behavioral assays. Here, we modified a magnetic tether system originally devised for vision experiments by equipping the arena with narrow laminar-flow odor plumes. A fly is glued to a small steel pin and suspended in a magnetic field that enables it to yaw freely. Small-diameter food odor plumes are directed downward over the fly's head, eliciting stable tracking by a hungry fly. Here we focus on the critical mechanics of tethering, aligning the magnets, devising the odor plume, and confirming stable odor tracking.
Tiadi, Aimé; Seassau, Magali; Gerard, Christophe-Loïc; Bucci, Maria Pia
2016-01-01
The object of this study was to further explore phonological visual-auditory recognition tasks in a group of fifty-six healthy children (mean age: 9.9 ± 0.3 years) and to compare these data to those recorded in twenty-six age-matched dyslexic children (mean age: 9.8 ± 0.2 years). Eye movements from both eyes were recorded using an infrared video-oculography system (MobileEBT® e(y)e BRAIN). The recognition task was performed under four conditions in which the target object was displayed either with phonologically unrelated objects (baseline condition), with cohort or rhyme objects (cohort and rhyme conditions, respectively), or with both together (rhyme + cohort condition). The percentage of the total time spent on the targets and the latency of the first saccade on the target were measured. Results in healthy children showed that the percentage of the total time spent in the baseline condition was significantly longer than in the other conditions, and that the latency of the first saccade in the cohort condition was significantly longer than in the other conditions; interestingly, the latency decreased significantly with the increasing age of the children. The developmental trend of phonological awareness was also observed in healthy children only. In contrast, we observed that for dyslexic children the total time spent on the target was similar in all four conditions tested, and that they had similar latency values in both cohort and rhyme conditions. These findings suggest a different sensitivity to phonological competitors between dyslexic and non-dyslexic children. The eye-tracking technique also provides online information about phonological awareness capabilities in children. PMID:27438352
Han, Xuemei; Smyth, Regan L; Young, Bruce E; Brooks, Thomas M; Sánchez de Lozada, Alexandra; Bubb, Philip; Butchart, Stuart H M; Larsen, Frank W; Hamilton, Healy; Hansen, Matthew C; Turner, Will R
2014-01-01
Recognizing the imperiled status of biodiversity and its benefit to human well-being, the world's governments committed in 2010 to take effective and urgent action to halt biodiversity loss through the Convention on Biological Diversity's "Aichi Targets". These targets, and many conservation programs, require monitoring to assess progress toward specific goals. However, comprehensive and easily understood information on biodiversity trends at appropriate spatial scales is often not available to the policy makers, managers, and scientists who require it. We surveyed conservation stakeholders in three geographically diverse regions of critical biodiversity concern (the Tropical Andes, the African Great Lakes, and the Greater Mekong) and found high demand for biodiversity indicator information but uneven availability. To begin to address this need, we present a biodiversity "dashboard", a visualization of biodiversity indicators designed to enable tracking of biodiversity and conservation performance data in a clear, user-friendly format. This builds on previous, more conceptual indicator work to create an operationalized online interface communicating multiple indicators at multiple spatial scales. We structured the dashboard around the Pressure-State-Response-Benefit framework, selecting four indicators to measure pressure on biodiversity (deforestation rate), the state of species (Red List Index), conservation response (protection of key biodiversity areas), and benefits to human populations (freshwater provision). Disaggregating global data, we present dashboard maps and graphics for the three regions surveyed and their component countries. These visualizations provide charts showing regional and national trends and lay the foundation for a web-enabled, interactive biodiversity indicators dashboard. This new tool can help track progress toward the Aichi Targets, support national monitoring and reporting, and inform outcome-based policy-making for the protection of natural resources.
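The Pressure-State-Response-Benefit structure maps naturally onto a small configuration object; the sketch below is an illustrative layout with placeholder entries, not the dashboard's actual schema.

```python
# Placeholder layout of one region's indicators in the
# Pressure-State-Response-Benefit framework; values are illustrative.
dashboard_region = {
    "region": "Tropical Andes",
    "indicators": {
        "pressure": {"name": "deforestation rate", "unit": "% forest loss / yr"},
        "state": {"name": "Red List Index", "unit": "index (0-1)"},
        "response": {"name": "protection of key biodiversity areas", "unit": "% covered"},
        "benefit": {"name": "freshwater provision", "unit": "relative provision"},
    },
    "trend_years": list(range(2000, 2013)),  # regional and national trend charts
}
```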
Hout, Michael C; Goldinger, Stephen D
2012-02-01
When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory). In the current study, we asked two questions: (1) under what conditions does such incidental learning occur? And (2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated nontarget objects. Across conditions, the consistency of search sets and spatial layouts was manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts of faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent.
Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment
Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru
2013-01-01
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873
Execution of saccadic eye movements affects speed perception
Goettker, Alexander; Braun, Doris I.; Schütz, Alexander C.; Gegenfurtner, Karl R.
2018-01-01
Due to the foveal organization of our visual system, we have to constantly move our eyes to gain precise information about our environment. Doing so massively alters the retinal input. This is problematic for the perception of moving objects, because physical motion and retinal motion become decoupled, and the brain has to discount the eye movements to recover the speed of moving objects. Two different types of eye movements, pursuit and saccades, are combined for tracking. We investigated how the way we track moving targets affects the perceived target speed. We found that the execution of corrective saccades during pursuit initiation modifies how fast the target is perceived compared with pure pursuit. When participants executed a forward (catch-up) saccade, they perceived the target to be moving faster; when they executed a backward saccade, they perceived the target to be moving more slowly. Variations in pursuit velocity without corrective saccades did not affect perceptual judgments. We present a model for these effects, assuming that the eye velocity signal for small corrective saccades is integrated with the retinal velocity signal during pursuit. In our model, the execution of corrective saccades modulates the integration of these two signals by giving less weight to the retinal information around the time of corrective saccades. PMID:29440494
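The proposed model can be caricatured in a few lines: combine retinal slip and extraretinal (eye velocity) signals, but down-weight the retinal signal around corrective saccades. The weights and window below are illustrative assumptions, not fitted parameters from the paper.

```python
import numpy as np

def perceived_speed(retinal_vel, eye_vel, saccade_idx, w_min=0.2, half_width=3):
    """Combine retinal slip and eye-velocity signals into perceived target
    speed, down-weighting retinal information around corrective saccades.
    w_min and half_width (in samples) are illustrative, not fitted values."""
    retinal_vel = np.asarray(retinal_vel, dtype=float)
    eye_vel = np.asarray(eye_vel, dtype=float)
    w = np.ones_like(retinal_vel)
    for i in saccade_idx:
        w[max(0, i - half_width):i + half_width + 1] = w_min
    return np.mean(w * retinal_vel + eye_vel)  # target speed = slip + eye speed
```

With this weighting, a forward (catch-up) saccade injects a large positive eye-velocity burst while the opposing retinal slip is discounted, so the estimate rises; a backward saccade has the opposite effect, matching the reported percepts.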
de Senneville, Baudouin Denis; Mougenot, Charles; Moonen, Chrit T W
2007-02-01
Focused ultrasound (US) is a unique and noninvasive technique for local deposition of thermal energy deep inside the body. MRI guidance offers the additional benefits of excellent target visualization and continuous temperature mapping. However, treating a moving target poses severe problems because 1) motion-related thermometry artifacts must be corrected, and 2) the US focal point must be relocated according to the target displacement. In this paper a complete MRI-compatible, high-intensity focused US (HIFU) system is described, together with adaptive methods that allow continuous MR thermometry and therapeutic US with real-time tracking of a moving target, online motion correction of the thermometry maps, and regional temperature control based on the proportional-integral-derivative (PID) method. The hardware is based on a 256-element phased-array transducer with rapid electronic displacement of the focal point. The exact location of the target during US firing is anticipated using automatic analysis of periodic motions. The methods were tested with moving phantoms undergoing either rigid-body or elastic periodic motions. The results show accurate tracking of the focal point. Focal and regional temperature control is demonstrated with a performance similar to that obtained with stationary phantoms.
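The regional temperature control described above rests on a standard proportional-integral-derivative loop. A generic sketch follows, with placeholder gains and no claim to match the authors' implementation.

```python
class PIDController:
    """Textbook proportional-integral-derivative loop for regional
    temperature control; gains are placeholders, not the authors' values."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.05, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_temp, measured_temp):
        """Return a US power command from the latest MR-thermometry reading."""
        error = target_temp - measured_temp
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```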
Tracking the Sensory Environment: An ERP Study of Probability and Context Updating in ASD
Westerfield, Marissa A.; Zinni, Marla; Vo, Khang; Townsend, Jeanne
2014-01-01
We recorded visual event-related brain potentials (ERPs) from 32 adult male participants (16 high-functioning participants diagnosed with Autism Spectrum Disorder (ASD) and 16 control participants, ranging in age from 18-53 years) during a three-stimulus oddball paradigm. Target and non-target stimulus probability was varied across three probability conditions, whereas the probability of a third non-target stimulus was held constant in all conditions. P3 amplitude to target stimuli was more sensitive to probability in ASD than in TD participants, whereas P3 amplitude to non-target stimuli was less responsive to probability in ASD participants. This suggests that neural responses to changes in event probability are attention-dependent in high-functioning ASD. The implications of these findings for higher-level behaviors such as prediction and planning are discussed. PMID:24488156
Fox, Jessica L.; Aptekar, Jacob W.; Zolotova, Nadezhda M.; Shoemaker, Patrick A.; Frye, Mark A.
2014-01-01
The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input. PMID:24198267
Discriminative object tracking via sparse representation and online dictionary learning.
Xie, Yuan; Zhang, Wensheng; Li, Cuihua; Lin, Shuyang; Qu, Yanyun; Zhang, Yinghua
2014-04-01
We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching scheme. The algorithm consists of two parts: local sparse coding with an online-updated discriminative dictionary for tracking (SOD part), and keypoint matching refinement for enhancing the tracking performance (KP part). In the SOD part, local image patches of the target object and background are represented by their sparse codes over an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes information about both the foreground and the background, may provide more discriminative power. Furthermore, to adapt the dictionary to variations of the foreground and background during tracking, an online learning method is employed to update the dictionary. The KP part utilizes a refined keypoint matching scheme to improve the performance of the SOD part. With the help of the sparse representation and the online-updated discriminative dictionary, the KP part is more robust than traditional methods at rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.
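To make the SOD idea concrete, the sketch below learns a single dictionary over foreground and background patches and scores candidates by how much of their sparse-code energy falls on foreground-dominant atoms. Patch size, atom count, and the labeling heuristic are assumptions for illustration, not the paper's method.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.preprocessing import normalize

rng = np.random.default_rng(1)
fg = rng.normal(1.0, 0.2, (200, 64))   # flattened 8x8 foreground patches
bg = rng.normal(0.0, 0.2, (200, 64))   # flattened background patches
X = normalize(np.vstack([fg, bg]))

# One shared dictionary trained on both foreground and background patches
dico = MiniBatchDictionaryLearning(n_components=32, alpha=0.5, random_state=1).fit(X)

# Atoms that foreground patches load on most strongly
fg_energy = np.abs(dico.transform(normalize(fg))).sum(axis=0)
fg_atoms = np.flatnonzero(fg_energy > np.median(fg_energy))

def foreground_score(patch):
    """Fraction of a patch's sparse-code energy on foreground atoms."""
    code = np.abs(dico.transform(normalize(patch.reshape(1, -1))))[0]
    return code[fg_atoms].sum() / (code.sum() + 1e-9)

print(foreground_score(rng.normal(1.0, 0.2, 64)))  # should lean foreground
```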
Tracking moving identities: after attending the right location, the identity does not come for free.
Pinto, Yaïr; Scholte, H Steven; Lamme, V A F
2012-01-01
Although tracking identical moving objects has been studied since the 1980s, the study of tracking moving objects with distinct identities (referred to as Multiple Identity Tracking, MIT) has begun only recently. So far, only behavioral studies of MIT have been undertaken. These studies have left a fundamental question about MIT unanswered: is MIT a one-stage or a two-stage process? According to the one-stage model, after a location has been attended, the identity is released without effort. According to the two-stage model, however, there are two effortful stages in MIT: attending to a location, and attending to the identity of the object at that location. In the current study we investigated this question by measuring brain activity in response to tracking familiar and unfamiliar targets. Familiarity is known to automate effortful processes, so if attention is needed to identify the object, identification should become easier with familiar targets. If no such attention is needed, familiarity can only affect other processes (such as memory for the target set). Our results revealed that on unfamiliar trials neural activity was higher in both attentional and visual identification networks. These results suggest that familiarity in MIT automates attentional identification processes, and thus that attentional identification is needed in MIT. This in turn implies that MIT is essentially a two-stage process, since after attending the location, the identity does not seem to come for free.
Quantifying Pilot Visual Attention in Low Visibility Terminal Operations
NASA Technical Reports Server (NTRS)
Ellis, Kyle K.; Arthur, J. J.; Latorella, Kara A.; Kramer, Lynda J.; Shelton, Kevin J.; Norman, Robert M.; Prinzel, Lawrence J.
2012-01-01
Quantifying pilot visual behavior allows researchers to determine not only where a pilot is looking and when, but also holds implications for tracking specific behaviors when these data are coupled with flight technical performance. Remote eye tracking systems have been integrated into simulators at NASA Langley with effectively no impact on the pilot environment. This paper discusses the installation and use of a remote eye tracking system. The data collection techniques from a complex human-in-the-loop (HITL) research experiment are discussed, with emphasis on the data reduction algorithms and logic used to transform raw eye tracking data into quantified visual behavior metrics, and the analysis methods used to interpret visual behavior. The findings suggest superior performance for Head-Up Display (HUD) and improved attentional behavior for Head-Down Display (HDD) implementations of Synthetic Vision System (SVS) technologies for low visibility terminal area operations. Keywords: eye tracking, flight deck, NextGen, human machine interface, aviation
Combined Feature Based and Shape Based Visual Tracker for Robot Navigation
NASA Technical Reports Server (NTRS)
Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.
2005-01-01
We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.
Contrast, contours and the confusion effect in dazzle camouflage.
Hogan, Benedict G; Scott-Samuel, Nicholas E; Cuthill, Innes C
2016-07-01
'Motion dazzle camouflage' is the name for the putative effects of highly conspicuous, often repetitive or complex, patterns on parameters important in prey capture, such as the perception of speed, direction and identity. Research into motion dazzle camouflage is increasing our understanding of the interactions between visual tracking, the confusion effect and defensive coloration. However, there is a paucity of research into the effects of contrast on motion dazzle camouflage: is maximal contrast a prerequisite for effectiveness? If not, this has important implications for our recognition of the phenotype and understanding of the function and mechanisms of potential motion dazzle camouflage patterns. Here we tested human participants' ability to track one moving target among many identical distractors with surface patterns designed to test the influence of these factors. In line with previous evidence, we found that targets with stripes parallel to the object direction of motion were hardest to track. However, reduction in contrast did not significantly influence this result. This finding may bring into question the utility of current definitions of motion dazzle camouflage, and means that some animal patterns, such as aposematic or mimetic stripes, may have previously unrecognized multiple functions.
Pupillary correlates of covert shifts of attention during working memory maintenance.
Unsworth, Nash; Robison, Matthew K
2017-04-01
The pupillary light reflex (PLR) was used to track covert shifts of attention to items maintained in visual working memory (VWM). In three experiments, participants performed a change detection task in which rectangles appeared on either side of fixation and at test participants indicated if the cued rectangle changed its orientation. Prior to presentation or during the delay, participants were cued to the light or dark side of the screen. When cued to the light side, the pupil constricted, and when cued to the dark side, the pupil dilated, suggesting that the PLR tracked covert shifts of attention. Similar covert shifts of attention were seen when the target stimuli remained onscreen and during a blank delay period, suggesting similar effects for attention to perceptual stimuli and attention to stimuli maintained in VWM. Furthermore, similar effects were demonstrated when participants were pre-cued or retro-cued to the prioritized location, suggesting that shifts of covert attention can occur both before and after target presentation. These results are consistent with prior research, suggesting an important role of covert shifts of attention during VWM maintenance and that the PLR can be used to track these covert shifts of attention.
A Reliable and Real-Time Tracking Method with Color Distribution
Zhao, Zishu; Han, Yuqi; Xu, Tingfa; Li, Xiangmin; Song, Haiping; Luo, Jiqiang
2017-01-01
Occlusion is a challenging problem in visual tracking. In recent years many trackers have therefore been developed to address it, but most of them cannot track the target in real time because of their heavy computational cost. The spatio-temporal context (STC) tracker accelerates the task by calculating context information in the Fourier domain, but its performance in handling occlusion is limited. In this paper, we take advantage of the high efficiency of the STC tracker and employ salient prior model information based on color distribution to improve robustness. Furthermore, we exploit a scale pyramid for accurate scale estimation. In particular, a new high-confidence update strategy and a re-searching mechanism are used to avoid model corruption and handle occlusion. Extensive experimental results demonstrate that our algorithm outperforms several state-of-the-art algorithms on the OTB2015 dataset. PMID:28994748
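As a rough illustration of how a color-distribution prior and a high-confidence update test could plug into an STC-style tracker, here is a hypothetical Python/NumPy sketch (hue-histogram back-projection as the salient prior, peak-to-sidelobe ratio as the confidence; none of this is the paper's exact formulation):

    import numpy as np

    def color_prior(hue, bbox, bins=16):
        # Learn a hue histogram inside bbox (x, y, w, h) and back-project it
        # as a per-pixel salient prior (hue in [0, 180), OpenCV convention).
        x, y, w, h = bbox
        hist, edges = np.histogram(hue[y:y+h, x:x+w], bins=bins,
                                   range=(0, 180), density=True)
        idx = np.clip(np.digitize(hue, edges) - 1, 0, bins - 1)
        prior = hist[idx]
        return prior / (prior.max() + 1e-12)

    def peak_to_sidelobe(response, exclude=5):
        # Confidence of a response map: peak height versus the mean/std of
        # the map away from the peak.
        py, px = np.unravel_index(np.argmax(response), response.shape)
        mask = np.ones(response.shape, dtype=bool)
        mask[max(0, py-exclude):py+exclude+1, max(0, px-exclude):px+exclude+1] = False
        side = response[mask]
        return (response[py, px] - side.mean()) / (side.std() + 1e-12)

    # A tracker in this spirit would multiply `prior` into the STC confidence
    # map and update its models only when peak_to_sidelobe(...) exceeds a
    # threshold, a stand-in for a high-confidence update strategy.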
Samosky, Joseph T; Allen, Pete; Boronyak, Steve; Branstetter, Barton; Hein, Steven; Juhas, Mark; Nelson, Douglas A; Orebaugh, Steven; Pinto, Rohan; Smelko, Adam; Thompson, Mitch; Weaver, Robert A
2011-01-01
We are developing a simulator of peripheral nerve block utilizing a mixed-reality approach: the combination of a physical model, an MRI-derived virtual model, mechatronics and spatial tracking. Our design uses tangible (physical) interfaces to simulate surface anatomy, haptic feedback during needle insertion, mechatronic display of muscle twitch corresponding to the specific nerve stimulated, and visual and haptic feedback for the injection syringe. The twitch response is calculated incorporating the sensed output of a real neurostimulator. The virtual model is isomorphic with the physical model and is derived from segmented MRI data. This model provides the subsurface anatomy and, combined with electromagnetic tracking of a sham ultrasound probe and a standard nerve block needle, supports simulated ultrasound display and measurement of needle location and proximity to nerves and vessels. The needle tracking and virtual model also support objective performance metrics of needle targeting technique.
Updating visual memory across eye movements for ocular and arm motor control.
Thompson, Aidan A; Henriques, Denise Y P
2008-11-01
Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.
Todorova, Biliana; Salabert, Nina; Tricot, Sabine; Boisgard, Raphaël; Rathaux, Mélanie; Le Grand, Roger; Chapon, Catherine
2017-01-01
We developed a new approach to visualize skin Langerhans cells by in vivo fluorescence imaging in nonhuman primates. Macaques were intradermally injected with a monoclonal, fluorescently labeled antibody against the HLA-DR molecule and were imaged for up to 5 days by fibered confocal fluorescence microscopy (FCFM). The network of skin Langerhans cells was visualized in vivo, and quantification of Langerhans cells revealed no changes in cell density over time. Ex vivo experiments confirmed that the injected fluorescent HLA-DR antibody specifically targeted Langerhans cells in the epidermis. This study demonstrates the feasibility of single-cell, in vivo imaging as a noninvasive technique to track Langerhans cells in nontransgenic animals.
A Track Initiation Method for the Underwater Target Tracking Environment
NASA Astrophysics Data System (ADS)
Li, Dong-dong; Lin, Yang; Zhang, Yao
2018-04-01
A novel, efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater tracking environment. Current track initiation methods have three primary shortcomings: (a) they cannot effectively eliminate the disturbances of clutter; (b) they may exhibit a high false alarm probability and a low track detection probability; and (c) they cannot correctly estimate the initial state for a newly confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of candidate tracks, including the true track originating from the target, in order to increase the track detection probability; to decrease the false alarm probability, track pruning and track merging, built on an evaluation mechanism, reduce the false tracks. TSEPM can deal with the track initiation problems caused by heavy clutter and large measurement errors, determining the target's existence and estimating its initial state with the least squares method. Moreover, the method is fully automatic and does not require any kind of manual input for initializing or tuning any parameter. Simulation results indicate that the new method significantly improves track initiation performance in the harsh underwater target tracking environment.
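TSEPM itself is not specified in enough detail here to reproduce, but the ingredients it builds on, splitting candidate tracks from measurement pairs, gating against a later scan, and least-squares estimation of the initial state, can be sketched as follows (Python/NumPy; names and thresholds are illustrative):

    import numpy as np
    from itertools import product

    def ls_initial_state(times, zs):
        # Least-squares constant-velocity fit; rows of the solution are the
        # position at t0 and the velocity (per coordinate).
        t = np.asarray(times, dtype=float) - times[0]
        A = np.column_stack([np.ones_like(t), t])
        sol, *_ = np.linalg.lstsq(A, np.asarray(zs), rcond=None)
        return sol[0], sol[1]

    def initiate(scan0, scan1, scan2, dt, vmin, vmax, gate):
        # Form candidate tracks from 3 scans of 2D measurements (lists of arrays).
        tracks = []
        for z0, z1 in product(scan0, scan1):
            v = (z1 - z0) / dt
            if not (vmin <= np.linalg.norm(v) <= vmax):
                continue                       # prune physically infeasible pairs
            pred = z1 + v * dt                 # predict into scan 2
            for z2 in scan2:
                if np.linalg.norm(z2 - pred) <= gate:
                    pos, vel = ls_initial_state([0, dt, 2*dt], np.stack([z0, z1, z2]))
                    tracks.append((pos, vel))  # confirmed track with LS initial state
        return tracks

    # Toy example: one true target moving at (10, 0) m/s among clutter.
    rng = np.random.default_rng(1)
    truth = lambda k: np.array([100.0 + 10*k, 50.0])
    scans = [[truth(k) + rng.normal(0, 1, 2)]
             + [rng.uniform(0, 500, 2) for _ in range(5)] for k in range(3)]
    print(initiate(*scans, dt=1.0, vmin=1.0, vmax=30.0, gate=5.0))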
Parallel computation of level set method for 500 Hz visual servo control
NASA Astrophysics Data System (ADS)
Fei, Xianfeng; Igarashi, Yasunobu; Hashimoto, Koichi
2008-11-01
We propose a 2D microorganism tracking system using a parallel level set method and a column parallel vision system (CPV). The system keeps a single microorganism in the middle of the visual field under a microscope by visually servoing an automated stage. We propose a new energy function for the level set method that constrains the amount of light intensity inside the detected object contour, in order to control the number of detected objects. The algorithm is implemented in the CPV system, and the computational time for each frame is approximately 2 ms. A tracking experiment of about 25 s is demonstrated. We also demonstrate that a single paramecium can be kept in track even when other paramecia appear in the visual field and contact the tracked paramecium.
Eye-catching odors: olfaction elicits sustained gazing to faces and eyes in 4-month-old infants.
Durand, Karine; Baudouin, Jean-Yves; Lewkowicz, David J; Goubet, Nathalie; Schaal, Benoist
2013-01-01
This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars, but this spontaneous preference for faces was significantly enhanced in the presence of the odor. Also as expected, when looking at the face, the infants looked longer at the eyes than at any other facial region, and again they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues.
Target tracking system based on preliminary and precise two-stage compound cameras
NASA Astrophysics Data System (ADS)
Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie
2018-02-01
Early target detection and high-precision target tracking are two important performance indicators that must be balanced in a practical target search and tracking system. This paper proposes a target tracking system that compounds a preliminary stage and a precise stage: the system uses a large field of view to search for the target and, once the target has been found and confirmed, switches to a small field of view for precise tracking. In such a system, an appropriate field-switching strategy is the key to sustained tracking, and two sets of PID parameters are added to reduce tracking error. This two-stage preliminary-and-precise design extends the search range while improving tracking accuracy, and the method has practical value.
Yuan, Jianchao; Zhang, Haiyuan; Kaur, Harpreet; Oupicky, David; Peng, Fangyu
2013-05-01
Poly(HPMA)-c(RGDyK)-DOTA-64Cu copolymers were synthesized and characterized for tumor localization in vivo as a theranostic scaffold for cancer imaging and anticancer drug delivery targeting tumor angiogenesis. Tumor localization of the poly(HPMA)-c(RGDyK)-DOTA-64Cu copolymers was visualized in mice bearing human prostate cancer xenografts by positron emission tomography (PET) using a microPET scanner. PET quantitative analysis demonstrated that tumor 64Cu radioactivity (2.75 ± 0.34 %ID/g) in tumor-bearing mice 3 hours following intravenous injection of the poly(HPMA)-c(RGDyK)-DOTA-64Cu copolymers was significantly higher than the tumor 64Cu radioactivity (1.29 ± 0.26 %ID/g) in tumor-bearing mice injected with the nontargeted poly(HPMA)-DOTA-64Cu copolymers (p = .004). The poly(HPMA)-c(RGDyK)-DOTA-64Cu copolymers hold potential as a theranostic scaffold for cancer imaging and radiochemotherapy of prostate cancer targeting tumor angiogenesis by noninvasive tracking with PET.
Roll tracking effects of G-vector tilt and various types of motion washout
NASA Technical Reports Server (NTRS)
Jex, H. R.; Magdaleno, R. E.; Junker, A. M.
1978-01-01
In a dogfight scenario, the task was to follow the target's roll angle while suppressing gust disturbances. All subjects adopted the same behavioral strategies in following the target while suppressing the gusts, and the MFP-fitted math model response was generally within one data symbol width. The results include the following: (1) comparisons of full roll motion (both with and without the spurious gravity tilt cue) with the static case. These motion cues help suppress disturbances with little net effect on the visual performance. Tilt cues were clearly used by the pilots but gave only small improvement in tracking errors. (2) The optimum washout (in terms of performance close to real world, similar behavioral parameters, significant motion attenuation (60 percent), and acceptable motion fidelity) was the combined attenuation and first-order washout. (3) Various trends in parameters across the motion conditions were apparent, and are discussed with respect to a comprehensive model for predicting adaptation to various roll motion cues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebe, Kazuyu, E-mail: nrr24490@nifty.com; Tokuyama, Katsuichi; Baba, Ryuta
Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display the tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient's tumor motion. A substitute target with the patient's tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated, and positional errors were observed. The authors' QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of the QA system. The patients' tumor motions were evaluated as waveforms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with the Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar; thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from the 4D modeling function errors and gimbal motion errors in the ExacTrac log analyses (n = 13). Conclusions: The newly developed video image-based QA system, including in-house software, can analyze more than a thousand images (33 frames/s). Positional errors are approximately equivalent to those in ExacTrac log analyses. This system is useful for visually illustrating the progress of the tracking state and for quantifying positional accuracy during dynamic tumor tracking irradiation in the Vero4DRT system.
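The core per-frame measurement, the absolute Y difference between the center of the exposed target and the center of the exposed field, summarized as mean + 2 SD, could be computed along these lines (a hypothetical sketch; the in-house software's actual segmentation of target and field from the MatriXX images is not described in the abstract):

    import numpy as np

    def centroid_y(image, threshold):
        # Intensity-weighted Y centroid of pixels above threshold.
        mask = image > threshold
        ys, xs = np.nonzero(mask)
        w = image[mask]
        return (ys * w).sum() / w.sum()

    def qa_summary(target_frames, field_frames, thr_t, thr_f, pixel_mm):
        # Per-frame |Y(target) - Y(field)| in mm, reported as mean and
        # mean + 2 SD, mirroring the error statistic quoted above.
        diffs = np.array([abs(centroid_y(t, thr_t) - centroid_y(f, thr_f)) * pixel_mm
                          for t, f in zip(target_frames, field_frames)])
        return diffs.mean(), diffs.mean() + 2 * diffs.std()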
A difference tracking algorithm based on discrete sine transform
NASA Astrophysics Data System (ADS)
Liu, HaoPeng; Yao, Yong; Lei, HeBing; Wu, HaoKun
2018-04-01
Target tracking is an important field of computer vision. Template matching tracking algorithms based on the sum of squared differences (SSD) and the normalized cross-correlation coefficient (NCC) are very sensitive to changes in image gray level: when the brightness or gray level changes, the tracking algorithm is disturbed by high-frequency information, tracking accuracy is reduced, and the target may be lost. In this paper, a difference tracking algorithm based on the discrete sine transform is proposed to reduce the influence of changes in image gray level or brightness. The algorithm, which combines the discrete sine transform with a difference operation, maps the target image into a digital sequence. A Kalman filter predicts the target position, and the Hamming distance determines the degree of similarity between each candidate window and the template; the window closest to the template is taken as the target to be tracked, which in turn updates the template. Target tracking is achieved on this basis. The algorithm is tested in this paper: compared with SSD and NCC template matching algorithms, it tracks the target stably when the image gray level or brightness changes, and its tracking speed meets real-time requirements.
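One plausible reading of this pipeline, sketched below in Python with SciPy (function names and the choice of kept coefficients are assumptions, not the paper's exact design): each window is mapped to low-frequency 2D DST coefficients, the signs of successive coefficient differences form a binary sequence, and Hamming distance ranks windows around the Kalman-predicted position. Because the signature keeps only the ordering of coefficients, a positive gain change leaves it unchanged and additive brightness shifts perturb it only mildly, which is the insensitivity the abstract claims.

    import numpy as np
    from scipy.fft import dstn

    def dst_signature(patch, k=8):
        # Low-frequency 2D DST coefficients, binarized by the sign of
        # successive differences (the "difference" step).
        c = dstn(patch.astype(float), type=2)[:k, :k].ravel()
        return np.diff(c) > 0

    def match(frame, template, center, search=10):
        # Scan windows around a (Kalman-predicted) center; return the window
        # whose signature is nearest the template's in Hamming distance.
        th, tw = template.shape
        sig_t = dst_signature(template)
        best_d, best_pos = None, None
        cy, cx = center
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = cy + dy, cx + dx
                if y < 0 or x < 0:
                    continue
                win = frame[y:y+th, x:x+tw]
                if win.shape != (th, tw):
                    continue
                d = int(np.count_nonzero(dst_signature(win) != sig_t))
                if best_d is None or d < best_d:
                    best_d, best_pos = d, (y, x)
        return best_pos, best_d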
Towards large scale multi-target tracking
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus
2014-06-01
Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
Studying visual attention using the multiple object tracking paradigm: A tutorial review.
Meyerhoff, Hauke S; Papenmeier, Frank; Huff, Markus
2017-07-01
Human observers are capable of tracking multiple objects among identical distractors based only on their spatiotemporal information. Since the first report of this ability in the seminal work of Pylyshyn and Storm (1988, Spatial Vision, 3, 179-197), multiple object tracking has attracted many researchers. One reason is that the attentional processes studied with the multiple object tracking paradigm appear to match the attentional processing required during real-world tasks such as driving or team sports. We argue that multiple object tracking provides a good means to study the broader topic of continuous and dynamic visual attention. Indeed, several (partially contradicting) theories of attentive tracking have been proposed in the almost 30 years since its first report, and a large body of research has been conducted to test them. Given the richness and diversity of this literature, the aim of this tutorial review is to provide researchers who are new to the field of multiple object tracking with an overview of the paradigm, its basic manipulations, and its links to other paradigms investigating visual attention and working memory. Further, we review current theories of tracking as well as their empirical evidence. Finally, we review the state of the art in the most prominent research fields of multiple object tracking and how this research has helped to understand visual attention in dynamic settings.
Before your very eyes: the value and limitations of eye tracking in medical education.
Kok, Ellen M; Jarodzka, Halszka
2017-01-01
Medicine is a highly visual discipline. Physicians from many specialties constantly use visual information in diagnosis and treatment. However, they are often unable to explain how they use this information. Consequently, it is unclear how to train medical students in this visual processing. Eye tracking is a research technique that may offer answers to these open questions, as it enables researchers to investigate such visual processes directly by measuring eye movements. This may help researchers understand the processes that support or hinder a particular learning outcome. In this article, we clarify the value and limitations of eye tracking for medical education researchers. For example, eye tracking can clarify how experience with medical images mediates diagnostic performance and how students engage with learning materials. Furthermore, eye tracking can also be used directly for training purposes by displaying eye movements of experts in medical images. Eye movements reflect cognitive processes, but cognitive processes cannot be directly inferred from eye-tracking data. In order to interpret eye-tracking data properly, theoretical models must always be the basis for designing experiments as well as for analysing and interpreting eye-tracking data. The interpretation of eye-tracking data is further supported by sound experimental design and methodological triangulation.
ERIC Educational Resources Information Center
Geri, George A.; Hubbard, David C.
Two adaptive psychophysical procedures (tracking and "yes-no" staircase) for obtaining human visual contrast sensitivity functions (CSF) were evaluated. The procedures were chosen based on their proven validity and the desire to evaluate the practical effects of stimulus transients, since tracking procedures traditionally employ gradual…
van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J
2017-08-01
Eye tracking research has been conducted for decades to gain understanding of visual diagnosis, such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. The databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened the search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross-sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which comprises an initial global impression followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expertise levels. One study investigated the teaching of visual search strategies and did not find a significant effect on perceptual performance. The eye tracking literature in radiology indicates that several search patterns are related to high levels of expertise, but teaching novices to search as an expert may not be effective. Experimental research is needed to find out which search strategies can improve image perception in learners.
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong; Hsiung, Pao-Ann; Wan, Chieh-Hao; Koong, Chorng-Shiuh; Liu, Tang-Kun; Yang, Yuanfan; Lin, Chu-Hsing; Chu, William Cheng-Chung
2009-02-01
A billiard ball tracking system is designed to combine with a visual guide interface to instruct users for a reliable strike. The integrated system runs on a PC platform. The system makes use of a vision system for cue ball, object ball and cue stick tracking. A least-squares error calibration process correlates the real-world and the virtual-world pool ball coordinates for a precise guidance line calculation. Users are able to adjust the cue stick on the pool table according to a visual guidance line instruction displayed on a PC monitor. The ideal visual guidance line extended from the cue ball is calculated based on a collision motion analysis. In addition to calculating the ideal visual guide, the factors influencing selection of the best shot among different object balls and pockets are explored. It is found that a tolerance angle around the ideal line for the object ball to roll into a pocket determines the difficulty of a strike. This angle depends in turn on the distance from the pocket to the object, the distance from the object to the cue ball, and the angle between these two vectors. Simulation results for tolerance angles as a function of these quantities are given. A selected object ball was tested extensively with respect to various geometrical parameters with and without using our integrated system. Players with different proficiency levels were selected for the experiment. The results indicate that all players benefit from our proposed visual guidance system in enhancing their skills, while low-skill players show the maximum enhancement in skill with the help of our system. All exhibit enhanced maximum and average hit-in rates. Experimental results on hit-in rates have shown a pattern consistent with that of the analysis. The hit-in rate is thus tightly connected with the analyzed tolerance angles for sinking object balls into a target pocket. These results prove the efficiency of our system, and the analysis results can be used to attain an efficient game-playing strategy.
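The guidance-line and tolerance-angle geometry can be sketched with standard ghost-ball reasoning (Python/NumPy; the ball radius and pocket mouth width are assumed values, and this is a simplification of the paper's collision analysis):

    import numpy as np

    BALL_R = 28.575       # ball radius in mm (assumed standard pool ball)

    def aim_point(obj, pocket):
        # Ghost-ball aim point: at contact, the cue ball center must lie on
        # the object-to-pocket line, one ball diameter behind the object ball.
        u = (pocket - obj) / np.linalg.norm(pocket - obj)
        return obj - 2 * BALL_R * u

    def tolerance_angle(obj, pocket, mouth_w=110.0):
        # Half-angle (radians) by which the object ball's path may deviate
        # from the ideal line and still drop: effective pocket half-width
        # over the object-to-pocket distance.
        d = np.linalg.norm(pocket - obj)
        return np.arctan2(mouth_w / 2 - BALL_R, d)

    def cut_angle(cue, obj, pocket):
        # Angle between the cue-to-object and object-to-pocket vectors;
        # larger cuts shrink the usable aiming margin.
        a, b = obj - cue, pocket - obj
        cosang = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cosang, -1.0, 1.0))

    # The tolerance shrinks as the object-to-pocket distance grows, matching
    # the difficulty pattern reported above.
    cue = np.array([300.0, 300.0])
    obj = np.array([600.0, 500.0])
    pocket = np.array([1270.0, 635.0])
    print(aim_point(obj, pocket),
          np.degrees(tolerance_angle(obj, pocket)),
          np.degrees(cut_angle(cue, obj, pocket)))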
A new terminal guidance sensor system for asteroid intercept or rendezvous missions
NASA Astrophysics Data System (ADS)
Lyzhoft, Joshua; Basart, John; Wie, Bong
2016-02-01
This paper presents the initial conceptual study results of a new terminal guidance sensor system for asteroid intercept or rendezvous missions, which explores the use of visual, infrared, and radar devices. As was demonstrated by NASA's Deep Impact mission, visual cameras can be effectively utilized for hypervelocity intercept terminal guidance for a 5-kilometer target. Other systems such as Raytheon's EKV (Exoatmospheric Kill Vehicle) employ a different scheme that utilizes infrared target information to intercept ballistic missiles. Another example that uses infrared information is the NEOWISE telescope, which is used for asteroid detection and tracking. This paper describes the signal-to-noise ratio estimation problem for infrared sensors, the minimum and maximum range of detection, and computational validation using GPU-accelerated simulations. Small targets (50-100 m in diameter) are considered, and scaled polyhedron models of known objects, such as the Rosetta mission's Comet 67P/Churyumov-Gerasimenko, 101955 Bennu (target of the OSIRIS-REx mission), and asteroid 433 Eros, are utilized. A parallelized ray tracing algorithm to simulate realistic surface-to-surface shadowing of a given celestial body is developed. Using the simulated models and parameters given from the formulation of the different sensors, impact mission scenarios are used to verify the feasibility of intercepting a small target.
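The signal-to-noise-ratio-versus-range question can be illustrated with a toy radiometric model (all constants below are assumptions for illustration, not the paper's sensor parameters): a point target's in-band radiant intensity falls off as 1/R^2, and the detection range is where shot, dark, and read noise pull the SNR below a threshold.

    import numpy as np

    def snr(range_m, intensity_w_sr, aperture_m2, dwell_s,
            optics_eff=0.7, photon_energy_j=2e-20,   # ~10 um photon (assumed)
            dark_rate=1e4, read_noise=30.0):
        # Toy shot-noise-limited SNR for a point target: signal photons fall
        # off as 1/R^2; noise combines shot, dark-current and read terms.
        flux = intensity_w_sr / range_m**2                       # W/m^2
        s = flux * aperture_m2 * optics_eff * dwell_s / photon_energy_j
        return s / np.sqrt(s + dark_rate * dwell_s + read_noise**2)

    def max_detection_range(intensity_w_sr, aperture_m2, dwell_s, snr_min=6.0):
        # SNR decreases monotonically with range, so bisect (geometrically)
        # for the range at which it crosses the detection threshold.
        lo, hi = 1e3, 1e9
        for _ in range(60):
            mid = np.sqrt(lo * hi)
            if snr(mid, intensity_w_sr, aperture_m2, dwell_s) > snr_min:
                lo = mid
            else:
                hi = mid
        return lo

    print(max_detection_range(intensity_w_sr=50.0, aperture_m2=0.05, dwell_s=0.01))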
A Novel Performance Evaluation Methodology for Single-Target Trackers.
Kristan, Matej; Matas, Jiri; Leonardis, Ales; Vojir, Tomas; Pflugfelder, Roman; Fernandez, Gustavo; Nebehay, Georg; Porikli, Fatih; Cehovin, Luka
2016-11-01
This paper addresses the problem of single-target tracker performance evaluation. We consider the performance measures, the dataset and the evaluation system to be the most important components of tracker evaluation and propose requirements for each of them. The requirements are the basis of a new evaluation methodology that aims at a simple and easily interpretable tracker comparison. The ranking-based methodology addresses tracker equivalence in terms of statistical significance and practical differences. A fully annotated dataset, with per-frame annotation of several visual attributes, is introduced. The diversity of its visual properties is maximized in a novel way by clustering a large number of videos according to their visual attributes, making it the most systematically constructed and annotated dataset to date. A multi-platform evaluation system allowing easy integration of third-party trackers is presented as well. The proposed evaluation methodology was tested on the VOT2014 challenge on the new dataset and 38 trackers, making it the largest benchmark to date. Most of the tested trackers are indeed state-of-the-art since they outperform the standard baselines, resulting in a highly challenging benchmark. An exhaustive analysis of the dataset from the perspective of tracking difficulty is carried out. To facilitate tracker comparison, a new performance visualization technique is proposed.
Lukasczyk, Jonas; Weber, Gunther; Maciejewski, Ross; ...
2017-06-01
Tracking graphs are a well established tool in topological analysis to visualize the evolution of components and their properties over time, i.e., when components appear, disappear, merge, and split. However, tracking graphs are limited to a single level threshold and the graphs may vary substantially even under small changes to the threshold. To examine the evolution of features for varying levels, users have to compare multiple tracking graphs without a direct visual link between them. We propose a novel, interactive, nested graph visualization based on the fact that the tracked superlevel set components for different levels are related to each other through their nesting hierarchy. This approach allows us to set multiple tracking graphs in context to each other and enables users to effectively follow the evolution of components for different levels simultaneously. We show the effectiveness of our approach on datasets from finite pointset methods, computational fluid dynamics, and cosmology simulations.
Emerging applications of eye-tracking technology in dermatology.
John, Kevin K; Jensen, Jakob D; King, Andy J; Pokharel, Manusheela; Grossman, Douglas
2018-04-06
Eye-tracking technology has been used within a multitude of disciplines to provide data linking eye movements to visual processing of various stimuli (i.e., x-rays, situational positioning, printed information, and warnings). Despite the benefits provided by eye-tracking in allowing for the identification and quantification of visual attention, the discipline of dermatology has yet to see broad application of the technology. Notwithstanding dermatologists' heavy reliance upon visual patterns and cues to discriminate between benign and atypical nevi, literature that applies eye-tracking to the study of dermatology is sparse; and literature specific to patient-initiated behaviors, such as skin self-examination (SSE), is largely non-existent. The current article provides a review of eye-tracking research in various medical fields, culminating in a discussion of current applications and advantages of eye-tracking for dermatology research.
Development of the Code RITRACKS
NASA Technical Reports Server (NTRS)
Plante, Ianik; Cucinotta, Francis A.
2013-01-01
A document discusses the code RITRACKS (Relativistic Ion Tracks), which was developed to simulate heavy ion track structure at the microscopic and nanoscopic scales. It is a Monte-Carlo code that simulates the production of radiolytic species in water, event-by-event, and which may be used to simulate tracks and also to calculate dose in targets and voxels of different sizes. The dose deposited by the radiation can be calculated in nanovolumes (voxels). RITRACKS allows simulation of radiation tracks without the need of extensive knowledge of computer programming or Monte-Carlo simulations. It is installed as a regular application on Windows systems. The main input parameters entered by the user are the type and energy of the ion, the length and size of the irradiated volume, the number of ions impacting the volume, and the number of histories. The simulation can be started after the input parameters are entered in the GUI. The number of each kind of interactions for each track is shown in the result details window. The tracks can be visualized in 3D after the simulation is complete. It is also possible to see the time evolution of the tracks and zoom on specific parts of the tracks. The software RITRACKS can be very useful for radiation scientists to investigate various problems in the fields of radiation physics, radiation chemistry, and radiation biology. For example, it can be used to simulate electron ejection experiments (radiation physics).
Visual Attention for Solving Multiple-Choice Science Problem: An Eye-Tracking Analysis
ERIC Educational Resources Information Center
Tsai, Meng-Jung; Hou, Huei-Tse; Lai, Meng-Lung; Liu, Wan-Yi; Yang, Fang-Ying
2012-01-01
This study employed an eye-tracking technique to examine students' visual attention when solving a multiple-choice science problem. Six university students participated in a problem-solving task to predict occurrences of landslide hazards from four images representing four combinations of four factors. Participants' responses and visual attention…
Frequency encoded auditory display of the critical tracking task
NASA Technical Reports Server (NTRS)
Stevenson, J.
1984-01-01
The use of auditory displays for selected cockpit instruments was examined: auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency over a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch one-third of an octave wide. Asymptotic performance on the critical tracking task was slightly but significantly better with the combined display than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of the maximum controllable bandwidth using the visual mode. Redundant cueing increased both the rate of improvement of tracking performance and the asymptotic performance level, and this enhancement increased with the amount of redundant cueing used. The effect was most prominent when the bandwidth of the forcing function was substantially less than the upper limit of controllability frequency.
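The auditory mapping is concrete enough to sketch. Assuming a normalized error in [-1, 1] (an assumption; the abstract does not state the scaling), the display places the error on a six-octave log-frequency scale centered at 1 kHz, with a 20-dB amplitude notch one-third of an octave wide marking the center:

    import numpy as np

    CENTER_HZ = 1000.0
    OCTAVE_SPAN = 6.0      # total range: +/- 3 octaves around 1 kHz

    def error_to_tone(error):
        # Map a normalized vertical error in [-1, 1] to (frequency_hz, amplitude).
        # Frequency is log-scaled across six octaves; a 20-dB notch one-third
        # of an octave wide (+/- 1/6 octave) marks the null point at 1 kHz.
        e = float(np.clip(error, -1.0, 1.0))
        freq = CENTER_HZ * 2.0 ** (e * OCTAVE_SPAN / 2.0)
        in_notch = abs(np.log2(freq / CENTER_HZ)) < 1.0 / 6.0
        amp = 10.0 ** (-20.0 / 20.0) if in_notch else 1.0
        return freq, amp

    for err in (-1.0, -0.1, 0.0, 0.1, 1.0):
        print(err, error_to_tone(err))     # 125 Hz .. 8 kHz; notch near 1 kHz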
Chanel, Laure-Anais; Nageotte, Florent; Vappou, Jonathan; Luo, Jianwen; Cuvillon, Loic; de Mathelin, Michel
2015-01-01
High Intensity Focused Ultrasound (HIFU) therapy is a very promising method for the ablation of solid tumors. However, intra-abdominal organ motion, principally due to breathing, is a substantial limitation that results in incorrect tumor targeting. The objective of this work is to develop an all-in-one robotized HIFU system that can compensate for motion in real time during HIFU treatment. To this end, an ultrasound visual servoing scheme working at 20 Hz was designed. It relies on motion estimation by a fast ultrasonic speckle tracking algorithm and on the use of an interleaved imaging/HIFU sonication sequence to avoid ultrasonic wave interference. The robotized HIFU system was tested on a sample of chicken breast undergoing vertical sinusoidal motion at 0.25 Hz. Sonications with and without motion compensation were performed in order to assess the effect of motion compensation on the thermal lesions induced by HIFU. Motion was reduced by more than 80% thanks to this ultrasonic visual servoing system.
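A fast speckle tracking step of the kind described can be sketched as exhaustive normalized cross-correlation over a small search window (Python/NumPy; a simplification, since the paper's algorithm and its real-time optimizations are not detailed in the abstract):

    import numpy as np

    def ncc(a, b):
        # Normalized cross-correlation of two equal-size patches.
        a = a - a.mean()
        b = b - b.mean()
        return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12)

    def speckle_shift(prev, curr, box, search=8):
        # Integer-pixel displacement of the speckle patch box = (y, x, h, w)
        # from frame `prev` to frame `curr`, by exhaustive NCC search.
        y, x, h, w = box
        ref = prev[y:y+h, x:x+w]
        best, best_d = -2.0, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0:
                    continue
                cand = curr[yy:yy+h, xx:xx+w]
                if cand.shape != ref.shape:
                    continue
                score = ncc(ref, cand)
                if score > best:
                    best, best_d = score, (dy, dx)
        return best_d

    # In a 20 Hz servo loop, the shift (converted to millimeters) would be fed
    # to the robot controller between interleaved imaging/HIFU pulses.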
System identification and sensorimotor determinants of flight maneuvers in an insect
NASA Astrophysics Data System (ADS)
Sponberg, Simon; Hall, Robert; Roth, Eatai
Locomotor maneuvers are inherently closed-loop processes. They are generally characterized by the integration of multiple sensory inputs and by adaptation or learning over time. To probe sensorimotor processing, we take a system identification approach, treating the underlying physiological systems as dynamic processes and altering the feedback topology in experiment and analysis. As a model system, we use agile hawk moths (Manduca sexta), which feed from real and robotic flowers while hovering in mid air. Moths rely on vision and mechanosensation to track floral targets and can do so at exceptionally low luminance levels, even though hovering is a mechanically unstable behavior that requires neural feedback to stabilize. By altering the sensory environment and placing mechanical and visual signals in conflict, we show that a surprisingly simple linear summation of vision and mechanosensation produces a generative prediction of behavior in response to novel stimuli. Tracking performance is also limited more by the mechanics of flight than by the magnitude of the sensory cue. A feedback systems approach to locomotor control yields new insights into how behavior emerges from the interaction of nonlinear physiological systems.
NASA Astrophysics Data System (ADS)
Tartakovsky, A.; Brown, A.; Brown, J.
The paper describes the development and evaluation of a suite of advanced algorithms which provide significantly improved capabilities for finding, fixing, and tracking multiple ballistic and flying low-observable objects in highly stressing cluttered environments. The algorithms have been developed for use in satellite-based staring and scanning optical surveillance suites for applications including theatre and intercontinental ballistic missile early warning, trajectory prediction, and multi-sensor track handoff for midcourse discrimination and intercept. The functions performed by the algorithms include electronic sensor motion compensation providing sub-pixel stabilization (to 1/100 of a pixel), as well as advanced temporal-spatial clutter estimation and suppression to below sensor noise levels, followed by statistical background modeling and Bayesian multiple-target track-before-detect filtering. The multiple-target tracking is performed in physical world coordinates to allow for multi-sensor fusion, trajectory prediction, and intercept. Output of detected object cues and data visualization are also provided. The algorithms are designed to handle a wide variety of real-world challenges. Imaged scenes may be highly complex and infinitely varied: the scene background may contain significant celestial, earth limb, or terrestrial clutter. For example, when viewing combined earth limb and terrestrial scenes, a combination of stationary and non-stationary clutter may be present, including cloud formations, varying atmospheric transmittance and reflectance of sunlight and other celestial light sources, aurora, glint off sea surfaces, and varied natural and man-made terrain features. The targets of interest may also appear dim relative to the scene background, rendering much of the existing deployed software useless for optical target detection and tracking. Additionally, it may be necessary to detect and track a large number of objects in the threat cloud, and these objects may not always be resolvable in individual data frames. In the present paper, the performance of the developed algorithms is demonstrated using real-world data containing resident space objects observed from the MSX platform, with backgrounds varying from celestial to combined celestial and earth limb, including instances of extremely bright aurora clutter. Simulation results are also presented for parameterized variations in signal-to-clutter levels (down to 1/1000) and signal-to-noise levels (down to 1/6) for simulated targets against real-world terrestrial clutter backgrounds. We also discuss algorithm processing requirements and C++ software processing capabilities from our ongoing MDA- and AFRL-sponsored development of an image processing toolkit (iPTK). In the current effort, the iPTK is being developed to a Technology Readiness Level (TRL) of 6 by mid-2010, in preparation for possible integration with STSS-like, SBIRS high-like and SBSS-like surveillance suites.
Storyline Visualizations of Eye Tracking of Movie Viewing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balint, John T.; Arendt, Dustin L.; Blaha, Leslie M.
Storyline visualizations offer an approach that promises to capture the spatio-temporal characteristics of individual observers and simultaneously illustrate emerging group behaviors. We develop a visual analytics approach to parsing, aligning, and clustering fixation sequences from eye tracking data. Visualization of the results captures the similarities and differences across a group of observers performing a common task. We apply our storyline approach to visualize gaze patterns of people watching dynamic movie clips. Storylines mitigate some of the shortcomings of existing spatio-temporal visualization techniques and, importantly, continue to highlight individual observer behavioral dynamics.
Secondary visual workload capability with primary visual and kinesthetic-tactual displays
NASA Technical Reports Server (NTRS)
Gilson, R. D.; Burke, M. W.; Jagacinski, R. J.
1978-01-01
Subjects performed a cross-adaptive tracking task with a visual secondary display and either a visual or a quickened kinesthetic-tactual (K-T) primary display. The quickened K-T display resulted in superior secondary task performance. Comparisons of secondary workload capability with integrated and separated visual displays indicated that the superiority of the quickened K-T display was not simply due to the elimination of visual scanning. When subjects did not have to perform a secondary task, there was no significant difference between visual and quickened K-T displays in performing a critical tracking task.
2017-03-01
Reports an error in "Multisensory brand search: How the meaning of sounds guides consumers' visual attention" by Klemens M. Knoeferle, Pia Knoeferle, Carlos Velasco and Charles Spence ( Journal of Experimental Psychology: Applied , 2016[Jun], Vol 22[2], 196-210). In the article, under Experiment 2, Design and Stimuli, the set number of target products and visual distractors reported in the second paragraph should be 20 and 13, respectively: "On each trial, the 16 products shown in the display were randomly selected from a set of 20 products belonging to different categories. Out of the set of 20 products, seven were potential targets, whereas the other 13 were used as visual distractors only throughout the experiment (since they were not linked to specific usage or consumption sounds)." Consequently, Appendix A in the supplemental materials has been updated. (The following abstract of the original article appeared in record 2016-28876-002.) Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Tracking a convoy of multiple targets using acoustic sensor data
NASA Astrophysics Data System (ADS)
Damarla, T. R.
2003-08-01
In this paper we present an algorithm to track a convoy of several targets in a scene using acoustic sensor array data. The tracking algorithm is based on a template of the direction of arrival (DOA) angles for the leading target. The first target is often the closest target to the sensor array and hence the loudest, with a good signal-to-noise ratio. Several steps were used to generate a template of the DOA angle for the leading target, namely: (a) the angle at the present instant should be close to the angle at the previous instant, and (b) the angle at the present instant should be within error bounds of the value predicted from previous values. Once the template of the DOA angles of the leading target has been developed, it is used to predict the DOA angle tracks of the remaining targets. To generate the tracks for the remaining targets, a track is first established if its angles correspond to the initial track values of the first target. Second, the time delay between the first track and each remaining track is estimated at the point of highest correlation between the tracks. As the vehicles move at different speeds, the tracks either compress or expand depending on whether a target is moving faster or slower than the first target. The expansion and compression ratios are estimated and used to predict the DOA angle values of the remaining targets. Based on these predicted DOA angles, the angles obtained from MVDR or incoherent MUSIC are assigned to the proper tracks. Several additional rules were developed to avoid mixing the tracks. The algorithm was tested on data collected at Aberdeen Proving Ground with convoys of 3, 4 and 5 vehicles, some of them tracked vehicles and some wheeled vehicles. The tracking results are found to be good and will be presented at the conference and in the paper.
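The delay-and-scale template idea can be sketched as follows (Python/NumPy; function and parameter names are illustrative): a follower's delay is the lag maximizing normalized correlation with the lead template, and its track is predicted by delaying and time-scaling the template before DOA assignments are gated against it.

    import numpy as np

    def estimate_lag(template, track, max_lag):
        # Lag (in samples) maximizing normalized correlation between the lead
        # target's DOA template and a follower's DOA sequence.
        best_r, lag_hat = -2.0, 0
        a = template - template.mean()
        for lag in range(max_lag + 1):
            seg = track[lag:lag + len(template)]
            if len(seg) < len(template):
                break
            b = seg - seg.mean()
            r = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            if r > best_r:
                best_r, lag_hat = r, lag
        return lag_hat

    def predict_follower(template, lag, ratio, n):
        # Predict a follower's DOA track as the template delayed by `lag` and
        # time-scaled by `ratio` (> 1: slower vehicle, stretched track).
        src = np.clip((np.arange(n) - lag) / ratio, 0, len(template) - 1)
        return np.interp(src, np.arange(len(template)), template)

    # Predicted angles then gate the per-frame MVDR / incoherent-MUSIC DOA
    # estimates into the correct vehicle track.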
CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.
Bray, Mark-Anthony; Carpenter, Anne E
2015-11-04
Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.
Saenz, Daniel L.; Yan, Yue; Christensen, Neil; Henzler, Margaret A.; Forrest, Lisa J.; Bayouth, John E.
2015-01-01
ViewRay is a novel MR‐guided radiotherapy system capable of imaging in near real‐time at four frames per second during treatment using 0.35T field strength. It allows for improved gating techniques and adaptive radiotherapy. Three cobalt‐60 sources (∼15,000 Curies) permit multiple‐beam, intensity‐modulated radiation therapy. The primary aim of this study is to assess the imaging stability, accuracy, and automatic segmentation algorithm capability to track motion in simulated and in vivo targets. Magnetic resonance imaging (MRI) characteristics of the system were assessed using the American College of Radiology (ACR)‐recommended phantom and accreditation protocol. Images of the ACR phantom were acquired using a head coil following the ACR scanning instructions. ACR recommended T1‐ and T2‐weighted sequences were evaluated. Nine measurements were performed over a period of seven months, on just over a monthly basis, to establish consistency. A silicon dielectric gel target was attached to the motor via a rod. 40 mm total amplitude was used with cycles of 3 to 9 s in length in a sinusoidal trajectory. Trajectories of six moving clinical targets in four canine patients were quantified and tracked. ACR phantom images were analyzed, and the results were compared with the ACR acceptance levels. Measured slice thickness accuracies were within the acceptance limits. In the 0.35 T system, the image intensity uniformity was also within the ACR acceptance limit. Over the range of cycle lengths, representing a wide range of breathing rates in patients imaged at four frames/s, excellent agreement was observed between the expected and measured target trajectories. In vivo canine targets, including the gross target volume (GTV), as well as other abdominal soft tissue structures, were visualized with inherent MR contrast, allowing for preliminary results of target tracking. PACS number: 87.61.Tg PMID:26699552
Longhi, Michele; Ricciardi, Giuseppe; Tommasi, Giorgio; Nicolato, Antonio; Foroni, Roberto; Bertolasi, Laura; Beltramello, Alberto; Moretto, Giuseppe; Tinazzi, Michele; Gerosa, Massimo
2015-05-01
Chronic stimulation of the human subthalamic nucleus (STN) is gradually becoming accepted as a long-term therapeutic option for patients with advanced Parkinson disease (PD). 3-Tesla (3T) magnetic resonance imaging (MRI) improves contrast resolution in basal ganglia nuclei containing high levels of iron, because of magnetic susceptibility effects that increase significantly with field strength. This phenomenon can be used for better visualization of the STN and may reduce the time necessary for detailed microelectrode recording (MER) mapping, increasing surgical efficacy and lowering morbidity. The objective of this retrospective study is to analyze a population of 20 deep brain stimulation (DBS) electrode-implanted patients with PD, divided into two groups in which different targeting methods were used. Mean age was 56 years (range 37 to 69 years). Mean disease duration was 11.6 years. Mean follow-up was 12 months (range 6 to 36 months). Group A contained 6 patients who underwent STN targeting using 1T stereotactic (T1w + T2w) MRI plus indirect, atlas-derived STN targeting. Group B consisted of 14 patients who underwent STN targeting using 3T nonstereotactic (T2w) MRI fused with 1T T1w stereotactic MRI and direct STN targeting. For statistical analysis, we compared five different parameters in both (matched) groups: Unified Parkinson's Disease Rating Scale (UPDRS) score reduction (medication off before surgery versus stimulation on/medication off after surgery), postoperative drug reduction, duration of surgery, the preoperative "central" track chosen as the final implantation track during surgery, and correspondence between the targeted STN and the intraoperative neurophysiologic data. Mean UPDRS III score reduction (medication off/stimulation on versus preoperative medication off) was 69% in Group A and 74% in Group B (p = 0.015, log-rank test). Postoperatively, antiparkinsonian treatment was reduced by 66% in Group A and 75% in Group B (p = 0.006, log-rank test). The preoperative "central" track (which corresponds to ideal STN targeting) proved to be the most clinically effective in 2/12 leads for Group A versus 21/28 for Group B (p < 0.001). Neurophysiologic data confirmed these results; the hypothetical target was confirmed by MER data in 76% of tracks in Group A and in 75% of tracks in Group B (p < 0.001, univariate and multivariate analysis). 3T MRI appears to be a useful tool in STN-DBS preoperative targeting. Neurophysiologic testing remains fundamental to determine lead depth and prevent clinical side effects.
Augmented virtuality for arthroscopic knee surgery.
Li, John M; Bardana, Davide D; Stewart, A James
2011-01-01
This paper describes a computer system to visualize the location and alignment of an arthroscope using augmented virtuality. A 3D computer model of the patient's joint (from CT) is shown, along with a model of the tracked arthroscopic probe and the projection of the camera image onto the virtual joint. A user study, using plastic bones instead of live patients, was conducted to determine the effectiveness of this navigated display; the study showed that the navigated display improves target localization among novice residents.
2014-07-25
ISS040-E-079083 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson, Expedition 40 commander, enters data in a computer in preparation for a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.
2014-07-25
ISS040-E-080130 (25 July 2014) --- In the International Space Station's Kibo laboratory, European Space Agency astronaut Alexander Gerst, Expedition 40 flight engineer, conducts a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.
Ye, Tao; Zhou, Fuqiang
2015-04-10
When imaged by detectors, space targets (including satellites and debris) and background stars have similar point-spread functions, and both objects appear to change as detectors track targets. Therefore, traditional tracking methods cannot separate targets from stars and cannot directly recognize targets in 2D images. Consequently, we propose an autonomous space target recognition and tracking approach using a star sensor technique and a Kalman filter (KF). A two-step method for subpixel-scale detection of star objects (including stars and targets) is developed, and the combination of the star sensor technique and a KF is used to track targets. The experimental results show that the proposed method is adequate for autonomously recognizing and tracking space targets.
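To make the filtering stage concrete, below is a minimal constant-velocity Kalman filter of the kind combined with the star-sensor detections above; the state layout, time step, and noise levels are illustrative assumptions rather than the paper's values.

```python
# Minimal constant-velocity Kalman filter sketch: state = (x, y, vx, vy),
# measurements are 2D detected centroids. Noise levels are illustrative.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])
Q, R = np.eye(4) * 1e-3, np.eye(2) * 0.5

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with measurement z
    P = (np.eye(4) - K @ H) @ P
    return x, P
```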
Intraoperative visualization and assessment of electromagnetic tracking error
NASA Astrophysics Data System (ADS)
Harish, Vinyas; Ungi, Tamas; Lasso, Andras; MacDonald, Andrew; Nanji, Sulaiman; Fichtinger, Gabor
2015-03-01
Electromagnetic tracking allows for increased flexibility in designing image-guided interventions; however, it is well understood that electromagnetic tracking is prone to error. Visualization and assessment of the tracking error should take place in the operating room with minimal interference with the clinical procedure. The goal was to achieve this ideal in an open-source software implementation, in a plug-and-play manner that requires no programming from the user. We use optical tracking as a ground truth. An electromagnetic sensor and optical markers are mounted onto a stylus device, which is pivot calibrated for both trackers. Electromagnetic tracking error is defined as the difference in tool-tip position between the electromagnetic and optical readings. Multiple measurements are interpolated into a thin-plate B-spline transform and visualized in real time using 3D Slicer. All tracked devices are used in a plug-and-play manner through the open-source SlicerIGT and PLUS extensions of the 3D Slicer platform. Tracking error was measured multiple times to assess the reproducibility of the method, both with and without ferromagnetic objects placed in the workspace. Results from exhaustive grid sampling and freehand sampling were similar, indicating that a quick freehand sampling is sufficient to detect unexpected or excessive field distortion in the operating room. The software is available as a plug-in for the 3D Slicer platform. Results demonstrate the potential for visualizing electromagnetic tracking error in real time in intraoperative environments during feasibility clinical trials in image-guided interventions.
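To make the interpolation step concrete, the sketch below fits a smooth error field to sparse tip-position discrepancies with a thin-plate-spline kernel. SciPy's RBFInterpolator stands in for the B-spline transform used in 3D Slicer, and the point sets are synthetic placeholders, not measured data.

```python
# A minimal sketch of mapping sparse EM-vs-optical tip discrepancies to a
# smooth error field; all positions here are synthetic placeholders.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
em_pts = rng.uniform(0, 200, size=(50, 3))           # EM tip positions (mm)
opt_pts = em_pts + rng.normal(0, 1.0, size=(50, 3))  # optical ground truth
error_mag = np.linalg.norm(em_pts - opt_pts, axis=1)

# Thin-plate-spline interpolation of the scalar error magnitude over space.
field = RBFInterpolator(em_pts, error_mag, kernel="thin_plate_spline")
print(field(np.array([[100.0, 100.0, 100.0]])))      # error estimate at a point
```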
Lack of Free Choice Reveals the Cost of Having to Search for More Than One Object
Ort, Eduard; Fahrenfort, Johannes J.; Olivers, Christian N. L.
2017-01-01
It is debated whether people can actively search for more than one object or whether this results in switch costs. Using a gaze-contingent eye-tracking paradigm, we revealed a crucial role for cognitive control in multiple-target search. We instructed participants to simultaneously search for two target objects presented among distractors. In one condition, both targets were available, which gave the observer free choice of what to search for and allowed for proactive control. In the other condition, only one of the two targets was available, so that the choice was imposed, and a reactive mechanism would be required. No switch costs emerged when target choice was free, but switch costs emerged reliably when targets were imposed. Bridging contradictory findings, the results are consistent with models of visual selection in which only one attentional template actively drives selection and in which the efficiency of switching targets depends on the type of cognitive control allowed for by the environment. PMID:28661761
Research on target tracking algorithm based on spatio-temporal context
NASA Astrophysics Data System (ADS)
Li, Baiping; Xu, Sanmei; Kang, Hongjuan
2017-07-01
In this paper, a novel target tracking algorithm based on spatio-temporal context is proposed. During tracking, camera shake or occlusion may cause tracking to fail; the proposed algorithm addresses this problem effectively. The method uses the spatio-temporal context algorithm as its core. The target region in the first frame is selected manually with the mouse, and the spatio-temporal context algorithm then tracks the target through the subsequent frames. During this process, a similarity measure based on a perceptual hash algorithm is used to judge the tracking results; if tracking has failed, the initial value of the Mean Shift algorithm is reset for subsequent target tracking. Experimental results show that the proposed algorithm achieves real-time and stable tracking under camera shake or target occlusion.
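As one concrete reading of the failure check, the sketch below compares an average-hash of the initial target patch against the currently tracked patch; the 8x8 hash size and the Hamming-distance threshold are assumptions for illustration, not values from the paper.

```python
# Perceptual-hash (average hash) similarity check between the template patch
# and the tracked patch; a large Hamming distance signals tracking failure.
import cv2
import numpy as np

def ahash(patch, size=8):
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size))
    return small > small.mean()              # 64-bit boolean hash

def tracking_ok(template_patch, tracked_patch, max_hamming=10):
    dist = np.count_nonzero(ahash(template_patch) ^ ahash(tracked_patch))
    return dist <= max_hamming               # False => reset the tracker
```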
Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I
2014-01-01
Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as “multiple object tracking,” observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not extrapolate at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300
Testing of visual field with virtual reality goggles in manual and visual grasp modes.
Wroblewski, Dariusz; Francis, Brian A; Sadun, Alfredo; Vakili, Ghazal; Chopra, Vikas
2014-01-01
Automated perimetry is used for the assessment of visual function in a variety of ophthalmic and neurologic diseases. We report development and clinical testing of a compact, head-mounted, and eye-tracking perimeter (VirtualEye) that provides a more comfortable test environment than the standard instrumentation. VirtualEye performs the equivalent of a full threshold 24-2 visual field in two modes: (1) manual, with patient response registered with a mouse click, and (2) visual grasp, where the eye tracker senses change in gaze direction as evidence of target acquisition. 59 patients successfully completed the test in manual mode and 40 in visual grasp mode, with 59 undergoing the standard Humphrey field analyzer (HFA) testing. Large visual field defects were reliably detected by VirtualEye. Point-by-point comparison between the results obtained with the different modalities indicates: (1) minimal systematic differences between measurements taken in visual grasp and manual modes, (2) the average standard deviation of the difference distributions of about 5 dB, and (3) a systematic shift (of 4-6 dB) to lower sensitivities for VirtualEye device, observed mostly in high dB range. The usability survey suggested patients' acceptance of the head-mounted device. The study appears to validate the concepts of a head-mounted perimeter and the visual grasp mode.
NASA Astrophysics Data System (ADS)
Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu
2016-05-01
The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapping blocks, and each block is weighted by the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides the corresponding target existence probabilities back to the detection stage. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, since it makes no special assumption about the size and shape of small targets. Because the decomposition is exact, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter, and detect and track size-varying targets in infrared images.
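For context, target-background separation of this kind is commonly posed as a low-rank-plus-sparse decomposition solved by inexact augmented Lagrange multipliers; a generic form, in our notation rather than necessarily the paper's exact model, is

```latex
\min_{B,\,T}\; \|B\|_{*} + \lambda \|T\|_{1} \quad \text{s.t.} \quad D = B + T
```

where $D$ is an image block, $B$ the low-rank background component, and $T$ the sparse component containing the small targets.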
Research on infrared small-target tracking technology under complex background
NASA Astrophysics Data System (ADS)
Liu, Lei; Wang, Xin; Chen, Jilu; Pan, Tao
2012-10-01
In this paper, some basic principles and implementation flow charts of a series of target tracking algorithms are described. Building on this foundation, moving-target tracking software based on OpenCV is developed on the MFC software development platform. Tracking algorithms integrated in this software include the Kalman filter tracking method and the Camshift tracking method. To explain the software clearly, its framework and functions are described in this paper. Finally, the implementation processes and results are analyzed, and the target tracking algorithms are evaluated both subjectively and objectively. This work is significant for applications of infrared target tracking technology.
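As a concrete illustration of one of the integrated methods, the sketch below runs OpenCV's Camshift on a histogram back-projection; the video file name and initial target window are placeholders, and a practical infrared pipeline would substitute an intensity-based back-projection.

```python
# Minimal Camshift tracking loop with OpenCV; input file and initial window
# are hypothetical placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("sequence.avi")      # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 60               # assumed initial target window
roi = frame[y:y+h, x:x+w]

# Hue-histogram back-projection drives both Camshift and mean shift.
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(dst, track_window, term_crit)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)  # rotated box around target
    cv2.polylines(frame, [pts], True, 255, 2)
```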
NASA Technical Reports Server (NTRS)
Das, V. E.; Thomas, C. W.; Zivotofsky, A. Z.; Leigh, R. J.
1996-01-01
Video-based eye-tracking systems are especially suited to studying eye movements during naturally occurring activities such as locomotion, but eye velocity records suffer from broadband noise that is not amenable to conventional filtering methods. We evaluated the effectiveness of combined median and moving-average filters by comparing prefiltered and postfiltered records made synchronously with a video eye-tracker and the magnetic search coil technique, which is relatively noise free. Root-mean-square noise was reduced by half, without distorting the eye velocity signal. To illustrate the practical use of this technique, we studied normal subjects and patients with deficient labyrinthine function and compared their ability to hold gaze on a visual target that moved with their heads (cancellation of the vestibulo-ocular reflex). Patients and normal subjects performed similarly during active head rotation but, during locomotion, patients held their eyes more steadily on the visual target than did the normal subjects.
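The sketch below shows the combined filtering described above: a median filter to remove impulsive spikes followed by a moving average to suppress residual broadband noise. The window lengths are illustrative assumptions, not the values used in the study.

```python
# Combined median + moving-average filtering of a noisy eye-velocity trace.
import numpy as np
from scipy.signal import medfilt

def denoise_eye_velocity(velocity, median_win=5, avg_win=5):
    """Median filter removes impulsive spikes; the moving average then
    suppresses residual broadband noise with little signal distortion."""
    spike_free = medfilt(velocity, kernel_size=median_win)
    kernel = np.ones(avg_win) / avg_win
    return np.convolve(spike_free, kernel, mode="same")
```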
Infrared small target tracking based on SOPC
NASA Astrophysics Data System (ADS)
Hu, Taotao; Fan, Xiang; Zhang, Yu-Jin; Cheng, Zheng-dong; Zhu, Bin
2011-01-01
The paper presents a low-cost FPGA-based solution for a real-time infrared small target tracking system. A specialized architecture is presented, based on a soft RISC processor capable of running a kernel-based mean shift tracking algorithm. The mean shift tracking algorithm is realized in a NIOS II soft core using SOPC (System on a Programmable Chip) technology. Though the mean shift algorithm is widely used for target tracking, the original algorithm cannot be applied directly to infrared small targets. Because an infrared small target carries only intensity information, an improved mean shift algorithm is presented in this paper. The target representation determines whether the target can be tracked by the mean shift algorithm. Since color targets are tracked well by mean shift, the representation of color images is imitated: spatial and temporal components are introduced to describe the target, forming a pseudo-color image. To improve processing speed, parallel and pipeline techniques are employed. Two RAM blocks store images alternately using ping-pong buffering, and a flash memory stores bulk temporary data. The experimental results show that infrared small targets are tracked stably against complicated backgrounds.
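The sketch below shows one plausible construction of such a pseudo-color image: intensity plus a spatial and a temporal component stacked as three channels, so that a color-histogram mean shift tracker can be applied. The particular component definitions are assumptions; the paper's may differ.

```python
# Build a 3-channel "pseudo-color" image from a single-channel infrared frame:
# channel 0 = intensity, channel 1 = spatial contrast, channel 2 = temporal
# difference. Component definitions are illustrative assumptions.
import cv2
import numpy as np

def pseudo_color(frame, prev_frame):
    gray = frame.astype(np.float32)
    spatial = np.abs(cv2.Laplacian(gray, cv2.CV_32F))        # local contrast
    temporal = np.abs(gray - prev_frame.astype(np.float32))  # frame difference
    chans = [cv2.normalize(c, None, 0, 255, cv2.NORM_MINMAX)
             for c in (gray, spatial, temporal)]
    return cv2.merge(chans).astype(np.uint8)
```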
Huettig, Falk; Altmann, Gerry T M
2011-01-01
Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.
A novel visual saliency detection method for infrared video sequences
NASA Astrophysics Data System (ADS)
Wang, Xin; Zhang, Yuzhen; Ning, Chen
2017-12-01
Infrared video applications such as target detection and recognition, moving target tracking, and so forth can benefit a lot from visual saliency detection, which is essentially a method to automatically localize the "important" content in videos. In this paper, a novel visual saliency detection method for infrared video sequences is proposed. Specifically, for infrared video saliency detection, both the spatial saliency and temporal saliency are considered. For spatial saliency, we adopt a mutual consistency-guided spatial cues combination-based method to capture the regions with obvious luminance contrast and contour features. For temporal saliency, a multi-frame symmetric difference approach is proposed to discriminate salient moving regions of interest from background motions. Then, the spatial saliency and temporal saliency are combined to compute the spatiotemporal saliency using an adaptive fusion strategy. Besides, to highlight the spatiotemporal salient regions uniformly, a multi-scale fusion approach is embedded into the spatiotemporal saliency model. Finally, a Gestalt theory-inspired optimization algorithm is designed to further improve the reliability of the final saliency map. Experimental results demonstrate that our method outperforms many state-of-the-art saliency detection approaches for infrared videos under various backgrounds.
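A minimal sketch of the multi-frame symmetric difference used for temporal saliency: a pixel is marked salient only where the current frame differs from both an earlier and a later frame, which suppresses the ghosting that plain two-frame differencing produces. The choice of frame offset is an assumption left to the caller.

```python
# Temporal saliency by symmetric difference over three frames; inputs are
# single-channel infrared images at t-k, t, and t+k for some offset k.
import numpy as np

def symmetric_difference(prev_frame, cur_frame, next_frame):
    cur = cur_frame.astype(np.float32)
    d_back = np.abs(cur - prev_frame.astype(np.float32))
    d_fwd = np.abs(cur - next_frame.astype(np.float32))
    return np.minimum(d_back, d_fwd)  # high only where both differences agree
```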
Hayhoe, Mary M; Matthis, Jonathan Samir
2018-08-06
The development of better eye and body tracking systems, and more flexible virtual environments have allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information in order to achieve those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.
Slushy weightings for the optimal pilot model. [considering visual tracking task
NASA Technical Reports Server (NTRS)
Dillow, J. D.; Picha, D. G.; Anderson, R. O.
1975-01-01
A pilot model is described which accounts for the effect of motion cues in a well defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics, and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
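For reference, optimal-control pilot models of this kind typically minimize a quadratic cost of the general form below, and it is the weighting matrices that the modified ("slushy") weightings adjust; the notation here is generic rather than the paper's own:

```latex
J = E\left\{\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}
\left(\mathbf{y}^{\top} Q\,\mathbf{y} + \mathbf{u}^{\top} R\,\mathbf{u}
+ \dot{\mathbf{u}}^{\top} G\,\dot{\mathbf{u}}\right)dt\right\}
```

where $\mathbf{y}$ collects the displayed and perceived variables, $\mathbf{u}$ the control inputs, and $Q$, $R$, $G$ weight tracking error, control effort, and control rate, respectively.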
Good Features to Correlate for Visual Tracking
NASA Astrophysics Data System (ADS)
Gundogdu, Erhan; Alatan, A. Aydin
2018-05-01
During recent years, correlation filters have shown dominant and spectacular results for visual object tracking. The types of features employed in this family of trackers significantly affect tracking performance. The ultimate goal is to utilize robust features invariant to any kind of appearance change of the object, while predicting the object location as accurately as in the case of no appearance change. As deep learning based methods have emerged, the study of learning features for specific tasks has accelerated. For instance, discriminative visual tracking methods based on deep architectures have been studied with promising performance. Nevertheless, correlation filter based (CFB) trackers confine themselves to pre-trained networks trained for the object classification problem. To this end, in this manuscript the problem of learning deep fully convolutional features for CFB visual tracking is formulated. In order to learn the proposed model, a novel and efficient backpropagation algorithm is presented based on the loss function of the network. The proposed learning framework enables the network model to be flexible for a custom design. Moreover, it alleviates the dependency on networks trained for classification. Extensive performance analysis shows the efficacy of the proposed custom design in the CFB tracking framework. By fine-tuning the convolutional parts of a state-of-the-art network and integrating this model into the top-performing CFB tracker of VOT2016, an 18% increase is achieved in terms of expected average overlap, and tracking failures are decreased by 25%, while maintaining superiority over state-of-the-art methods on the OTB-2013 and OTB-2015 tracking datasets.
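To ground the correlation-filter idea, below is a minimal single-channel MOSSE-style filter in closed form; the paper learns deep convolutional features to feed such a filter, which this sketch replaces with raw grayscale patches, and the Gaussian width and regularizer are illustrative values.

```python
# Minimal MOSSE-style correlation filter: solve for the filter in closed form
# against a desired Gaussian response, then locate the target at the peak.
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-3):
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))
    F, G = np.fft.fft2(patch), np.fft.fft2(g)   # desired response G
    return G * np.conj(F) / (F * np.conj(F) + lam)  # filter in Fourier domain

def locate(filt, patch):
    resp = np.real(np.fft.ifft2(filt * np.fft.fft2(patch)))
    dy, dx = np.unravel_index(resp.argmax(), resp.shape)
    return dy, dx  # peak offset from center gives the target translation
```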
Vater, Christian; Kredel, Ralf; Hossner, Ernst-Joachim
2017-05-01
In the current study, dual-task performance is examined with multiple-object tracking as a primary task and target-change detection as a secondary task. The to-be-detected target changes in conditions of either change type (form vs. motion; Experiment 1) or change salience (stop vs. slowdown; Experiment 2), with changes occurring at either near (5°-10°) or far (15°-20°) eccentricities (Experiments 1 and 2). The aim of the study was to test whether changes can be detected solely with peripheral vision. By controlling for saccades and computing gaze distances, we could show that participants used peripheral vision to monitor the targets and, additionally, to perceive changes at both near and far eccentricities. Noticeably, gaze behavior was not affected by the actual target change. Detection rates as well as response times generally varied as a function of change condition and eccentricity, with faster detections for motion changes and near changes. However, in contrast to the effects found for motion changes, sharp declines in detection rates and increased response times were observed for form changes as a function of eccentricity. This result can be ascribed to properties of the visual system, namely the limited spatial acuity and the comparatively high motion sensitivity of peripheral vision. These findings show that peripheral vision is functional for simultaneous target monitoring and target-change detection, as saccadic information suppression can be avoided and covert attention can be optimally distributed to all targets.
Proposals of observations with the space telescope in the domain of astrometry
NASA Astrophysics Data System (ADS)
Fresneau, A.
The use of the Hubble Space Telescope for astrometry is promoted at the same level as photometry, spectroscopy, or polarimetry. The prime instrument to be used for that goal is one of the three fine guidance sensors. The interferometric design of the stellar sensor is adequate for stellar diameter measurements (>0.01 arcsec), determination of close-binary separations (<0.1 arcsec), and differential astrometry on targets in a field of view of 60 square arcmin in the visual magnitude range from 3 to 18. Moving targets brighter than magnitude 14 with an apparent motion slower than 150 arcsec per hour can be tracked at the same level of accuracy.
Real-time active MR-tracking of metallic stylets in MR-guided radiation therapy
Wang, Wei; Dumoulin, Charles L.; Viswanathan, Akila N.; Tse, Zion T. H.; Mehrtash, Alireza; Loew, Wolfgang; Norton, Isaiah; Tokuda, Junichi; Seethamraju, Ravi T.; Kapur, Tina; Damato, Antonio L.; Cormack, Robert A.; Schmidt, Ehud J.
2014-01-01
Purpose To develop an active MR-tracking system to guide placement of metallic devices for radiation therapy. Methods An actively tracked metallic stylet for brachytherapy was constructed by adding printed-circuit micro-coils to a commercial stylet. The coil design was optimized by electromagnetic simulation, and has a radio-frequency lobe pattern extending ~5 mm beyond the strong B0 inhomogeneity region near the metal surface. An MR-tracking sequence with phase-field dithering was used to overcome residual effects of B0 and B1 inhomogeneities caused by the metal, as well as from inductive coupling to surrounding metallic stylets. The tracking system was integrated with a graphical workstation for real-time visualization. 3T MRI catheter-insertion procedures were tested in phantoms and ex-vivo animal tissue, and then performed in three patients during interstitial brachytherapy. Results The tracking system provided high-resolution (0.6 × 0.6 × 0.6 mm3) and rapid (16 to 40 frames per second, with three to one phase-field dithering directions) catheter localization in phantoms, animals, and three gynecologic cancer patients. Conclusion This is the first demonstration of active tracking of the shaft of metallic stylet in MR-guided brachytherapy. It holds the promise of assisting physicians to achieve better targeting and improving outcomes in interstitial brachytherapy. PMID:24903165
Gutova, Margarita; Frank, Joseph A.; D'Apuzzo, Massimo; Khankaldyyan, Vazgen; Gilchrist, Megan M.; Annala, Alexander J.; Metz, Marianne Z.; Abramyants, Yelena; Herrmann, Kelsey A.; Ghoda, Lucy Y.; Najbauer, Joseph; Brown, Christine E.; Blanchard, M. Suzette; Lesniak, Maciej S.; Kim, Seung U.; Barish, Michael E.
2013-01-01
Numerous stem cell-based therapies are currently under clinical investigation, including the use of neural stem cells (NSCs) as delivery vehicles to target therapeutic agents to invasive brain tumors. The ability to monitor the time course, migration, and distribution of stem cells following transplantation into patients would provide critical information for optimizing treatment regimens. No effective cell-tracking methodology has yet garnered clinical acceptance. A highly promising noninvasive method for monitoring NSCs and potentially other cell types in vivo involves preloading them with ultrasmall superparamagnetic iron oxide nanoparticles (USPIOs) to enable cell tracking using magnetic resonance imaging (MRI). We report here the preclinical studies that led to U.S. Food and Drug Administration approval for first-in-human investigational use of ferumoxytol to label NSCs prior to transplantation into brain tumor patients, followed by surveillance serial MRI. A combination of heparin, protamine sulfate, and ferumoxytol (HPF) was used to label the NSCs. HPF labeling did not affect cell viability, growth kinetics, or tumor tropism in vitro, and it enabled MRI visualization of NSC distribution within orthotopic glioma xenografts. MRI revealed dynamic in vivo NSC distribution at multiple time points following intracerebral or intravenous injection into glioma-bearing mice that correlated with histological analysis. Preclinical safety/toxicity studies of intracerebrally administered HPF-labeled NSCs in mice were also performed, and they showed no significant clinical or behavioral changes, no neuronal or systemic toxicities, and no abnormal accumulation of iron in the liver or spleen. These studies support the clinical use of ferumoxytol labeling of cells for post-transplant MRI visualization and tracking. PMID:24014682
Han, Xuemei; Smyth, Regan L.; Young, Bruce E.; Brooks, Thomas M.; Sánchez de Lozada, Alexandra; Bubb, Philip; Butchart, Stuart H. M.; Larsen, Frank W.; Hamilton, Healy; Hansen, Matthew C.; Turner, Will R.
2014-01-01
Recognizing the imperiled status of biodiversity and its benefit to human well-being, the world's governments committed in 2010 to take effective and urgent action to halt biodiversity loss through the Convention on Biological Diversity's “Aichi Targets”. These targets, and many conservation programs, require monitoring to assess progress toward specific goals. However, comprehensive and easily understood information on biodiversity trends at appropriate spatial scales is often not available to the policy makers, managers, and scientists who require it. We surveyed conservation stakeholders in three geographically diverse regions of critical biodiversity concern (the Tropical Andes, the African Great Lakes, and the Greater Mekong) and found high demand for biodiversity indicator information but uneven availability. To begin to address this need, we present a biodiversity “dashboard” – a visualization of biodiversity indicators designed to enable tracking of biodiversity and conservation performance data in a clear, user-friendly format. This builds on previous, more conceptual, indicator work to create an operationalized online interface communicating multiple indicators at multiple spatial scales. We structured this dashboard around the Pressure-State-Response-Benefit framework, selecting four indicators to measure pressure on biodiversity (deforestation rate), state of species (Red List Index), conservation response (protection of key biodiversity areas), and benefits to human populations (freshwater provision). Disaggregating global data, we present dashboard maps and graphics for the three regions surveyed and their component countries. These visualizations provide charts showing regional and national trends and lay the foundation for a web-enabled, interactive biodiversity indicators dashboard. This new tool can help track progress toward the Aichi Targets, support national monitoring and reporting, and inform outcome-based policy-making for the protection of natural resources. PMID:25409183
The role of "rescue saccades" in tracking objects through occlusions.
Zelinsky, Gregory J; Todor, Andrei
2010-12-29
We hypothesize that our ability to track objects through occlusions is mediated by timely assistance from gaze in the form of "rescue saccades": eye movements to tracked objects that are in danger of being lost due to impending occlusion. Observers tracked 2-4 target sharks (out of 9) for 20 s as they swam through a rendered 3D underwater scene. Targets were either allowed to enter into occlusions (occlusion trials) or not (no occlusion trials). Tracking accuracy with 2-3 targets was ≥ 92% regardless of target occlusion but dropped to 74% on occlusion trials with four targets (no occlusion trials remained accurate; 83%). This pattern was mirrored in the frequency of rescue saccades. Rescue saccades accompanied approximately 50% of the Track 2-3 target occlusions, but only 34% of the Track 4 occlusions. Their frequency also decreased with increasing distance between a target and the nearest other object, suggesting that it is the potential for target confusion that summons a rescue saccade, not occlusion itself. These findings provide evidence for a tracking system that monitors for events that might cause track loss (e.g., occlusions) and requests help from the oculomotor system to resolve these momentary crises. As the number of crises increases with the number of targets, some requests for help go unsatisfied, resulting in degraded tracking.
System considerations for detection and tracking of small targets using passive sensors
NASA Astrophysics Data System (ADS)
DeBell, David A.
1991-08-01
Passive sensors provide only a few discriminants to assist in threat assessment of small targets. Tracking of the small targets provides additional discriminants. This paper discusses the system considerations for tracking small targets using passive sensors, in particular EO sensors. Tracking helps establish good versus bad detections. Discussed are the requirements to be placed on the sensor system's accuracy, with respect to knowledge of the sightline direction. The detection of weak targets sets a requirement for two levels of tracking in order to reduce processor throughput. A system characteristic is the need to track all detections. For low thresholds, this can mean a heavy track burden. Therefore, thresholds must be adaptive in order not to saturate the processors. Second-level tracks must develop a range estimate in order to assess threat. Sensor platform maneuvers are required if the targets are moving. The need for accurate pointing, good stability, and a good update rate will be shown quantitatively, relating to track accuracy and track association.
Schnabel, Ulf H; Hegenloh, Michael; Müller, Hermann J; Zehetleitner, Michael
2013-09-01
Electromagnetic motion-tracking systems have the advantage of capturing the spatiotemporal kinematics of movements independently of the visibility of the sensors. However, they are limited in that they cannot be used in the proximity of electromagnetic field sources, such as computer monitors. This prevents exploiting the tracking potential of the sensor system together with that of computer-generated visual stimulation. Here we present a solution for presenting computer-generated visual stimulation that does not distort the electromagnetic field required for precise motion tracking, by means of a back projection medium. In one experiment, we verify that cathode ray tube monitors, as well as thin-film-transistor monitors, distort electromagnetic sensor signals even at a distance of 18 cm. Our back projection medium, by contrast, leads to no distortion of the motion-tracking signals even when the sensor is touching the medium. This novel solution permits combining the advantages of electromagnetic motion tracking with computer-generated visual stimulation.
Effects of directional uncertainty on visually-guided joystick pointing.
Berryhill, Marian; Kveraga, Kestutis; Hughes, Howard C
2005-02-01
Reaction times generally follow the predictions of Hick's law as stimulus-response uncertainty increases, although notable exceptions include the oculomotor system. Saccadic and smooth pursuit eye movement reaction times are independent of stimulus-response uncertainty. Previous research showed that joystick pointing to targets, a motor analog of saccadic eye movements, is only modestly affected by increased stimulus-response uncertainty; however, a no-uncertainty condition (simple reaction time to 1 possible target) was not included. Here, we re-evaluate manual joystick pointing including a no-uncertainty condition. Analysis indicated simple joystick pointing reaction times were significantly faster than choice reaction times. Choice reaction times (2, 4, or 8 possible target locations) only slightly increased as the number of possible targets increased. These data suggest that, as with joystick tracking (a motor analog of smooth pursuit eye movements), joystick pointing is more closely approximated by a simple/choice step function than the log function predicted by Hick's law.
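For reference, Hick's law predicts that choice reaction time grows with the logarithm of the number of equally likely stimulus-response alternatives N:

```latex
RT = a + b \log_{2}(N + 1)
```

where $a$ is a base reaction time, $b$ the per-bit processing cost, and the $+1$ accounts for uncertainty about whether any stimulus will occur; the step-function pattern reported above departs from this logarithmic growth.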
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
NASA Astrophysics Data System (ADS)
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.
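For comparison, the conventional multi-image approach referenced above looks roughly like the OpenCV sketch below; the checkerboard dimensions and file names are placeholders.

```python
# Conventional multi-image OpenCV camera calibration; board size and image
# file names are hypothetical placeholders.
import cv2
import numpy as np

pattern = (9, 6)  # inner-corner count of an assumed checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in [f"calib_{i:02d}.png" for i in range(30)]:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        img_size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, img_size, None, None)
print("RMS re-projection error (px):", rms)  # intrinsics K, distortion dist
```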
Infrared measurement and composite tracking algorithm for air-breathing hypersonic vehicles
NASA Astrophysics Data System (ADS)
Zhang, Zhao; Gao, Changsheng; Jing, Wuxing
2018-03-01
Air-breathing hypersonic vehicles have capabilities of hypersonic speed and strong maneuvering, and thus pose a significant challenge to conventional tracking methodologies. To achieve desirable tracking performance for hypersonic targets, this paper investigates the problems related to measurement model design and tracking model mismatch. First, owing to the severe aerothermal effect of hypersonic motion, an infrared measurement model in near space is designed and analyzed based on target infrared radiation and an atmospheric model. Second, using information from infrared sensors, a composite tracking algorithm is proposed via a combination of the interacting multiple model (IMM) algorithm, a fitted dynamics model, and a strong tracking filter. During the procedure, the IMM algorithm generates tracking data to establish a fitted dynamics model of the target. Then, the strong tracking unscented Kalman filter is employed to estimate the target states, suppressing the impact of target maneuvers. Simulations are performed to verify the feasibility of the presented composite tracking algorithm. The results demonstrate that the designed infrared measurement model effectively and continuously observes hypersonic vehicles, and the proposed composite tracking algorithm accurately and stably tracks these targets.
Role of Oculoproprioception in Coding the Locus of Attention.
Odoj, Bartholomaeus; Balslev, Daniela
2016-03-01
The most common neural representations for spatial attention encode locations retinotopically, relative to center of gaze. To keep track of visual objects across saccades or to orient toward sounds, retinotopic representations must be combined with information about the rotation of one's own eyes in the orbits. Although gaze input is critical for a correct allocation of attention, the source of this input has so far remained unidentified. Two main signals are available: corollary discharge (copy of oculomotor command) and oculoproprioception (feedback from extraocular muscles). Here we asked whether the oculoproprioceptive signal relayed from the somatosensory cortex contributes to coding the locus of attention. We used continuous theta burst stimulation (cTBS) over a human oculoproprioceptive area in the postcentral gyrus (S1EYE). S1EYE-cTBS reduces proprioceptive processing, causing ∼1° underestimation of gaze angle. Participants discriminated visual targets whose location was cued in a nonvisual modality. Throughout the visual space, S1EYE-cTBS shifted the locus of attention away from the cue by ∼1°, in the same direction and by the same magnitude as the oculoproprioceptive bias. This systematic shift cannot be attributed to visual mislocalization. Accuracy of open-loop pointing to the same visual targets, a function thought to rely mainly on the corollary discharge, was unchanged. We argue that oculoproprioception is selective for attention maps. By identifying a potential substrate for the coupling between eye and attention, this study contributes to the theoretical models for spatial attention.
Brain systems for visual perspective taking and action perception.
Mazzarella, Elisabetta; Ramsey, Richard; Conson, Massimiliano; Hamilton, Antonia
2013-01-01
Taking another person's viewpoint and making sense of their actions are key processes that guide social behavior. Previous neuroimaging investigations have largely studied these processes separately. The current study used functional magnetic resonance imaging to examine how the brain incorporates another person's viewpoint and actions into visual perspective judgments. Participants made a left-right judgment about the location of a target object from their own (egocentric) or an actor's visual perspective (altercentric). Actor location varied around a table and the actor was either reaching or not reaching for the target object. Analyses examined brain regions engaged in the egocentric and altercentric tasks, brain regions where response magnitude tracked the orientation of the actor in the scene and brain regions sensitive to the action performed by the actor. The blood oxygen level-dependent (BOLD) response in dorsomedial prefrontal cortex (dmPFC) was sensitive to actor orientation in the altercentric task, whereas the response in right inferior frontal gyrus (IFG) was sensitive to actor orientation in the egocentric task. Thus, dmPFC and right IFG may play distinct but complementary roles in visual perspective taking (VPT). Observation of a reaching actor compared to a non-reaching actor yielded activation in lateral occipitotemporal cortex, regardless of task, showing that these regions are sensitive to body posture independent of social context. By considering how an observed actor's location and action influence the neural bases of visual perspective judgments, the current study supports the view that multiple neurocognitive "routes" operate during VPT.
A Computational Model of Spatial Development
NASA Astrophysics Data System (ADS)
Hiraki, Kazuo; Sashima, Akio; Phillips, Steven
Psychological experiments on children's development of spatial knowledge suggest experience with self-locomotion and visual tracking as important factors. Yet the mechanism underlying development is unknown. We propose a robot that learns to mentally track a target object (i.e., to maintain a representation of the object's position when it is outside the field of view) as a model for spatial development. Mental tracking is considered as prediction of an object's position given the previous environmental state and motor commands, and the current environmental state resulting from movement. Following Jordan and Rumelhart's (1992) forward modeling architecture, the system consists of two components: an inverse model from sensory input to desired motor commands, and a forward model from motor commands to desired sensory input (goals). The robot was tested on the 'three cups' paradigm (where children are required to select the cup containing the hidden object under various movement conditions). Consistent with child development, without the capacity for self-locomotion the robot's errors are egocentric; when given the ability of self-locomotion, the robot responds allocentrically.
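The sketch below gives a schematic of the forward-model prediction at the heart of this architecture: the (possibly occluded) target position is updated from the previous estimate and the robot's own motor command. The planar kinematics here are an illustrative assumption, not the paper's trained network.

```python
# Schematic forward model: predict the target's egocentric position after the
# robot executes a self-locomotion command (dx, dy, dtheta). A trained network
# would replace this hand-coded kinematic prediction.
import numpy as np

def forward_model(prev_target_xy, motor_cmd):
    dx, dy, dtheta = motor_cmd
    shifted = np.asarray(prev_target_xy, dtype=float) - np.array([dx, dy])
    c, s = np.cos(-dtheta), np.sin(-dtheta)   # ego-motion rotates the scene
    return np.array([c * shifted[0] - s * shifted[1],
                     s * shifted[0] + c * shifted[1]])

# Repeatedly applying the forward model while the target is out of view
# maintains ("mentally tracks") its position estimate.
```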
Vision-based sensing for autonomous in-flight refueling
NASA Astrophysics Data System (ADS)
Scott, D.; Toal, M.; Dale, J.
2007-04-01
A significant capability of unmanned airborne vehicles (UAVs) is that they can operate tirelessly and at maximum efficiency in comparison to their human pilot counterparts. However, a major limiting factor preventing ultra-long endurance missions is that they must land to refuel. Development effort has been directed at allowing UAVs to refuel automatically in the air using current refueling systems and procedures. The 'hose & drogue' refueling system was targeted as it is considered the more difficult case. Recent flight trials resulted in the first-ever fully autonomous airborne refueling operation. Development has gone into precision GPS-based navigation sensors to maneuver the aircraft into the station-keeping position and onwards to dock with the refueling drogue. However, in the terminal phases of docking, the GPS is operating at its accuracy limit, and disturbance factors acting on the flexible hose and basket are not predictable using an open-loop model. Hence there is significant uncertainty in the position of the refueling drogue relative to the aircraft, which in practical operation is insufficient to achieve a successful and safe docking. A solution is to augment the GPS-based system with a vision-based sensor component through the terminal phase, to visually acquire and track the drogue in 3D space. The higher bandwidth and resolution of camera sensors give significantly better estimates of the state of the drogue position. Disturbances in the actual drogue position caused by subtle aircraft maneuvers and wind gusting can be visually tracked and compensated for, providing an accurate estimate. This paper discusses the issues involved in visually detecting a refueling drogue, selecting an optimum camera viewpoint, and acquiring and tracking the drogue throughout a widely varying operating range and conditions.
Lykins, Amy D; Meana, Marta; Kambe, Gretchen
2006-10-01
As a first step in the investigation of the role of visual attention in the processing of erotic stimuli, eye-tracking methodology was employed to measure eye movements during erotic scene presentation. Because eye tracking is a novel methodology in sexuality research, we attempted to determine whether the eye tracker could detect differences (should they exist) in visual attention to erotic and non-erotic scenes. A total of 20 men and 20 women were presented with a series of erotic and non-erotic images, and their eye movements were tracked during image presentation. Comparisons between erotic and non-erotic image groups showed significant differences on two of three dependent measures of visual attention (number of fixations and total time) in both men and women. As hypothesized, there was a significant Stimulus x Scene Region interaction, indicating that participants visually attended to the body more in the erotic stimuli than in the non-erotic stimuli, as evidenced by a greater number of fixations and longer total time devoted to that region. These findings support the application of eye-tracking methodology as a measure of visual attentional capture in sexuality research. Future applications of this methodology to expand our knowledge of the role of cognition in sexuality are suggested.
Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2017-01-01
Purpose Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that calibration can be performed in the OR on demand. Methods We designed a mechanical tracking mount to uniquely and snugly position an EM sensor to an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration result in the OR, we integrated a tube phantom with fCalib and overlaid a virtual representation of the tube on the live video scene. Results We compared spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggested that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, would affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s – 22.7 s). Conclusions We developed and validated a prototype for fast calibration and evaluation of EM tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to be performed in the OR on demand. PMID:27250853
Eye-Catching Odors: Olfaction Elicits Sustained Gazing to Faces and Eyes in 4-Month-Old Infants
Lewkowicz, David J.; Goubet, Nathalie; Schaal, Benoist
2013-01-01
This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars, but this spontaneous preference for faces was significantly enhanced in the presence of the odor. As also expected, when looking at the face, the infants looked longer at the eyes than at any other facial region, and, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues. PMID:24015175
Grubert, Anna; Indino, Marcello; Krummenacher, Joseph
2014-01-01
In an experiment involving a total of 124 participants, divided into eight age groups (6-, 8-, 10-, 12-, 14-, 16-, 18-, and 20-year-olds), the development of the processing components underlying visual search for pop-out targets was tracked. Participants indicated the presence or absence of color or orientation feature singleton targets. Observers also solved a detection task, in which they responded to the onset of search arrays. There were two main results. First, analyses of inter-trial effects revealed differences in the search strategies of the 6-year-old participants compared to older age groups. Participants older than 8 years based target detection on feature-less dimensional salience signals (indicated by cross-trial RT costs in target dimension change relative to repetition trials), whereas the 6-year-olds accessed the target feature to make a target-present or target-absent decision (cross-trial RT costs in target feature change relative to feature repetition trials). This result agrees with predictions derived from the Dimension Weighting account and previous investigations of inter-trial effects in adult observers (Müller et al., 1995; Found and Müller, 1996). The results are also in line with theories of cognitive development suggesting that the ability to abstract specific visual features into feature categories develops after the age of 7 years. Second, overall search RTs decreased with increasing age in a decelerated fashion. RT differences between consecutive age groups can be explained by sensory-motor maturation up to the age of 10 years (as indicated by RTs in the onset detection task). Expedited RTs in older age groups (10- vs. 12-year-olds; 14- vs. 16-year-olds), but also in the 6- vs. 8-year-olds, are due to the development of search-related (cognitive) processes. Overall, the results suggest that the level of adult performance in visual search for pop-out targets is achieved by the age of 16. PMID:24910627
Knoeferle, Pia; Carminati, Maria Nella; Abashidze, Dato; Essig, Kai
2011-01-01
Eye-tracking findings suggest people prefer to ground their spoken language comprehension by focusing on recently seen events more than anticipating future events: When the verb in NP1-VERB-ADV-NP2 sentences was referentially ambiguous between a recently depicted and an equally plausible future clipart action, listeners fixated the target of the recent action more often at the verb than the object that hadn’t yet been acted upon. We examined whether this inspection preference generalizes to real-world events, and whether it is (vs. isn’t) modulated by how often people see recent and future events acted out. In a first eye-tracking study, the experimenter performed an action (e.g., sugaring pancakes), and then a spoken sentence either referred to that action or to an equally plausible future action (e.g., sugaring strawberries). At the verb, people more often inspected the pancakes (the recent target) than the strawberries (the future target), thus replicating the recent-event preference with these real-world actions. Adverb tense, indicating a future versus past event, had no effect on participants’ visual attention. In a second study we increased the frequency of future actions such that participants saw 50/50 future and recent actions. During the verb people mostly inspected the recent action target, but subsequently they began to rely on tense, and anticipated the future target more often for future than past tense adverbs. A corpus study showed that the verbs and adverbs indicating past versus future actions were equally frequent, suggesting long-term frequency biases did not cause the recent-event preference. Thus, (a) recent real-world actions can rapidly influence comprehension (as indexed by eye gaze to objects), and (b) people prefer to first inspect a recent action target (vs. an object that will soon be acted upon), even when past and future actions occur with equal frequency. A simple frequency-of-experience account cannot accommodate these findings. PMID:22207858
Electromagnetic tracking for abdominal interventions in computer aided surgery
Zhang, Hui; Banovac, Filip; Lin, Ralph; Glossop, Neil; Wood, Bradford J.; Lindisch, David; Levy, Elliot; Cleary, Kevin
2014-01-01
Electromagnetic tracking has great potential for assisting physicians in precision placement of instruments during minimally invasive interventions in the abdomen, since electromagnetic tracking is not limited by the line-of-sight restrictions of optical tracking. A new generation of electromagnetic tracking has recently become available, with sensors small enough to be included in the tips of instruments. To fully exploit the potential of this technology, our research group has been developing a computer aided, image-guided system that uses electromagnetic tracking for visualization of the internal anatomy during abdominal interventions. As registration is a critical component in developing an accurate image-guided system, we present three registration techniques: 1) enhanced paired-point registration (time-stamp match registration and dynamic registration); 2) orientation-based registration; and 3) needle shape-based registration. Respiration compensation is another important issue, particularly in the abdomen, where respiratory motion can make precise targeting difficult. To address this problem, we propose reference tracking and affine transformation methods. Finally, we present our prototype navigation system, which integrates the registration, segmentation, path-planning and navigation functions to provide real-time image guidance in the clinical environment. The methods presented here have been tested with a respiratory phantom specially designed by our group and in swine animal studies under approved protocols. Based on these tests, we conclude that our system can provide quick and accurate localization of tracked instruments in abdominal interventions, and that it offers a user-friendly display for the physician. PMID:16829506
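Of the three registration techniques listed, the paired-point variants build on the classic least-squares rigid alignment of corresponding point sets. Below is a minimal SVD-based sketch of that core step (Arun et al.'s method); the time-stamp matching and dynamic extensions described in the abstract are not reproduced here.

```python
import numpy as np

def paired_point_registration(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding points, e.g. EM-tracked
    fiducial positions and their counterparts in image coordinates.
    Classic SVD solution; the paper's time-stamped and dynamic
    variants add synchronization on top of this core step.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # guard against reflection
    t = dst_c - R @ src_c
    return R, t

def fre(src, dst, R, t):
    """Fiducial registration error as a quick quality check."""
    return np.sqrt(np.mean(np.sum((dst - (src @ R.T + t)) ** 2, axis=1)))
```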
Mert, Aygül; Kiesel, Barbara; Wöhrer, Adelheid; Martínez-Moreno, Mauricio; Minchev, Georgi; Furtner, Julia; Knosp, Engelbert; Wolfsberger, Stefan; Widhalm, Georg
2015-01-01
OBJECT Surgery of suspected low-grade gliomas (LGGs) poses a special challenge for neurosurgeons due to their diffusely infiltrative growth and histopathological heterogeneity. Consequently, neuronavigation with multimodality imaging data, such as structural and metabolic data, fiber tracking, and 3D brain visualization, has been proposed to optimize surgery. However, currently no standardized protocol has been established for multimodality imaging data in modern glioma surgery. The aim of this study was therefore to define a specific protocol for multimodality imaging and navigation for suspected LGG. METHODS Fifty-one patients who underwent surgery for a diffusely infiltrating glioma with nonsignificant contrast enhancement on MRI and available multimodality imaging data were included. In the first 40 patients with glioma, the authors retrospectively reviewed the imaging data, including structural MRI (contrast-enhanced T1-weighted, T2-weighted, and FLAIR sequences), metabolic images derived from PET, or MR spectroscopy chemical shift imaging, fiber tracking, and 3D brain surface/vessel visualization, to define standardized image settings and specific indications for each imaging modality. The feasibility and surgical relevance of this new protocol was subsequently prospectively investigated during surgery with the assistance of an advanced electromagnetic navigation system in the remaining 11 patients. Furthermore, specific surgical outcome parameters, including the extent of resection, histological analysis of the metabolic hotspot, presence of a new postoperative neurological deficit, and intraoperative accuracy of 3D brain visualization models, were assessed in each of these patients. RESULTS After reviewing these first 40 cases of glioma, the authors defined a specific protocol with standardized image settings and specific indications that allows for optimal and simultaneous visualization of structural and metabolic data, fiber tracking, and 3D brain visualization. This new protocol was feasible and was estimated to be surgically relevant during navigation-guided surgery in all 11 patients. According to the authors' predefined surgical outcome parameters, they observed a complete resection in all resectable gliomas (n = 5) by using contour visualization with T2-weighted or FLAIR images. Additionally, tumor tissue derived from the metabolic hotspot showed the presence of malignant tissue in all WHO Grade III or IV gliomas (n = 5). Moreover, no permanent postoperative neurological deficits occurred in any of these patients, and fiber tracking and/or intraoperative monitoring were applied during surgery in the vast majority of cases (n = 10). Furthermore, the authors found a significant intraoperative topographical correlation of 3D brain surface and vessel models with gyral anatomy and superficial vessels. Finally, real-time navigation with multimodality imaging data using the advanced electromagnetic navigation system was found to be useful for precise guidance to surgical targets, such as the tumor margin or the metabolic hotspot. CONCLUSIONS In this study, the authors defined a specific protocol for multimodality imaging data in suspected LGGs, and they propose the application of this new protocol for advanced navigation-guided procedures optimally in conjunction with continuous electromagnetic instrument tracking to optimize glioma surgery.
Penalty dynamic programming algorithm for dim targets detection in sensor systems.
Huang, Dayu; Xue, Anke; Guo, Yunfei
2012-01-01
In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD) called penalty DP-TBD (PDP-TBD) is proposed. The performance of tracking techniques is used as feedback to the detection part. The feedback is constructed as a penalty term in the merit function, where the penalty term is a function of the possible target state estimate, which can be obtained by the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD and can be applied to simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint, namely that a sensor measurement can originate from at most one target or from clutter, is proposed to minimize track separation. Thus, the algorithm can be used in multi-target situations with unknown target numbers. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated by several simulations.
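The essence of any DP-TBD scheme is stage-wise accumulation of a merit function over candidate target states before thresholding. The sketch below shows a plain grid-based DP-TBD recursion with a penalty hook where PDP-TBD's tracking feedback would enter; the hook signature and parameters are placeholders, not the paper's exact formulation.

```python
import numpy as np

def dp_tbd(frames, v_max=1, penalty=None):
    """Grid-based dynamic-programming track-before-detect sketch.

    frames:  (T, H, W) stack of raw sensor intensity maps.
    v_max:   max per-frame target displacement in pixels (assumption).
    penalty: optional callable(dy, dx) -> cost, the hook where a
             PDP-TBD-style tracking feedback term would be fed in;
             here a simple placeholder, not the paper's exact term.
    Returns the best accumulated merit and its final grid state.
    """
    T, H, W = frames.shape
    merit = frames[0].astype(float)
    for k in range(1, T):
        best = np.full((H, W), -np.inf)
        for dy in range(-v_max, v_max + 1):
            for dx in range(-v_max, v_max + 1):
                # np.roll wraps at borders; a real implementation
                # would mask image edges instead.
                shifted = np.roll(np.roll(merit, dy, axis=0), dx, axis=1)
                cost = penalty(dy, dx) if penalty else 0.0
                best = np.maximum(best, shifted - cost)
        merit = best + frames[k]
    end = np.unravel_index(np.argmax(merit), merit.shape)
    return merit[end], end

# A track is declared when the best merit exceeds a detection threshold.
```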
Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat
2014-01-01
Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called "consonant bias"). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant or a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading.
Scherman Rydhög, Jonas; Riisgaard de Blanck, Steen; Josipovic, Mirjana; Irming Jølck, Rasmus; Larsen, Klaus Richter; Clementsen, Paul; Lars Andersen, Thomas; Poulsen, Per Rugaard; Fredberg Persson, Gitte; Munck Af Rosenschold, Per
2017-04-01
The purpose of this study was to estimate the uncertainty in voluntary deep-inspiration breath-hold (DIBH) radiotherapy for locally advanced non-small cell lung cancer (NSCLC) patients. Perpendicular fluoroscopic movies were acquired in free breathing (FB) and DIBH during a course of visually guided DIBH radiotherapy of nine patients with NSCLC. Patients had liquid markers injected in mediastinal lymph nodes and primary tumours. Excursion, systematic and random errors, and inter-breath-hold position uncertainty were investigated using an image-based tracking algorithm. A mean reduction of 2-6 mm in marker excursion in DIBH versus FB was seen in the anterior-posterior (AP), left-right (LR), and cranio-caudal (CC) directions. Lymph node motion during DIBH originated from cardiac motion. The systematic errors (standard deviation (SD) of all the mean marker positions) and random errors (root mean square of the intra-breath-hold SD) during DIBH were 0.5 and 0.3 mm (AP), 0.5 and 0.3 mm (LR), and 0.8 and 0.4 mm (CC), respectively. The mean inter-breath-hold shifts were -0.3 mm (AP), -0.2 mm (LR), and -0.2 mm (CC). Intra- and inter-breath-hold uncertainty of tumours and lymph nodes were small in visually guided breath-hold radiotherapy of NSCLC. Target motion could be substantially reduced, but not eliminated, using visually guided DIBH.
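The error decomposition reported above is straightforward to reproduce: the systematic error is the SD of the per-breath-hold mean marker positions, and the random error is the RMS of the intra-breath-hold SDs. A minimal sketch with illustrative numbers (not the study's data):

```python
import numpy as np

# One array of marker positions (mm, single axis) per breath-hold;
# the values below are made up for illustration only.
breath_holds = [np.array([0.1, 0.3, 0.2]),
                np.array([-0.4, -0.2, -0.3]),
                np.array([0.6, 0.9, 0.7])]

means = np.array([bh.mean() for bh in breath_holds])
intra_sds = np.array([bh.std(ddof=1) for bh in breath_holds])

systematic = means.std(ddof=1)                 # SD of the BH means
random_err = np.sqrt(np.mean(intra_sds ** 2))  # RMS of intra-BH SDs
print(f"systematic {systematic:.2f} mm, random {random_err:.2f} mm")
```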
Schwein, Adeline; Kramer, Benjamin; Chinnadurai, Ponraj; Virmani, Neha; Walker, Sean; O'Malley, Marcia; Lumsden, Alan B; Bismuth, Jean
2018-04-01
Combining three-dimensional (3D) catheter control with electromagnetic (EM) tracking-based navigation significantly reduced fluoroscopy time and improved robotic catheter movement quality in a previous in vitro pilot study. The aim of this study was to expound on previous results and to expand the value of EM tracking with a novel feature, assisted navigation, allowing automatic catheter orientation and semiautomatic vessel cannulation. Eighteen users navigated a robotic catheter in an aortic aneurysm phantom using an EM guidewire and a modified 9F robotic catheter with EM sensors at the tip of both leader and sheath. All users cannulated two targets, the left renal artery and the posterior gate, using four visualization modes: (1) standard fluoroscopy (control). (2) 2D biplane fluoroscopy showing real-time virtual catheter localization and orientation from EM tracking. (3) 2D biplane fluoroscopy with the novel EM-assisted navigation, allowing the user to define the target vessel. The robotic catheter orients itself automatically toward the target; the user then only needs to advance the guidewire along this predefined optimized path to catheterize the vessel. Then, while the catheter is advanced over the wire, the assisted navigation automatically modifies catheter bending and rotation to ensure smooth progression, avoiding loss of wire access. (4) Virtual 3D representation of the phantom showing real-time virtual catheter localization and orientation. Standard fluoroscopy was always available; cannulation and fluoroscopy times were noted for every mode and target cannulation. Quality of catheter movement was assessed by measuring the number of submovements of the catheter using the 3D coordinates of the EM sensors. A t-test was used to compare the standard fluoroscopy mode against the EM tracking modes. EM tracking significantly reduced the mean fluoroscopy time (P < .001) and the number of submovements (P < .02) for both cannulation tasks. For the posterior gate, mean cannulation time was also significantly reduced when using EM tracking (P < .001). The use of the novel EM-assisted navigation feature (mode 3) further reduced cannulation time for the posterior gate (P = .002) and improved quality of catheter movement for left renal artery cannulation (P = .021). These results confirmed the findings of a prior study that highlighted the value of combining 3D robotic catheter control and 3D navigation to improve the safety and efficiency of endovascular procedures. The novel EM-assisted navigation feature augments the robotic master/slave concept with automated catheter orientation toward the target and shows promising results in reducing procedure time and improving catheter motion quality.
A visual tracking method based on deep learning without online model updating
NASA Astrophysics Data System (ADS)
Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei
2018-02-01
The paper proposes a visual tracking method based on deep learning without online model updating. Given the advantages of deep learning in feature representation, the deep detection model SSD (Single Shot MultiBox Detector) is used as the object extractor in the tracking model. The color histogram feature and the HOG (Histogram of Oriented Gradients) feature are then combined to select the tracked object. During tracking, a multi-scale object search map is built to improve the detection performance of the deep model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, the method was compared with six state-of-the-art methods and showed better robustness to challenging factors such as deformation, scale variation, rotation, illumination variation, and background clutter; its overall performance was also better than that of the six other trackers.
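The color-histogram-plus-HOG combination used to pick the tracked object among detector outputs can be sketched as a weighted similarity score over candidate boxes. The OpenCV-based sketch below assumes BGR input frames and an equal weighting; the SSD detector wiring and the paper's exact fusion rule are not reproduced.

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()  # default 64x128 detection window

def features(frame_bgr, box):
    """Color histogram + L2-normalized HOG vector for one box."""
    x, y, w, h = box
    patch = frame_bgr[y:y + h, x:x + w]
    hist = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    g = hog.compute(cv2.resize(gray, (64, 128))).flatten()
    return hist, g / (np.linalg.norm(g) + 1e-8)

def select_candidate(frame_bgr, template_feats, boxes, alpha=0.5):
    """Pick the detector box most similar to the target template.
    alpha is an assumed weight between color and HOG similarity."""
    t_hist, t_hog = template_feats
    scores = []
    for b in boxes:
        hist, g = features(frame_bgr, b)
        s_color = cv2.compareHist(t_hist, hist, cv2.HISTCMP_CORREL)
        scores.append(alpha * s_color + (1 - alpha) * float(t_hog @ g))
    return boxes[int(np.argmax(scores))]

# template_feats would be computed once from the initial target box,
# e.g. template_feats = features(first_frame, init_box).
```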
Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion
Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo
2017-01-01
In mobile augmented/virtual reality (AR/VR), real-time 6-Degree-of-Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of today's mobile terminals, the latency between consecutively arriving poses damages the user experience in mobile AR/VR. Thus, a visual-inertial-based real-time motion tracking method for mobile AR/VR is proposed in this paper. By means of high-frequency, passive outputs from the inertial sensor, real-time delivery of poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling real-time 6-DoF motion tracking by balancing jitter and latency. The robustness of traditional visual-only motion tracking is also enhanced, giving rise to better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing smooth and robust 6-DoF motion tracking for mobile AR/VR in real time. PMID:28475145
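A well-known example of an adaptive filter that trades jitter against latency in exactly this way is the 1-euro filter, which raises its low-pass cutoff as motion speeds up. The sketch below shows that generic filter applied per pose channel; it illustrates the idea and is not necessarily the authors' exact framework.

```python
import math

class OneEuroFilter:
    """Adaptive low-pass filter: heavy smoothing when the signal is
    slow (suppressing jitter), light smoothing when it is fast
    (cutting latency). Generic 1-euro filter, one scalar channel."""

    def __init__(self, freq, min_cutoff=1.0, beta=0.05, d_cutoff=1.0):
        self.freq = freq            # sample rate in Hz
        self.min_cutoff = min_cutoff
        self.beta = beta            # speed sensitivity (assumed tuning)
        self.d_cutoff = d_cutoff
        self.x_prev = self.dx_prev = None

    @staticmethod
    def _alpha(cutoff, freq):
        tau = 1.0 / (2 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * freq)

    def __call__(self, x):
        if self.x_prev is None:
            self.x_prev, self.dx_prev = x, 0.0
            return x
        dx = (x - self.x_prev) * self.freq
        a_d = self._alpha(self.d_cutoff, self.freq)
        dx_hat = a_d * dx + (1 - a_d) * self.dx_prev
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)  # speed-adaptive
        a = self._alpha(cutoff, self.freq)
        x_hat = a * x + (1 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat

# e.g. one filter instance per translation/rotation channel of the
# fused 6-DoF pose stream.
```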
Robotic System for MRI-Guided Stereotactic Neurosurgery
Li, Gang; Cole, Gregory A.; Shang, Weijian; Harrington, Kevin; Camilo, Alex; Pilitsis, Julie G.; Fischer, Gregory S.
2015-01-01
Stereotaxy is a neurosurgical technique that can take several hours to reach a specific target, typically utilizing a mechanical frame and guided by preoperative imaging. An error in any one of the numerous steps, or deviation of the target anatomy from the preoperative plan such as brain shift (up to 20 mm), may affect the targeting accuracy and thus the treatment effectiveness. Moreover, because the procedure is typically performed through a small burr hole opening in the skull that prevents tissue visualization, the intervention is essentially “blind” for the operator, with limited means of intraoperative confirmation, which may result in reduced accuracy and safety. The presented system is intended to address the clinical needs for enhanced efficiency, accuracy, and safety of image-guided stereotactic neurosurgery for Deep Brain Stimulation (DBS) lead placement. The work describes a magnetic resonance imaging (MRI)-guided, robotically actuated stereotactic neural intervention system for deep brain stimulation procedures, which offers the potential of reducing procedure duration while improving targeting accuracy and enhancing safety. This is achieved through simultaneous robotic manipulation of the instrument and interactively updated in situ MRI guidance that enables visualization of the anatomy and the interventional instrument. During simultaneous actuation and imaging, the system demonstrated less than 15% signal-to-noise ratio (SNR) variation and less than 0.20% geometric distortion artifact without affecting the imaging usability to visualize and guide the procedure. Optical tracking and MRI phantom experiments streamlined the clinical workflow of the prototype system and corroborated targeting accuracy, with a 3-axis root mean square error of 1.38 ± 0.45 mm in tip position and 2.03 ± 0.58° in insertion angle. PMID:25376035
Real-time target tracking and locating system for UAV
NASA Astrophysics Data System (ADS)
Zhang, Chao; Tang, Linbo; Fu, Huiquan; Li, Maowen
2017-07-01
In order to achieve real-time target tracking and locating for UAVs, a reliable processing system is built on an embedded platform. First, video imagery is acquired in real time by the electro-optical system on the UAV. When the target information is known, the KCF tracking algorithm is adopted to track the target. The servo is then controlled to rotate with the target; when the target is at the center of the image, the laser ranging module is activated to obtain the distance between the UAV and the target. Finally, the UAV flight parameters obtained from the BeiDou navigation system are combined with the ranging data in a target location algorithm to calculate the geodetic coordinates of the target. The results show that the system is stable for real-time tracking and positioning of targets.
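KCF is available off the shelf in OpenCV's contrib tracking module, so the tracking loop described above can be sketched as follows; the video source and target initialization are placeholders.

```python
import cv2

cap = cv2.VideoCapture("uav_feed.mp4")   # placeholder video source
ok, frame = cap.read()
bbox = cv2.selectROI("init", frame)      # or a known target box
# Requires opencv-contrib; some builds expose this as
# cv2.legacy.TrackerKCF_create() instead.
tracker = cv2.TrackerKCF_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # The offset of the box centre from the image centre would
        # drive the servo loop; laser ranging plus BeiDou flight
        # parameters then feed the geolocation step.
    cv2.imshow("track", frame)
    if cv2.waitKey(1) == 27:             # Esc to quit
        break
```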
NASA Technical Reports Server (NTRS)
Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)
1998-01-01
Several common defects that we have sought to minimize in immersive virtual environments are static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.
Robust visual tracking via multiple discriminative models with object proposals
NASA Astrophysics Data System (ADS)
Zhang, Yuanqiang; Bi, Duyan; Zha, Yufei; Li, Huanyu; Ku, Tao; Wu, Min; Ding, Wenshan; Fan, Zunlin
2018-04-01
Model drift is an important cause of tracking failure. In this paper, multiple discriminative models with object proposals are used to improve model discrimination and relieve this problem. First, changes in target location and scale are captured by a large set of high-quality object proposals, which are represented by deep convolutional features for target semantics. Then, by sharing a feature map obtained from a pre-trained network, ROI pooling is exploited to warp the variously sized object proposals into vectors of the same length, from which a discriminative model can be learned conveniently. Lastly, historical snapshots of these vectors are used to train models with different lifetimes. Based on an entropy decision mechanism, a model corrupted by drift can be corrected by selecting the best discriminative model, which improves the robustness of the tracker significantly. We extensively evaluate our tracker on two popular benchmarks, the OTB 2013 benchmark and the UAV20L benchmark. On both benchmarks, our tracker achieves the best precision and success rate compared with state-of-the-art trackers.
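The entropy decision step can be sketched as scoring each snapshot model's response distribution over the current proposals and keeping the most confident (lowest-entropy) expert. The criterion below is a plausible reading of the abstract, not the paper's exact formula.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a nonnegative score vector, normalized."""
    p = p / (p.sum() + eps)
    return -np.sum(p * np.log(p + eps))

def select_model(models, proposal_feats):
    """models: list of callables mapping (N, D) features -> (N,) scores.
    Returns the index of the snapshot model whose score distribution
    over the proposals is most peaked (lowest entropy), i.e. the
    expert least affected by drift under this simple criterion."""
    ents = []
    for m in models:
        scores = np.maximum(m(proposal_feats), 0.0)  # clamp to >= 0
        ents.append(entropy(scores))
    return int(np.argmin(ents))
```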
Differences in grip force control between young and late middle-aged adults.
Zheng, Lianrong; Li, Kunyang; Wang, Qian; Chen, Wenhui; Song, Rong; Liu, Guanzheng
2017-09-01
Grip force control is a crucial function for humans and a determinant of quality of life. To examine the effects of age on grip force control, 10 young adults and 11 late middle-aged adults participated in visually guided tracking tasks using different target force levels (25, 50, and 75% of the subject's maximal grip force). Multiple measures were used to evaluate the tracking performance during the force rising phase and the force maintenance phase. The measurements include the rise time, fuzzy entropy, mean force percentage, coefficient of variation, and target deviation ratio. The results show that the maximal grip force was significantly lower in the late middle-aged adults than in the young adults. The time of the rising phase was systematically longer among late middle-aged adults. Fuzzy entropy is a useful indicator for quantifying the force variability of the grip force signal at higher force levels. These results suggest that the late middle-aged adults applied a compensatory strategy that allows sufficient time to reach the required grip force and reduces the impact of the early and subtle degenerative changes in hand motor function.
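Fuzzy entropy of a force trace follows directly from its standard definition (Chen et al.'s FuzzyEn). A minimal sketch, with typical default parameters rather than the study's reported settings:

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=None, n=2):
    """FuzzyEn of a 1-D signal: higher values = less regular force.
    m: embedding dimension, r: tolerance (default 0.2 * SD), n: fuzzy
    exponent. Parameters are common defaults, not the study's values."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()

    def phi(m):
        # Baseline-removed embedding vectors of length m.
        N = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(N)])
        emb -= emb.mean(axis=1, keepdims=True)
        # Pairwise Chebyshev distances -> fuzzy similarity degree.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        sim = np.exp(-(d ** n) / r)
        np.fill_diagonal(sim, 0.0)
        return sim.sum() / (N * (N - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))
```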
Williamson, Ross S.; Hancock, Kenneth E.; Shinn-Cunningham, Barbara G.; Polley, Daniel B.
2015-01-01
Active search is a ubiquitous goal-driven behavior wherein organisms purposefully investigate the sensory environment to locate a target object. During active search, brain circuits analyze a stream of sensory information from the external environment, adjusting for internal signals related to self-generated movement or “top-down” weighting of anticipated target and distractor properties. Sensory responses in the cortex can be modulated by internal state [1–9], though the extent and form of modulation arising in the cortex de novo versus an inheritance from subcortical stations is not clear [4, 8–12]. We addressed this question by simultaneously recording from auditory and visual regions of the thalamus (MG and LG, respectively) while mice used dynamic auditory or visual feedback to search for a hidden target within an annular track. Locomotion was associated with strongly suppressed responses and reduced decoding accuracy in MG but a subtle increase in LG spiking. Because stimuli in one modality provided critical information about target location while the other served as a distractor, we could also estimate the importance of task relevance in both thalamic subdivisions. In contrast to the effects of locomotion, we found that LG responses were reduced overall yet decoded stimuli more accurately when vision was behaviorally relevant, whereas task relevance had little effect on MG responses. This double dissociation between the influences of task relevance and movement in MG and LG highlights a role for extrasensory modulation in the thalamus but also suggests key differences in the organization of modulatory circuitry between the auditory and visual pathways. PMID:26119749
NASA Technical Reports Server (NTRS)
Krauzlis, R. J.; Stone, L. S.
1999-01-01
The two components of voluntary tracking eye-movements in primates, pursuit and saccades, are generally viewed as relatively independent oculomotor subsystems that move the eyes in different ways using independent visual information. Although saccades have long been known to be guided by visual processes related to perception and cognition, only recently have psychophysical and physiological studies provided compelling evidence that pursuit is also guided by such higher-order visual processes, rather than by the raw retinal stimulus. Pursuit and saccades also do not appear to be entirely independent anatomical systems, but involve overlapping neural mechanisms that might be important for coordinating these two types of eye movement during the tracking of a selected visual object. Given that the recovery of objects from real-world images is inherently ambiguous, guiding both pursuit and saccades with perception could represent an explicit strategy for ensuring that these two motor actions are driven by a single visual interpretation.
Visual-auditory integration for visual search: a behavioral study in barn owls
Hazan, Yael; Kra, Yonatan; Yarin, Inna; Wagner, Hermann; Gutfreund, Yoram
2015-01-01
Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine if the integration of visual and auditory signals contributes to the process of guiding attention toward salient stimuli. We attached miniature wireless video cameras on barn owls’ heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam’s video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely toward the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target. PMID:25762905
First saccadic eye movement reveals persistent attentional guidance by implicit learning
Jiang, Yuhong V.; Won, Bo-Yeong; Swallow, Khena M.
2014-01-01
Implicit learning about where a visual search target is likely to appear often speeds up search. However, whether implicit learning guides spatial attention or affects post-search decisional processes remains controversial. Using eye tracking, this study provides compelling evidence that implicit learning guides attention. In a training phase, participants often found the target in a high-frequency, “rich” quadrant of the display. When subsequently tested in a phase during which the target was randomly located, participants were twice as likely to direct the first saccadic eye movement to the previously rich quadrant than to any of the sparse quadrants. The attentional bias persisted for nearly 200 trials after training and was unabated by explicit instructions to distribute attention evenly. We propose that implicit learning guides spatial attention but in a qualitatively different manner than goal-driven attention. PMID:24512610
DOT National Transportation Integrated Search
1971-07-01
A previous CAMI laboratory investigation showed that alcohol impairs the ability of men to suppress vestibular nystagmus while visually fixating on a cockpit instrument, thus degrading visual tracking performance (eye-hand coordination) during angula...
Ruckenstein, Minna
2015-01-01
This chapter demonstrates how ethnographically-oriented research on emergent technologies, in this case self-tracking technologies, adds to Techno-Anthropology's aims of understanding techno-engagements and solving problems that deal with human-technology relations within and beyond health informatics. Everyday techno-relations have been a long-standing research interest in anthropology, underlining the necessity of empirical engagement with the ways in which people and technologies co-construct their daily conditions. By focusing on the uses of a food tracking application, MealLogger, designed for photographing meals and visualizing eating rhythms to share with health care professionals, the chapter details how personal data streams support and challenge health care practices. The interviewed professionals, from doctors to nutritionists, have used food tracking for treating patients with eating disorders, weight problems, and mental health issues. In general terms, self-tracking advances the practices of visually and temporally documenting, retrieving, communicating, and understanding physical and mental processes and, by doing so, it offers a new kind of visual mediation. The professionals point out how a visual food journal opens a window onto everyday life, bypassing customary ways of seeing and treating patients, thereby highlighting how self-tracking practices can aid in escaping the clinical gaze by promoting a new kind of communication through visualization and narration. Health care professionals are also, however, acutely aware of the barriers to adopting self-tracking practices as part of existing patient care. The health care system is neither used to, nor comfortable with, personal data that originates outside the system; it is not seen as evidence and its institutional position remains insecure.
Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.
Xiong, Chunshui; Huang, Lei; Liu, Changping
2014-01-01
Most existing vision-based methods for gaze tracking need a tedious calibration process, in which subjects are required to fixate on a specific point or several specific points in space. However, such cooperation is hard to obtain, especially from children and infants. In this paper, a new calibration-free gaze tracking system and method is presented for automatic measurement of visual acuity in human infants. To our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflection (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM) is employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust, and sufficient for the measurement of visual acuity in human infants.
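Gaze-behavior classification with an offline-trained GMM can be sketched with scikit-learn by fitting one mixture per labeled behavior and classifying by likelihood; the class structure and component count below are assumptions, since the paper publishes no code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gaze_models(feats_by_class, n_components=3, seed=0):
    """Fit one GMM per labeled gaze behavior (e.g. 'fixating target'
    vs. 'not fixating') on PCCR-derived feature vectors collected
    offline from subjects with healthy eyes. Class names and the
    component count are illustrative assumptions."""
    return {label: GaussianMixture(n_components, random_state=seed).fit(X)
            for label, X in feats_by_class.items()}

def classify(models, X):
    """Assign each feature row to the class whose GMM gives the
    highest log-likelihood."""
    labels = list(models)
    ll = np.column_stack([models[k].score_samples(X) for k in labels])
    return [labels[i] for i in np.argmax(ll, axis=1)]
```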
Monfort, Matthias; Furlong, Eileen E M; Girardot, Charles
2017-07-15
Visualization of genomic data is fundamental for gaining insights into genome function. Yet, co-visualization of a large number of datasets remains a challenge in all popular genome browsers, and the development of new visualization methods is needed to improve the usability and user experience of genome browsers. We present Dynamix, a JBrowse plugin that enables the parallel inspection of hundreds of genomic datasets. Dynamix takes advantage of a priori knowledge to automatically display data tracks with signal within a genomic region of interest. As the user navigates through the genome, Dynamix automatically updates data tracks and limits all manual operations otherwise needed to adjust the data visible on screen. Dynamix also introduces a new carousel view that optimizes screen utilization by enabling users to independently scroll through groups of tracks. Dynamix is hosted at http://furlonglab.embl.de/Dynamix. Contact: charles.girardot@embl.de. Supplementary data are available at Bioinformatics online.
LEA Detection and Tracking Method for Color-Independent Visual-MIMO
Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo
2016-01-01
Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement. PMID:27384563
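The Kalman prediction step used to keep tracking the LEA when the relative camera position varies can be sketched as a constant-velocity filter over the detected array centre; OpenCV's cv2.KalmanFilter is shown with assumed noise settings.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter over the LEA centre: state is
# (x, y, vx, vy), measurement is the detected (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)      # assumed tuning
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)  # assumed tuning

def track_step(detection):
    """detection: (x, y) from the Harris-corner-based LEA detector,
    or None when detection fails; the prediction then bridges the gap
    and defines where to search (and un-distort) next."""
    pred = kf.predict()[:2].ravel()
    if detection is not None:
        kf.correct(np.array(detection, np.float32).reshape(2, 1))
    return pred
```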
Fly eye radar or micro-radar sensor technology
NASA Astrophysics Data System (ADS)
Molchanov, Pavlo; Asmolova, Olga
2014-05-01
To compensate for its inability to point its eyes at a target, the fly's visual system consists of multiple angularly spaced sensors, giving the fly the wide-area visual coverage it needs to detect and avoid the threats around it. Based on a similar concept, a revolutionary new micro-radar sensor technology is proposed for detecting and tracking ground and/or airborne low-profile, low-altitude targets in harsh urban environments. Distributed along a border or around a protected object (military facilities and buildings, camps, stadiums), small-size, low-power unattended radar sensors can be used for target detection and tracking, threat warning, and pre-shot sniper protection, and provide effective support for homeland security. In addition, the technology can provide 3D recognition and target classification because it delivers five orders of magnitude more pulses to each point in space than any scanning radar, by using a few points of view, diversity signals, and intelligent processing. The application of an array of directional antennas eliminates the need for a mechanical scanning antenna or phase processor. It radically decreases radar size and increases bearing accuracy severalfold. The proposed micro-radar sensors can be easily connected to one or several operators by point-to-point, covert, protected communication. The directional antennas have higher gain, can be multi-frequency, and can be connected to a multi-functional network. Fly eye micro-radars are inexpensive, can be expendable, and will reduce the cost of defense.
Adaptive block online learning target tracking based on super pixel segmentation
NASA Astrophysics Data System (ADS)
Cheng, Yue; Li, Jianzeng
2018-04-01
Video target tracking has made great progress through sustained research effort, but many problems remain unsolved. This paper proposes a new target tracking algorithm based on image segmentation technology. First, we divide the selected region using the simple linear iterative clustering (SLIC) algorithm; we then group the area into sub-blocks with an improved density-based spatial clustering of applications with noise (DBSCAN) algorithm. Each sub-block independently trains a classifier and is tracked; the algorithm then discards sub-blocks whose tracking fails and reintegrates the remaining sub-blocks into the tracking box to complete target tracking. The experimental results show that, compared with current mainstream algorithms, our algorithm works effectively under occlusion, rotation change, scale change, and many other challenges in target tracking.
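The segmentation front end, SLIC superpixels grouped into sub-blocks by DBSCAN, can be sketched with scikit-image and scikit-learn; the clustering features and parameters below are assumptions, and the paper's DBSCAN improvement is not reproduced.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import DBSCAN

def blocks_from_region(region_rgb, n_segments=100, eps=0.15):
    """Split the selected target region into sub-blocks: SLIC
    superpixels, then DBSCAN over (mean color, normalized centroid)
    features. Feature choice and eps are illustrative assumptions."""
    labels = slic(region_rgb, n_segments=n_segments, compactness=10)
    h, w = labels.shape
    feats = []
    for s in np.unique(labels):
        mask = labels == s
        ys, xs = np.nonzero(mask)
        color = region_rgb[mask].mean(axis=0) / 255.0
        pos = [ys.mean() / h, xs.mean() / w]
        feats.append(np.concatenate([color, pos]))
    clusters = DBSCAN(eps=eps, min_samples=2).fit_predict(np.array(feats))
    return labels, clusters  # cluster -1 marks noise superpixels
```

Each resulting cluster of superpixels would then receive its own classifier and be tracked independently, as described above.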
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ge, Y; Keall, P; Poulsen, P
Purpose: Multiple targets with large intrafraction independent motion are often involved in advanced prostate, lung, abdominal, and head and neck cancer radiotherapy. The current standard of care treats these with the originally planned fields, jeopardizing treatment outcomes. A real-time multi-leaf collimator (MLC) tracking method has been developed to address this problem for the first time. This study evaluates the geometric uncertainty of the multi-target tracking method. Methods: Four treatment scenarios are simulated based on a prostate IMAT plan to treat a moving prostate target and a static pelvic node target: 1) real-time multi-target MLC tracking; 2) real-time prostate-only MLC tracking; 3) correcting for prostate interfraction motion at setup only; and 4) no motion correction. The geometric uncertainty of the treatment is assessed by the sum of the erroneously underexposed target area and overexposed healthy tissue areas for each individual target. Two patient-measured prostate trajectories of average 2 and 5 mm motion magnitude are used for simulations. Results: Real-time multi-target tracking accumulates the least uncertainty overall. As expected, it covers the static nodes similarly well as the no-motion-correction treatment and covers the moving prostate similarly well as real-time prostate-only tracking. Multi-target tracking reduces >90% of uncertainty for the static nodal target compared to real-time prostate-only tracking or interfraction motion correction. For the prostate target, depending on the motion trajectory, which affects the uncertainty due to leaf fitting, multi-target tracking may or may not perform better than correcting for interfraction prostate motion by shifting the patient at setup, but it reduces ∼50% of uncertainty compared to no motion correction. Conclusion: The developed real-time multi-target MLC tracking can adapt to independently moving targets better than other available treatment adaptations. This will enable PTV margin reduction to minimize healthy tissue toxicity while maintaining tumor coverage when treating advanced disease with independently moving targets. The authors acknowledge funding support from the Australian NHMRC Australia Fellowship and NHMRC Project Grant No. APP1042375.
Multi-camera real-time three-dimensional tracking of multiple flying animals
Straw, Andrew D.; Branson, Kristin; Neumann, Titus R.; Dickinson, Michael H.
2011-01-01
Automated tracking of animal movement allows analyses that would not otherwise be possible by providing great quantities of data. The additional capability of tracking in real time—with minimal latency—opens up the experimental possibility of manipulating sensory feedback, thus allowing detailed explorations of the neural basis for control of behaviour. Here, we describe a system capable of tracking the three-dimensional position and body orientation of animals such as flies and birds. The system operates with less than 40 ms latency and can track multiple animals simultaneously. To achieve these results, a multi-target tracking algorithm was developed based on the extended Kalman filter and the nearest neighbour standard filter data association algorithm. In one implementation, an 11-camera system is capable of tracking three flies simultaneously at 60 frames per second using a gigabit network of nine standard Intel Pentium 4 and Core 2 Duo computers. This manuscript presents the rationale and details of the algorithms employed and shows three implementations of the system. An experiment was performed using the tracking system to measure the effect of visual contrast on the flight speed of Drosophila melanogaster. At low contrasts, speed is more variable and faster on average than at high contrasts. Thus, the system is already a useful tool to study the neurobiology and behaviour of freely flying animals. If combined with other techniques, such as ‘virtual reality’-type computer graphics or genetic manipulation, the tracking system would offer a powerful new way to investigate the biology of flying animals. PMID:20630879
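The nearest neighbour standard filter association step can be sketched as a greedy, gated match between per-target Kalman predictions and the current frame's detections; the gating threshold below is an assumption.

```python
import numpy as np

def nn_associate(predictions, detections, gate=50.0):
    """Nearest-neighbour data association: each predicted target
    position (from its extended Kalman filter) greedily claims the
    closest unused detection within a gating radius. Units and the
    gate value are illustrative assumptions."""
    pred = np.asarray(predictions, float)
    det = np.asarray(detections, float)
    if len(pred) == 0 or len(det) == 0:
        return {}
    pairs = {}
    # All pairwise distances, matched globally-closest first.
    dist = np.linalg.norm(pred[:, None, :] - det[None, :, :], axis=2)
    for _ in range(min(len(pred), len(det))):
        i, j = np.unravel_index(np.argmin(dist), dist.shape)
        if dist[i, j] > gate:
            break
        pairs[i] = j
        dist[i, :] = np.inf   # target i is matched
        dist[:, j] = np.inf   # detection j is consumed
    return pairs  # unmatched targets coast on their filter prediction
```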
Computer-aided target tracking in motion analysis studies
NASA Astrophysics Data System (ADS)
Burdick, Dominic C.; Marcuse, M. L.; Mislan, J. D.
1990-08-01
Motion analysis studies require the precise tracking of reference objects in sequential scenes. In a typical situation, events of interest are captured at high frame rates using special cameras, and selected objects or targets are tracked on a frame by frame basis to provide necessary data for motion reconstruction. Tracking is usually done using manual methods which are slow and prone to error. A computer based image analysis system has been developed that performs tracking automatically. The objective of this work was to eliminate the bottleneck due to manual methods in high volume tracking applications such as the analysis of crash test films for the automotive industry. The system has proven to be successful in tracking standard fiducial targets and other objects in crash test scenes. Over 95 percent of target positions which could be located using manual methods can be tracked by the system, with a significant improvement in throughput over manual methods. Future work will focus on the tracking of clusters of targets and on tracking deformable objects such as airbags.
Chouinard, Philippe A.; Peel, Hayden J.; Landry, Oriane
2017-01-01
The closer a line extends toward a surrounding frame, the longer it appears. This is known as a framing effect. Over 70 years ago, Teodor Künnapas demonstrated that the shape of the visual field itself can act as a frame to influence the perceived length of lines in the vertical-horizontal illusion. This illusion is typically created by having a vertical line rise from the center of a horizontal line of the same length creating an inverted T figure. We aimed to determine if the degree to which one fixates on a spatial location where the two lines bisect could influence the strength of the illusion, assuming that the framing effect would be stronger when the retinal image is more stable. We performed two experiments: the visual-field and vertical-horizontal illusion experiments. The visual-field experiment demonstrated that the participants could discriminate a target more easily when it was presented along the horizontal vs. vertical meridian, confirming a framing influence on visual perception. The vertical-horizontal illusion experiment determined the effects of orientation, size and eye gaze on the strength of the illusion. As predicted, the illusion was strongest when the stimulus was presented in either its standard inverted T orientation or when it was rotated 180° compared to other orientations, and in conditions in which the retinal image was more stable, as indexed by eye tracking. Taken together, we conclude that the results provide support for Teodor Künnapas’ explanation of the vertical-horizontal illusion. PMID:28392764
Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma.
Kasneci, Enkelejda; Black, Alex A; Wood, Joanne M
2017-01-01
To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. We discuss novel approaches for studying the visual search behavior of individuals with glaucoma that have the potential to assist individuals with glaucoma, through the use of personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior.
Hydrophobic pocket targeting probes for enteroviruses.
Martikainen, Mari; Salorinne, Kirsi; Lahtinen, Tanja; Malola, Sami; Permi, Perttu; Häkkinen, Hannu; Marjomäki, Varpu
2015-11-07
Visualization and tracking of viruses without compromising their functionality is crucial in order to understand virus targeting to cells and tissues, and to understand the subsequent subcellular steps leading to virus uncoating and replication. Enteroviruses are important human pathogens causing a vast number of acute infections, and are also suggested to contribute to the development of chronic diseases like type I diabetes. Here, we demonstrate a novel method to target site-specifically the hydrophobic pocket of enteroviruses. A probe, a derivative of Pleconaril, was developed and conjugated to various labels that enabled the visualization of enteroviruses under light and electron microscopes. The probe mildly stabilized the virus particle by increasing the melting temperature by 1-3 degrees, and caused a delay in the uncoating of the virus in the cellular endosomes, but could not however inhibit the receptor binding, cellular entry or infectivity of the virus. The hydrophobic pocket binding moiety of the probe was shown to bind to echovirus 1 particle by STD and tr-NOESY NMR methods. Furthermore, binding to echovirus 1 and Coxsackievirus A9, and to a lesser extent to Coxsackie virus B3 was verified by using a gold nanocluster labeled probe by TEM analysis. Molecular modelling suggested that the probe fits the hydrophobic pockets of EV1 and CVA9, but not of CVB3 as expected, correlating well with the variations in the infectivity and stability of the virus particles. EV1 conjugated to the fluorescent dye labeled probe was efficiently internalized into the cells. The virus-fluorescent probe conjugate accumulated in the cytoplasmic endosomes and caused infection starting from 6 hours onwards. Remarkably, before and during the time of replication, the fluorescent probe was seen to leak from the virus-positive endosomes and thus separate from the capsid proteins that were left in the endosomes. These results suggest that, like the physiological hydrophobic content, the probe may be released upon virus uncoating. Our results collectively thus show that the gold and fluorescently labeled probes may be used to track and visualize the studied enteroviruses during the early phases of infection opening new avenues to follow virus uncoating in cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myhra, S., E-mail: sverre.myhra@materials.ox.ac.uk; Chakalova, R.; Falzone, N.
A method for detection and characterization of single MeV α-particle and recoil tracks in PMMA photoresist by atomic force microscopy (AFM) analysis has been demonstrated. The energy deposition along the track is shown to lead to a latent pattern in the resist due to contrast reversal. It has been shown that the pattern, consisting of conical spikes, can be developed by conventional processing as a result of the dissolution rate of poly(methyl methacrylate) (PMMA) being greater than that for the modified material in the cylindrical volume of the track core. The spikes can be imaged and counted by routine AFM analysis. Investigations by angular-resolved near-grazing incidence reveal additional tracks that correspond to recoil tracks. The observations have been correlated with modelling, and shown to be in qualitative agreement with prevailing descriptions of collision cascades. The results may be relevant to technologies that are based on detection and characterization of single energetic ions. In particular, the direct visualization of the collision cascade may allow more accurate estimates of the actual interaction volume, which in turn will permit more precise assessment of dose distribution of α-emitting radionuclides used for targeted radiotherapy. The results could also be relevant to other diagnostic or process technologies based on interaction of energetic ions with matter.
Space-based IR tracking bias removal using background star observations
NASA Astrophysics Data System (ADS)
Clemons, T. M., III; Chang, K. C.
2009-05-01
This paper provides the results of a proposed methodology for removing sensor bias from a space-based infrared (IR) tracking system through the use of stars detected in the background field of the tracking sensor. The tracking system consists of two satellites flying in a lead-follower formation tracking a ballistic target. Each satellite is equipped with a narrow-view IR sensor that provides azimuth and elevation to the target. The tracking problem is made more difficult by a constant or slowly varying bias error present in each sensor's line-of-sight measurements. As known stars are detected during the target tracking process, the instantaneous sensor pointing error can be calculated as the difference between the star detection reading and the known position of the star. The system then utilizes a separate bias filter to estimate the bias value based on these detections and corrects the target line-of-sight measurements to improve the target state vector. The target state vector is estimated through a Linearized Kalman Filter (LKF) for the highly nonlinear problem of tracking a ballistic missile. Scenarios are created using Satellite Toolkit© for trajectories with associated sensor observations. Mean square error results are given for tracking during the period when the target is in view of the satellite IR sensors. The results of this research provide a potential solution to bias correction while simultaneously tracking a target.
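The core of the star-based correction, estimating a slowly varying line-of-sight bias from star sightings and subtracting it from the target measurements, can be sketched as a scalar random-walk Kalman update. The following Python sketch is illustrative only; the function name, the random-walk bias model, and all noise values are assumptions rather than the paper's implementation.

```python
import numpy as np

def bias_update(bias_est, bias_var, star_meas, star_cat, meas_var, q=1e-8):
    """One Kalman update of a sensor line-of-sight bias (az/el, radians).

    star_meas: measured direction of a catalog star; star_cat: its known
    catalog direction. Their difference is an instantaneous sample of the
    pointing bias, assumed here to follow a slow random walk (assumption).
    """
    z = np.asarray(star_meas) - np.asarray(star_cat)  # observed bias sample
    bias_var = bias_var + q                           # random-walk prediction
    k = bias_var / (bias_var + meas_var)              # scalar Kalman gain
    bias_est = bias_est + k * (z - bias_est)          # refined bias estimate
    bias_var = (1.0 - k) * bias_var
    return bias_est, bias_var

# Target measurements would then be corrected before the tracking filter:
# corrected_az_el = raw_az_el - bias_est
```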
Discriminative correlation filter tracking with occlusion detection
NASA Astrophysics Data System (ADS)
Zhang, Shuo; Chen, Zhong; Yu, XiPeng; Zhang, Ting; He, Jing
2018-03-01
Aiming at the problem that correlation filter-based tracking algorithms cannot track a target under severe occlusion, a target re-detection mechanism is proposed. First, building on ECO, we propose a multi-peak detection model that uses the response value to distinguish occlusion from deformation during tracking, which improves the success rate of tracking. We then add a confidence model to the update mechanism to effectively prevent model drift caused by similar targets or background during the tracking process. Finally, a re-detection mechanism is added, and relocation is performed after the target is lost, which increases the accuracy of target positioning. The experimental results demonstrate that the proposed tracker performs favorably against state-of-the-art methods in terms of robustness and accuracy.
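As a rough illustration of how a multi-peak test and a confidence gate on a correlation response map can be combined, consider the sketch below. The paper's exact criteria and thresholds are not given here, so the function name, thresholds, and neighborhood size are all assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def occlusion_flags(response, peak_ratio_thresh=0.6, conf_thresh=0.25):
    """Heuristic multi-peak / low-confidence test on a response map.

    A strong secondary peak suggests an ambiguous scene (occlusion or a
    similar distractor); a weak main peak suggests the target is lost.
    Either flag can be used to skip the model update and avoid drift.
    """
    main = response.max()
    # Local maxima of the response map within a 5x5 neighborhood.
    peaks = (response == maximum_filter(response, size=5)) & (response > 0)
    vals = np.sort(response[peaks])[::-1]
    second = vals[1] if vals.size > 1 else 0.0
    multi_peak = second / (main + 1e-12) > peak_ratio_thresh
    low_conf = main < conf_thresh
    return multi_peak, low_conf
```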
A particle filter for multi-target tracking in track before detect context
NASA Astrophysics Data System (ADS)
Amrouche, Naima; Khenchaf, Ali; Berkani, Daoud
2016-10-01
The track-before-detect (TBD) approach can be used to track a single target in a highly noisy radar scene. This is because it makes use of unthresholded observations and incorporates a binary target existence variable into its target state estimation process when implemented as a particle filter (PF). This paper proposes a recursive PF-TBD approach to detect multiple targets at low signal-to-noise ratios (SNR). The algorithm's successful performance is demonstrated using a simulated two-target example.
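A minimal sketch of one PF-TBD recursion with a binary existence variable is shown below, reduced to a 1-D intensity frame for brevity. The motion model, birth/death probabilities, and the intensity-as-likelihood choice are simplifying assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_tbd_step(pos, exist, frame, p_birth=0.05, p_death=0.05, q=1.0):
    """One PF-TBD step on a 1-D intensity frame (illustrative sketch).

    pos: (N,) particle positions; exist: (N,) binary existence variables.
    Particles are weighted by unthresholded pixel intensity, so no hard
    detection threshold is ever applied -- the defining TBD property.
    """
    u = rng.random(exist.size)
    # Markov birth/death transition of the existence variable.
    exist = np.where(exist == 1, u > p_death, u < p_birth).astype(int)
    pos = pos + exist * rng.normal(0.0, q, pos.size)   # motion model
    idx = np.clip(pos.astype(int), 0, frame.size - 1)
    bg = frame.mean()                                  # background level
    w = np.where(exist == 1, frame[idx], bg) + 1e-9    # TBD likelihood
    w /= w.sum()
    pick = rng.choice(pos.size, pos.size, p=w)         # resample
    return pos[pick], exist[pick]
```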
Sensor Fusion of Gaussian Mixtures for Ballistic Target Tracking in the Re-Entry Phase
Lu, Kelin; Zhou, Rui
2016-01-01
A sensor fusion methodology for the Gaussian mixture model is proposed for ballistic target tracking with unknown ballistic coefficients. To improve the estimation accuracy, a track-to-track fusion architecture is proposed to fuse tracks provided by the local interacting multiple model filters. During the fusion process, duplicate information is removed by considering the first-order redundant information between the local tracks. With extensive simulations, we show that the proposed algorithm improves tracking accuracy in re-entry phase ballistic target tracking applications. PMID:27537883
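The paper's first-order redundancy removal is specific to its derivation, but the track-to-track fusion step it plugs into can be illustrated with covariance intersection, a standard rule that stays consistent under unknown common information between local tracks. This is a substitute shown for illustration, not the authors' method.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w=0.5):
    """Fuse two local track estimates (mean, covariance) with CI.

    The weight w could be chosen, e.g., to minimize the trace of the
    fused covariance; a fixed w=0.5 is used here for simplicity.
    """
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    Pf = np.linalg.inv(w * P1i + (1.0 - w) * P2i)   # fused covariance
    xf = Pf @ (w * P1i @ x1 + (1.0 - w) * P2i @ x2) # fused state
    return xf, Pf
```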
Testing of Visual Field with Virtual Reality Goggles in Manual and Visual Grasp Modes
Wroblewski, Dariusz; Francis, Brian A.; Sadun, Alfredo; Vakili, Ghazal; Chopra, Vikas
2014-01-01
Automated perimetry is used for the assessment of visual function in a variety of ophthalmic and neurologic diseases. We report development and clinical testing of a compact, head-mounted, and eye-tracking perimeter (VirtualEye) that provides a more comfortable test environment than the standard instrumentation. VirtualEye performs the equivalent of a full threshold 24-2 visual field in two modes: (1) manual, with patient response registered with a mouse click, and (2) visual grasp, where the eye tracker senses change in gaze direction as evidence of target acquisition. 59 patients successfully completed the test in manual mode and 40 in visual grasp mode, with 59 undergoing the standard Humphrey field analyzer (HFA) testing. Large visual field defects were reliably detected by VirtualEye. Point-by-point comparison between the results obtained with the different modalities indicates: (1) minimal systematic differences between measurements taken in visual grasp and manual modes, (2) the average standard deviation of the difference distributions of about 5 dB, and (3) a systematic shift (of 4–6 dB) to lower sensitivities for VirtualEye device, observed mostly in high dB range. The usability survey suggested patients' acceptance of the head-mounted device. The study appears to validate the concepts of a head-mounted perimeter and the visual grasp mode. PMID:25050326
JOHN, KEVIN K.; JENSEN, JAKOB D.; KING, ANDY J.; RATCLIFF, CHELSEA L.; GROSSMAN, DOUGLAS
2017-01-01
Skin self-examination (SSE) consists of routinely checking the body for atypical moles that might be cancerous. Identifying atypical moles is a visual task; thus, SSE training materials utilize pattern-focused visuals to cultivate this skill. Despite widespread use, researchers have yet to explicate how pattern-focused visuals cultivate visual skill. Using eye tracking to capture the visual scanpaths of a sample of laypersons (N = 92), the current study employed a 2 (pattern: ABCDE vs. ugly duckling sign [UDS]) × 2 (presentation: photorealistic images vs. illustrations) factorial design to assess whether and how pattern-focused visuals can increase layperson accuracy in identifying atypical moles. Overall, illustrations resulted in greater sensitivity, while photos resulted in greater specificity. The UDS × photorealistic condition showed greatest specificity. For those in the photo condition with high self-efficacy, UDS increased specificity directly. For those in the photo condition with self-efficacy levels at the mean or lower, there was a conditional indirect effect such that these individuals spent a larger amount of their viewing time observing the atypical moles, and time on target was positively related to specificity. Illustrations provided significant gains in specificity for those with low-to-moderate self-efficacy by increasing total fixation time on the atypical moles. Findings suggest that maximizing visual processing efficiency could enhance existing SSE training techniques. PMID:28759333
360-Degree Visual Detection and Target Tracking on an Autonomous Surface Vehicle
2010-12-01
the fixed asset on successive passes through the patrol region. For example, Perera and Hoogs (2004) offer a change detection solution that operates... parameters used for the HVAP mission. We marginalize over the conditional dependence on the range: $P^k_{d,j} = \int_{-\infty}^{\infty} f_d(\rho)\, f_N(\rho \mid \hat{\rho}^k_j, \sigma^2_{\rho^k_j})\, d\rho$, (1) where... observation volume and $\sigma$ is the standard deviation of the innovation. To make the parameter a constant, for our application we further simplify with the
2014-07-25
ISS040-E-079355 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson (foreground), Expedition 40 commander; and European Space Agency astronaut Alexander Gerst, flight engineer, conduct a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.
2014-07-25
ISS040-E-079129 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson (left), Expedition 40 commander; and European Space Agency astronaut Alexander Gerst, flight engineer, conduct a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.
2014-07-25
ISS040-E-079910 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson (left), Expedition 40 commander; and European Space Agency astronaut Alexander Gerst, flight engineer, conduct a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.
2014-07-25
ISS040-E-079332 (25 July 2014) --- In the International Space Station's Kibo laboratory, NASA astronaut Steve Swanson (foreground), Expedition 40 commander; and European Space Agency astronaut Alexander Gerst, flight engineer, conduct a session with a trio of soccer-ball-sized robots known as the Synchronized Position Hold, Engage, Reorient, Experimental Satellites, or SPHERES. The free-flying robots were equipped with stereoscopic goggles called the Visual Estimation and Relative Tracking for Inspection of Generic Objects, or VERTIGO, to enable the SPHERES to perform relative navigation based on a 3D model of a target object.
A simple and rapid method for high-resolution visualization of single-ion tracks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omichi, Masaaki; Center for Collaborative Research, Anan National College of Technology, Anan, Tokushima 774-0017; Choi, Wookjin
2014-11-15
Prompt determination of spatial points of single-ion tracks plays a key role in high-energy particle induced-cancer therapy and gene/plant mutations. In this study, a simple method for the high-resolution visualization of single-ion tracks without etching was developed through the use of polyacrylic acid (PAA)-N,N’-methylene bisacrylamide (MBAAm) blend films. One of the steps of the proposed method includes exposure of the irradiated films to water vapor for several minutes. Water vapor was found to promote the cross-linking reaction of PAA and MBAAm to form a bulky cross-linked structure; the ion-track scars were detectable at a nanometer scale by atomic force microscopy. This study demonstrated that each scar is easily distinguishable, and the amount of generated radicals of the ion tracks can be estimated by measuring the height of the scars, even in highly dense ion tracks. This method is suitable for the visualization of the penumbra region in a single-ion track with a high spatial resolution of 50 nm, which is sufficiently small to confirm that a single ion hits a cell nucleus with a size ranging between 5 and 20 μm.
The role of vestibular and support-tactile-proprioceptive inputs in visual-manual tracking
NASA Astrophysics Data System (ADS)
Kornilova, Ludmila; Naumov, Ivan; Glukhikh, Dmitriy; Khabarova, Ekaterina; Pavlova, Aleksandra; Ekimovskiy, Georgiy; Sagalovitch, Viktor; Smirnov, Yuriy; Kozlovskaya, Inesa
Sensorimotor disorders in weightlessness are caused by changes in the functioning of gravity-dependent systems, above all the vestibular and support systems. The question arises: what are the role and the specific contribution of support afferentation in the development of the observed disorders? To determine the role and effects of vestibular, support, tactile and proprioceptive afferentation on characteristics of visual-manual tracking (VMT), we conducted a comparative analysis of data obtained after prolonged spaceflight and in a model of weightlessness, horizontal “dry” immersion. Altogether we examined 16 Russian cosmonauts before and after prolonged spaceflights (129-215 days) and 30 subjects who stayed in an immersion bath for 5-7 days, evaluating the state of the vestibular function (VF) using videooculography and characteristics of visual-manual tracking using electrooculography and a joystick with biological visual feedback. Evaluation of the VF showed that both after immersion and after prolonged spaceflight there was a significant decrease of the static torsional otolith-cervical-ocular reflex (OCOR) and a simultaneous significant increase of the dynamic vestibular-cervical-ocular reactions (VCOR), with a negative correlation between parameters of the otolith and canal reactions, as well as significant changes in accuracy of perception of the subjective visual vertical, which correlated with changes in OCOR. Analysis of the VMT showed that significant disorders of visual tracking (VT) occurred from the beginning of immersion up to days 3-4, while in cosmonauts similar but much more pronounced oculomotor disorders and significant changes from baseline were observed up to day R+9 postflight. Significant changes of manual tracking (MT) were revealed only for gain and occurred on days 1 and 3 in immersion, while after spaceflight such changes were observed up to day R+5 postflight. We found correlations between characteristics of VT and MT, and between characteristics of VF and VT, but no correlation between VF and MT. Removal of support and minimization of proprioceptive afferentation had a greater impact on accuracy of VT than on accuracy of MT. Hand tracking accuracy was higher than eye tracking accuracy for all subjects. The hand's motor coordination was more stable to changes in support-proprioceptive afferentation than visual tracking. The changes observed in and after immersion are similar to, but less pronounced than, those observed in cosmonauts after prolonged spaceflight. Keywords: visual-manual tracking, vestibular function, weightlessness, immersion.
Research on target tracking in coal mine based on optical flow method
NASA Astrophysics Data System (ADS)
Xue, Hongye; Xiao, Qingwei
2015-03-01
To recognize, track and count the bolting machine in coal mine video images, a real-time target tracking method based on Lucas-Kanade sparse optical flow is proposed in this paper. In the method, we judge whether the moving target deviates from its trajectory, and predict and correct the position of the moving target. The method solves the problem of failing to track or losing the target because of weak light, uneven illumination and blocking. Using the VC++ platform and the OpenCV library, we implement the recognition and tracking. The validity of the method is verified by the experimental results.
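For reference, the pyramidal Lucas-Kanade loop such a system builds on can be written in a few lines of OpenCV. The video name, feature parameters, and the 5-pixel trajectory gate below are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("mine_video.avi")            # hypothetical input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Seed points on the target, e.g. Shi-Tomasi corners inside a detected ROI.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                              qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, pts, None, winSize=(21, 21), maxLevel=3)
    good_new = nxt[status.ravel() == 1].reshape(-1, 2)
    good_old = pts[status.ravel() == 1].reshape(-1, 2)
    # Trajectory check: drop points whose displacement deviates strongly
    # from the median motion (helps under weak/uneven light and blocking).
    disp = good_new - good_old
    med = np.median(disp, axis=0)
    keep = np.linalg.norm(disp - med, axis=1) < 5.0
    prev_gray = gray
    pts = good_new[keep].reshape(-1, 1, 2).astype(np.float32)
```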
A Novel Loss Recovery and Tracking Scheme for Maneuvering Target in Hybrid WSNs.
Qian, Hanwang; Fu, Pengcheng; Li, Baoqing; Liu, Jianpo; Yuan, Xiaobing
2018-01-25
Tracking a mobile target, which aims to monitor the intrusion of a specific target in a timely manner, is one of the most prominent applications of wireless sensor networks (WSNs). Traditional tracking methods in WSNs based only on static sensor nodes (SNs) have several critical problems. For example, to avoid losing the mobile target, many SNs must be active to track the target in all possible directions, resulting in excessive energy consumption. Additionally, when entering coverage holes in the monitoring area, the mobile target may be missed, and its state is then unknown during this period. To tackle these problems, in this paper a few mobile sensor nodes (MNs) are introduced to cooperate with SNs to form a hybrid WSN, owing to their stronger abilities and less constrained energy. Then, we propose a valid target tracking scheme for hybrid WSNs to dynamically schedule the MNs and SNs. Moreover, a novel loss recovery mechanism is proposed to find the lost target and recover tracking with fewer SNs awakened. Furthermore, to improve the robustness and accuracy of the recovery mechanism, an adaptive unscented Kalman filter (AUKF) algorithm is proposed to dynamically adjust the process noise covariance. Simulation results demonstrate that our tracking scheme for a maneuvering target in hybrid WSNs can not only track the target effectively even if the target is lost but also maintain excellent accuracy and robustness with fewer activated nodes.
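The AUKF's defining feature here is online adjustment of the process noise covariance. A generic innovation-based adaptation rule, offered only as an illustration and not the authors' exact scheme, might look like:

```python
import numpy as np

def adapt_process_noise(Q, innovation, S, alpha=0.98):
    """Scale the process noise covariance from the latest innovation.

    innovation: measurement residual vector; S: its predicted covariance.
    If the normalized innovation squared (NIS) exceeds its expected value
    (the measurement dimension), Q is inflated so the filter trusts the
    motion model less during maneuvers; otherwise it relaxes back.
    """
    nis = float(innovation @ np.linalg.solve(S, innovation))
    scale = max(1.0, nis / innovation.size)   # inflate only, never deflate
    return alpha * Q + (1.0 - alpha) * scale * Q
```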
1996-09-01
Two such modes have been implemented: a full target-track mode and a target-edge-track mode. When using the full target-track mode the sonar ...direction is reversed. Rather than tracking across the target all the way to the opposing edge, however, the sonar is scanned only until three returns
Penalty Dynamic Programming Algorithm for Dim Targets Detection in Sensor Systems
Huang, Dayu; Xue, Anke; Guo, Yunfei
2012-01-01
In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD) called penalty DP-TBD (PDP-TBD) is proposed. The performance of the tracking techniques is used as feedback to the detection part. The feedback is constructed by a penalty term in the merit function, and the penalty term is a function of the possible target state estimation, which can be obtained by the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD, and it can be applied to simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint that a sensor measurement can originate from one target or clutter is proposed to minimize track separation. Thus, the algorithm can be used in multi-target situations with unknown target numbers. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated by several simulations. PMID:22666074
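A stripped-down DP-TBD recursion with a penalty term in the merit function is sketched below in one spatial dimension. The displacement penalty stands in for the paper's tracking-feedback term; the state space, penalty form, and constants are all assumptions.

```python
import numpy as np

def penalized_dp_tbd(frames, vmax=1, lam=0.1):
    """Illustrative 1-D DP-TBD with a penalized merit function.

    frames: (T, N) unthresholded intensity measurements. The merit of a
    cell accumulates its intensity plus the best predecessor merit minus
    lam * |displacement|, which discourages transitions inconsistent
    with the expected target motion.
    """
    T, N = frames.shape
    merit = frames[0].astype(float).copy()
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        new = np.empty(N)
        for i in range(N):
            lo, hi = max(0, i - vmax), min(N, i + vmax + 1)
            cand = merit[lo:hi] - lam * np.abs(np.arange(lo, hi) - i)
            j = int(np.argmax(cand))
            new[i] = frames[t, i] + cand[j]
            back[t, i] = lo + j
        merit = new
    # Backtrack the best track from the final stage.
    path = [int(np.argmax(merit))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return merit.max(), path[::-1]
```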
Robust infrared targets tracking with covariance matrix representation
NASA Astrophysics Data System (ADS)
Cheng, Jian
2009-07-01
Robust infrared target tracking is an important and challenging research topic in many military and security applications, such as infrared imaging guidance, infrared reconnaissance, and scene surveillance. To effectively tackle the nonlinear and non-Gaussian state estimation problems, particle filtering is introduced to construct the theoretical framework of infrared target tracking. Under this framework, the observation probabilistic model is one of the main factors determining infrared target tracking performance. To improve the tracking performance, covariance matrices are introduced to represent infrared targets with multiple features. The observation probabilistic model can be constructed by computing the distance between the covariance matrices of the reference target and the target samples. Because the covariance matrix provides a natural tool for integrating multiple features, and is scale and illumination independent, target representation with covariance matrices offers strong discriminative ability and robustness. Two experiments demonstrate that the proposed method is effective and robust for different infrared target tracking scenarios, such as sensor ego-motion and sea clutter.
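A compact sketch of the two building blocks, a region covariance descriptor and a distance between covariance matrices, is given below. The feature set and the generalized-eigenvalue (affine-invariant) metric are common choices for covariance tracking, assumed here rather than taken from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def region_covariance(patch, eps=1e-6):
    """Covariance descriptor of a grayscale patch over a simple feature
    set: pixel coordinates, intensity, and gradient magnitudes."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([x.ravel(), y.ravel(), patch.astype(float).ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats) + eps * np.eye(5)   # regularize to stay PD

def cov_distance(C1, C2):
    """Affine-invariant distance between covariance descriptors via
    generalized eigenvalues; robust to scale/illumination changes."""
    lam = eigh(C1, C2, eigvals_only=True)
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))
```

A particle's likelihood can then be made a decreasing function of cov_distance between the reference and candidate descriptors, e.g. exp(-d²/σ²).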
Multisensor fusion for 3D target tracking using track-before-detect particle filter
NASA Astrophysics Data System (ADS)
Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.
2015-05-01
This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particles' states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters that are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
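A bare-bones version of the measurement-level fusion step, projecting 3-D particles into each sensor's image plane and multiplying per-sensor likelihoods, might look as follows. The pinhole camera model and the intensity-as-likelihood choice are simplifying assumptions for illustration.

```python
import numpy as np

def joint_weights(particles, cams, frames):
    """Weight 3-D particles by the product of per-sensor image likelihoods.

    particles: (N, 3) world positions; cams: list of 3x4 projection
    matrices; frames: list of 2-D intensity images. Each particle is
    projected into every image plane and scored by the local intensity,
    mimicking measurement-level (track-before-detect) fusion.
    """
    w = np.ones(len(particles))
    for P, img in zip(cams, frames):
        hom = np.c_[particles, np.ones(len(particles))] @ P.T   # (N, 3)
        uv = hom[:, :2] / hom[:, 2:3]
        u = np.clip(uv[:, 0].astype(int), 0, img.shape[1] - 1)
        v = np.clip(uv[:, 1].astype(int), 0, img.shape[0] - 1)
        w *= np.maximum(img[v, u], 1e-6)    # marginal sensor likelihood
    return w / w.sum()
```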
TrackMate: An open and extensible platform for single-particle tracking.
Tinevez, Jean-Yves; Perry, Nick; Schindelin, Johannes; Hoopes, Genevieve M; Reynolds, Gregory D; Laplantine, Emmanuel; Bednarek, Sebastian Y; Shorte, Spencer L; Eliceiri, Kevin W
2017-02-15
We present TrackMate, an open source Fiji plugin for the automated, semi-automated, and manual tracking of single-particles. It offers a versatile and modular solution that works out of the box for end users, through a simple and intuitive user interface. It is also easily scriptable and adaptable, operating equally well on 1D over time, 2D over time, 3D over time, or other single and multi-channel image variants. TrackMate provides several visualization and analysis tools that aid in assessing the relevance of results. The utility of TrackMate is further enhanced through its ability to be readily customized to meet specific tracking problems. TrackMate is an extensible platform where developers can easily write their own detection, particle linking, visualization or analysis algorithms within the TrackMate environment. This evolving framework provides researchers with the opportunity to quickly develop and optimize new algorithms based on existing TrackMate modules without the need of having to write de novo user interfaces, including visualization, analysis and exporting tools. The current capabilities of TrackMate are presented in the context of three different biological problems. First, we perform Caenorhabditis elegans lineage analysis to assess how light-induced damage during imaging impairs its early development. Our TrackMate-based lineage analysis indicates the lack of a cell-specific light-sensitive mechanism. Second, we investigate the recruitment of NEMO (NF-κB essential modulator) clusters in fibroblasts after stimulation by the cytokine IL-1 and show that photodamage can generate artifacts in the shape of TrackMate-characterized movements that confuse motility analysis. Finally, we validate the use of TrackMate for quantitative lifetime analysis of clathrin-mediated endocytosis in plant cells. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Trifocal Tensor-Based Adaptive Visual Trajectory Tracking Control of Mobile Robots.
Chen, Jian; Jia, Bingxi; Zhang, Kaixiang
2017-11-01
In this paper, a trifocal tensor-based approach is proposed for the visual trajectory tracking task of a nonholonomic mobile robot equipped with a roughly installed monocular camera. The desired trajectory is expressed by a set of prerecorded images, and the robot is regulated to track the desired trajectory using visual feedback. The trifocal tensor is exploited to obtain the orientation and scaled position information used in the control system, and it works for general scenes owing to the generality of the trifocal tensor. In previous works, the start, current, and final images are required to share enough visual information to estimate the trifocal tensor. However, this requirement can be easily violated for perspective cameras with a limited field of view. In this paper, a key frame strategy is proposed to loosen this requirement, extending the workspace of the visual servo system. Considering the unknown depth and extrinsic parameters (installation position of the camera), an adaptive controller is developed based on Lyapunov methods. The proposed control strategy works for almost all practical circumstances, including both trajectory tracking and pose regulation tasks. Simulations are conducted on the virtual experimentation platform (V-REP) to evaluate the effectiveness of the proposed approach.
Distributed Peer-to-Peer Target Tracking in Wireless Sensor Networks
Wang, Xue; Wang, Sheng; Bi, Dao-Wei; Ma, Jun-Jie
2007-01-01
Target tracking is usually a challenging application for wireless sensor networks (WSNs) because it is computation-intensive and requires real-time processing. This paper proposes a practical target tracking system based on the auto-regressive moving average (ARMA) model in a distributed peer-to-peer (P2P) signal processing framework. In the proposed framework, wireless sensor nodes act as peers that perform target detection, feature extraction, classification and tracking, whereas target localization requires collaboration between wireless sensor nodes to improve accuracy and robustness. For carrying out target tracking under the constraints imposed by the limited capabilities of the wireless sensor nodes, some practically feasible algorithms, such as the ARMA model and the 2-D integer lifting wavelet transform, are adopted in single wireless sensor nodes due to their outstanding performance and light computational burden. Furthermore, a progressive multi-view localization algorithm is proposed in the distributed P2P signal processing framework, considering the tradeoff between accuracy and energy consumption. Finally, a real-world target tracking experiment is illustrated. Results from experimental implementations have demonstrated that the proposed target tracking system based on a distributed P2P signal processing framework can make efficient use of scarce energy and communication resources and achieve target tracking successfully.
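The per-node signal modeling can be approximated, for illustration, by a pure AR least-squares fit plus a residual-energy detector; the MA part and the paper's exact detector are omitted, and all names and thresholds below are assumptions.

```python
import numpy as np

def fit_ar(signal, p=4):
    """Least-squares fit of an AR(p) model to a 1-D sensor signal."""
    X = np.column_stack([signal[i:len(signal) - p + i] for i in range(p)])
    y = signal[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def detect(signal, coef, p=4, k=4.0):
    """Flag samples whose one-step AR prediction error is anomalous,
    a lightweight target-presence test suited to a sensor node."""
    X = np.column_stack([signal[i:len(signal) - p + i] for i in range(p)])
    resid = signal[p:] - X @ coef
    return np.abs(resid) > k * np.std(resid)
```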
A Fast MEANSHIFT Algorithm-Based Target Tracking System
Sun, Jian
2012-01-01
Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme is studied in this paper. A fast mean-shift-based target tracking scheme is designed and realized, which is robust to partial occlusion and changes in object appearance. The physical simulation shows that the image signal processing speed is >50 frames/s. PMID:22969397
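For reference, the standard histogram back-projection plus mean-shift loop such a system builds on can be written in a few lines of OpenCV. The video name, initial window, and histogram settings below are placeholders, and the paper's fast variant adds its own optimizations on top.

```python
import cv2

cap = cv2.VideoCapture("scene.avi")         # hypothetical input
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 80               # initial target window (assumed)
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
# Hue histogram as the target model; mask out low-saturation pixels.
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # Mean-shift climbs the back-projection to the new target position.
    ret, (x, y, w, h) = cv2.meanShift(back, (x, y, w, h), term)
```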
Weighted feature selection criteria for visual servoing of a telerobot
NASA Technical Reports Server (NTRS)
Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.
1989-01-01
Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.
Learned filters for object detection in multi-object visual tracking
NASA Astrophysics Data System (ADS)
Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David
2016-05-01
We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, where tracking guides the detection process. The object detection provides a probabilistic input image calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.
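One simple way to turn a bank of learned filters into the probabilistic input image the tracker consumes is sketched below, under assumed mixing weights and a logistic squashing; the paper's actual combination rule may differ.

```python
import numpy as np
from scipy.signal import convolve2d

def probability_map(img, filters, weights):
    """Combine learned filter responses into a detection probability map.

    filters: list of small 2-D kernels (e.g. learned by a network);
    weights: per-filter mixing weights. A logistic squashing turns the
    weighted responses into a pseudo-probability image that a
    track-before-detect tracker can consume.
    """
    acc = np.zeros_like(img, dtype=float)
    for k, w in zip(filters, weights):
        acc += w * convolve2d(img, k, mode="same", boundary="symm")
    return 1.0 / (1.0 + np.exp(-acc))   # map responses to (0, 1)
```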
Visual tracking of da Vinci instruments for laparoscopic surgery
NASA Astrophysics Data System (ADS)
Speidel, S.; Kuhn, E.; Bodenstedt, S.; Röhl, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.
2014-03-01
Intraoperative tracking of laparoscopic instruments is a prerequisite for realizing further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present an image-based markerless 3D tracking of different da Vinci instruments in near real-time without an explicit model. The method is based on different visual cues to segment the instrument tip, calculates a tip point and uses a multiple object particle filter for tracking. The accuracy and robustness are evaluated with in vivo data.
Han, Bin; Xu, X. George; Chen, George T. Y.
2011-01-01
Purpose: Monte Carlo methods are used to simulate and optimize a time-resolved proton range telescope (TRRT) in localization of intrafractional and interfractional motions of lung tumor and in quantification of proton range variations. Methods: The Monte Carlo N-Particle eXtended (MCNPX) code with a particle tracking feature was employed to evaluate the TRRT performance, especially in visualizing and quantifying proton range variations during respiration. Protons of 230 MeV were tracked one by one as they pass through position detectors, patient 4DCT phantom, and finally scintillator detectors that measured residual ranges. The energy response of the scintillator telescope was investigated. Mass density and elemental composition of tissues were defined for 4DCT data. Results: Proton water equivalent length (WEL) was deduced by a reconstruction algorithm that incorporates linear proton track and lateral spatial discrimination to improve the image quality. 4DCT data for three patients were used to visualize and measure tumor motion and WEL variations. The tumor trajectories extracted from the WEL map were found to be within ∼1 mm agreement with direct 4DCT measurement. Quantitative WEL variation studies showed that the proton radiograph is a good representation of WEL changes from entrance to distal of the target. Conclusions: MCNPX simulation results showed that TRRT can accurately track the motion of the tumor and detect the WEL variations. Image quality was optimized by choosing proton energy, testing parameters of image reconstruction algorithm, and comparing to ground truth 4DCT. The future study will demonstrate the feasibility of using the time resolved proton radiography as an imaging tool for proton treatments of lung tumors. PMID:21626923
Singh, Niharika; Mishra, Ramesh Kumar
2015-01-01
Using a variant of the visual world eye tracking paradigm, we examined whether language non-selective activation of translation equivalents leads to attention capture and distraction in a visual task in bilinguals. High- and low-proficiency Hindi-English bilinguals were instructed to programme a saccade towards a line drawing which changed colour among other distractor objects. A spoken word, irrelevant to the main task, was presented before the colour change. On critical trials, one of the line drawings was a phonologically related word of the translation equivalent of the spoken word. Results showed that saccade latency towards the target was significantly higher in the presence of this cross-linguistic translation competitor compared to when the display contained completely unrelated objects. Participants were also slower when the display contained the referent of the spoken word among the distractors. However, the bilingual groups did not differ with regard to the interference effect observed. These findings suggest that spoken words activate translation equivalents, which bias attention, leading to interference in goal-directed action in the visual domain. PMID:25775184
Operator vision aids for space teleoperation assembly and servicing
NASA Technical Reports Server (NTRS)
Brooks, Thurston L.; Ince, Ilhan; Lee, Greg
1992-01-01
This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories of camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.
Kita-Matsuo, Hiroko; Barcova, Maria; Prigozhina, Natalie; Salomonis, Nathan; Wei, Karen; Jacot, Jeffrey G.; Nelson, Brandon; Spiering, Sean; Haverslag, René; Kim, Changsung; Talantova, Maria; Bajpai, Ruchi; Calzolari, Diego; Terskikh, Alexey; McCulloch, Andrew D.; Price, Jeffrey H.; Conklin, Bruce R.; Chen, H. S. Vincent; Mercola, Mark
2009-01-01
Background Developmental, physiological and tissue engineering studies critical to the development of successful myocardial regeneration therapies require new ways to effectively visualize and isolate large numbers of fluorescently labeled, functional cardiomyocytes. Methodology/Principal Findings Here we describe methods for the clonal expansion of engineered hESCs and make available a suite of lentiviral vectors that combine Blasticidin, Neomycin and Puromycin resistance-based drug selection of pure populations of stem cells and cardiomyocytes with ubiquitous or lineage-specific promoters that direct expression of fluorescent proteins to visualize and track cardiomyocytes and their progenitors. The phospho-glycerate kinase (PGK) promoter was used to ubiquitously direct expression of histone-2B fused eGFP and mCherry proteins to the nucleus to monitor DNA content and enable tracking of cell migration and lineage. Vectors with T/Brachyury and α-myosin heavy chain (αMHC) promoters targeted fluorescent or drug-resistance proteins to early mesoderm and cardiomyocytes. The drug selection protocol yielded 96% pure cardiomyocytes that could be cultured for over 4 months. Puromycin-selected cardiomyocytes exhibited a gene expression profile similar to that of adult human cardiomyocytes and generated force and action potentials consistent with normal fetal cardiomyocytes, documenting these parameters in hESC-derived cardiomyocytes and validating that the selected cells retained normal differentiation and function. Conclusion/Significance The protocols, vectors and gene expression data comprise tools to enhance cardiomyocyte production for large-scale applications. PMID:19352491
TU-D-202-03: Gating Is the Best ITV Killer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Low, D.
Respiratory motion has long been recognized as an important factor affecting the precision of radiotherapy. After the introduction of the 4D CT to visualize the respiratory motion in 3D, the internal target volume (ITV) has been widely adopted as a simple method to take the motion into account in treatment planning and delivery. The ITV is generated as the union of the CTVs as the patient goes through the respiratory cycle. Many issues have been identified with the ITV. In this session three alternatives to the ITV will be discussed: 1) An alternative motion-inclusive approach with better imaging and smaller margins, called mid-position CT. 2) The tracking approach and 3) The gating approach. The following topics will be addressed by Marcel van Herk (“Is ITV the correct motion encompassing strategy”): Magnitude of respiratory motion, effect of motion on radiotherapy, motion encompassing strategies, and software solutions to assist in motion encompassing strategies. Then Paul Keall (“Make margins simple: Use real-time target tracking”) will discuss tracking with: clinical drivers for tracking, current clinical status of tumor tracking, future tumor tracking technology, and margin challenges with and without tracking. Finally Daniel Low will discuss gating (“Gating is the best ITV killer”): why ITV in the first place, requirements for planning, requirements at the machine, benefits and costs. The session will end with a discussion and live demo of motion simulation software to illustrate the issues and explain the relative benefit and appropriate uses for the three methods. Learning Objectives: Explain the 4D imaging and treatment planning process. Summarize the various approaches to deal with respiratory motion during radiotherapy Discuss the tradeoffs involved when choosing one of the three discussed approaches. Explain in which situation each method is the best choice Research is partly funded by Elekta Oncology Systems and the Dutch Cancer Foundation; M. van Herk, Part of the research was funded by Elekta Oncology Systems and the Dutch Cancer Foundation.
Automatic tracking of cells for video microscopy in patch clamp experiments.
Peixoto, Helton M; Munguba, Hermany; Cruz, Rossana M S; Guerreiro, Ana M G; Leao, Richardson N
2014-06-20
Visualisation of neurons labeled with fluorescent proteins or compounds generally requires exposure to intense light for a relatively long period of time, often leading to bleaching of the fluorescent probe and photodamage of the tissue. Here we created a technique to drastically shorten light exposure and improve the targeting of fluorescent labeled cells that is especially useful for patch-clamp recordings. We applied image tracking and mask overlay to reduce the time of fluorescence exposure and minimise mistakes when identifying neurons. Neurons are first identified according to visual criteria (e.g. fluorescence protein expression, shape, viability etc.), and a transmission microscopy image (Differential Interference Contrast (DIC) or Dodt contrast) containing the cell is used as a reference for the tracking algorithm. A fluorescence image can also be acquired later to be used as a mask (that can be overlaid on the target during live transmission video). As patch-clamp experiments require translating the microscope stage, we used pattern matching to track reference neurons in order to move the fluorescence mask to match the new position of the objective in relation to the sample. For the image processing we used the Open Source Computer Vision (OpenCV) library, including the Speeded-Up Robust Features (SURF) for tracking cells. The dataset of images (n = 720) was analyzed under normal conditions of acquisition and with influence of noise (defocusing and brightness). We validated the method in dissociated neuronal cultures and fresh brain slices expressing Enhanced Yellow Fluorescent Protein (eYFP) or Tandem Dimer Tomato (tdTomato) proteins, which considerably decreased the exposure to fluorescence excitation, thereby minimising photodamage. We also show that the neuron tracking can be used in differential interference contrast or Dodt contrast microscopy. The techniques of digital image processing used in this work are an important addition to the set of microscopy tools used in modern electrophysiology, especially in experiments with neuron cultures and brain slices. PMID:24946774
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahimian, B.
2015-06-15
Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow.
Robust Target Tracking with Multi-Static Sensors under Insufficient TDOA Information.
Shin, Hyunhak; Ku, Bonhwa; Nelson, Jill K; Ko, Hanseok
2018-05-08
This paper focuses on underwater target tracking based on a multi-static sonar network composed of passive sonobuoys and an active ping. In the multi-static sonar network, the location of the target can be estimated using TDOA (Time Difference of Arrival) measurements. However, since the sensor network may obtain insufficient and inaccurate TDOA measurements due to ambient noise and other harsh underwater conditions, target tracking performance can be significantly degraded. We propose a robust target tracking algorithm designed to operate in such a scenario. First, track management with track splitting is applied to reduce performance degradation caused by insufficient measurements. Second, the target location is estimated by fusing multiple TDOA measurements using a Gaussian Mixture Model (GMM). In addition, the target trajectory is refined by a stack-based data association method over multiple-frame measurements in order to estimate the target trajectory more accurately. The effectiveness of the proposed method is verified through simulations.
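The TDOA localization at the heart of such a scheme reduces to a nonlinear least-squares problem. A minimal 2-D sketch follows; the sound speed, function names, and single-solution initialization are assumptions, and the paper's GMM fusion sits on top of estimates like this one.

```python
import numpy as np
from scipy.optimize import least_squares

C = 1500.0  # assumed nominal underwater sound speed, m/s

def tdoa_residuals(p, sensors, tdoas):
    """Residuals between predicted and measured TDOAs w.r.t. sensor 0."""
    d = np.linalg.norm(sensors - p, axis=1)   # ranges to each sonobuoy
    return (d[1:] - d[0]) / C - tdoas

def locate(sensors, tdoas, p0):
    """Estimate a 2-D target position from TDOA measurements.

    sensors: (M, 2) sonobuoy positions; tdoas: (M-1,) delays relative to
    sensor 0; p0: initial guess (e.g. the previous track position).
    """
    return least_squares(tdoa_residuals, p0, args=(sensors, tdoas)).x
```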
NASA Astrophysics Data System (ADS)
Zhang, Dong-Hai; Chen, Yan-Ling; Wang, Guo-Rong; Li, Wang-Dong; Wang, Qing; Yao, Ji-Jie; Zhou, Jian-Guo; Li, Rong; Li, Jun-Sheng; Li, Hui-Ling
2015-01-01
The forward-backward multiplicities and correlations of target evaporated fragments (black track particles) and target recoiled protons (grey track particles) emitted from 150 A MeV 4He, 290 A MeV 12C, 400 A MeV 12C, 400 A MeV 20Ne and 500 A MeV 56Fe induced interactions with different types of nuclear emulsion targets are investigated. It is found that the forward and backward averaged multiplicities of grey, black and heavily ionized track particles increase with the target size. The averaged multiplicities of forward black track particles, backward black track particles, and backward grey track particles do not depend on the projectile size and energy, but the averaged multiplicity of forward grey track particles increases with projectile size and energy. The backward grey track particle multiplicity distribution follows an exponential decay law, and the decay constant decreases with increasing target size. The backward-forward multiplicity correlations follow a linear law which is independent of the projectile size and energy, and a saturation effect is observed in some heavy target data sets.
Data fusion for target tracking and classification with wireless sensor network
NASA Astrophysics Data System (ADS)
Pannetier, Benjamin; Doumerc, Robin; Moras, Julien; Dezert, Jean; Canevet, Loic
2016-10-01
In this paper, we address the problem of multiple ground target tracking and classification with information obtained from an unattended wireless sensor network. A multiple target tracking (MTT) algorithm, taking into account road and vegetation information, is proposed based on a centralized architecture. One of the key issues is how to adapt the classical MTT approach to satisfy embedded processing. Based on track statistics, the classification algorithm uses estimated location, velocity and acceleration to help classify targets. The algorithm enables tracking of humans and vehicles driving both on and off road. We integrate road or trail width and vegetation cover as constraints in the target motion models to improve the performance of tracking under constraint with classification fusion. Our algorithm also uses different dynamic models to accommodate target maneuvers. The tracking and classification algorithms are integrated into an operational platform (the fusion node). In order to handle realistic ground target tracking scenarios, we use an autonomous smart computer deposited in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of the system is evaluated in a real exercise for an intelligence operation ("hunter hunt" scenario).
Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning
NASA Astrophysics Data System (ADS)
Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.
2018-04-01
At present, intelligent video analysis technology has been widely used in various fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of images still has some unavoidable problems: pixel-based tracking cannot reflect the real position information of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking in the target's spatial coordinate system, obtained by converting the 2-D pixel coordinates of the target into 3-D coordinates. The experimental results show that our method recovers the real position changes of targets well and can also accurately recover the trajectory of the target in space.
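A minimal sketch of the 2-D to 3-D conversion step, assuming a Zhang-style calibration has already produced the intrinsics K and extrinsics [R|t], and that the tracked target lies on the world ground plane Z = 0 (for points on that plane the projection reduces to a homography). All numeric values below are placeholders, not the paper's calibration.

```python
# Hedged sketch: back-projecting a pixel to a ground-plane world coordinate.
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    # For points on Z = 0: s*[u, v, 1]^T = K [r1 r2 t] [X, Y, 1]^T
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    X = np.linalg.solve(H, np.array([u, v, 1.0]))
    return X[:2] / X[2]   # world (X, Y) on the ground plane

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
R = np.eye(3)             # placeholder extrinsics
t = np.array([0, 0, 5.0])
print(pixel_to_ground(320, 240, K, R, t))  # principal ray hits (0, 0)
```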
Bury, Belinda; Tiggemann, Marika; Slater, Amy
2016-03-01
Globally there is increasing advocacy for the implementation of laws requiring disclaimer labels to be attached to media images that have been digitally altered, with the goal of reducing the known negative effects of exposure to unrealistic thin ideal imagery for women. The current study used eye tracking technology to establish how digital alteration disclaimer labels affect women's visual attention to fashion magazine advertisements, and the interrelationship with body dissatisfaction and state appearance comparison. Participants were 120 female undergraduate students who viewed four thin ideal advertisements with either no disclaimer, a generic disclaimer, or a more detailed specific disclaimer. It was found that women did attend to the disclaimers. Specifically worded disclaimers directed visual attention towards target body areas, which resulted in increased body dissatisfaction, while state appearance comparison predicted increased body dissatisfaction. Further research is imperative to provide guidance on the most effective use of disclaimer labels. Copyright © 2015 Elsevier Ltd. All rights reserved.
Gao, Han; Li, Jingwen
2014-06-19
A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Built on the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking low signal-to-noise ratio (SNR) moving targets with SAR systems, for which the traditional track-after-detect (TAD) approach is inadequate. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to the true estimate. By computing the likelihood ratio over a sub-area instead of the whole area, and with a pertinent choice of the number of particles, computational efficiency is improved with little loss in detection and tracking performance. The feasibility of the approach is validated and its performance evaluated with Monte Carlo trials. It is demonstrated that the proposed approach can detect and track a moving target with an SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB. PMID:24949640
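A toy sketch of the particle-filter track-before-detect idea on a one-dimensional intensity sequence: particles over (position, velocity) are weighted by a pointwise likelihood ratio and resampled, so no hard detection threshold is ever applied. The SAR signal model, ambiguity resolution, and sub-area likelihood evaluation from the paper are not reproduced; the likelihood-ratio form and noise levels here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_tbd(frames, n_particles=500, dt=1.0, sigma=1.0, amp=3.0):
    """Track a target in a stack of 1-D intensity frames without a
    separate detection step."""
    n = frames.shape[1]
    x = np.column_stack((rng.uniform(0, n, n_particles),      # position
                         rng.normal(0.0, 1.0, n_particles)))  # velocity
    for z in frames:
        x[:, 0] += x[:, 1] * dt + rng.normal(0, 0.2, n_particles)  # predict
        x[:, 1] += rng.normal(0, 0.05, n_particles)
        cell = np.clip(x[:, 0].astype(int), 0, n - 1)
        # Log-likelihood ratio: target of amplitude `amp` vs. noise only.
        llr = (amp * z[cell] - 0.5 * amp ** 2) / sigma ** 2
        w = np.exp(llr - llr.max())
        w /= w.sum()
        x = x[rng.choice(n_particles, n_particles, p=w)]      # resample
    return x.mean(axis=0)

# Synthetic low-SNR frames: a target starts at cell 10, moving +1 cell/frame.
frames = rng.normal(0.0, 1.0, (20, 64))
for k in range(20):
    frames[k, 10 + k] += 3.0
print(pf_tbd(frames))   # estimated [position, velocity] at the last frame
```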
Campos, Andre N.; Souza, Efren L.; Nakamura, Fabiola G.; Nakamura, Eduardo F.; Rodrigues, Joel J. P. C.
2012-01-01
Target tracking is an important application of wireless sensor networks. The network's ability to locate and track an object is directly linked to the nodes' ability to locate themselves. Consequently, localization systems are essential for target tracking applications. In addition, sensor networks are often deployed in remote or hostile environments; therefore, density control algorithms are used to increase network lifetime while maintaining its sensing capabilities. In this work, we analyze the impact of localization algorithms (RPE and DPE) and density control algorithms (GAF, A3 and OGDC) on target tracking applications. We adapt the density control algorithms to address the k-coverage problem. In addition, we analyze the impact of network density, residual integration with density control, and k-coverage on both target tracking accuracy and network lifetime. Our results show that DPE is a better choice for target tracking applications than RPE, and that OGDC is the best option among the evaluated density control algorithms. Although the choice of the density control algorithm has little impact on tracking precision, OGDC outperforms GAF and A3 in terms of tracking time. PMID:22969329
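For concreteness, the sketch below tests the k-coverage condition that the adapted density-control algorithms must maintain: every monitored point should lie within the sensing radius of at least k active nodes. The node layout, radius, and grid are illustrative assumptions.

```python
# Simple k-coverage test; not taken from the paper's algorithms.
import numpy as np

def is_k_covered(points, nodes, sensing_radius, k):
    points = np.asarray(points, float)
    nodes = np.asarray(nodes, float)
    d = np.linalg.norm(points[:, None, :] - nodes[None, :, :], axis=2)
    # Each point must fall inside the radius of at least k nodes.
    return bool(np.all((d <= sensing_radius).sum(axis=1) >= k))

nodes = [(0, 0), (10, 0), (5, 8), (5, 3)]
grid = [(x, y) for x in range(0, 11, 2) for y in range(0, 9, 2)]
print(is_k_covered(grid, nodes, sensing_radius=8.0, k=2))  # -> True
```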
Attentional Resources in Visual Tracking through Occlusion: The High-Beams Effect
ERIC Educational Resources Information Center
Flombaum, Jonathan I.; Scholl, Brian J.; Pylyshyn, Zenon W.
2008-01-01
A considerable amount of research has uncovered heuristics that the visual system employs to keep track of objects through periods of occlusion. Relatively little work, by comparison, has investigated the online resources that support this processing. We explored how attention is distributed when featurally identical objects become occluded during…
Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing
NASA Astrophysics Data System (ADS)
Ou, Meiying; Li, Shihua; Wang, Chaoli
2013-12-01
This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. A rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
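The paper's control laws for non-holonomic robots with unknown camera parameters are not reproduced here; the toy below only illustrates the underlying finite-time stabilization idea, that error dynamics of the form e' = -k·sign(e)·|e|^a with 0 < a < 1 settle in finite time, unlike the purely asymptotic exponential case a = 1. Gains, step size, and tolerance are arbitrary demo choices.

```python
import numpy as np

def settle_time(e0, k=1.0, a=0.5, dt=1e-3, tol=1e-5, t_max=20.0):
    """Integrate e' = -k*sign(e)*|e|**a and report when |e| first
    drops below tol."""
    e, t = float(e0), 0.0
    while abs(e) > tol and t < t_max:
        e -= k * np.sign(e) * abs(e) ** a * dt
        t += dt
    return t

print(settle_time(2.0, a=0.5))  # ~2*sqrt(2) s: true finite-time settling
print(settle_time(2.0, a=1.0))  # ~12 s to reach tol: only asymptotic decay
```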
Human Mobility Monitoring in Very Low Resolution Visual Sensor Network
Bo Bo, Nyan; Deboeverie, Francis; Eldib, Mohamed; Guan, Junzhi; Xie, Xingzhe; Niño, Jorge; Van Haerenborgh, Dirk; Slembrouck, Maarten; Van de Velde, Samuel; Steendam, Heidi; Veelaert, Peter; Kleihorst, Richard; Aghajan, Hamid; Philips, Wilfried
2014-01-01
This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30 × 30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computational requirements and power consumption. The core of our proposed system is a robust people tracker that uses the low resolution videos provided by the visual sensor network. The distributed processing architecture of our tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. In this paper, we experimentally show that reliable tracking of people is possible using very low resolution imagery. We also compare the performance of our tracker against a state-of-the-art tracking method and show that our method outperforms it. Moreover, the mobility statistics of tracks, such as total distance traveled and average speed derived from trajectories, are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics. PMID:25375754
Automation trust and attention allocation in multitasking workspace.
Karpinsky, Nicole D; Chancey, Eric T; Palmer, Dakota B; Yamani, Yusuke
2018-07-01
Previous research suggests that operators with high workload can distrust and then poorly monitor automation, which has been generally inferred from automation dependence behaviors. To test automation monitoring more directly, the current study measured operators' visual attention allocation, workload, and trust toward imperfect automation in a dynamic multitasking environment. Participants concurrently performed a manual tracking task with two levels of difficulty and a system monitoring task assisted by an unreliable signaling system. Eye movement data indicate that operators allocate less visual attention to monitor automation when the tracking task is more difficult. Participants reported reduced levels of trust toward the signaling system when the tracking task demanded more focused visual attention. Analyses revealed that trust mediated the relationship between the load of the tracking task and attention allocation in Experiment 1, an effect that was not replicated in Experiment 2. Results imply a complex process underlying task load, visual attention allocation, and automation trust during multitasking. Automation designers should consider operators' task load in multitasking workspaces to avoid reduced automation monitoring and distrust toward imperfect signaling systems. Copyright © 2018. Published by Elsevier Ltd.
Visual tracking using neuromorphic asynchronous event-based cameras.
Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad
2015-04-01
This letter presents a novel computationally efficient and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that this sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometries, similarities, and affine distortions, and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time, which is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusion that classical frame-based techniques handle poorly.
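A toy sketch of the event-driven philosophy: each incoming event updates the tracked position immediately instead of waiting for a full frame. The gating and gain below are assumptions, and the sketch estimates only a translation, whereas the paper's method estimates full isometric, similarity, and affine transformations.

```python
import numpy as np

def track_events(events, init_pos, gate=5.0, gain=0.1):
    """Nudge the tracked position toward each nearby event as it arrives."""
    pos = np.asarray(init_pos, float)
    for ev in events:                       # events processed one at a time
        e = np.asarray(ev, float)
        if np.linalg.norm(e - pos) < gate:  # only events near the model
            pos += gain * (e - pos)         # iterative estimate update
    return pos

# Synthetic events scattered around a target at (12, 7), plus clutter.
rng = np.random.default_rng(1)
target_events = rng.normal([12, 7], 0.5, (200, 2))
clutter = rng.uniform(0, 64, (50, 2))
stream = rng.permutation(np.vstack([target_events, clutter]))
print(track_events(stream, init_pos=[10, 6]))  # converges near (12, 7)
```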
Measurement of electromagnetic tracking error in a navigated breast surgery setup
NASA Astrophysics Data System (ADS)
Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor
2016-03-01
PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed; the process from calibration to visualization takes only a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and an anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
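A minimal sketch of the error computation, assuming the electromagnetic and optical (ground-truth) poses have already been calibrated into a common frame: positional error is the Euclidean distance between the two positions, and rotational error is the angle of the relative rotation. The sample poses below are made up.

```python
import numpy as np

def pose_error(p_em, R_em, p_opt, R_opt):
    """Positional (same units as input) and rotational (degrees) error."""
    pos_err = np.linalg.norm(np.asarray(p_em) - np.asarray(p_opt))
    R_rel = np.asarray(R_em) @ np.asarray(R_opt).T   # relative rotation
    angle = np.degrees(np.arccos(np.clip((np.trace(R_rel) - 1) / 2, -1, 1)))
    return pos_err, angle

def rot_z(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

p_err, r_err = pose_error([100.4, 50.1, 20.0], rot_z(30.3),
                          [100.0, 50.0, 20.0], rot_z(30.0))
print(f"{p_err:.2f} mm, {r_err:.2f} deg")   # ~0.41 mm, ~0.30 deg
```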
MO-FG-BRD-00: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques. Learn about emerging techniques in the field of real-time tracking. Distinguish between the advantages and disadvantages of different tracking modalities. Understand the role of real-time tracking techniques within the clinical delivery work-flow.
Li, Miao; Li, Jun; Zhou, Yiyu
2015-12-08
The problem of jointly detecting and tracking multiple targets from the raw observations of an infrared focal plane array is a challenging task, especially in the case of uncertain target dynamics. In this paper a multi-model labeled multi-Bernoulli (MM-LMB) track-before-detect method is proposed within the labeled random finite sets (RFS) framework. The proposed track-before-detect method consists of two parts: an MM-LMB filter and an MM-LMB smoother. For the MM-LMB filter, the original LMB filter is applied to track-before-detect based on the target and measurement models, and is integrated with the interacting multiple models (IMM) approach to accommodate the uncertainty of target dynamics. For the MM-LMB smoother, taking advantage of the track labels and the posterior model transition probability, the single-model single-target smoother is extended to a multi-model multi-target smoother. A Sequential Monte Carlo approach is also presented to implement the proposed method. Simulation results show that the proposed method can effectively achieve tracking continuity for multiple maneuvering targets. In addition, compared with forward filtering alone, our method is more robust due to its combination of forward filtering and backward smoothing. PMID:26670234
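A minimal sketch of the interacting-multiple-models ingredient, i.e., the model-probability update that lets a filter adapt to uncertain target dynamics: model probabilities are mixed with a Markov transition matrix and then reweighted by each model's measurement likelihood. The transition matrix and likelihood values are illustrative, and the full MM-LMB machinery is not reproduced.

```python
import numpy as np

def imm_update(mu, trans, likelihoods):
    """mu: prior model probabilities; trans[i, j]: P(model j | model i);
    likelihoods: measurement likelihood under each model."""
    mu_pred = np.asarray(trans).T @ np.asarray(mu)   # Markov mixing
    mu_post = mu_pred * np.asarray(likelihoods)      # Bayes reweighting
    return mu_post / mu_post.sum()

mu = [0.5, 0.5]                        # constant-velocity vs. maneuvering
trans = [[0.95, 0.05], [0.10, 0.90]]   # model transition probabilities
print(imm_update(mu, trans, likelihoods=[0.02, 0.30]))  # -> [~0.07, ~0.93]
```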
The role of vision in odor-plume tracking by walking and flying insects.
Willis, Mark A; Avondet, Jennifer L; Zheng, Elizabeth
2011-12-15
The walking paths of male cockroaches, Periplaneta americana, tracking point-source plumes of female pheromone often appear similar in structure to those observed from flying male moths. Flying moths use visual-flow-field feedback of their movements to control steering and speed over the ground and to detect the wind speed and direction while tracking plumes of odors. Walking insects are also known to use flow field cues to steer their trajectories. Can the upwind steering we observe in plume-tracking walking male cockroaches be explained by visual-flow-field feedback, as in flying moths? To answer this question, we experimentally occluded the compound eyes and ocelli of virgin P. americana males, separately and in combination, and challenged them with different wind and odor environments in our laboratory wind tunnel. They were observed responding to: (1) still air and no odor, (2) wind and no odor, (3) a wind-borne point-source pheromone plume and (4) a wide pheromone plume in wind. If walking cockroaches require visual cues to control their steering with respect to their environment, we would expect their tracks to be less directed and more variable if they cannot see. Instead, we found few statistically significant differences among behaviors exhibited by intact control cockroaches or those with their eyes occluded, under any of our environmental conditions. Working towards our goal of a comprehensive understanding of chemo-orientation in insects, we then challenged flying and walking male moths to track pheromone plumes with and without visual feedback. Neither walking nor flying moths performed as well as walking cockroaches when there was no visual information available.
Optical countermeasures against CLOS weapon systems
NASA Astrophysics Data System (ADS)
Toet, Alexander; Benoist, Koen W.; van Lingen, Joost N. J.; Schleijpen, H. Ric M. A.
2013-10-01
There are many weapon systems in which a human operator acquires a target, tracks it and designates it. Optical countermeasures against this type of system deny the operator the ability to perform this visual task. We describe the different effects that result from stimulation of the human visual system with high intensity (visible) light, and the associated potential operational impact. Of practical use are flash blindness, where an intense flash of light produces a temporary "blind spot" in (part of) the visual field; flicker distraction, where strong intensity and/or color changes at an uncomfortable frequency are produced; and disability glare, where a source of light leads to contrast reduction. Hence there are three ways to disrupt the visual task of an operator with optical countermeasures such as flares, lasers, or a combination of these: an intense flash of light, an annoying light flicker, or a glare source. A variety of flares for this purpose is now available or under development: high intensity flash flares, continuous burning flares, and strobe flares with an oscillating intensity. The use of flare arrays seems particularly promising as an optical countermeasure. Lasers are particularly suited to interfering with human vision, because they can easily be varied in intensity, color and size, but they have to be directed at the (human) target, and issues like pointing and eye safety have to be taken into account. Here we discuss the design issues and the operational impact of optical countermeasures against human operators.
Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat
2014-01-01
Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called "consonant bias"). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant and for words starting with a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading. PMID:24523917
Nonhuman Primate Studies to Advance Vision Science and Prevent Blindness.
Mustari, Michael J
2017-12-01
Most primate behavior is dependent on high acuity vision. Optimal visual performance in primates depends heavily upon frontally placed eyes, retinal specializations, and binocular vision. To see an object clearly its image must be placed on or near the fovea of each eye. The oculomotor system is responsible for maintaining precise eye alignment during fixation and generating eye movements to track moving targets. The visual system of nonhuman primates has a similar anatomical organization and functional capability to that of humans. This allows results obtained in nonhuman primates to be applied to humans. The visual and oculomotor systems of primates are immature at birth and sensitive to the quality of binocular visual and eye movement experience during the first months of life. Disruption of postnatal experience can lead to problems in eye alignment (strabismus), amblyopia, unsteady gaze (nystagmus), and defective eye movements. Recent studies in nonhuman primates have begun to discover the neural mechanisms associated with these conditions. In addition, genetic defects that target the retina can lead to blindness. A variety of approaches including gene therapy, stem cell treatment, neuroprosthetics, and optogenetics are currently being used to restore function associated with retinal diseases. Nonhuman primates often provide the best animal model for advancing fundamental knowledge and developing new treatments and cures for blinding diseases. © The Author(s) 2017. Published by Oxford University Press on behalf of the National Academy of Sciences. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik
2017-01-01
This paper presents the first attempt at combining cloud computing with Graphics Processing Units (GPUs) in a complementary manner, within the framework of a real-time high performance computation architecture, for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system uses a front-end web-based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. Cloud- and GPU-based computing provides an efficient real-time target recognition and tracking approach compared to methods in which the workflow runs only on central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that the GC-MTT approach provides drastically improved tracking at low frame rates under realistic conditions. PMID:28208684
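A toy sketch of the parallel pattern such an architecture exploits: per-frame (or per-tile) detection is embarrassingly parallel, so frames can be farmed out to independent workers. Here CPU processes stand in for the GPU/cloud nodes, and the threshold detector is a placeholder for the real WAMI detection chain.

```python
import numpy as np
from multiprocessing import Pool

def detect(frame):
    """Placeholder detector: pixels above a fixed intensity threshold."""
    return np.argwhere(frame > 5.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(0.0, 1.0, (8, 128, 128))
    frames[:, 64, 64] += 10.0                  # one bright synthetic target
    with Pool(processes=4) as pool:            # CPU workers stand in for
        detections = pool.map(detect, frames)  # GPU/cloud compute nodes
    print([d.tolist() for d in detections[:2]])  # target found at (64, 64)
```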