Upside-down: Perceived space affects object-based attention.
Papenmeier, Frank; Meyerhoff, Hauke S; Brockhoff, Alisa; Jahn, Georg; Huff, Markus
2017-07-01
Object-based attention influences the subjective metrics of surrounding space. However, does perceived space influence object-based attention, as well? We used an attentive tracking task that required sustained object-based attention while objects moved within a tracking space. We manipulated perceived space through the availability of depth cues and varied the orientation of the tracking space. When rich depth cues were available (appearance of a voluminous tracking space), the upside-down orientation of the tracking space (objects appeared to move high on a ceiling) caused a pronounced impairment of tracking performance compared with an upright orientation of the tracking space (objects appeared to move on a floor plane). In contrast, this was not the case when reduced depth cues were available (appearance of a flat tracking space). With a preregistered second experiment, we showed that those effects were driven by scene-based depth cues and not object-based depth cues. We conclude that perceived space affects object-based attention and that object-based attention and perceived space are closely interlinked. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Adaptive object tracking via both positive and negative models matching
NASA Astrophysics Data System (ADS)
Li, Shaomei; Gao, Chao; Wang, Yawen
2015-03-01
To mitigate the tracking drift that often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, the object is tracked frame by frame via particle filtering. Thirdly, tracking reliability is validated by matching against both positive and negative models. Finally, when drift occurs, the object is relocated based on SIFT feature matching and voting. The object appearance model is updated at the same time. The algorithm can not only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences.
Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei
2012-12-01
Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
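As a concrete illustration of the mapping described in the abstract above, the following sketch (not the authors' implementation; the feature choice, regularization, and function names are assumptions) builds a region covariance descriptor from simple per-pixel features and vectorizes it with the log-Euclidean metric, so that ordinary Euclidean operations such as incremental subspace learning can be applied to the resulting vectors.

```python
# Illustrative sketch: region covariance descriptor + log-Euclidean vectorization.
import numpy as np
from scipy.linalg import logm

def region_covariance(patch):
    """Covariance of simple per-pixel features over a grayscale patch (H x W)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      gx.ravel(), gy.ravel()], axis=1)    # N x 5 feature matrix
    return np.cov(feats, rowvar=False)                    # 5 x 5 covariance matrix

def log_euclidean_vector(cov, eps=1e-6):
    """Map an SPD covariance matrix to a Euclidean vector via the matrix logarithm."""
    cov = cov + eps * np.eye(cov.shape[0])                 # regularize to keep it SPD
    L = logm(cov)                                          # symmetric matrix logarithm
    iu = np.triu_indices_from(L)
    scale = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))    # weight off-diagonals by sqrt(2)
    return (L[iu] * scale).real

patch = np.random.rand(32, 32)
v = log_euclidean_vector(region_covariance(patch))
print(v.shape)
```

Euclidean distances between such vectors then correspond to the log-Euclidean Riemannian distance between the original SPD matrices, which is what makes vector-space subspace learning applicable.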
Track Everything: Limiting Prior Knowledge in Online Multi-Object Recognition.
Wong, Sebastien C; Stamatescu, Victor; Gatt, Adam; Kearney, David; Lee, Ivan; McDonnell, Mark D
2017-10-01
This paper addresses the problem of online tracking and classification of multiple objects in an image sequence. Our proposed solution is to first track all objects in the scene without relying on object-specific prior knowledge, which in other systems can take the form of hand-crafted features or user-based track initialization. We then classify the tracked objects with a fast-learning image classifier that is based on a shallow convolutional neural network architecture, and demonstrate that object recognition improves when this is combined with object state information from the tracking algorithm. We argue that by transferring the use of prior knowledge from the detection and tracking stages to the classification stage, we can design a robust, general purpose object recognition system with the ability to detect and track a variety of object types. We describe our biologically inspired implementation, which adaptively learns the shape and motion of tracked objects, and apply it to the Neovision2 Tower benchmark data set, which contains multiple object types. An experimental evaluation demonstrates that our approach is competitive with state-of-the-art video object recognition systems that do make use of object-specific prior knowledge in detection and tracking, while providing additional practical advantages by virtue of its generality.
Multiple Object Tracking Reveals Object-Based Grouping Interference in Children with ASD
ERIC Educational Resources Information Center
Van der Hallen, Ruth; Evers, Kris; de-Wit, Lee; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2018-01-01
The multiple object tracking (MOT) paradigm has proven its value in targeting a number of aspects of visual cognition. This study used MOT to investigate the effect of object-based grouping, both in children with and without autism spectrum disorder (ASD). A modified MOT task was administered to both groups, who had to track and distinguish four…
Memory-Based Multiagent Coevolution Modeling for Robust Moving Object Tracking
Wang, Yanjiang; Qi, Yujuan; Li, Yongping
2013-01-01
The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match of the appearance of an object, and a memory-based multiagent coevolution algorithm for robust tracking the moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system by its own experience. A number of such memory-based agents are randomly distributed nearby the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object by their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can locate the correct object after the appearance changed or the occlusion recovered and outperforms the traditional particle filter-based tracking methods. PMID:23843739
Tracking of multiple targets using online learning for reference model adaptation.
Pernkopf, Franz
2008-12-01
Recently, much work has been done on multiple object tracking on the one hand and on reference model adaptation for single-object trackers on the other. In this paper, we do both: tracking of multiple objects (faces of people) in a meeting scenario and online learning to incrementally update the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Unlike our approach, many methods assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our implemented tracker based on particle filters and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference model learning during tracking of indoor and outdoor scenes, which supports more robust tracking. To this end, we report the average standard deviation of the trajectories over numerous tracking runs as a function of the learning rate.
Bae, Seung-Hwan; Yoon, Kuk-Jin
2018-03-01
Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to that moment. It remains a difficult problem in complex scenes because of the large ambiguity in associating multiple objects across consecutive frames and the low discriminability between object appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since conventional appearance learning methods do not provide representations rich enough to distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning to improve appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-art batch and online tracking methods, and demonstrate the effectiveness and usefulness of the proposed methods for online multi-object tracking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Dongkyu, E-mail: akein@gist.ac.kr; Khalil, Hossam; Jo, Youngjoon
2016-06-28
An image-based tracking system using a laser scanning vibrometer is developed for vibration measurement of a rotating object. Unlike a conventional system, the proposed one can be used where a position or velocity sensor, such as an encoder, cannot be attached to the object. An image processing algorithm is introduced to detect a landmark and the laser beam based on their colors. Then, using a feedback control system, the laser beam can track the rotating object.
NASA Astrophysics Data System (ADS)
Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.
2013-09-01
We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.
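The following is a hedged sketch of the frame-registration idea described above, using a standard ORB-plus-RANSAC homography pipeline from OpenCV as a stand-in; the paper's iterative parameter-space estimator is not reproduced here, and the function and variable names are illustrative only.

```python
# Sketch: estimate the planar homography relating two frames, warp, and difference
# so that static clutter largely cancels before track-before-detect processing.
import cv2
import numpy as np

def register_frames(prev_gray, curr_gray):
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:300]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # projective transformation
    h, w = curr_gray.shape
    aligned_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    residual = cv2.absdiff(curr_gray, aligned_prev)         # clutter-suppressed difference
    return H, residual
```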
Adaptive learning compressive tracking based on Markov location prediction
NASA Astrophysics Data System (ADS)
Zhou, Xingyu; Fu, Dongmei; Yang, Tao; Shi, Yanan
2017-03-01
Object tracking is an interdisciplinary research topic in image processing, pattern recognition, and computer vision, with theoretical and practical application value in video surveillance, virtual reality, and automatic navigation. Compressive tracking (CT) has many advantages, such as efficiency and accuracy. However, under object occlusion, abrupt motion and blur, similar objects, or scale changes, CT suffers from tracking drift. We propose Markov object location prediction to obtain the initial position of the object. CT is then used to locate the object accurately, and a classifier-parameter adaptive updating strategy is derived based on the confidence map. At the same time, scale features are extracted according to the object location, which handles object scale variations effectively. Experimental results show that the proposed algorithm has better tracking accuracy and robustness than current advanced algorithms and achieves real-time performance.
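A rough sketch of what a first-order Markov location predictor could look like (an assumed formulation, not the authors' code): inter-frame displacements are quantized into a small set of states, transition counts are accumulated online, and the most likely next displacement seeds the compressive tracker's search window.

```python
# Sketch: online first-order Markov prediction over quantized displacement states.
import numpy as np

class MarkovMotionPredictor:
    def __init__(self, step=8, grid=3):
        self.step, self.grid = step, grid
        self.n = grid * grid                               # displacement states on a 3x3 grid
        self.counts = np.ones((self.n, self.n))            # Laplace-smoothed transition counts
        self.prev_state = self.n // 2                      # start from "no motion"

    def _state(self, dx, dy):
        qx = int(np.clip(round(dx / self.step), -1, 1)) + 1
        qy = int(np.clip(round(dy / self.step), -1, 1)) + 1
        return qy * self.grid + qx

    def update(self, dx, dy):
        s = self._state(dx, dy)
        self.counts[self.prev_state, s] += 1               # learn the transition online
        self.prev_state = s

    def predict(self):
        s = int(np.argmax(self.counts[self.prev_state]))   # most likely next displacement
        qy, qx = divmod(s, self.grid)
        return (qx - 1) * self.step, (qy - 1) * self.step
```

In such a scheme, predict() would be called before running CT on a new frame to place the search window, and update() after the tracker's refined location is known.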
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2011-02-01
The ability to attend to multiple objects that move in the visual field is important for many aspects of daily functioning. The attentional capacity for such dynamic tracking, however, is highly limited and undergoes age-related decline. Several aspects of the tracking process can influence performance. Here, we investigated effects of feature-based interference from distractor objects that appear in unattended regions of the visual field with a hemifield-tracking task. Younger and older participants performed an attentional tracking task in one hemifield while distractor objects were concurrently presented in the unattended hemifield. Feature similarity between objects in the attended and unattended hemifields as well as motion speed and the number of to-be-tracked objects were parametrically manipulated. The results show that increasing feature overlap leads to greater interference from the unattended visual field. This effect of feature-based interference was only present in the slow speed condition, indicating that the interference is mainly modulated by perceptual demands. High-performing older adults showed a similar interference effect as younger adults, whereas low-performing older adults showed poor tracking performance overall.
A coarse-to-fine kernel matching approach for mean-shift based visual tracking
NASA Astrophysics Data System (ADS)
Liangfu, L.; Zuren, F.; Weidong, C.; Ming, J.
2009-03-01
Mean shift is an efficient pattern-matching algorithm. It is widely used in visual tracking because it does not need to perform an exhaustive search over the image space. It employs a gradient optimization method to reduce feature-matching time and achieve rapid object localization, and uses the Bhattacharyya coefficient as the similarity measure between the object template and the candidate template. This paper presents a mean shift algorithm based on a coarse-to-fine search for the best kernel match, targeting object tracking with large inter-frame motion. If the object regions in two consecutive frames are far apart and do not overlap in image space, the traditional mean shift method can only reach a local optimum by iterating within the old object window, so the true object position cannot be recovered and tracking fails. The proposed algorithm first uses a similarity measure function to obtain a rough location of the moving object, and then applies mean shift to refine it iteratively to the local optimum, which successfully realizes tracking of objects with large motion. Experimental results show good performance in accuracy and speed when compared with the background-weighted histogram algorithm in the literature.
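For illustration, a minimal sketch (an assumed implementation, not from the paper) of the two ingredients named above: the Bhattacharyya coefficient between target and candidate hue histograms, and OpenCV's built-in mean-shift iteration used for the fine localization step. A coarse stage would evaluate bhattacharyya() over widely spaced candidate windows before calling mean_shift_step() on the best one.

```python
# Sketch: hue-histogram similarity + mean-shift refinement with OpenCV.
import cv2
import numpy as np

def hue_histogram(bgr, window):
    x, y, w, h = window
    hsv = cv2.cvtColor(bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    return cv2.calcHist([hsv], [0], None, [32], [0, 180])

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two (unnormalized) histograms."""
    p = p / (p.sum() + 1e-12)
    q = q / (q.sum() + 1e-12)
    return float(np.sum(np.sqrt(p * q)))

def mean_shift_step(bgr, target_hist, window):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    scaled = cv2.normalize(target_hist, None, 0, 255, cv2.NORM_MINMAX)  # for back-projection
    backproj = cv2.calcBackProject([hsv], [0], scaled, [0, 180], scale=1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, new_window = cv2.meanShift(backproj, window, criteria)
    return new_window
```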
Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach
Tian, Yuan; Guan, Tao; Wang, Cheng
2010-01-01
To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278
Connection-based and object-based grouping in multiple-object tracking: A developmental study.
Van der Hallen, Ruth; Reusens, Julie; Evers, Kris; de-Wit, Lee; Wagemans, Johan
2018-03-30
Developmental research on Gestalt laws has previously revealed that, even as young as infancy, we are bound to group visual elements into unitary structures in accordance with a variety of organizational principles. Here, we focus on the developmental trajectory of both connection-based and object-based grouping, and investigate their impact on object formation in participants, aged 9-21 years old (N = 113), using a multiple-object tracking paradigm. Results reveal a main effect of both age and grouping type, indicating that 9- to 21-year-olds are sensitive to both connection-based and object-based grouping interference, and tracking ability increases with age. In addition to its importance for typical development, these results provide an informative baseline to understand clinical aberrations in this regard. Statement of contribution: What is already known on this subject? The origin of the Gestalt principles is still an ongoing debate: Are they innate, learned over time, or both? Developmental research has revealed how each Gestalt principle has its own trajectory and unique relationship to visual experience. Both connectedness and object-based grouping play an important role in object formation during childhood. What does this study add? The study identifies how sensitivity to connectedness and object-based grouping evolves in individuals, aged 9-21 years old. Using multiple-object tracking, results reveal that the ability to track multiple objects increases with age. These results provide an informative baseline to understand clinical aberrations in different types of grouping. © 2018 The Authors. British Journal of Developmental Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.
Visual tracking using objectness-bounding box regression and correlation filters
NASA Astrophysics Data System (ADS)
Mbelwa, Jimmy T.; Zhao, Qingjie; Lu, Yao; Wang, Fasheng; Mbise, Mercy
2018-03-01
Visual tracking is a fundamental problem in computer vision with extensive application domains in surveillance and intelligent systems. Recently, correlation filter-based tracking methods have shown great achievements in terms of robustness, accuracy, and speed. However, such methods have difficulty dealing with fast motion (FM), motion blur (MB), illumination variation (IV), and drifting caused by occlusion (OCC). To solve this problem, a tracking method that integrates an objectness-bounding box regression (O-BBR) model and a scheme based on the kernelized correlation filter (KCF) is proposed. The KCF-based scheme is used to improve the tracking performance under FM and MB. To handle the drift problem caused by OCC and IV, we propose objectness proposals trained with bounding box regression as prior knowledge to provide candidates and background suppression. Finally, the KCF scheme as a base tracker and O-BBR are fused to obtain the state of the target object. Extensive experimental comparisons of the developed tracking method with other state-of-the-art trackers are performed on several challenging video sequences. The comparison results show that our proposed tracking method outperforms other state-of-the-art tracking methods in terms of effectiveness, accuracy, and robustness.
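The sketch below illustrates the kernelized correlation filter component referenced above in its most basic single-channel form (Gaussian kernel, closed-form ridge regression over circular shifts). It is an assumed, simplified implementation, not the authors' O-BBR-fused tracker, and all parameter values are placeholders.

```python
# Sketch: minimal single-channel KCF training and detection in the Fourier domain.
import numpy as np

def gaussian_correlation(x, z, sigma=0.5):
    """Gaussian kernel correlation of all circular shifts of x with z (same-size patches)."""
    xf, zf = np.fft.fft2(x), np.fft.fft2(z)
    cross = np.real(np.fft.ifft2(np.conj(xf) * zf))
    d2 = np.maximum(np.sum(x**2) + np.sum(z**2) - 2.0 * cross, 0) / x.size
    return np.exp(-d2 / (sigma**2))

def kcf_train(x, sigma=0.5, lam=1e-4, output_sigma=2.0):
    h, w = x.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    y = np.exp(-((ys - cy)**2 + (xs - cx)**2) / (2 * output_sigma**2))
    y = np.roll(np.roll(y, -cy, axis=0), -cx, axis=1)       # desired Gaussian response, peak at (0, 0)
    k = gaussian_correlation(x, x, sigma)
    alpha_f = np.fft.fft2(y) / (np.fft.fft2(k) + lam)        # ridge regression in the Fourier domain
    return alpha_f

def kcf_detect(alpha_f, x_template, z_patch, sigma=0.5):
    k = gaussian_correlation(x_template, z_patch, sigma)
    response = np.real(np.fft.ifft2(alpha_f * np.fft.fft2(k)))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return response, (dy, dx)                                # peak location gives the translation
```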
Multi-Object Tracking with Correlation Filter for Autonomous Vehicle.
Zhao, Dawei; Fu, Hao; Xiao, Liang; Wu, Tao; Dai, Bin
2018-06-22
Multi-object tracking is a crucial problem for autonomous vehicles. Most state-of-the-art approaches adopt the tracking-by-detection strategy, which is a two-step procedure consisting of the detection module and the tracking module. In this paper, we improve both steps. We improve the detection module by incorporating the temporal information, which is beneficial for detecting small objects. For the tracking module, we propose a novel compressed deep Convolutional Neural Network (CNN) feature based Correlation Filter tracker. By carefully integrating these two modules, the proposed multi-object tracking approach has the ability of re-identification (ReID) once the tracked object gets lost. Extensive experiments were performed on the KITTI and MOT2015 tracking benchmarks. Results indicate that our approach outperforms most state-of-the-art tracking approaches.
Hu, Weiming; Gao, Jin; Xing, Junliang; Zhang, Chao; Maybank, Stephen
2017-01-01
An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a two-order tensor which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices which are used to map the original tensor samples to the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning-based semi-supervised strategy to iteratively adjust the embedding space into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.
Compressed multi-block local binary pattern for object tracking
NASA Astrophysics Data System (ADS)
Li, Tianwen; Gao, Yun; Zhao, Lei; Zhou, Hao
2018-04-01
Both robustness and real-time performance are very important for object tracking in real environments. Trackers based on deep learning have difficulty meeting real-time requirements, whereas compressive sensing provides technical support for real-time tracking. In this paper, an object is tracked via a multi-block local binary pattern feature: the extracted feature vector is compressed via a sparse random Gaussian matrix used as the measurement matrix. Experiments showed that the proposed tracker runs in real time and outperforms existing compressive trackers based on Haar-like features on many challenging video sequences in terms of accuracy and robustness.
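A hedged illustration of the compressive-sensing step described above: a fixed sparse random Gaussian matrix projects a high-dimensional block-feature vector down to a short measurement vector. The feature vector here is a random stand-in for a multi-block LBP histogram, and the dimensions and density are assumptions.

```python
# Sketch: sparse random Gaussian measurement matrix for compressive feature projection.
import numpy as np

def sparse_gaussian_matrix(n_measurements, n_features, density=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random((n_measurements, n_features)) < density      # keep ~10% of entries
    gauss = rng.standard_normal((n_measurements, n_features))
    return np.where(mask, gauss, 0.0) / np.sqrt(density * n_features)

phi = sparse_gaussian_matrix(n_measurements=50, n_features=4096)
feature_vector = np.random.rand(4096)        # stand-in for a multi-block LBP feature vector
compressed = phi @ feature_vector            # 50-dimensional compressed representation
print(compressed.shape)
```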
A visual tracking method based on deep learning without online model updating
NASA Astrophysics Data System (ADS)
Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei
2018-02-01
The paper proposes a visual tracking method based on deep learning without online model updating. Considering the advantages of deep learning in feature representation, the deep model SSD (Single Shot MultiBox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and the HOG (Histogram of Oriented Gradients) feature are combined to select the tracking object. During tracking, a multi-scale object searching map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging tracking factors such as deformation, scale variation, rotation, illumination variation, and background clutter; moreover, its overall performance is better than that of the other six tracking methods.
Qin, Lei; Snoussi, Hichem; Abdallah, Fahed
2014-01-01
We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences. PMID:24865883
Lagrangian 3D tracking of fluorescent microscopic objects in motion
NASA Astrophysics Data System (ADS)
Darnige, T.; Figueroa-Morales, N.; Bohec, P.; Lindner, A.; Clément, E.
2017-05-01
We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtaining time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing that determines the displacement of an x, y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on refocusing the fluorescent object, which determines the displacement of a piezo mover keeping the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of Hertz. This device is particularly well adapted to obtaining trajectories of motile micro-organisms in microfluidic devices with or without flow.
Structure preserving clustering-object tracking via subgroup motion pattern segmentation
NASA Astrophysics Data System (ADS)
Fan, Zheyi; Zhu, Yixuan; Jiang, Jiao; Weng, Shuqin; Liu, Zhiwen
2018-01-01
Simultaneously tracking clustering objects with similar appearances in collective scenes is a challenging task in the field of collective motion analysis. Recent work on clustering-object tracking often suffers from poor tracking accuracy and poor real-time performance because the motion differences among objects are neglected or misjudged. To address this problem, we propose a subgroup motion pattern segmentation framework based on a multilayer clustering structure and establish spatial constraints only among objects in the same subgroup, i.e., objects with consistent motion direction and close spatial position. In addition, the subgroup segmentation results are updated dynamically because crowd motion patterns change and are affected by objects' destinations and scene structures. The spatial structure information, combined with appearance similarity information, is used in the structure-preserving object tracking framework to track objects. Extensive experiments conducted on several datasets containing multiple real-world crowd scenes validate the accuracy and robustness of the presented algorithm for tracking objects in collective scenes.
Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding
Li, Xin; Guo, Rui; Chen, Chao
2014-01-01
Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we explore a sparse coding-based approach toward joint object tracking-and-recognition and its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches to tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamic updating of the template/dictionary and combining multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach. PMID:24961216
Nonstationary EO/IR Clutter Suppression and Dim Object Tracking
NASA Astrophysics Data System (ADS)
Tartakovsky, A.; Brown, A.; Brown, J.
2010-09-01
We develop and evaluate the performance of advanced algorithms which provide significantly improved capabilities for automated detection and tracking of ballistic and flying dim objects in the presence of highly structured intense clutter. Applications include ballistic missile early warning, midcourse tracking, trajectory prediction, and resident space object detection and tracking. The set of algorithms include, in particular, adaptive spatiotemporal clutter estimation-suppression and nonlinear filtering-based multiple-object track-before-detect. These algorithms are suitable for integration into geostationary, highly elliptical, or low earth orbit scanning or staring sensor suites, and are based on data-driven processing that adapts to real-world clutter backgrounds, including celestial, earth limb, or terrestrial clutter. In many scenarios of interest, e.g., for highly elliptic and, especially, low earth orbits, the resulting clutter is highly nonstationary, providing a significant challenge for clutter suppression to or below sensor noise levels, which is essential for dim object detection and tracking. We demonstrate the success of the developed algorithms using semi-synthetic and real data. In particular, our algorithms are shown to be capable of detecting and tracking point objects with signal-to-clutter levels down to 1/1000 and signal-to-noise levels down to 1/4.
Tracking target objects orbiting earth using satellite-based telescopes
De Vries, Willem H; Olivier, Scot S; Pertica, Alexander J
2014-10-14
A system for tracking objects that are in earth orbit via a constellation or network of satellites having imaging devices is provided. An object tracking system includes a ground controller and, for each satellite in the constellation, an onboard controller. The ground controller receives ephemeris information for a target object and directs that ephemeris information be transmitted to the satellites. Each onboard controller receives ephemeris information for a target object, collects images of the target object based on the expected location of the target object at an expected time, identifies actual locations of the target object from the collected images, and identifies a next expected location at a next expected time based on the identified actual locations of the target object. The onboard controller processes the collected image to identify the actual location of the target object and transmits the actual location information to the ground controller.
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in video is a method used to detect and analyze changes that occur in an object being observed. High visual quality and precise localization of the tracked target are desired in modern tracking systems. In practice, the tracked object does not always appear clearly, which makes the tracking result less precise; reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step applies super-resolution imaging to the frame sequence, either to cropped regions of selected frames or to all frames. The second step tracks the object in the super-resolved images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution images; here, a single-frame super-resolution technique is used because of its fast computation time. The method used for tracking is Camshift, whose advantages are simple computation based on the HSV color histogram and tolerance to variations in object color. The computational complexity and memory requirements of combining super-resolution and tracking are kept low, and the precision of the tracked target is good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely under various backgrounds, shape changes of the object, and good lighting conditions.
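As a sketch of the tracking stage described above (not the authors' code), the snippet below shows a standard OpenCV CamShift step driven by HSV hue back-projection; the super-resolution step is assumed to have been applied to each frame beforehand, and the window and bin parameters are illustrative.

```python
# Sketch: CamShift tracking step with hue back-projection (OpenCV).
import cv2
import numpy as np

def init_target_hist(frame_bgr, window):
    x, y, w, h = window
    roi = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0], None, [16], [0, 180])
    return cv2.normalize(hist, None, 0, 255, cv2.NORM_MINMAX)

def camshift_step(frame_bgr, target_hist, window):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], target_hist, [0, 180], scale=1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    rotated_box, window = cv2.CamShift(backproj, window, criteria)
    return rotated_box, window      # rotated box adapts to scale/orientation changes
```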
Multi-object detection and tracking technology based on hexagonal opto-electronic detector
NASA Astrophysics Data System (ADS)
Song, Yong; Hao, Qun; Li, Xiang
2008-02-01
A novel multi-object detection and tracking technology based on a hexagonal opto-electronic detector is proposed, in which (1) a new hexagonal detector composed of six linear CCDs is developed to achieve a 360-degree field of view, and (2) to achieve high-speed detection and tracking of multiple objects, two object recognition criteria are proposed: the Object Signal Width Criterion (OSWC) and the Horizontal Scale Ratio Criterion (HSRC). Simulated experiments have been carried out to verify the validity of the proposed technology. They show that multi-object detection and tracking can be achieved at high speed using the proposed hexagonal detector together with the OSWC and HSRC criteria, indicating that the technology offers significant advantages in photo-electric detection, computer vision, virtual reality, augmented reality, etc.
Computer-aided target tracking in motion analysis studies
NASA Astrophysics Data System (ADS)
Burdick, Dominic C.; Marcuse, M. L.; Mislan, J. D.
1990-08-01
Motion analysis studies require the precise tracking of reference objects in sequential scenes. In a typical situation, events of interest are captured at high frame rates using special cameras, and selected objects or targets are tracked on a frame by frame basis to provide necessary data for motion reconstruction. Tracking is usually done using manual methods which are slow and prone to error. A computer based image analysis system has been developed that performs tracking automatically. The objective of this work was to eliminate the bottleneck due to manual methods in high volume tracking applications such as the analysis of crash test films for the automotive industry. The system has proven to be successful in tracking standard fiducial targets and other objects in crash test scenes. Over 95 percent of target positions which could be located using manual methods can be tracked by the system, with a significant improvement in throughput over manual methods. Future work will focus on the tracking of clusters of targets and on tracking deformable objects such as airbags.
Single and Multiple Object Tracking Using a Multi-Feature Joint Sparse Representation.
Hu, Weiming; Li, Wei; Zhang, Xiaoqin; Maybank, Stephen
2015-04-01
In this paper, we propose a tracking algorithm based on a multi-feature joint sparse representation. The templates for the sparse representation can include pixel values, textures, and edges. In the multi-feature joint optimization, noise or occlusion is dealt with using a set of trivial templates. A sparse weight constraint is introduced to dynamically select the relevant templates from the full set of templates. A variance ratio measure is adopted to adaptively adjust the weights of different features. The multi-feature template set is updated adaptively. We further propose an algorithm for tracking multi-objects with occlusion handling based on the multi-feature joint sparse reconstruction. The observation model based on sparse reconstruction automatically focuses on the visible parts of an occluded object by using the information in the trivial templates. The multi-object tracking is simplified into a joint Bayesian inference. The experimental results show the superiority of our algorithm over several state-of-the-art tracking algorithms.
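The following is an assumed, simplified sketch of the sparse-representation idea with trivial templates mentioned above: a candidate patch is reconstructed from object templates plus identity columns that absorb occlusion and noise, using an l1-regularized solver. The multi-feature joint optimization and adaptive feature weighting of the paper are not modelled, and the names and parameters are illustrative.

```python
# Sketch: l1-regularized reconstruction with object templates and trivial templates.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_score(candidate, templates, alpha=0.01):
    """candidate: (d,) vector; templates: (d, k) matrix of object templates."""
    d = candidate.shape[0]
    dictionary = np.hstack([templates, np.eye(d)])        # object + trivial (identity) templates
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(dictionary, candidate)
    coef = lasso.coef_
    obj_part = templates @ coef[:templates.shape[1]]      # reconstruction from object templates only
    residual = np.linalg.norm(candidate - obj_part)
    return np.exp(-residual**2), coef                     # higher score = better candidate

templates = np.random.rand(256, 10)                       # toy: 10 object templates
candidate = templates[:, 0] + 0.05 * np.random.rand(256)  # toy: slightly perturbed candidate
score, coef = sparse_score(candidate, templates)
print(round(score, 3))
```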
Long-term scale adaptive tracking with kernel correlation filters
NASA Astrophysics Data System (ADS)
Wang, Yueren; Zhang, Hong; Zhang, Lei; Yang, Yifan; Sun, Mingui
2018-04-01
Object tracking in video sequences has broad applications in both military and civilian domains. However, as the length of input video sequence increases, a number of problems arise, such as severe object occlusion, object appearance variation, and object out-of-view (some portion or the entire object leaves the image space). To deal with these problems and identify the object being tracked from cluttered background, we present a robust appearance model using Speeded Up Robust Features (SURF) and advanced integrated features consisting of the Felzenszwalb's Histogram of Oriented Gradients (FHOG) and color attributes. Since re-detection is essential in long-term tracking, we develop an effective object re-detection strategy based on moving area detection. We employ the popular kernel correlation filters in our algorithm design, which facilitates high-speed object tracking. Our evaluation using the CVPR2013 Object Tracking Benchmark (OTB2013) dataset illustrates that the proposed algorithm outperforms reference state-of-the-art trackers in various challenging scenarios.
NASA Technical Reports Server (NTRS)
Lewis, Steven J.; Palacios, David M.
2013-01-01
This software can track multiple moving objects within a video stream simultaneously, use visual features to aid in the tracking, and initiate tracks based on object detection in a subregion. A simple programmatic interface allows plugging into larger image chain modeling suites. It extracts unique visual features for aid in tracking and later analysis, and includes sub-functionality for extracting visual features about an object identified within an image frame. Tracker Toolkit utilizes a feature extraction algorithm to tag each object with metadata features about its size, shape, color, and movement. Its functionality is independent of the scale of objects within a scene. The only assumption made on the tracked objects is that they move. There are no constraints on size within the scene, shape, or type of movement. The Tracker Toolkit is also capable of following an arbitrary number of objects in the same scene, identifying and propagating the track of each object from frame to frame. Target objects may be specified for tracking beforehand, or may be dynamically discovered within a tripwire region. Initialization of the Tracker Toolkit algorithm includes two steps: Initializing the data structures for tracked target objects, including targets preselected for tracking; and initializing the tripwire region. If no tripwire region is desired, this step is skipped. The tripwire region is an area within the frames that is always checked for new objects, and all new objects discovered within the region will be tracked until lost (by leaving the frame, stopping, or blending in to the background).
A Deep-Structured Conditional Random Field Model for Object Silhouette Tracking
Shafiee, Mohammad Javad; Azimifar, Zohreh; Wong, Alexander
2015-01-01
In this work, we introduce a deep-structured conditional random field (DS-CRF) model for the purpose of state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined based on inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that can change greatly over time, as well as under different situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios such as occlusion and multiple targets showed that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering. PMID:26313943
Feature point based 3D tracking of multiple fish from multi-view images.
Qian, Zhi-Ming; Chen, Yan Qiu
2017-01-01
A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly.
Real-time visual tracking of less textured three-dimensional objects on mobile platforms
NASA Astrophysics Data System (ADS)
Seo, Byung-Kuk; Park, Jungsik; Park, Hanhoon; Park, Jong-Il
2012-12-01
Natural feature-based approaches are still challenging for mobile applications (e.g., mobile augmented reality), because they are feasible only in limited environments such as highly textured and planar scenes/objects, and they need powerful mobile hardware for fast and reliable tracking. In many cases where conventional approaches are not effective, three-dimensional (3-D) knowledge of target scenes would be beneficial. We present a well-established framework for real-time visual tracking of less textured 3-D objects on mobile platforms. Our framework is based on model-based tracking that efficiently exploits partially known 3-D scene knowledge such as object models and a background's distinctive geometric or photometric knowledge. Moreover, we elaborate on implementation in order to make it suitable for real-time vision processing on mobile hardware. The performance of the framework is tested and evaluated on recent commercially available smartphones, and its feasibility is shown by real-time demonstrations.
Object Tracking and Target Reacquisition Based on 3-D Range Data for Moving Vehicles
Lee, Jehoon; Lankton, Shawn; Tannenbaum, Allen
2013-01-01
In this paper, we propose an approach for tracking an object of interest based on 3-D range data. We employ particle filtering and active contours to simultaneously estimate the global motion of the object and its local deformations. The proposed algorithm takes advantage of range information to deal with the challenging (but common) situation in which the tracked object disappears from the image domain entirely and reappears later. To cope with this problem, a method based on principal component analysis (PCA) of shape information is proposed. In the proposed method, if the target disappears out of frame, shape similarity energy is used to detect target candidates that match a template shape learned online from previously observed frames. Thus, we require no a priori knowledge of the target's shape. Experimental results show the practical applicability and robustness of the proposed algorithm in realistic tracking scenarios. PMID:21486717
Qin, Junping; Sun, Shiwen; Deng, Qingxu; Liu, Limin; Tian, Yonghong
2017-06-02
Object tracking and detection is one of the most significant research areas for wireless sensor networks. Existing indoor trajectory tracking schemes in wireless sensor networks are based on continuous localization and moving-object data mining. Indoor trajectory tracking based on the received signal strength indicator (RSSI) has received increased attention because it has low cost and requires no special infrastructure. However, RSSI tracking introduces uncertainty because of the inaccuracies of measurement instruments and the irregularities (instability, multipath, diffraction) of wireless signal transmission in indoor environments. Heuristic information provides key constraints for the trajectory tracking procedure. This paper proposes a novel trajectory tracking scheme based on Delaunay triangulation and heuristic information (TTDH). In this scheme, the entire field is divided into a series of triangular regions, and the common side of adjacent triangular regions is regarded as a regional boundary. Our scheme detects heuristic information related to a moving object's trajectory, including boundaries and triangular regions. The trajectory is then formed by means of a dynamic time-warping position-fingerprint-matching algorithm with heuristic information constraints. Field experiments show that the average error distance of our scheme is less than 1.5 m and that error does not accumulate across regions.
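A hedged sketch of the dynamic time warping used for position-fingerprint matching (the triangular-region constraints from the scheme are not modelled here, and the RSSI data are toy stand-ins):

```python
# Sketch: DTW distance between an observed RSSI sequence and a stored fingerprint sequence.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """seq_a: (n, d), seq_b: (m, d) RSSI fingerprint vectors; returns accumulated DTW cost."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

observed = np.random.rand(20, 4) * -60        # toy: 20 RSSI samples from 4 anchor nodes
fingerprint = np.random.rand(25, 4) * -60     # toy: stored fingerprint sequence
print(dtw_distance(observed, fingerprint))
```

The fingerprint sequence with the lowest DTW cost (subject to the boundary and region constraints described in the abstract) would be taken as the matched trajectory segment.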
Hardware accelerator design for tracking in smart camera
NASA Astrophysics Data System (ADS)
Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil
2011-10-01
Smart cameras are important components in video analysis. For video analysis, a smart camera needs to detect interesting moving objects, track such objects from frame to frame, and perform analysis of the object tracks in real time. Therefore, real-time tracking is prominent in smart cameras. A software implementation of the tracking algorithm on a general-purpose processor (such as a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-approach-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution grayscale video.
Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking
Qu, Shiru
2016-01-01
Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the framework of sparse representation always overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coding method, which takes the spatial neighborhood information of the image patch and the computation burden into consideration, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness. PMID:27630710
Enhanced online convolutional neural networks for object tracking
NASA Astrophysics Data System (ADS)
Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen
2018-04-01
In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and update of the convolution filters directly affect the precision of object tracking. In this paper, a novel object tracking method via an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters with a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker performs better than the other trackers in terms of AUC and precision.
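A rough sketch (assumed, not the authors' code) of initializing convolution filters from image patches with k-means++ via scikit-learn rather than randomly, as described above; the error back-propagation update is omitted and all sizes are placeholders.

```python
# Sketch: k-means++ initialization of convolution filters from sampled image patches.
import numpy as np
from sklearn.cluster import KMeans

def init_filters_from_patches(image, num_filters=16, patch=5, samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ys = rng.integers(0, h - patch, samples)
    xs = rng.integers(0, w - patch, samples)
    patches = np.stack([image[y:y + patch, x:x + patch].ravel() for y, x in zip(ys, xs)])
    patches -= patches.mean(axis=1, keepdims=True)           # zero-mean each patch
    km = KMeans(n_clusters=num_filters, init="k-means++", n_init=5, random_state=seed)
    km.fit(patches)
    return km.cluster_centers_.reshape(num_filters, patch, patch)

filters = init_filters_from_patches(np.random.rand(128, 128))
print(filters.shape)   # (16, 5, 5) convolution kernels
```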
Fast object reconstruction in block-based compressive low-light-level imaging
NASA Astrophysics Data System (ADS)
Ke, Jun; Sui, Dong; Wei, Ping
2014-11-01
In this paper we propose a simple yet effective and efficient method for long-term object tracking. Different from traditional visual tracking methods, which mainly depend on frame-to-frame correspondence, we combine high-level semantic information with low-level correspondences. Our approach is formulated within a confidence selection framework, which allows the system to recover from drift and partly deal with the occlusion problem. In summary, our algorithm can be roughly decomposed into an initialization stage and a tracking stage. In the initialization stage, an offline classifier is trained to obtain object appearance information at the category level. When the video stream arrives, the pre-trained offline classifier is used to detect the potential target and initialize the tracking stage. The tracking stage consists of three parts: an online tracking part, an offline detection part, and a confidence judgment part. The online tracking part captures the specific target appearance, while the offline detection part localizes the object based on the pre-trained classifier. Since there is no data dependence between online tracking and offline detection, these two parts run in parallel to significantly improve the processing speed. A confidence selection mechanism is proposed to optimize the object location. Besides, we also propose a simple mechanism to judge the absence of the object: if the target is lost, the pre-trained offline classifier is utilized to re-initialize the whole algorithm once the target is re-located. In experiments, we evaluate our method on several challenging video sequences and demonstrate competitive results.
Tracking Algorithm of Multiple Pedestrians Based on Particle Filters in Video Sequences
Liu, Yun; Wang, Chuanxu; Zhang, Shujun; Cui, Xuehong
2016-01-01
Pedestrian tracking is a critical problem in the field of computer vision. Particle filters have been proven to be very useful for pedestrian tracking in nonlinear and non-Gaussian estimation problems. However, pedestrian tracking in complex environments still faces many problems due to changes of pedestrian posture and scale, moving backgrounds, mutual occlusion, and the appearance and disappearance of pedestrians. To surmount these difficulties, this paper presents a tracking algorithm for multiple pedestrians based on particle filters in video sequences. The algorithm acquires confidence values of the object and the background by extracting prior knowledge, thus achieving multi-pedestrian detection; it incorporates color and texture features into the particle filter to obtain better observations and then automatically adjusts the weight of each feature according to the current tracking environment. During tracking, the algorithm handles severe occlusion to prevent the drift and loss caused by object occlusion, and it associates detection results with particle states to discriminate object disappearance and emergence, thus achieving robust tracking of multiple pedestrians. Experimental verification and analysis on video sequences demonstrate that the proposed algorithm improves tracking performance and yields better tracking results. PMID:27847514
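A hedged sketch of how color and texture likelihoods could be fused with weights adapted to the current tracking environment; the specific weighting rule (based on each feature's recent discriminability) and the learning rate are assumptions for illustration, not the paper's equations.

```python
import numpy as np

def fuse_feature_weights(color_scores, texture_scores, prev_w=(0.5, 0.5), lr=0.2):
    """Adapt per-feature weights from how well each feature separates the
    best particle from the average particle, then fuse particle scores."""
    def discriminability(s):
        s = np.asarray(s, dtype=float)
        return max(s.max() - s.mean(), 1e-6)

    d = np.array([discriminability(color_scores), discriminability(texture_scores)])
    new_w = (1 - lr) * np.asarray(prev_w) + lr * d / d.sum()   # smooth update
    fused = new_w[0] * np.asarray(color_scores) + new_w[1] * np.asarray(texture_scores)
    return fused, tuple(new_w)
```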
Correlation and 3D-tracking of objects by pointing sensors
Griesmeyer, J. Michael
2017-04-04
A method and system for tracking at least one object using a plurality of pointing sensors and a tracking system are disclosed herein. In a general embodiment, the tracking system is configured to receive a series of observation data relative to the at least one object over a time base for each of the plurality of pointing sensors. The observation data may include sensor position data, pointing vector data and observation error data. The tracking system may further determine a triangulation point using a magnitude of a shortest line connecting a line of sight value from each of the series of observation data from each of the plurality of sensors to the at least one object, and perform correlation processing on the observation data and triangulation point to determine if at least two of the plurality of sensors are tracking the same object. Observation data may also be branched, associated and pruned using new incoming observation data.
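To make the triangulation step concrete, the sketch below computes one common form of triangulation point for two pointing sensors: the midpoint of the shortest segment connecting the two line-of-sight rays, together with that segment's length (a miss-distance usable for correlation). The formulas are standard geometry, not taken from the patent, and the function name is illustrative.

```python
import numpy as np

def triangulate_two_sights(p1, d1, p2, d2):
    """Given sensor positions p1, p2 and pointing (direction) vectors d1, d2,
    return the midpoint of the shortest segment between the two lines of sight
    and the length of that segment."""
    p1, d1, p2, d2 = map(lambda v: np.asarray(v, dtype=float), (p1, d1, p2, d2))
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:               # nearly parallel lines of sight
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    q1, q2 = p1 + s * d1, p2 + t * d2    # closest points on each line
    return (q1 + q2) / 2.0, np.linalg.norm(q1 - q2)
```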
Multiple objects tracking with HOGs matching in circular windows
NASA Astrophysics Data System (ADS)
Miramontes-Jaramillo, Daniel; Kober, Vitaly; Díaz-Ramírez, Víctor H.
2014-09-01
In recent years, tracking applications have become very important with the development of new technologies such as smart TVs, Kinect, Google Glass and Oculus Rift. When tracking relies on a matching algorithm, a good prediction algorithm is required to reduce the search area for each tracked object as well as the processing time. In this work, we analyze the performance of different tracking algorithms based on prediction and matching for real-time tracking of multiple objects. The matching algorithm utilizes histograms of oriented gradients, carries out matching in circular windows, and possesses rotation invariance and tolerance to viewpoint and scale changes. The proposed algorithm is implemented on a personal computer with a GPU, and its performance is analyzed in terms of processing time in real scenarios. Such an implementation takes advantage of current technologies and helps process video sequences in real time while tracking several objects at the same time.
Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning
NASA Astrophysics Data System (ADS)
Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.
2018-04-01
At present, intelligent video analysis technology has been widely used in various fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of images still has some unavoidable problems. Tracking in pixel coordinates cannot reflect the real position information of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method for tracking targets in a spatial coordinate system after converting the targets' 2-D image coordinates into 3-D coordinates. The experimental results show that our method can recover the real position changes of targets well and can also accurately obtain the trajectory of the target in space.
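A minimal sketch of the 2-D to 3-D conversion described above: with intrinsics K and extrinsics (R, t) from a Zhang-style calibration, a pixel can be back-projected onto an assumed ground plane (world Z = 0). The ground-plane assumption and the variable names are mine, not the paper's.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Back-project pixel (u, v) onto the world plane Z = 0.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    K, R, t = map(lambda m: np.asarray(m, dtype=float), (K, R, t))
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in camera frame
    ray_world = R.T @ ray_cam                            # rotate ray into world frame
    cam_center = -R.T @ t.reshape(3)                     # camera center in world frame
    s = -cam_center[2] / ray_world[2]                    # intersect with Z = 0
    return cam_center + s * ray_world                    # 3-D point on the ground
```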
Studying visual attention using the multiple object tracking paradigm: A tutorial review.
Meyerhoff, Hauke S; Papenmeier, Frank; Huff, Markus
2017-07-01
Human observers are capable of tracking multiple objects among identical distractors based only on their spatiotemporal information. Since the first report of this ability in the seminal work of Pylyshyn and Storm (1988, Spatial Vision, 3, 179-197), multiple object tracking has attracted many researchers. A reason for this is that the attentional processes studied with the multiple object tracking paradigm are commonly argued to match the attentional processing during real-world tasks such as driving or team sports. We argue that multiple object tracking provides a good means to study the broader topic of continuous and dynamic visual attention. Indeed, several (partially contradicting) theories of attentive tracking have been proposed in the almost 30 years since its first report, and a large body of research has been conducted to test these theories. Given the richness and diversity of this literature, the aim of this tutorial review is to provide researchers who are new to the field of multiple object tracking with an overview of the multiple object tracking paradigm, its basic manipulations, and its links to other paradigms investigating visual attention and working memory. Further, we review current theories of tracking as well as their empirical evidence. Finally, we review the state of the art in the most prominent research fields of multiple object tracking and how this research has helped to understand visual attention in dynamic settings.
NASA Astrophysics Data System (ADS)
Torteeka, Peerapong; Gao, Peng-Qi; Shen, Ming; Guo, Xiao-Zhang; Yang, Da-Tao; Yu, Huan-Huan; Zhou, Wei-Ping; Zhao, You
2017-02-01
Although tracking with a passive optical telescope is a powerful technique for space debris observation, it is limited by its sensitivity to dynamic background noise. Traditionally, in the field of astronomy, static background subtraction based on a median image technique has been used to extract moving space objects prior to the tracking operation, as this is computationally efficient. The main disadvantage of this technique is that it is not robust to variable illumination conditions. In this article, we propose an approach for tracking small and dim space debris in the context of a dynamic background via one of the optical telescopes that is part of the space surveillance network project, named the Asia-Pacific ground-based Optical Space Observation System or APOSOS. The approach combines a fuzzy running Gaussian average for robust moving-object extraction with dim-target tracking using a particle-filter-based track-before-detect method. The performance of the proposed algorithm is experimentally evaluated, and the results show that the scheme achieves a satisfactory level of accuracy for space debris tracking.
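The paper uses a fuzzy running Gaussian average; as a hedged illustration, the sketch below shows the plain (non-fuzzy) running Gaussian average it builds on, with per-pixel mean and variance updated recursively and pixels flagged as foreground when they deviate by more than k standard deviations. The parameter values are illustrative assumptions.

```python
import numpy as np

class RunningGaussianBackground:
    """Plain per-pixel running Gaussian average (non-fuzzy variant)."""

    def __init__(self, first_frame, alpha=0.02, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 15.0 ** 2)  # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        diff = frame - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var
        # recursive update of the background model
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * self.var + self.alpha * diff ** 2
        return foreground
```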
Shape and texture fused recognition of flying targets
NASA Astrophysics Data System (ADS)
Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás
2011-06-01
This paper presents visual detection and recognition of flying targets (e.g. planes, missiles) based on automatically extracted shape and object texture information, for application areas like alerting, recognition and tracking. Targets are extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done by comparison to shape- and texture-based query results on a previously gathered real-life object dataset. Application areas involve passive defense scenarios, including automatic object detection and tracking with cheap commodity hardware components (CPU, camera and GPS).
Design and implementation of a vision-based hovering and feature tracking algorithm for a quadrotor
NASA Astrophysics Data System (ADS)
Lee, Y. H.; Chahl, J. S.
2016-10-01
This paper demonstrates an approach to vision-based control of an unmanned quadrotor for hovering and object tracking. The algorithm uses the Speeded Up Robust Features (SURF) algorithm to detect objects. The pose of the object in the image is then calculated and passed to the flight controller. Finally, the flight controller steers the quadrotor to approach the object based on the calculated pose data. These processes were run using the standard onboard resources of the 3DR Solo quadrotor in an embedded computing environment. The results showed that the algorithm behaved well during its missions, tracking and hovering, although there were significant latencies due to the low CPU performance of the onboard image processing system.
NASA Astrophysics Data System (ADS)
Bouaynaya, N.; Schonfeld, Dan
2005-03-01
Many real-world applications in computer vision and multimedia, such as augmented reality and environmental imaging, require an elastic, accurate contour around a tracked object. In the first part of the paper we introduce a novel tracking algorithm that combines a motion estimation technique with the Bayesian importance sampling framework. We use Adaptive Block Matching (ABM) as the motion estimation technique and construct the proposal density from the estimated motion vector. The resulting algorithm requires a small number of particles for efficient tracking. The tracking is adaptive to different categories of motion even with poor a priori knowledge of the system dynamics; in particular, off-line learning is not needed. A parametric representation of the object is used for tracking purposes. In the second part of the paper, we refine the tracking output from a parametric sample to an elastic contour around the object. We use a 1D active contour model based on a dynamic programming scheme to refine the output of the tracker. To improve the convergence of the active contour, we perform the optimization over a set of randomly perturbed initial conditions. Our experiments are applied to head tracking. We report promising tracking results in complex environments.
Doublet Pulse Coherent Laser Radar for Tracking of Resident Space Objects
2014-09-01
based laser systems can be limited by the effects of tumbling; extremely accurate Doppler measurement is possible using a doublet coherent laser... Doublet pulse coherent laser radar for tracking of resident space objects: Narasimha S. Prasad, Van Rudd, Scott Shald, Stephan...
Khan, Zulfiqar Hasan; Gu, Irene Yu-Hua
2013-12-01
This paper proposes a novel Bayesian online learning and tracking scheme for video objects on Grassmann manifolds. Although manifold visual object tracking is promising, large and fast nonplanar (or out-of-plane) pose changes and long-term partial occlusions of deformable objects in video remain a challenge that limits the tracking performance. The proposed method tackles these problems with the main novelties on: 1) online estimation of object appearances on Grassmann manifolds; 2) optimal criterion-based occlusion handling for online updating of object appearances; 3) a nonlinear dynamic model for both the appearance basis matrix and its velocity; and 4) Bayesian formulations, separately for the tracking process and the online learning process, that are realized by employing two particle filters: one is on the manifold for generating appearance particles and another on the linear space for generating affine box particles. Tracking and online updating are performed in an alternating fashion to mitigate the tracking drift. Experiments using the proposed tracker on videos captured by a single dynamic/static camera have shown robust tracking performance, particularly for scenarios when target objects contain significant nonplanar pose changes and long-term partial occlusions. Comparisons with eight existing state-of-the-art/most relevant manifold/nonmanifold trackers with evaluations have provided further support to the proposed scheme.
3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging
NASA Astrophysics Data System (ADS)
Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak
2017-10-01
Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security purposes, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video, as is conventionally done (e.g., background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. Object detection is performed by analyzing static 3D image data obtained through computational integral imaging. Compared with previous works that used integral imaging data in such a scenario, the proposed method performs 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.
Improved semi-supervised online boosting for object tracking
NASA Astrophysics Data System (ADS)
Li, Yicui; Qi, Lin; Tan, Shukun
2016-10-01
The advantage of an online semi-supervised boosting method, which treats object tracking as a classification problem, is that it trains a binary classifier from both labeled and unlabeled examples. Appropriate object features are selected based on real-time changes in the object. However, online semi-supervised boosting faces one key problem: traditional self-training, which uses the classification results to update the classifier itself, often leads to drifting or tracking failure due to the error accumulated during each update of the tracker. To overcome these disadvantages of semi-supervised online boosting-based tracking methods, the contribution of this paper is an improved online semi-supervised boosting method in which the learning process is guided by positive (P) and negative (N) constraints, termed P-N constraints, which restrict the labeling of the unlabeled samples. First, we train the classifier by online semi-supervised boosting. Then, this classifier is used to process the next frame. Finally, the classifier's output is analyzed by the P-N constraints, which verify whether the labels assigned to unlabeled data by the classifier are in line with the assumptions made about positive and negative samples. The proposed algorithm can effectively improve the discriminative ability of the classifier and significantly alleviate the drifting problem in tracking applications. In the experiments, we demonstrate real-time tracking with our tracker on several challenging test sequences, where it outperforms other related online tracking methods and achieves promising tracking performance.
Good Features to Correlate for Visual Tracking
NASA Astrophysics Data System (ADS)
Gundogdu, Erhan; Alatan, A. Aydin
2018-05-01
In recent years, correlation filters have shown dominant and spectacular results for visual object tracking. The types of features employed in this family of trackers significantly affect tracking performance. The ultimate goal is to utilize robust features invariant to any kind of appearance change of the object, while predicting the object location as accurately as in the case of no appearance change. As deep learning based methods have emerged, the study of learning features for specific tasks has accelerated. For instance, discriminative visual tracking methods based on deep architectures have been studied with promising performance. Nevertheless, correlation filter based (CFB) trackers confine themselves to pre-trained networks that were trained for the object classification problem. To this end, in this manuscript the problem of learning deep fully convolutional features for CFB visual tracking is formulated. In order to learn the proposed model, a novel and efficient backpropagation algorithm is presented based on the loss function of the network. The proposed learning framework enables the network model to be flexible for a custom design and alleviates the dependency on networks trained for classification. Extensive performance analysis shows the efficacy of the proposed custom design in the CFB tracking framework. By fine-tuning the convolutional parts of a state-of-the-art network and integrating this model into a CFB tracker, which is the top performing one of VOT2016, an 18% increase is achieved in terms of expected average overlap, and tracking failures are decreased by 25%, while maintaining superiority over state-of-the-art methods on the OTB-2013 and OTB-2015 tracking datasets.
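To make the correlation-filter machinery concrete, the sketch below trains a single-channel MOSSE-style filter in the frequency domain and applies it to a search patch. This illustrates the generic CFB framework, not the deep-feature learning method proposed in the abstract; the Gaussian target width and regularization value are assumed choices.

```python
import numpy as np

def train_correlation_filter(feature_patch, target_sigma=2.0, lam=1e-2):
    """Closed-form MOSSE-style filter: H* = (G . conj(F)) / (F . conj(F) + lambda)."""
    h, w = feature_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * target_sigma ** 2))  # desired response
    F, G = np.fft.fft2(feature_patch), np.fft.fft2(g)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def correlate(filter_conj, search_patch):
    """Apply the learned filter; the response peak gives the target location."""
    response = np.real(np.fft.ifft2(filter_conj * np.fft.fft2(search_patch)))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return response, (dy, dx)
```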
Textual and shape-based feature extraction and neuro-fuzzy classifier for nuclear track recognition
NASA Astrophysics Data System (ADS)
Khayat, Omid; Afarideh, Hossein
2013-04-01
Track counting algorithms, as one of the fundamental tools of nuclear science, have received increasing attention in recent years. Accurate measurement of nuclear tracks on solid-state nuclear track detectors is the aim of track counting systems. Commonly, track counting systems comprise hardware for imaging and software for analysing the track images. In this paper, a track recognition algorithm based on 12 defined textual and shape-based features and a neuro-fuzzy classifier is proposed. The features are defined so as to discern the tracks from the background and from small objects. Then, according to the defined features, tracks are detected using a trained neuro-fuzzy system. The features and the classifier are validated on 100 alpha track images and 40 training samples. It is shown that the principal textual and shape-based features together yield a high rate of track detection compared with single-feature based methods.
Online Object Tracking, Learning and Parsing with And-Or Graphs.
Wu, Tianfu; Lu, Yang; Zhu, Song-Chun
2017-12-01
This paper presents a method, called AOGTracker, for simultaneous tracking, learning and parsing (TLP) of unknown objects in video sequences with a hierarchical and compositional And-Or graph (AOG) representation. The TLP method is formulated in the Bayesian framework with spatial and temporal dynamic programming (DP) algorithms inferring object bounding boxes on-the-fly. During online learning, the AOG is discriminatively learned using latent SVM [1] to account for appearance variations (e.g., lighting and partial occlusion) and structural variations (e.g., different poses and viewpoints) of a tracked object, as well as distractors (e.g., similar objects) in the background. Three key issues in online inference and learning are addressed: (i) maintaining the purity of positive and negative examples collected online, (ii) controlling model complexity in latent structure learning, and (iii) identifying critical moments to re-learn the structure of the AOG based on its intrackability. The intrackability measures the uncertainty of an AOG based on its score maps in a frame. In experiments, our AOGTracker is tested on two popular tracking benchmarks with the same parameter setting: the TB-100/50/CVPR2013 benchmarks [3] and the VOT benchmarks [4] (VOT 2013, 2014, 2015 and TIR2015, thermal imagery tracking). On the former, our AOGTracker outperforms state-of-the-art tracking algorithms, including two trackers based on deep convolutional networks [5], [6]. On the latter, our AOGTracker outperforms all other trackers in VOT2013 and is comparable to the state-of-the-art methods in VOT2014, 2015 and TIR2015.
Lee, Young-Sook; Chung, Wan-Young
2012-01-01
Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, during moving object detection and tracking, moving cast shadows can be misclassified as parts of objects or as moving objects. Shadow removal is therefore an essential step in developing video surveillance systems. Our primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on visual sensors, using variations in shape features and 3-D trajectories, is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities, such as forward falls, backward falls, and sideways falls, from normal activities. PMID:22368486
Adaptive particle filter for robust visual tracking
NASA Astrophysics Data System (ADS)
Dai, Jianghua; Yu, Shengsheng; Sun, Weiping; Chen, Xiaoping; Xiang, Jinhai
2009-10-01
Object tracking plays a key role in the field of computer vision. Particle filters have been widely used for visual tracking under nonlinear and/or non-Gaussian circumstances. In a particle filter, the state transition model for predicting the next location of the tracked object assumes that the object motion is constant, which cannot well approximate the varying dynamics of motion changes. In addition, the state estimate calculated as the mean of all weighted particles is coarse or inaccurate due to various noise disturbances. Both of these factors may degrade tracking performance greatly. In this work, an adaptive particle filter (APF) with a velocity-updating based transition model (VTM) and an adaptive state estimation approach (ASEA) is proposed to improve object tracking. In the APF, the motion velocity embedded in the state transition model is updated continuously by a recursive equation, and the state estimate is obtained adaptively according to the state posterior distribution. The experimental results show that the APF can increase tracking accuracy and efficiency in complex environments.
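A hedged sketch of a velocity-updating transition model: particles are propagated with a velocity that is recursively updated from recent state estimates. The exact recursion, smoothing factor, and noise levels are illustrative assumptions, not the paper's equations.

```python
import numpy as np

def update_velocity(prev_velocity, prev_estimate, new_estimate, beta=0.4):
    """Recursive velocity update from consecutive state estimates (assumed form)."""
    return (1 - beta) * prev_velocity + beta * (new_estimate - prev_estimate)

def propagate_particles(particles, velocity, pos_noise=3.0, rng=None):
    """Transition model: shift every particle by the current velocity plus noise."""
    rng = np.random.default_rng() if rng is None else rng
    return particles + velocity + rng.normal(0.0, pos_noise, particles.shape)
```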
An object tracking method based on guided filter for night fusion image
NASA Astrophysics Data System (ADS)
Qian, Xiaoyan; Wang, Yuedong; Han, Lei
2016-01-01
Online object tracking is a challenging problem, as it entails learning an effective model to account for appearance changes caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracking algorithm with a guided image filter for accurate and robust tracking in night fusion images. First, frame differencing is applied to produce a coarse target, which helps generate the observation models. Under the restriction of these models and the local source image, the guided filter generates a sufficient and accurate foreground target. Accurate boundaries of the target can then be extracted from the detection results. Finally, timely updating of the observation models helps avoid tracking drift. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
Robust visual object tracking with interleaved segmentation
NASA Astrophysics Data System (ADS)
Abel, Peter; Kieritz, Hilke; Becker, Stefan; Arens, Michael
2017-10-01
In this paper we present a new approach for tracking non-rigid, deformable objects by merging an online boosting-based tracker with a fast foreground-background segmentation. We extend an online boosting-based tracker that uses axis-aligned bounding boxes with fixed aspect ratio as tracking states. By constructing a confidence map from the online boosting-based tracker and unifying it with a confidence map obtained from a foreground-background segmentation algorithm, we build a superior confidence map. To construct a rough confidence map of a new frame based on online boosting, we employ the responses of the strong classifier as well as the responses of the single weak classifiers built during the preceding update step. This confidence map provides a rough estimate of the object's position and dimensions. In order to refine it, we build a fine, pixel-wise segmented confidence map and merge both maps. Our segmentation method is color-histogram based and provides a fine and fast image segmentation. By means of back-projection and Bayes' rule, we obtain a confidence value for every pixel. The rough and the fine confidence maps are merged by building an adaptively weighted sum of both maps, with weights obtained from the variances of the two confidence maps. Further, we apply morphological operators to the merged confidence map in order to reduce noise. In the resulting map we estimate the object's location and dimensions via continuous adaptive mean shift. Our approach provides a rotated rectangle as the tracking state, which enables a more precise description of non-rigid, deformable objects than axis-aligned bounding boxes. We evaluate our tracker on the visual object tracking (VOT) benchmark dataset 2016.
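A minimal sketch of one way to merge a rough and a fine confidence map with weights derived from the maps' variances; the precise weighting rule in the paper may differ, so the rule below (higher variance receives higher weight) is an assumption.

```python
import numpy as np

def merge_confidence_maps(rough, fine, eps=1e-6):
    """Adaptively weighted sum of two confidence maps.
    Assumed rule: a flatter, less informative map (lower variance) gets a
    lower weight; weights are normalized to sum to one."""
    rough, fine = np.asarray(rough, float), np.asarray(fine, float)
    w_rough, w_fine = rough.var() + eps, fine.var() + eps
    total = w_rough + w_fine
    return (w_rough / total) * rough + (w_fine / total) * fine
```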
Rodríguez-Canosa, Gonzalo; Giner, Jaime del Cerro; Barrientos, Antonio
2014-01-01
The detection and tracking of mobile objects (DATMO) is progressively gaining importance for security and surveillance applications. This article proposes a set of new algorithms and procedures for detecting and tracking mobile objects by robots working collaboratively as part of a multirobot system. These surveillance algorithms are conceived to work with data provided by long-range sensors and are intended for highly reliable object detection in wide outdoor environments. Contrary to most common approaches, in which detection and tracking are done by an integrated procedure, the approach proposed here relies on a modular structure in which detection and tracking are carried out independently, and the latter can accept input data from different detection algorithms. Two movement detection algorithms have been developed for the detection of dynamic objects using static and/or mobile robots. The solution to the overall problem is based on a Kalman filter to predict the next state of each tracked object. Additionally, new tracking algorithms capable of combining lists of dynamic objects coming from one or several sources complete the solution. The complementary performance of the separate modular structure for detection and identification is evaluated and, finally, a selection of test examples is discussed. PMID:24526305
Brockhoff, Alisa; Huff, Markus
2016-10-01
Multiple object tracking (MOT) plays a fundamental role in processing and interpreting dynamic environments. Regarding the type of information utilized by the observer, recent studies reported evidence for the use of object features in an automatic, low-level manner. By introducing a novel paradigm that allowed us to combine tracking with a noninterfering top-down task, we tested whether a voluntary component can regulate the deployment of attention to task-relevant features in a selective manner. In four experiments we found conclusive evidence for a task-driven selection mechanism that guides attention during tracking: The observers were able to ignore or prioritize distinct objects. They marked the distinct (cued) object (target/distractor) more or less often than other objects of the same type (targets/distractors), but only when they had received an identification task that required them to actively process object features (cues) during tracking. These effects are discussed with regard to existing theoretical approaches to attentive tracking, gaze-cue usability, as well as attentional readiness, a term that originally stems from research on attention capture and visual search. Our findings indicate that existing theories of MOT need to be adjusted to allow for flexible top-down, voluntary processing during tracking.
Kernelized correlation tracking with long-term motion cues
NASA Astrophysics Data System (ADS)
Lv, Yunqiu; Liu, Kai; Cheng, Fei
2018-04-01
Robust object tracking is a challenging task in computer vision due to difficulties such as deformation, fast motion and, especially, occlusion of the tracked object. When occlusion occurs, the image data become unreliable and are insufficient for the tracker to describe the object of interest. Therefore, most trackers are prone to fail under occlusion. In this paper, an occlusion judgement and handling method based on segmentation of the target is proposed. If the target is occluded, its speed and direction must differ from those of the occluding objects; hence, motion features are emphasized. Considering the efficiency and robustness of Kernelized Correlation Filter (KCF) tracking, KCF is adopted as a pre-tracker to obtain a predicted position of the target. By analyzing long-term motion cues of the objects around this position, the tracked object is labelled, so occlusion can be detected easily. Experimental results suggest that our tracker achieves favorable performance and effectively handles occlusion and drifting problems.
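To illustrate the pre-tracker stage, the snippet below runs a stock KCF tracker with OpenCV, assuming a build (e.g. opencv-contrib-python) that exposes cv2.TrackerKCF_create. The occlusion analysis built on top of the pre-tracker is the paper's own contribution and is not shown; the video file name is hypothetical.

```python
import cv2

cap = cv2.VideoCapture("video.mp4")          # hypothetical input file
ok, frame = cap.read()
bbox = cv2.selectROI("init", frame, False)   # initial target box (x, y, w, h)

tracker = cv2.TrackerKCF_create()            # requires a build that ships KCF
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)      # predicted position from the pre-tracker
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("KCF pre-tracker", frame)
    if cv2.waitKey(1) == 27:                  # Esc to quit
        break
```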
NASA Astrophysics Data System (ADS)
Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min
2016-01-01
This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Non-rigid skaters move at high speed, and severe occlusions happen frequently among them. The camera is panned quickly in order to capture the skaters in a large and dynamic scene. Automatically tracking the skaters and precisely outputting their trajectories is therefore a challenging object tracking task. We employ the global rink information to compensate for camera motion and obtain the global spatial information of the skaters, utilize a random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to label pixels as belonging to a skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.
Visual object tracking by correlation filters and online learning
NASA Astrophysics Data System (ADS)
Zhang, Xin; Xia, Gui-Song; Lu, Qikai; Shen, Weiming; Zhang, Liangpei
2018-06-01
Due to the complexity of background scenarios and the variation of target appearance, it is difficult to achieve both high accuracy and fast speed in object tracking. Currently, correlation filter-based trackers (CFTs) show promising performance in object tracking. CFTs estimate the target's position by correlation filters with different kinds of features. However, most CFTs can hardly re-detect the target in the case of long-term tracking drift. In this paper, a feature-integration object tracker named correlation filters and online learning (CFOL) is proposed. CFOL estimates the target's position and its corresponding correlation score using the same discriminative correlation filter with multiple features. To reduce tracking drift, a new sampling and updating strategy for online learning is proposed. Experiments conducted on 51 image sequences demonstrate that the proposed algorithm is superior to state-of-the-art approaches.
NASA Technical Reports Server (NTRS)
Wilcox, Brian H.; Tso, Kam S.; Litwin, Todd E.; Hayati, Samad A.; Bon, Bruce B.
1991-01-01
Experimental robotic system semiautomatically grasps rotating object, stops rotation, and pulls object to rest in fixture. Based on combination of advanced techniques for sensing and control, constructed to test concepts for robotic recapture of spinning artificial satellites. Potential terrestrial applications for technology developed with help of system include tracking and grasping of industrial parts on conveyor belts, tracking of vehicles and animals, and soft grasping of moving objects in general.
Multiple object tracking using the shortest path faster association algorithm.
Xi, Zhenghao; Liu, Heping; Liu, Huaping; Yang, Bin
2014-01-01
To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so the global optimum can be obtained quickly using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming and has a lower worst-case complexity than competing methods, yet better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
High-performance object tracking and fixation with an online neural estimator.
Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian
2007-02-01
Vision-based target tracking and fixation to keep objects that move in three dimensions in view is important for many tasks in several fields, including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. In line with this, this paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimum sensory information. The procedure allows the designer to take the physical (Lagrangian dynamics) properties of the vision system into account in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system's dynamics into account, yet needs no knowledge of dynamic nonlinearities and no joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee uniform ultimate boundedness of the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation tests in the presence of severe target motion changes.
3-D rigid body tracking using vision and depth sensors.
Gedik, O Serdar; Alatan, A Aydn
2013-10-01
In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required; accurate pose estimates are needed to increase reliability and decrease jitter overall. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm based on the fusion of vision and depth sensors via an extended Kalman filter is proposed in this paper. A novel measurement-tracking scheme, based on estimating optical flow from the intensity and shape index map data of a 3-D point cloud, increases 2-D as well as 3-D tracking performance significantly. The proposed method requires neither manual initialization of the pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and a superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes.
Unsupervised markerless 3-DOF motion tracking in real time using a single low-budget camera.
Quesada, Luis; León, Alejandro J
2012-10-01
Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on motion tracking. In this paper, we present a novel three degrees of freedom motion tracking system that needs no knowledge of the target object and requires only a single low-budget camera of the kind installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled, unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion blurred, provided that it is opaque, evenly colored, contrasts sufficiently with the background in each frame, and does not rotate. Our system is also able to determine the most relevant object to track on the screen. Our proposal imposes no additional constraints, and therefore allows a market-wide implementation of applications that require estimation of the three positional degrees of freedom of an object.
Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.
Souto, David; Kerzel, Dirk
2013-02-06
Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular combination of rotational and translational motion. We measured tracking eye movements towards rolling objects and found that objects whose rotational and translational motion was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. Relative to a condition without a rotational component, we essentially obtained benefits of congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired through a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.
NASA Astrophysics Data System (ADS)
Hartung, Christine; Spraul, Raphael; Schuchert, Tobias
2017-10-01
Wide area motion imagery (WAMI) acquired by an airborne multicamera sensor enables continuous monitoring of large urban areas. Each image can cover regions of several square kilometers and contain thousands of vehicles. Reliable vehicle tracking in this imagery is an important prerequisite for surveillance tasks, but remains challenging due to the low frame rate and small object size. Most WAMI tracking approaches rely on moving object detections generated by frame differencing or background subtraction. These detection methods fail when objects slow down or stop. Recent approaches for persistent tracking compensate for missing motion detections by combining a detection-based tracker with a second tracker based on appearance or local context. In order to avoid the additional complexity introduced by combining two trackers, we employ an alternative single-tracker framework that is based on multiple hypothesis tracking and recovers missing motion detections with a classifier-based detector. We integrate an appearance-based similarity measure, merge handling, vehicle-collision tests, and clutter handling to adapt the approach to the specific context of WAMI tracking. We apply the tracking framework to a region of interest of the publicly available WPAFB 2009 dataset for quantitative evaluation; a comparison with other persistent WAMI trackers demonstrates state-of-the-art performance of the proposed approach. Furthermore, we analyze in detail the impact of different object detection methods and detector settings on the quality of the output tracking results. For this purpose, we choose four different motion-based detection methods that vary in detection performance and computation time to generate the input detections. As detector parameters can be adjusted to achieve different precision and recall performance, we combine each detection method with different detector settings that yield (1) high precision and low recall, (2) high recall and low precision, and (3) the best f-score. Comparing the tracking performance achieved with all generated sets of input detections allows us to quantify the sensitivity of the tracker to different types of detector errors and to derive recommendations for detector and parameter choice.
Multi-object tracking of human spermatozoa
NASA Astrophysics Data System (ADS)
Sørensen, Lauge; Østergaard, Jakob; Johansen, Peter; de Bruijne, Marleen
2008-03-01
We propose a system for tracking of human spermatozoa in phase-contrast microscopy image sequences. One of the main aims of a computer-aided sperm analysis (CASA) system is to automatically assess sperm quality based on spermatozoa motility variables. In our case, the problem of assessing sperm quality is cast as a multi-object tracking problem, where the objects being tracked are the spermatozoa. The system combines a particle filter and Kalman filters for robust motion estimation of the spermatozoa tracks. Further, the combinatorial aspect of assigning observations to labels in the particle filter is formulated as a linear assignment problem solved using the Hungarian algorithm on a rectangular cost matrix, making the algorithm capable of handling missing or spurious observations. The costs are calculated using hidden Markov models that express the plausibility of an observation being the next position in the track history of the particle labels. Observations are extracted using a scale-space blob detector utilizing the fact that the spermatozoa appear as bright blobs in a phase-contrast microscope. The output of the system is the complete motion track of each of the spermatozoa. Based on these tracks, different CASA motility variables can be computed, for example curvilinear velocity or straight-line velocity. The performance of the system is tested on three different phase-contrast image sequences of varying complexity, both by visual inspection of the estimated spermatozoa tracks and by measuring the mean squared error (MSE) between the estimated spermatozoa tracks and manually annotated tracks, showing good agreement.
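The observation-to-label assignment step described above can be implemented with SciPy's linear_sum_assignment, which handles rectangular cost matrices. The cost definition below (squared distance between predicted and observed positions) is a simplified stand-in for the HMM-based costs in the paper, and the gating threshold is an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_observations(predicted_positions, observations, max_cost=400.0):
    """Assign observations to track predictions via the Hungarian algorithm.
    Returns (track_index, observation_index) pairs; assignments whose cost
    exceeds max_cost are treated as missing/spurious and dropped."""
    P = np.asarray(predicted_positions, float)   # (n_tracks, 2)
    O = np.asarray(observations, float)          # (n_obs, 2), may differ in count
    cost = ((P[:, None, :] - O[None, :, :]) ** 2).sum(axis=2)  # squared distances
    rows, cols = linear_sum_assignment(cost)     # works on rectangular matrices
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
```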
Multi-Complementary Model for Long-Term Tracking
Zhang, Deng; Zhang, Junchang; Xia, Chenyang
2018-01-01
In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusion, background clutter, motion blur, low-illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines a correlation filter model and a color model, greatly improving tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared with traditional target detection algorithms. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmark 2013 (OTB-13) and Object Tracking Benchmark 2015 (OTB-15) datasets. On the OTB-13 benchmark, our algorithm improves on the classic LCT (Long-term Correlation Tracking) algorithm by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE and SRE, respectively. On the OTB-15 benchmark, our algorithm achieves improvements of 10.4%, 12.5%, and 16.1% over LCT on the success plots of OPE, TRE, and SRE, respectively. At the same time, it should be emphasized that, thanks to the high computational efficiency of the color model and the object detection model, which use efficient data structures, and to the speed advantage of correlation filters, our tracking algorithm still achieves good tracking speed. PMID:29425170
Additivity of Feature-Based and Symmetry-Based Grouping Effects in Multiple Object Tracking
Wang, Chundi; Zhang, Xuemin; Li, Yongna; Lyu, Chuang
2016-01-01
Multiple object tracking (MOT) is an attentional process wherein people track several moving targets among several distractors. Symmetry, an important indicator of regularity, is a general spatial pattern observed in natural and artificial scenes. According to the "laws of perceptual organization" proposed by Gestalt psychologists, regularity is a principle of perceptual grouping, like similarity and closure. A great deal of research has reported that feature-based similarity grouping (e.g., grouping based on color, size, or shape) among targets in MOT tasks can improve tracking performance. However, no additive feature-based grouping effects have been reported where the tracked objects had two or more features. "Additive effect" refers to a greater grouping effect produced by grouping based on multiple cues instead of one cue. Can spatial symmetry produce a grouping effect similar to that of feature similarity in MOT tasks? Are the grouping effects based on symmetry and feature similarity additive? This study includes four experiments to address these questions. The results of Experiments 1 and 2 demonstrated automatic symmetry-based grouping effects. More importantly, an additive grouping effect of symmetry and feature similarity was observed in Experiments 3 and 4. Our findings indicate that symmetry can produce an enhanced grouping effect in MOT and facilitate the grouping effect based on color or shape similarity. The "where" and "what" pathways might have played an important role in the additive grouping effect. PMID:27199875
Godinez, William J; Rohr, Karl
2015-02-01
Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to recalculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2-D and 3-D images as well as to real 2-D and 3-D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2-D and 3-D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012.
A Fast MEANSHIFT Algorithm-Based Target Tracking System
Sun, Jian
2012-01-01
Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme is studied in this paper. A fast mean-shift-based target tracking scheme is designed and realized, which is robust to partial occlusion and changes in object appearance. The physical simulation shows that the image signal processing speed is greater than 50 frames/s. PMID:22969397
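A minimal sketch of classic mean-shift tracking with OpenCV's cv2.meanShift over a hue back-projection, in the spirit of the scheme described above but not the authors' exact implementation; the video file name, initial window, and histogram settings are assumptions.

```python
import cv2

cap = cv2.VideoCapture("scene.avi")            # hypothetical video
ok, frame = cap.read()
x, y, w, h = 300, 200, 80, 80                  # assumed initial target window
roi = frame[y:y + h, x:x + w]

hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])  # hue histogram
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    _, track_window = cv2.meanShift(back_proj, track_window, term_crit)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), 255, 2)
    cv2.imshow("mean shift", frame)
    if cv2.waitKey(30) == 27:                  # Esc to quit
        break
```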
An algorithm of adaptive scale object tracking in occlusion
NASA Astrophysics Data System (ADS)
Zhao, Congmei
2017-05-01
Although correlation filter-based trackers achieve competitive results in both accuracy and robustness, there are still problems in handling scale variations, object occlusion, fast motion, and so on. In this paper, a multi-scale kernel correlation filter algorithm based on a random fern detector is proposed. The tracking task is decomposed into target scale estimation and translation estimation. At the same time, Color Names features and HOG features are fused at the response level to further improve the overall tracking performance of the algorithm. In addition, an online random fern classifier is trained to recover the target after it is lost. Comparisons with algorithms such as KCF, DSST, TLD, MIL, CT and CSK show that the proposed approach can estimate the object state accurately and handle object occlusion effectively.
Detecting multiple moving objects in crowded environments with coherent motion regions
Cheriyadat, Anil M.; Radke, Richard J.
2013-06-11
Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The method enforces the constraint that selected coherent motion regions contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. The algorithm operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, a measure of the maximum distance between a pair of feature point tracks.
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing minimal degradation of previous network knowledge while taking the current content conditions into account. A retraining set is constructed and used for this purpose based on the initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is performed in the first scenario, while segmentation fusion using color and depth information is obtained in the second. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme, even in sequences with complicated content (object bending, occlusion).
Accurate object tracking system by integrating texture and depth cues
NASA Astrophysics Data System (ADS)
Chen, Ju-Chin; Lin, Yu-Hang
2016-03-01
A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminative texture information between the object and the background. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, to reduce the increased risk of drifting introduced by textureless depth templates, an update mechanism is proposed that selects more precise tracking results and avoids incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system achieves the best success rate and more accurate tracking results than other well-known algorithms.
NASA Astrophysics Data System (ADS)
Li, Shengbo Eben; Li, Guofa; Yu, Jiaying; Liu, Chang; Cheng, Bo; Wang, Jianqiang; Li, Keqiang
2018-01-01
Detection and tracking of objects in the side-near-field has attracted much attention in the development of advanced driver assistance systems. This paper presents a cost-effective approach to tracking moving objects around vehicles using linearly arrayed ultrasonic sensors. To understand the detection characteristics of a single sensor, an empirical detection model was developed that considers the shapes and surface materials of various detected objects. Eight sensors were arrayed linearly to expand the detection range for further application in traffic environment recognition. Two types of tracking algorithms for the sensor array, an Extended Kalman filter (EKF) and an Unscented Kalman filter (UKF), were designed for dynamic object tracking. The ultrasonic sensor array was designed to support two firing sequences: mutual firing or serial firing. The effectiveness of the designed algorithms was verified in two typical driving scenarios: passing intersections with traffic sign poles or street lights, and overtaking another vehicle. Experimental results showed that both the EKF and UKF produced more precise tracking positions and smaller RMSE (root mean square error) than a traditional triangular positioning method. This effectiveness also encourages the application of cost-effective ultrasonic sensors to near-field environment perception in autonomous driving systems.
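Both the EKF and UKF mentioned above reduce to a plain Kalman filter when the motion and measurement models are linear. As a baseline illustration, the sketch below implements a constant-velocity Kalman filter over 2-D position measurements; the time step and noise levels are arbitrary illustrative assumptions, not values from the paper.

```python
import numpy as np

class ConstantVelocityKF:
    """Linear Kalman filter with state [x, y, vx, vy] and position-only measurements."""

    def __init__(self, dt=0.1, q=0.5, r=0.2):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)      # constant-velocity motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)      # we only measure position
        self.Q = q * np.eye(4)                        # process noise (assumed)
        self.R = r * np.eye(2)                        # measurement noise (assumed)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```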
Research on target tracking algorithm based on spatio-temporal context
NASA Astrophysics Data System (ADS)
Li, Baiping; Xu, Sanmei; Kang, Hongjuan
2017-07-01
In this paper, a novel target tracking algorithm based on spatio-temporal context is proposed. During tracking, camera shake or occlusion may cause tracking failure; the proposed algorithm solves this problem effectively. The method uses the spatio-temporal context algorithm as its core. The target region in the first frame is selected manually with the mouse, and the spatio-temporal context algorithm is then used to track the target through the sequence of frames. During this process, a similarity measure based on a perceptual hash algorithm is used to judge the tracking results. If tracking fails, the initial value of the Mean Shift algorithm is reset for subsequent target tracking. Experimental results show that the proposed algorithm can achieve real-time and stable tracking under camera shake or target occlusion.
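The tracking-quality check described above can be illustrated with a simple average-hash perceptual hash: two patches are considered consistent when the Hamming distance between their hashes is below a threshold. The exact hash variant and threshold used in the paper are not specified, so the choices below are assumptions.

```python
import numpy as np
import cv2

def average_hash(patch, hash_size=8):
    """Average hash: resize to hash_size x hash_size, threshold at the mean."""
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY) if patch.ndim == 3 else patch
    small = cv2.resize(gray, (hash_size, hash_size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()

def patches_consistent(patch_a, patch_b, max_hamming=10):
    """Judge a tracking result by the Hamming distance between perceptual hashes."""
    dist = int(np.count_nonzero(average_hash(patch_a) != average_hash(patch_b)))
    return dist <= max_hamming, dist
```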
Automatic feature-based grouping during multiple object tracking.
Erlikhman, Gennady; Keane, Brian P; Mettler, Everett; Horowitz, Todd S; Kellman, Philip J
2013-12-01
Contour interpolation automatically binds targets with distractors to impair multiple object tracking (Keane, Mettler, Tsoi, & Kellman, 2011). Is interpolation special in this regard or can other features produce the same effect? To address this question, we examined the influence of eight features on tracking: color, contrast polarity, orientation, size, shape, depth, interpolation, and a combination (shape, color, size). In each case, subjects tracked 4 of 8 objects that began as undifferentiated shapes, changed features as motion began (to enable grouping), and returned to their undifferentiated states before halting. We found that intertarget grouping improved performance for all feature types except orientation and interpolation (Experiment 1 and Experiment 2). Most importantly, target-distractor grouping impaired performance for color, size, shape, combination, and interpolation. The impairments were, at times, large (>15% decrement in accuracy) and occurred relative to a homogeneous condition in which all objects had the same features at each moment of a trial (Experiment 2), and relative to a "diversity" condition in which targets and distractors had different features at each moment (Experiment 3). We conclude that feature-based grouping occurs for a variety of features besides interpolation, even when irrelevant to task instructions and contrary to the task demands, suggesting that interpolation is not unique in promoting automatic grouping in tracking tasks. Our results also imply that various kinds of features are encoded automatically and in parallel during tracking.
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
Ye, Tao; Zhou, Fuqiang
2015-04-10
When imaged by detectors, space targets (including satellites and debris) and background stars have similar point-spread functions, and both objects appear to change as detectors track targets. Therefore, traditional tracking methods cannot separate targets from stars and cannot directly recognize targets in 2D images. Consequently, we propose an autonomous space target recognition and tracking approach using a star sensor technique and a Kalman filter (KF). A two-step method for subpixel-scale detection of star objects (including stars and targets) is developed, and the combination of the star sensor technique and a KF is used to track targets. The experimental results show that the proposed method is adequate for autonomously recognizing and tracking space targets.
Moving object detection and tracking in videos through turbulent medium
NASA Astrophysics Data System (ADS)
Halder, Kalyan Kumar; Tahtali, Murat; Anavatti, Sreenatha G.
2016-06-01
This paper addresses the problem of identifying and tracking moving objects in a video sequence with a time-varying background. This is a fundamental task in many computer vision applications, though a very challenging one because turbulence causes blurring and spatiotemporal movements of the background images. Our proposed approach involves two major steps. First, a moving object detection algorithm separates real motion from turbulence-induced motion using a two-level thresholding technique. In the second step, a feature-based generalized regression neural network is applied to track the detected objects through the frames of the video sequence. The proposed approach uses the centroid and area features of the moving objects and creates the reference regions instantly by selecting the objects within a circle. Simulation experiments are carried out on several turbulence-degraded video sequences, and comparison with an earlier method confirms that the proposed approach provides more effective tracking of the targets.
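The centroid and area features mentioned above can be extracted from a binary foreground mask with connected-component analysis. The following is a minimal OpenCV sketch under the assumption that the detection stage has already produced such a mask; the minimum-area threshold is illustrative.

```python
import cv2
import numpy as np

def centroid_area_features(fg_mask, min_area=50):
    """Return (cx, cy, area) for each connected foreground blob in a binary mask."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg_mask, connectivity=8)
    feats = []
    for i in range(1, n):                       # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area >= min_area:
            cx, cy = centroids[i]
            feats.append((float(cx), float(cy), int(area)))
    return feats
```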
CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.
Bray, Mark-Anthony; Carpenter, Anne E
2015-11-04
Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.
Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.
Dzyubak, Oleksandr P; Ritman, Erik L
2011-01-01
Blood vessels and nerve trees consist of tubular objects interconnected into complex tree- or web-like structures that span structural scales from 5 μm diameter capillaries to the 3 cm aorta. This large scale range presents two major problems: one is simply making the measurements, and the other is the exponential increase in the number of components with decreasing scale. With the remarkable increase in the volume imaged by, and resolution of, modern 3D imagers, manual tracking of the complex multiscale parameters in those large image data sets is practically impossible. In addition, manual tracking is quite subjective and unreliable. We propose a solution for automating an adaptive, unsupervised system for tracking tubular objects based on a multiscale framework and a Hessian-based object shape detector incorporating the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
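The multiscale Hessian-based tubularity idea is available outside ITK as well; as a hedged sketch of the same concept, scikit-image's Frangi vesselness filter computes a tubularity response over a range of scales. This is a stand-in for illustration, not the authors' ITK pipeline, and the scale set is an assumption.

```python
import numpy as np
from skimage.filters import frangi

# volume: a 3-D numpy array of image intensities (e.g., micro-CT of vasculature).
# The sigmas span the expected tubular radii in voxels (multiscale Hessian analysis).
def tubularity_response(volume, sigmas=(1, 2, 4, 8)):
    return frangi(volume, sigmas=sigmas, black_ridges=False)
```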
Remote Safety Monitoring for Elderly Persons Based on Omni-Vision Analysis
Xiang, Yun; Tang, Yi-ping; Ma, Bao-qing; Yan, Hang-chen; Jiang, Jun; Tian, Xu-yuan
2015-01-01
Remote monitoring service for elderly persons is important as the aged populations in most developed countries continue growing. To monitor the safety and health of the elderly population, we propose a novel omni-directional vision sensor based system, which can detect and track object motion, recognize human posture, and analyze human behavior automatically. In this work, we have made the following contributions: (1) we develop a remote safety monitoring system which can provide real-time and automatic health care for the elderly persons and (2) we design a novel motion history or energy images based algorithm for motion object tracking. Our system can accurately and efficiently collect, analyze, and transfer elderly activity information and provide health care in real-time. Experimental results show that our technique can improve the data analysis efficiency by 58.5% for object tracking. Moreover, for the human posture recognition application, the success rate can reach 98.6% on average. PMID:25978761
Visual tracking using neuromorphic asynchronous event-based cameras.
Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad
2015-04-01
This letter presents a novel, computationally efficient, and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometries, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time, which is currently underexploited by most artificial vision systems, the presented method is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly.
Robust feedback zoom tracking for digital video surveillance.
Zou, Tengyue; Tang, Xiaoqi; Song, Bao; Wang, Jin; Chen, Jihong
2012-01-01
Zoom tracking is an important function in video surveillance, particularly in traffic management and security monitoring. It involves keeping an object of interest in focus during the zoom operation. Zoom tracking is typically achieved by moving the zoom and focus motors in lenses following the so-called "trace curve", which shows the in-focus motor positions versus the zoom motor positions for a specific object distance. The main task of a zoom tracking approach is to accurately estimate the trace curve for the specified object. Because a proportional integral derivative (PID) controller has historically been considered to be the best controller in the absence of knowledge of the underlying process and its high-quality performance in motor control, in this paper, we propose a novel feedback zoom tracking (FZT) approach based on the geometric trace curve estimation and PID feedback controller. The performance of this approach is compared with existing zoom tracking methods in digital video surveillance. The real-time implementation results obtained on an actual digital video platform indicate that the developed FZT approach not only solves the traditional one-to-many mapping problem without pre-training but also improves the robustness for tracking moving or switching objects which is the key challenge in video surveillance.
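The feedback element described above is a standard PID loop. Below is a minimal, generic PID controller sketch for the focus-motor position; the gains, sample time, and the way the in-focus set-point is read off the estimated trace curve are assumptions for illustration only.

```python
class PID:
    """Minimal PID controller, e.g. for a focus-motor position loop."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical use: drive the focus motor toward the in-focus position
# predicted by the estimated trace curve for the current zoom position.
pid = PID(kp=0.8, ki=0.05, kd=0.01, dt=0.02)
# focus_cmd = pid.update(infocus_position_from_trace_curve, current_focus_position)
```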
A new user-assisted segmentation and tracking technique for an object-based video editing system
NASA Astrophysics Data System (ADS)
Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark
2004-03-01
This paper presents a semi-automatic segmentation method that can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around the object boundaries, and the user-guided and selected objects are then continuously separated from the unselected areas through time evolution in the image sequence. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the meaningful, complete visual object of interest to be segmented and determines a precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable and efficient results that are suitable for many digital video applications such as multimedia content authoring, content-based coding, and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.
Long-term object tracking combined offline with online learning
NASA Astrophysics Data System (ADS)
Hu, Mengjie; Wei, Zhenzhong; Zhang, Guangjun
2016-04-01
We propose a simple yet effective method for long-term object tracking. Unlike traditional visual tracking methods, which mainly depend on frame-to-frame correspondence, we combine high-level semantic information with low-level correspondences. Our framework is formulated as a confidence selection framework, which allows our system to recover from drift and partly deal with occlusion. Our algorithm can be roughly decomposed into an initialization stage and a tracking stage. In the initialization stage, an offline detector is trained to capture the object appearance at the category level; it is used for detecting the potential target and initializing the tracking stage. The tracking stage consists of three modules: the online tracking module, the detection module, and the decision module. A pretrained detector is used to correct drift of the online tracker, while the online tracker is used to filter out false positive detections. A confidence selection mechanism is proposed to optimize the object location based on the online tracker and the detector. If the target is lost, the pretrained detector reinitializes the whole algorithm once the target is relocated. In experiments, we evaluate our method on several challenging video sequences, and it demonstrates substantial improvement over detection-only and online-tracking-only baselines.
Tracking and people counting using Particle Filter Method
NASA Astrophysics Data System (ADS)
Sulistyaningrum, D. R.; Setiyono, B.; Rizky, M. S.
2018-03-01
In recent years, technology has developed rapidly, especially in the field of object tracking, and tracking becomes more difficult when the objects of interest are people and their number is large. The purpose of this research is to apply the particle filter method to tracking and counting people in a given area. Tracking people is difficult when obstacles are present, one of which is occlusion. The stages of the tracking and people counting scheme in this study include pre-processing, segmentation using a Gaussian Mixture Model (GMM), tracking using a particle filter, and counting based on centroids. The particle filter method uses the estimated motion included in the model. The test results show that tracking and people counting can be done well, with average accuracies of 89.33% and 77.33%, respectively, on six test videos. In the tracking process, the results are good under partial occlusion and no occlusion.
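A hedged sketch of the segmentation and counting stages follows: OpenCV's MOG2 subtractor implements a Gaussian mixture background model, and blob centroids feed the counting step. The video filename, morphology kernel, and area threshold are hypothetical; the particle-filter association across frames is not reproduced here.

```python
import cv2

cap = cv2.VideoCapture("people.mp4")                 # hypothetical test video
bg = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(fg)
    people = [tuple(centroids[i]) for i in range(1, n)
              if stats[i, cv2.CC_STAT_AREA] > 400]
    # 'people' centroids would then be associated across frames by the particle filter
cap.release()
```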
Tracking with occlusions via graph cuts.
Papadakis, Nicolas; Bugeau, Aurélie
2011-01-01
This work presents a new method for tracking and segmenting interacting objects over time within an image sequence. One major contribution of the paper is the formalization of the notion of visible and occluded parts. For each object, we aim at tracking these two parts. Assuming that the velocity of each object is driven by a dynamical law, predictions can be used to guide the successive estimations. Separating these predicted areas into good and bad parts with respect to the final segmentation, and representing the objects by their visible and occluded parts, permits handling partial and complete occlusions. To achieve this tracking, a label is assigned to each object and an energy function representing the multilabel problem is minimized via graph cuts optimization. This energy contains terms based on image intensities which enable segmenting and regularizing the visible parts of the objects. It also includes terms dedicated to the management of the occluded and disappearing areas, which are defined on the areas of prediction of the objects. The results on several challenging sequences prove the strength of the proposed approach.
ERIC Educational Resources Information Center
Betty, Paul
2009-01-01
Increasing use of screencast and Flash authoring software within libraries is resulting in "homegrown" library collections of digital learning objects and multimedia presentations. The author explores the use of Google Analytics to track usage statistics for interactive Shockwave Flash (.swf) files, the common file output for screencast and Flash…
Multi-view video segmentation and tracking for video surveillance
NASA Astrophysics Data System (ADS)
Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj
2009-05-01
Tracking moving objects is a critical step for smart video surveillance systems. Despite the increased complexity, multiple-camera systems exhibit the undoubted advantages of covering wide areas and handling occlusions by exploiting the different viewpoints. The technical problems in multiple-camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple uncalibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. The proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify the objects in each view separately for inconsistencies. Correspondent objects are extracted through a homography transform from one view to the other and vice versa. Having found the correspondent objects in the different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and the mapped frame) a set of descriptors is extracted to find the best match between the two views based on region descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance, and disappearance of objects are resolved using information from all views. The method is capable of tracking both rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
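The cross-view correspondence step above relies on a planar homography between overlapping views. A minimal sketch is shown below; the matched ground-plane points are hypothetical and in practice would come from calibration targets or matched track footprints.

```python
import cv2
import numpy as np

# Hypothetical matched ground-plane points in camera A and camera B.
pts_a = np.float32([[100, 400], [600, 420], [580, 120], [80, 100]])
pts_b = np.float32([[150, 380], [640, 400], [600, 140], [120, 110]])
H, _ = cv2.findHomography(pts_a, pts_b)

def map_to_other_view(points_xy):
    """Map object positions from view A to correspondence candidates in view B."""
    pts = np.float32(points_xy).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

print(map_to_other_view([[320, 260]]))
```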
Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera
NASA Astrophysics Data System (ADS)
Dziri, Aziz; Duranton, Marc; Chapuis, Roland
2016-07-01
Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.
AN/FSY-3 Space Fence System Support of Conjunction Assessment
NASA Astrophysics Data System (ADS)
Koltiska, M.; Du, H.; Prochoda, D.; Kelly, K.
2016-09-01
The Space Fence System is a ground-based space surveillance radar system designed to detect and track all objects in Low Earth Orbit the size of a softball or larger. The system detects many objects that are not currently in the catalog of satellites and space debris that is maintained by the US Air Force. In addition, it will also be capable of tracking many of the deep space objects in the catalog. By providing daily updates of the orbits of these new objects along with updates of most of the objects in the catalog, it will enhance Space Situational Awareness and significantly improve our ability to predict close approaches, aka conjunctions, of objects in space. With this additional capacity for tracking objects in space the Space Surveillance Network has significantly more resources for monitoring orbital debris, especially for debris that could collide with active satellites and other debris.
Object tracking with adaptive HOG detector and adaptive Rao-Blackwellised particle filter
NASA Astrophysics Data System (ADS)
Rosa, Stefano; Paleari, Marco; Ariano, Paolo; Bona, Basilio
2012-01-01
Scenarios for a manned mission to the Moon or Mars call for astronaut teams to be accompanied by semiautonomous robots. A prerequisite for human-robot interaction is the capability of successfully tracking humans and objects in the environment. In this paper we present a system for real-time visual object tracking in 2D images for mobile robotic systems. The proposed algorithm is able to specialize to individual objects and to adapt to substantial changes in illumination and object appearance during tracking. The algorithm is composed of two main blocks: a detector based on Histogram of Oriented Gradient (HOG) descriptors and linear Support Vector Machines (SVM), and a tracker implemented by an adaptive Rao-Blackwellised particle filter (RBPF). The SVM is re-trained online on new samples taken from previous predicted positions. We use the effective sample size to decide when the classifier needs to be re-trained. Position hypotheses for the tracked object are the result of a clustering procedure applied to the set of particles. The algorithm has been tested on challenging video sequences presenting strong changes in object appearance, illumination, and occlusion. Experimental tests show that the presented method is able to achieve near real-time performance with a precision of about 7 pixels on standard video sequences of dimensions 320 × 240.
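The effective sample size used as the retraining trigger is a standard particle-filter diagnostic. A minimal sketch follows; the 0.5 threshold and the retraining call are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalised particle weights."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

# Hypothetical trigger: retrain the online SVM when the particle set degenerates.
# if effective_sample_size(w) < 0.5 * num_particles:
#     retrain_svm(new_samples)
```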
Deterministic object tracking using Gaussian ringlet and directional edge features
NASA Astrophysics Data System (ADS)
Krieger, Evan W.; Sidike, Paheding; Aspiras, Theus; Asari, Vijayan K.
2017-10-01
Challenges currently existing for intensity-based histogram feature tracking methods in wide area motion imagery (WAMI) data include object structural information distortions, background variations, and object scale change. These issues are caused by different pavement or ground types and by changes in the sensor or altitude. All of these challenges need to be overcome in order to have a robust object tracker while attaining a computation time appropriate for real-time processing. To achieve this, we present a novel method, the Directional Ringlet Intensity Feature Transform (DRIFT), which employs Kirsch kernel filtering for edge features and a ringlet feature mapping for rotational invariance. The method also includes an automatic scale change component to obtain accurate object boundaries and improvements for lowering computation times. We evaluated the DRIFT algorithm on two challenging WAMI datasets, namely Columbus Large Image Format (CLIF) and Large Area Image Recorder (LAIR), to assess its robustness and efficiency. Additional evaluations on general tracking video sequences are performed using the Visual Tracker Benchmark and Visual Object Tracking 2014 databases to demonstrate the algorithm's ability to handle additional challenges in long, complex sequences including scale change. Experimental results show that the proposed approach yields competitive results compared to state-of-the-art object tracking methods on the testing datasets.
Tracking multiple objects is limited only by object spacing, not by speed, time, or capacity.
Franconeri, S L; Jonathan, S V; Scimeca, J M
2010-07-01
In dealing with a dynamic world, people have the ability to maintain selective attention on a subset of moving objects in the environment. Performance in such multiple-object tracking is limited by three primary factors-the number of objects that one can track, the speed at which one can track them, and how close together they can be. We argue that this last limit, of object spacing, is the root cause of all performance constraints in multiple-object tracking. In two experiments, we found that as long as the distribution of object spacing is held constant, tracking performance is unaffected by large changes in object speed and tracking time. These results suggest that barring object-spacing constraints, people could reliably track an unlimited number of objects as fast as they could track a single object.
Tracking and nowcasting convective precipitation cells at European scale for transregional warnings
NASA Astrophysics Data System (ADS)
Meyer, Vera; Tüchler, Lukas
2013-04-01
A transregional overview of the current weather situation is considered as highly valuable information to assist forecasters as well as official authorities for disaster management in their decision making processes. The development of the European-wide radar composite OPERA enables for the first time a coherent object-oriented tracking and nowcasting of intense precipitation cells in real time at continental scale and at a resolution of 2 x 2 km² and 15 minutes. Recently, the object-oriented cell-tracking tool A-TNT (Austrian Thunderstorm Nowcasting Tool) has been developed at ZAMG. A-TNT utilizes the method of ec-TRAM [1]. It consists of two autonomously operating routines, which identify, track and nowcast radar- and lightning-cells separately. The two independent outputs are combined to a coherent storm monitoring and nowcasting in a final step. Within the framework of HAREN (Hazard Assessment based on Rainfall European Nowcasts), which is a project funded by the EC Directorate General for Humanitarian Aid and Civil Protection, A-TNT has been adapted to OPERA radar data. The objective of HAREN is the support of forecasters and official authorities in their decision-making processes concerning precipitation induced hazards with pan-European information. This study will present (1) the general performance of the object-oriented approach for thunderstorm tracking and nowcasting on continental scale giving insight into its current capabilities and limitations and (2) the utilization of object-oriented cell information for automated precipitation warnings carried out within the framework of HAREN. Data collected from April to October 2012 are used to assess the performance of cell-tracking based on radar data. Furthermore, the benefit of additional lightning information provided by the European Cooperation for Lightning Detection (EUCLID) for thunderstorm tracking and nowcasting will be summarized in selected analyses. REFERENCES: [1] Meyer, V. K., H. Höller, and H. D. Betz 2012: Automated thunderstorm tracking and nowcasting: utilization of three-dimensional lightning and radar data. Manuscript accepted for publication in ACPD.
Object Acquisition and Tracking for Space-Based Surveillance
1991-11-27
on multiple image frames and, accordingly, requires a smaller signal-to-noise ratio. It is sometimes referred to as track before detect, and can ... smaller sensor optics. Both the traditional and track-before-detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
MOLECULAR TRACKING FECAL CONTAMINATION IN SURFACE WATERS: 16S RDNA VERSUS METAGENOMICS APPROACHES
Microbial source tracking methods need to be sensitive and exhibit temporal and geographic stability in order to provide meaningful data in field studies. The objective of this study was to use a combination of PCR-based methods to track cow fecal contamination in two watersheds....
Real-Time Visual Tracking through Fusion Features
Ruan, Yang; Wei, Zhenzhong
2016-01-01
Due to their high speed, correlation filters for object tracking have begun to receive increasing attention. Traditional object trackers based on correlation filters typically use a single type of feature. In this paper, we attempt to integrate multiple feature types to improve the performance, and we propose a new DD-HOG fusion feature that consists of discriminative descriptors (DDs) and histograms of oriented gradients (HOG). However, fusion features as multi-vector descriptors cannot be used directly in prior correlation filters. To overcome this difficulty, we propose a multi-vector correlation filter (MVCF) that can directly convolve with a multi-vector descriptor to obtain a single-channel response that indicates the location of an object. Experiments on the CVPR2013 tracking benchmark, with evaluation against state-of-the-art trackers, show the effectiveness and speed of the proposed method. Moreover, we show that our MVCF tracker, which uses the DD-HOG descriptor, outperforms the structure-preserving object tracker (SPOT) in multi-object tracking because of its high speed and ability to address heavy occlusion. PMID:27347951
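For context, the single-channel correlation filtering that the MVCF generalises can be sketched in a few lines of frequency-domain code (a MOSSE-style formulation). This is a hedged illustration of the baseline, not the MVCF itself; the Gaussian label width and regulariser are assumptions.

```python
import numpy as np

def train_filter(feature_patch, sigma=2.0, lam=1e-2):
    """Ridge-regression training of a single-channel filter against a Gaussian label."""
    h, w = feature_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))
    G = np.fft.fft2(np.fft.ifftshift(g))        # desired response in frequency domain
    X = np.fft.fft2(feature_patch)
    return (G * np.conj(X)) / (X * np.conj(X) + lam)   # conjugate filter H*

def correlation_response(H_conj, feature_patch):
    """Response map; the peak location indicates the object position."""
    Z = np.fft.fft2(feature_patch)
    resp = np.real(np.fft.ifft2(H_conj * Z))
    dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
    return resp, (dx, dy)
```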
Li, Liyuan; Huang, Weimin; Gu, Irene Yu-Hua; Luo, Ruijiang; Tian, Qi
2008-10-01
Efficiency and robustness are the two most important issues for multiobject tracking algorithms in real-time intelligent video surveillance systems. We propose a novel 2.5-D approach to real-time multiobject tracking in crowds, which is formulated as a maximum a posteriori estimation problem and is approximated through an assignment step and a location step. Observing that the occluding object is usually less affected by the occluded objects, sequential solutions for the assignment and the location are derived. A novel dominant color histogram (DCH) is proposed as an efficient object model. The DCH can be regarded as a generalized color histogram, where dominant colors are selected based on a given distance measure. Compared with conventional color histograms, the DCH only requires a few color components (31 on average). Furthermore, our theoretical analysis and evaluation on real data have shown that DCHs are robust to illumination changes. Using the DCH, efficient implementations of the sequential solutions for the assignment and location steps are proposed. The assignment step includes the estimation of the depth order for the objects in a dispersing group, one-by-one assignment, and feature exclusion from the group representation. The location step includes the depth-order estimation for the objects in a new group, the two-phase mean-shift location, and the exclusion of tracked objects from the new position in the group. Multiobject tracking results and evaluation from public data sets are presented. Experiments on image sequences captured in crowded public environments have shown good tracking results, where about 90% of the objects were successfully tracked with the correct identification numbers by the proposed method. Our results and evaluation indicate that the method is efficient and robust for tracking multiple objects (≥3) under complex occlusion in real-world surveillance scenarios.
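The dominant-color idea can be approximated by keeping only the most populated bins of a conventional color histogram. The sketch below is a simplification (top-k bins in HSV space) rather than the paper's distance-measure-based selection; the bin layout and k=31 are illustrative.

```python
import cv2
import numpy as np

def dominant_color_histogram(patch, bins=(8, 8, 8), top_k=31):
    """Keep only the top-k most populated colour bins of an HSV histogram."""
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
    hist = hist.ravel() / max(hist.sum(), 1e-9)
    idx = np.argsort(hist)[::-1][:top_k]         # dominant colour components
    dch = np.zeros_like(hist)
    dch[idx] = hist[idx]
    return dch / max(dch.sum(), 1e-9)            # renormalised dominant-colour model
```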
2011-02-07
Sensor UGVs (SUGV) or Disruptor UGVs, depending on their payload. The SUGVs included vision, GPS/IMU, and LIDAR systems for identifying and tracking ... employed by all the MAGICian research groups. Objects of interest were tracked using standard LIDAR and computer vision template-based feature ... tracking approaches. Mapping was solved through multi-agent particle-filter based Simultaneous Localization and Mapping (SLAM). Our system contains
Vision-based measurement for rotational speed by improving Lucas-Kanade template tracking algorithm.
Guo, Jie; Zhu, Chang'an; Lu, Siliang; Zhang, Dashan; Zhang, Chunyu
2016-09-01
Rotational angle and speed are important parameters for condition monitoring and fault diagnosis of rotating machinery, and their measurement is useful in precision machining and early warning of faults. In this study, a novel vision-based measurement algorithm is proposed to complete this task. A high-speed camera is first used to capture video of the rotating object. To extract the rotational angle, the template-based Lucas-Kanade algorithm is introduced to perform motion tracking by aligning the template image in the video sequence. Given the special case of the nonplanar surface of the cylindrical object, a nonlinear transformation is designed for modeling the rotation tracking. In spite of its unconventional and complex form, the transformation can realize angle extraction concisely with only one parameter. A simulation is then conducted to verify the tracking effect, and a practical tracking strategy is further proposed to track the video sequence consecutively. Based on the proposed algorithm, instantaneous rotational speed (IRS) can be measured accurately and efficiently. Finally, the effectiveness of the proposed algorithm is verified on a brushless direct current motor test rig through comparison with results obtained by a microphone. Experimental results demonstrate that the proposed algorithm can accurately extract rotational angles and can measure IRS with the advantages of being noncontact and effective.
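A rough sketch of the tracking-to-angle idea is given below using OpenCV's pyramidal Lucas-Kanade point tracker and an angle estimate about a known rotation centre. This is a simplification for illustration: the paper aligns a full template with a nonlinear (cylindrical) warp, which is not reproduced here, and the window size and point format are assumptions.

```python
import cv2
import numpy as np

def track_rotation(prev_gray, cur_gray, pts, centre):
    """pts: float32 array of shape (N, 1, 2) with feature points on the rotor."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
    good_old = pts[status.ravel() == 1].reshape(-1, 2)
    good_new = new_pts[status.ravel() == 1].reshape(-1, 2)
    a0 = np.arctan2(good_old[:, 1] - centre[1], good_old[:, 0] - centre[0])
    a1 = np.arctan2(good_new[:, 1] - centre[1], good_new[:, 0] - centre[0])
    d = (a1 - a0 + np.pi) % (2 * np.pi) - np.pi      # wrap each difference to [-pi, pi]
    dtheta = np.median(d)                            # per-frame rotation increment (rad)
    return good_new.reshape(-1, 1, 2), dtheta

# Instantaneous speed estimate (rev/min) from the per-frame increment:
# speed_rpm = dtheta / (2 * np.pi) * frame_rate * 60
```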
2018-01-01
Although the use of the surgical robot is rapidly expanding for various medical treatments, there still exist safety issues and concerns about robot-assisted surgeries due to limited vision through a laparoscope, which may cause compromised situation awareness and surgical errors requiring rapid emergency conversion to open surgery. To assist surgeon's situation awareness and preventive emergency response, this study proposes situation information guidance through a vision-based common algorithm architecture for automatic detection and tracking of intraoperative hemorrhage and surgical instruments. The proposed common architecture comprises the location of the object of interest using feature texture, morphological information, and the tracking of the object based on Kalman filter for robustness with reduced error. The average recall and precision of the instrument detection in four prostate surgery videos were 96% and 86%, and the accuracy of the hemorrhage detection in two prostate surgery videos was 98%. Results demonstrate the robustness of the automatic intraoperative object detection and tracking which can be used to enhance the surgeon's preventive state recognition during robot-assisted surgery. PMID:29854366
Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking
Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua
2014-01-01
To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative, and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also trained to adapt to target appearance variations in a timely manner. Qualitative and quantitative evaluations on challenging image sequences compared with state-of-the-art algorithms demonstrate that the proposed tracking algorithm achieves a more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252
Decentralized cooperative TOA/AOA target tracking for hierarchical wireless sensor networks.
Chen, Ying-Chih; Wen, Chih-Yu
2012-11-08
This paper proposes a distributed method for cooperative target tracking in hierarchical wireless sensor networks. The concept of leader-based information processing is conducted to achieve object positioning, considering a cluster-based network topology. Random timers and local information are applied to adaptively select a sub-cluster for the localization task. The proposed energy-efficient tracking algorithm allows each sub-cluster member to locally estimate the target position with a Bayesian filtering framework and a neural networking model, and further performs estimation fusion in the leader node with the covariance intersection algorithm. This paper evaluates the merits and trade-offs of the protocol design towards developing more efficient and practical algorithms for object position estimation.
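The covariance intersection step used for fusion at the leader node has a compact closed form. A minimal sketch follows; the trace-based weight heuristic is an assumption for illustration, whereas the optimal weight is usually found by minimising a criterion such as the trace or determinant of the fused covariance.

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, w=None):
    """Fuse two estimates with unknown cross-correlation via covariance intersection."""
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    if w is None:
        # Simple heuristic weight; the optimal w minimises e.g. trace(P).
        w = np.trace(Pb) / (np.trace(Pa) + np.trace(Pb))
    P = np.linalg.inv(w * Pa_inv + (1 - w) * Pb_inv)
    x = P @ (w * Pa_inv @ xa + (1 - w) * Pb_inv @ xb)
    return x, P
```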
Vision-based object detection and recognition system for intelligent vehicles
NASA Astrophysics Data System (ADS)
Ran, Bin; Liu, Henry X.; Martono, Wilfung
1999-01-01
Recently, a proactive crash mitigation system has been proposed to enhance the crash avoidance and survivability of intelligent vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as system component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems: the vehicle detection and recognition sub-system and the traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames. The analysis of a single image frame is performed every ten full-size images. The information model obtains information related to the object, such as time to collision for the object vehicle and relative distance from the traffic signs. Experimental results demonstrate a robust and accurate system for real-time object detection and recognition over thousands of image frames.
NASA Astrophysics Data System (ADS)
Griffiths, D.; Boehm, J.
2018-05-01
With deep learning approaches now out-performing traditional image processing techniques for image understanding, this paper assesses the potential of rapidly generated Convolutional Neural Networks (CNNs) for applied engineering purposes. Three CNNs are trained on 275 UAS-derived and freely available online images for object detection of 3 m² segments of railway track. These include two models based on the Faster R-CNN object detection algorithm (ResNet and Inception-ResNet) as well as the novel one-stage focal loss network architecture (RetinaNet). Model performance was assessed with respect to three accuracy metrics. The first two consist of Intersection over Union (IoU) with thresholds of 0.5 and 0.1. The last assesses accuracy based on the proportion of track covered by object detection proposals against total track length. In under six hours of training (and two hours of manual labelling) the models detected 91.3%, 83.1%, and 75.6% of track in the 500 test images acquired from the UAS survey for RetinaNet, ResNet, and Inception-ResNet, respectively. We then discuss the potential for applications of such systems within the engineering field for a range of scenarios.
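The IoU metric used for the first two thresholds is straightforward to compute for axis-aligned boxes; a minimal, generic sketch is shown below (not tied to any particular detection framework).

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as a true positive at threshold 0.5 when iou(pred, gt) >= 0.5.
```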
Sun, Lifan; Ji, Baofeng; Lan, Jian; He, Zishu; Pu, Jiexin
2017-01-01
The key to successful maneuvering complex extended object tracking (MCEOT) using range extent measurements provided by high resolution sensors lies in accurate and effective modeling of both the extension dynamics and the centroid kinematics. During object maneuvers, the extension dynamics of an object with a complex shape is highly coupled with the centroid kinematics. However, this difficult but important problem is rarely considered and solved explicitly. In view of this, this paper proposes a general approach to modeling a maneuvering complex extended object based on Minkowski sum, so that the coupled turn maneuvers in both the centroid states and extensions can be described accurately. The new model has a concise and unified form, in which the complex extension dynamics can be simply and jointly characterized by multiple simple sub-objects’ extension dynamics based on Minkowski sum. The proposed maneuvering model fits range extent measurements very well due to its favorable properties. Based on this model, an MCEOT algorithm dealing with motion and extension maneuvers is also derived. Two different cases of the turn maneuvers with known/unknown turn rates are specifically considered. The proposed algorithm which jointly estimates the kinematic state and the object extension can also be easily implemented. Simulation results demonstrate the effectiveness of the proposed modeling and tracking approaches. PMID:28937629
Kumar, Aditya; Shi, Ruijie; Kumar, Rajeeva; Dokucu, Mustafa
2013-04-09
Control system and method for controlling an integrated gasification combined cycle (IGCC) plant are provided. The system may include a controller coupled to a dynamic model of the plant to process a prediction of plant performance and determine a control strategy for the IGCC plant over a time horizon subject to plant constraints. The control strategy may include control functionality to meet a tracking objective and control functionality to meet an optimization objective. The control strategy may be configured to prioritize the tracking objective over the optimization objective based on a coordinate transformation, such as an orthogonal or quasi-orthogonal projection. A plurality of plant control knobs may be set in accordance with the control strategy to generate a sequence of coordinated multivariable control inputs to meet the tracking objective and the optimization objective subject to the prioritization resulting from the coordinate transformation.
Robust multiperson detection and tracking for mobile service and social robots.
Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou
2012-10-01
This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
Li, Mingjie; Zhou, Ping; Wang, Hong; ...
2017-09-19
As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. In this correspondence, an optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at a set-point tracking objective for pulp quality, an economic objective, and a specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times is employed to construct the subprocess model of the state process model for the HC refining system, and the Wiener-type model is then obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structures are determined based on the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes the set-point tracking objective of pulp quality and SE consumption is proposed, which uses the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective of pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods enable the HC refining system to provide better set-point tracking of pulp quality when these predictive controllers are employed. In addition, the optimal predictive controllers oriented to the comprehensive economic objective and the SE consumption objective significantly reduce energy consumption.
Dual linear structured support vector machine tracking method via scale correlation filter
NASA Astrophysics Data System (ADS)
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on structured support vector machine (SVM) performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy of object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker comprised of a DLSSVM model and a scale correlation filter obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
NASA Astrophysics Data System (ADS)
Cai, Lei; Wang, Lin; Li, Bo; Zhang, Libao; Lv, Wen
2017-06-01
Vehicle tracking technology is currently one of the most active research topics in machine vision and is an important part of intelligent transportation systems. However, in both theory and technology, it still faces many challenges, including real-time operation and robustness. In video surveillance, targets need to be detected in real time and their positions calculated accurately to judge their motives. The contents of video sequence images and the target motion are complex, so the objects cannot be expressed by a unified mathematical model. Object tracking is defined as locating the moving target of interest in each frame of a video. Current tracking technology can achieve reliable results in simple environments for targets with easily identified characteristics. However, in more complex environments, it is easy to lose the target because of the mismatch between the target appearance and its dynamic model. Moreover, the target usually has a complex shape, but traditional target tracking algorithms usually represent the tracking results with simple geometric shapes such as rectangles or circles, which cannot provide accurate information for subsequent higher-level applications. This paper combines a traditional object tracking technique, the Mean-Shift algorithm, with an image segmentation algorithm, the Active Contour model, to obtain object outlines during the tracking process and automatically handle topology changes. Meanwhile, the outline information is used to aid the tracking algorithm and improve it.
NASA Astrophysics Data System (ADS)
Liu, Yu-Che; Huang, Chung-Lin
2013-03-01
This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) capturing imagery of the human object's face for biometric purposes, (2) the optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on the expected capture conditions such as the camera-subject distance, pan-tilt angles of capture, face visibility, and others. Such an objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System
2016-01-01
This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain the low-dimensional shape space, which is trained on data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space obtained by the nonlinear manifold learning technique and constructs the adaptive kernel shape in the high-dimensional shape space. It can improve mean shift tracker performance in tracking object position and object contour and avoiding background clutter. In the experimental part, we take the walking human as an example to validate that our method is accurate and robust in tracking human position and describing the human contour. PMID:27379165
Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.
Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong
2016-04-15
Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
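The "compute convolutions once, score candidates cheaply" idea behind the two-step extraction can be sketched as follows: a shared backbone runs on the full frame, and per-candidate features are pooled from the resulting feature map rather than re-running the network on every window. This is only a rough analogue, not the paper's ELDA pipeline; the ResNet-18 backbone, ROI-align pooling, stride, and tensor formats are all stand-in assumptions.

```python
import torch
import torchvision
from torchvision.ops import roi_align

# Stand-in backbone: all convolutional stages of ResNet-18 (no pretrained weights here).
backbone = torch.nn.Sequential(
    *list(torchvision.models.resnet18(weights=None).children())[:-2]
).eval()

@torch.no_grad()
def candidate_features(frame_bchw, boxes_xyxy, stride=32):
    """frame_bchw: (1, 3, H, W) float tensor; boxes_xyxy: (K, 4) float tensor in pixels."""
    fmap = backbone(frame_bchw)                                   # (1, C, H/32, W/32)
    rois = torch.cat([torch.zeros(len(boxes_xyxy), 1), boxes_xyxy], dim=1)  # prepend batch index
    pooled = roi_align(fmap, rois, output_size=(7, 7), spatial_scale=1.0 / stride)
    return pooled.flatten(1)                                      # one feature vector per window
```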
Color Feature-Based Object Tracking through Particle Swarm Optimization with Improved Inertia Weight
Guo, Siqiu; Zhang, Tao; Song, Yulong
2018-01-01
This paper presents a particle swarm tracking algorithm with improved inertia weight based on color features. A weighted color histogram is used as the target feature to reduce the contribution of target edge pixels, which makes the algorithm insensitive to non-rigid deformation, scale variation, and rotation of the target. Meanwhile, the influence of partial occlusion on the description of target features is reduced. The particle swarm optimization algorithm can perform multi-peak search and therefore copes well with tracking under object occlusion: the target can be located precisely even where the similarity function exhibits multiple peaks. When particle swarm optimization is applied to object tracking, the standard inertia weight adjustment mechanism has some limitations. This paper presents an improved method: the concept of particle maturity is introduced to improve the inertia weight adjustment mechanism, so that the inertia weight can be adjusted in time according to the state of each particle in each generation. Experimental results show that our algorithm achieves state-of-the-art performance in a wide range of scenarios. PMID:29690610
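A minimal sketch of the particle swarm search for a 2-D target position is given below; since the abstract does not spell out the exact maturity rule, the per-particle inertia update shown is a plausible stand-in, and the similarity function is a hypothetical placeholder for the weighted color-histogram score.

# Particle swarm search over a 2-D object position with a per-particle inertia weight.
import numpy as np

def similarity(pos):
    # placeholder for the weighted color-histogram similarity at image position pos
    return -np.sum((pos - np.array([120.0, 80.0])) ** 2)

rng = np.random.default_rng(0)
n, iters = 30, 40
pos = rng.uniform(0, 200, size=(n, 2))
vel = np.zeros((n, 2))
pbest, pbest_val = pos.copy(), np.array([similarity(p) for p in pos])
maturity = np.zeros(n)                        # consecutive non-improving steps

for _ in range(iters):
    gbest = pbest[np.argmax(pbest_val)]
    w = 0.4 + 0.5 / (1.0 + maturity)          # stagnant particles lose inertia and refine locally
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w[:, None] * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = pos + vel
    val = np.array([similarity(p) for p in pos])
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    maturity = np.where(improved, 0, maturity + 1)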
Study of moving object detecting and tracking algorithm for video surveillance system
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhang, Rongfu
2010-10-01
This paper describes a specific process for moving-target detection and tracking in video surveillance. Obtaining a high-quality background is the key to achieving difference-based target detection in video surveillance. The paper builds a clean background using a block segmentation method and detects moving targets by background differencing; after a series of processing steps, a more complete object can be extracted from the original image and located with its smallest bounding rectangle. In video surveillance systems, camera delay and other factors lead to tracking lag, so a Kalman filter model based on template matching is proposed. Using the predictive and estimation capabilities of the Kalman filter, the center of the smallest bounding rectangle serves as the predicted position at the next moment; template matching is then performed in a region centered on this predicted position, and the best matching center is determined by computing the cross-correlation similarity between the current image and the reference image. Narrowing the search scope reduces the search time and thereby achieves fast tracking.
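The predict-then-match loop described above can be sketched roughly as follows: a constant-velocity Kalman filter predicts the target center, and template matching is restricted to a window around that prediction. All sizes, noise covariances and variable names are illustrative assumptions.

# Kalman prediction followed by template matching in a local search window (sketch).
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                              # state: x, y, vx, vy
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

def track_step(frame_gray, template, search_radius=40):
    """frame_gray and template are same-dtype grayscale images."""
    th, tw = template.shape
    px, py = kf.predict()[:2].ravel()                    # predicted target center
    x0 = max(int(px) - search_radius, 0)
    y0 = max(int(py) - search_radius, 0)
    roi = frame_gray[y0:y0 + 2 * search_radius, x0:x0 + 2 * search_radius]
    res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)                # best cross-correlation peak
    cx, cy = x0 + max_loc[0] + tw / 2, y0 + max_loc[1] + th / 2
    kf.correct(np.array([[cx], [cy]], np.float32))       # measurement update
    return cx, cy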
New color-based tracking algorithm for joints of the upper extremities
NASA Astrophysics Data System (ADS)
Wu, Xiangping; Chow, Daniel H. K.; Zheng, Xiaoxiang
2007-11-01
To track the joints of the upper limb of stroke sufferers for rehabilitation assessment, a new tracking algorithm which utilizes a newly developed color-based particle filter and a novel strategy for handling occlusions is proposed in this paper. Objects are represented by their color histogram models, and a particle filter is introduced to track the objects within a probabilistic framework. A Kalman filter, as a local optimizer, is integrated into the sampling stage of the particle filter; it steers samples to a region of high likelihood so that fewer samples are required. A color clustering method and anatomic constraints are used to deal with the occlusion problem. Compared with the general basic particle filtering method, the experimental results show that the new algorithm reduces the number of samples, and hence the computational cost, and achieves a better ability to handle complete occlusion over a few frames.
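A bare-bones color-histogram particle filter, without the Kalman-steered sampling or the occlusion strategy of the paper, might look like the following sketch; the patch size, motion noise and weighting constant are assumed values.

# Color-histogram particle filter step: diffuse particles, weight them by histogram
# similarity, estimate the position, then resample (minimal sketch).
import cv2
import numpy as np

rng = np.random.default_rng(0)
HALF = 15                                   # assumed half-size of the measurement patch

def patch_hist(frame_hsv, cx, cy):
    patch = frame_hsv[int(cy) - HALF:int(cy) + HALF, int(cx) - HALF:int(cx) + HALF]
    hist = cv2.calcHist([patch], [0], None, [30], [0, 180])
    return cv2.normalize(hist, hist, 1.0, 0.0, cv2.NORM_L1)

def particle_filter_step(frame_hsv, particles, ref_hist, motion_std=6.0):
    h_img, w_img = frame_hsv.shape[:2]
    particles = particles + rng.normal(0.0, motion_std, particles.shape)    # diffuse
    particles[:, 0] = np.clip(particles[:, 0], HALF, w_img - HALF - 1)
    particles[:, 1] = np.clip(particles[:, 1], HALF, h_img - HALF - 1)
    dists = np.array([cv2.compareHist(ref_hist, patch_hist(frame_hsv, x, y),
                                      cv2.HISTCMP_BHATTACHARYYA)
                      for x, y in particles])
    weights = np.exp(-20.0 * dists)          # small Bhattacharyya distance -> large weight
    weights /= weights.sum()
    estimate = (particles * weights[:, None]).sum(axis=0)
    idx = rng.choice(len(particles), len(particles), p=weights)             # resample
    return particles[idx], estimate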
Dynamic sensor management of dispersed and disparate sensors for tracking resident space objects
NASA Astrophysics Data System (ADS)
El-Fallah, A.; Zatezalo, A.; Mahler, R.; Mehra, R. K.; Donatelli, D.
2008-04-01
Dynamic sensor management of dispersed and disparate sensors for space situational awareness presents daunting scientific and practical challenges as it requires optimal and accurate maintenance of all Resident Space Objects (RSOs) of interest. We demonstrate an approach to the space-based sensor management problem by extending a previously developed and tested sensor management objective function, the Posterior Expected Number of Targets (PENT), to disparate and dispersed sensors. This PENT extension together with observation models for various sensor platforms, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker provide a powerful tool for tackling this challenging problem. We demonstrate the approach using simulations for tracking RSOs by a Space Based Visible (SBV) sensor and ground based radars.
Method for targetless tracking subpixel in-plane movements.
Espinosa, Julian; Perez, Jorge; Ferrer, Belen; Mas, David
2015-09-01
We present a targetless motion tracking method for detecting planar movements with subpixel accuracy. This method is based on the computation and tracking of the intersection of two nonparallel straight-line segments in the image of a moving object in a scene. The method is simple and easy to implement because no complex structures have to be detected. It has been tested and validated using a lab experiment consisting of a vibrating object that was recorded with a high-speed camera working at 1000 fps. We managed to track displacements with an accuracy of hundredths of a pixel, or even thousandths of a pixel in the case of tracking harmonic vibrations. The method is widely applicable because it allows the amplitude and frequency of vibrations to be measured at a distance with a vision system.
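A minimal sketch of the core computation, assuming the sub-pixel edge points of the two segments are already available from any standard edge detector: each segment is fitted by total least squares and the tracked point is the intersection of the two fitted lines.

# Sub-pixel tracked point as the intersection of two fitted straight lines (sketch).
import numpy as np

def fit_line(pts):
    """Fit a*x + b*y = c with unit normal (a, b) by total least squares."""
    centroid = pts.mean(axis=0)
    u, s, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                      # direction of smallest variance
    return normal[0], normal[1], normal @ centroid

def intersection(pts1, pts2):
    a1, b1, c1 = fit_line(pts1)
    a2, b2, c2 = fit_line(pts2)
    A = np.array([[a1, b1], [a2, b2]])
    c = np.array([c1, c2])
    return np.linalg.solve(A, c)         # sub-pixel (x, y) of the segment crossing

pts_h = np.array([[10.2, 50.1], [60.4, 50.0], [110.1, 49.9]])   # hypothetical edge points
pts_v = np.array([[70.0, 10.3], [70.1, 60.2], [69.9, 110.0]])
print(intersection(pts_h, pts_v))        # roughly (70, 50)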
NASA Astrophysics Data System (ADS)
Radkowski, Rafael; Holland, Stephen; Grandin, Robert
2018-04-01
This research addresses inspection location tracking in the field of nondestructive evaluation (NDE), using a computer vision technique to determine the position and orientation of typical NDE equipment in a test setup. The objective is to determine the tracking accuracy achievable for typical NDE equipment in order to facilitate automatic NDE data integration. Since the employed tracking technique relies on the surface curvatures of an object of interest, the accuracy can only be determined experimentally. We work with flash thermography and conducted an experiment in which we tracked a specimen and a thermography flash hood, measured the spatial relation between the two, and used that relation as input to map thermography data onto a 3D model of the specimen. The results indicate adequate accuracy but also revealed calibration challenges.
Automatic Tracking Algorithm in Coaxial Near-Infrared Laser Ablation Endoscope for Fetus Surgery
NASA Astrophysics Data System (ADS)
Hu, Yan; Yamanaka, Noriaki; Masamune, Ken
2014-07-01
This article reports a stable vessel object tracking method for the treatment of twin-to-twin transfusion syndrome based on our previous 2 DOF endoscope. During laser coagulation treatment it is necessary to focus on the exact position of the target object; however, the target moves with the mother's respiratory motion, and obtaining and tracking its position precisely remains a challenge. In this article, an algorithm that uses features from accelerated segment test (FAST) for feature extraction and optical flow for object tracking is proposed to deal with this problem. Further, we experimentally simulate the movement caused by the mother's respiration, and the resulting position errors and similarity scores verify the effectiveness of the proposed tracking algorithm for laser ablation endoscopy in vitro and under water, considering these two influential factors. On average, errors of about 10 pixels and similarity scores above 0.92 were obtained in the experiments.
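A generic stand-in for the FAST-plus-optical-flow stage might look like the sketch below, using OpenCV's FAST detector and pyramidal Lucas-Kanade tracker; the file name, detector threshold and the crude mean-of-features position estimate are illustrative assumptions.

# FAST corner detection followed by pyramidal Lucas-Kanade optical flow (sketch).
import cv2
import numpy as np

cap = cv2.VideoCapture("endoscope.avi")               # hypothetical sequence
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
fast = cv2.FastFeatureDetector_create(threshold=25)
kps = fast.detect(prev_gray, None)
pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                winSize=(21, 21), maxLevel=3)
    good = nxt[status.ravel() == 1]                   # successfully tracked features
    target = good.reshape(-1, 2).mean(axis=0)         # crude target position estimate
    prev_gray, pts = gray, good.reshape(-1, 1, 2)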
Automated multiple target detection and tracking in UAV videos
NASA Astrophysics Data System (ADS)
Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie
2010-04-01
In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
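The overlap-rate association step can be illustrated with a simple IoU-based greedy matcher between predicted track boxes and detected blobs; the greedy assignment below is a simplification for brevity rather than the exact procedure of the paper.

# Overlap-rate (IoU) data association between track predictions and detections (sketch).
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(track_boxes, det_boxes, min_iou=0.3):
    scores = np.array([[iou(t, d) for d in det_boxes] for t in track_boxes])
    matches = []
    while scores.size and scores.max() > min_iou:
        ti, di = np.unravel_index(scores.argmax(), scores.shape)
        matches.append((ti, di))
        scores[ti, :] = -1.0          # each track and detection is used at most once
        scores[:, di] = -1.0
    return matches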
Self-motion impairs multiple-object tracking.
Thomas, Laura E; Seiffert, Adriane E
2010-10-01
Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement impairs the ability to keep track of other moving objects. Participants attempted to track multiple targets while either moving around the tracking area or remaining in a fixed location. Participants' tracking performance was impaired when they moved to a new location during tracking, even when they were passively moved and when they did not see a shift in viewpoint. Self-motion impaired multiple-object tracking in both an immersive virtual environment and a real-world analog, but did not interfere with a difficult non-spatial tracking task. These results suggest that people use a common mechanism to track changes both to the location of moving objects around them and to keep track of their own location. Copyright 2010 Elsevier B.V. All rights reserved.
Monocular Stereo Measurement Using High-Speed Catadioptric Tracking
Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku
2017-01-01
This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
NASA Astrophysics Data System (ADS)
Kudryavtsev, Andrey V.; Laurent, Guillaume J.; Clévy, Cédric; Tamadazte, Brahim; Lutz, Philippe
2015-10-01
Microassembly is an innovative alternative to the microfabrication process of MOEMS, which is quite complex. It usually implies the use of microrobots controlled by an operator. The reliability of this approach has already been confirmed for micro-optical technologies. However, the characterization of assemblies has shown that the operator is the main source of inaccuracies in teleoperated microassembly. Therefore, there is great interest in automating the microassembly process. One of the constraints of automation at the microscale is the lack of high-precision sensors capable of providing full information about the object position. Thus, the use of visual feedback represents a very promising approach for automating the microassembly process. The purpose of this article is to characterize techniques for object position estimation based on visual data, i.e., the visual tracking techniques from the ViSP library. These algorithms estimate the 3-D object pose using a single view of the scene and a CAD model of the object. The performance of three main types of model-based trackers is analyzed and quantified: edge-based, texture-based, and hybrid. The problems of visual tracking at the microscale are discussed. The control of the micromanipulation station used in the framework of our project is performed using a new Simulink block set. Experimental results are shown and demonstrate the possibility of obtaining repeatability below 1 µm.
Color Image Processing and Object Tracking System
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.
1996-01-01
This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on a x-y-z micro-positioning stage, an S-VHS tapedeck, an Hi8 tapedeck, video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.
The Role of Visual Working Memory in Attentive Tracking of Unique Objects
ERIC Educational Resources Information Center
Makovski, Tal; Jiang, Yuhong V.
2009-01-01
When tracking moving objects in space humans usually attend to the objects' spatial locations and update this information over time. To what extent do surface features assist attentive tracking? In this study we asked participants to track identical or uniquely colored objects. Tracking was enhanced when objects were unique in color. The benefit…
Connected Component Model for Multi-Object Tracking.
He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan
2016-08-01
In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is the equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
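The equivalence-partitioning idea can be sketched with a union-find structure: observations that can possibly belong to the same object are merged into one connected component, and each component then forms an independent association subproblem. The gating pairs passed in below are assumed to come from a separate gating step.

# Partition observations into connected components via union-find (sketch).
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]   # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def components(num_obs, related_pairs):
    uf = UnionFind(num_obs)
    for i, j in related_pairs:          # (i, j) = observations that may belong to one object
        uf.union(i, j)
    groups = {}
    for i in range(num_obs):
        groups.setdefault(uf.find(i), []).append(i)
    return list(groups.values())

print(components(6, [(0, 1), (1, 2), (3, 4)]))   # -> [[0, 1, 2], [3, 4], [5]]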
Active illuminated space object imaging and tracking simulation
NASA Astrophysics Data System (ADS)
Yue, Yufang; Xie, Xiaogang; Luo, Wen; Zhang, Feizhou; An, Jianzhu
2016-10-01
Optical earth imaging simulation of a space target in orbit and its extraction under laser illumination conditions are discussed. Based on the orbit and corresponding attitude of a satellite, its 3D imaging rendering was built. A general simulation platform was developed that adapts to variable 3D satellite models and to the relative position relationships between the satellite and an earth-based detector system. A unified parallel projection technology is proposed in this paper. Furthermore, we note that the random optical distribution under laser-illuminated conditions is a challenge for object discrimination, the great randomness of the active laser illumination speckles being the primary factor. The combined effects of multi-frame accumulation and several tracking methods, such as Meanshift tracking, contour poid, and filter deconvolution, were simulated. Comparison of the results illustrates that the union of multi-frame accumulation and contour poid is recommendable for actively laser-illuminated images, providing high tracking precision and stability for multiple object attitudes.
Doublet Pulse Coherent Laser Radar for Tracking of Resident Space Objects
NASA Technical Reports Server (NTRS)
Prasad, Narasimha S.; Rudd, Van; Shald, Scott; Sandford, Stephen; Dimarcantonio, Albert
2014-01-01
In this paper, the development of a long range ladar system known as ExoSPEAR at NASA Langley Research Center for tracking rapidly moving resident space objects is discussed. Based on 100 W, nanosecond class, near-IR laser, this ladar system with coherent detection technique is currently being investigated for short dwell time measurements of resident space objects (RSOs) in LEO and beyond for space surveillance applications. This unique ladar architecture is configured using a continuously agile doublet-pulse waveform scheme coupled to a closed-loop tracking and control loop approach to simultaneously achieve mm class range precision and mm/s velocity precision and hence obtain unprecedented track accuracies. Salient features of the design architecture followed by performance modeling and engagement simulations illustrating the dependence of range and velocity precision in LEO orbits on ladar parameters are presented. Estimated limits on detectable optical cross sections of RSOs in LEO orbits are discussed.
Visual tracking of da Vinci instruments for laparoscopic surgery
NASA Astrophysics Data System (ADS)
Speidel, S.; Kuhn, E.; Bodenstedt, S.; Röhl, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.
2014-03-01
Intraoperative tracking of laparoscopic instruments is a prerequisite to realize further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present an image-based markerless 3D tracking of different da Vinci instruments in near real-time without an explicit model. The method is based on different visual cues to segment the instrument tip, calculates a tip point and uses a multiple object particle filter for tracking. The accuracy and robustness is evaluated with in vivo data.
Visual attention is required for multiple object tracking.
Tran, Annie; Hoffman, James E
2016-12-01
In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Zittersteijn, M.; Vananti, A.; Schildknecht, T.; Dolado Perez, J. C.; Martinot, V.
2016-11-01
Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). The MTT problem quickly becomes an NP-hard combinatorial optimization problem. This means that the effort required to solve the MTT problem increases exponentially with the number of tracked objects. In an attempt to find an approximate solution of sufficient quality, several Population-Based Meta-Heuristic (PBMH) algorithms are implemented and tested on simulated optical measurements. These first results show that one of the tested algorithms, namely the Elitist Genetic Algorithm (EGA), consistently displays the desired behavior of finding good approximate solutions before reaching the optimum. The results further suggest that the algorithm possesses a polynomial time complexity, as the computation times are consistent with a polynomial model. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the association and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention.
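A generic elitist genetic algorithm on a toy objective is sketched below to illustrate the elitism scheme (the best individuals survive each generation unchanged); the measurement-to-object encoding used in the paper is not reproduced, and all hyperparameters are arbitrary.

# Elitist genetic algorithm on a toy continuous fitness function (sketch).
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    return -np.sum((x - 0.7) ** 2)            # toy objective; higher is better

def ega(dim=8, pop_size=40, elite=4, gens=100, mut=0.1):
    pop = rng.random((pop_size, dim))
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        order = np.argsort(fit)[::-1]
        elites = pop[order[:elite]]           # elitism: best individuals carried over unchanged
        children = []
        while len(children) < pop_size - elite:
            pa, pb = pop[rng.choice(order[:pop_size // 2], 2)]   # parents from the better half
            cut = rng.integers(1, dim)
            child = np.concatenate([pa[:cut], pb[cut:]])         # one-point crossover
            child += mut * rng.normal(size=dim)                  # Gaussian mutation
            children.append(child)
        pop = np.vstack([elites, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]

print(ega())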
Siamese convolutional networks for tracking the spine motion
NASA Astrophysics Data System (ADS)
Liu, Yuan; Sui, Xiubao; Sun, Yicheng; Liu, Chengwei; Hu, Yong
2017-09-01
Deep learning models have demonstrated great success in various computer vision tasks such as image classification and object tracking. However, tracking the lumbar spine by digitalized video fluoroscopic imaging (DVFI), which can quantitatively analyze the motion of the spine to diagnose lumbar instability, has not yet been well developed due to the lack of a steady and robust tracking method. In this paper, we propose a novel visual tracking algorithm for lumbar vertebra motion based on a Siamese convolutional neural network (CNN) model. We train a fully-convolutional neural network offline to learn generic image features. The network is trained to learn a similarity function that compares the labeled target in the first frame with candidate patches in the current frame. The similarity function returns a high score if the two images depict the same object. Once learned, the similarity function is used to track a previously unseen object without any online adaptation. In the current frame, our tracker evaluates candidate rotated patches sampled around the target position in the previous frame and outputs a rotated bounding box to locate the predicted target precisely. Results indicate that the proposed tracking method can detect the lumbar vertebra steadily and robustly. Especially for images with low contrast and cluttered background, the presented tracker can still achieve good tracking performance. Further, the proposed algorithm operates at high speed for real-time tracking.
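The Siamese similarity idea can be sketched in PyTorch as follows: the same small convolutional embedding is applied to the labeled exemplar and to the current search region, and their cross-correlation yields a response map whose peak locates the target. The toy backbone and patch sizes are assumptions, not the trained network of the paper.

# Siamese cross-correlation similarity between an exemplar and a search region (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

exemplar = torch.rand(1, 1, 32, 32)        # hypothetical labeled vertebra patch
search = torch.rand(1, 1, 128, 128)        # hypothetical current-frame search region

z = embed(exemplar)                        # shared embedding of the exemplar
x = embed(search)                          # shared embedding of the search region

# Cross-correlate the exemplar embedding over the search embedding:
response = F.conv2d(x, z)                  # (1, 1, 97, 97) similarity map
peak = (response == response.max()).nonzero()[0][-2:]
print("predicted target offset (row, col):", peak.tolist())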
Distributed multirobot sensing and tracking: a behavior-based approach
NASA Astrophysics Data System (ADS)
Parker, Lynne E.
1995-09-01
An important issue that arises in the automation of many large-scale surveillance and reconnaissance tasks is that of tracking the movements of (or maintaining passive contact with) objects navigating in a bounded area of interest. Oftentimes in these problems, the area to be monitored will move over time or will not permit fixed sensors, thus requiring a team of mobile sensors--or robots--to monitor the area collectively. In these situations, the robots must not only have mechanisms for determining how to track objects and how to fuse information from neighboring robots, but they must also have distributed control strategies for ensuring that the entire area of interest is continually covered to the greatest extent possible. This paper focuses on the distributed control issue by describing a proposed decentralized control mechanism that allows a team of robots to collectively track and monitor objects in an uncluttered area of interest. The approach is based upon an extension to the ALLIANCE behavior-based architecture that generalizes from the domain of loosely-coupled, independent applications to the domain of strongly cooperative applications, in which the action selection of a robot is dependent upon the actions selected by its teammates. We conclude the paper by describing our ongoing implementation of the proposed approach on a team of four mobile robots.
Robust object tracking techniques for vision-based 3D motion analysis applications
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.
2016-04-01
Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications, including industry and science, virtual reality and movies, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed and the potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four technical vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for high accuracy of the 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The results of the algorithms' evaluation show high robustness and reliability for various motion analysis tasks in technical and biomechanics applications.
Liang, Zhongwei; Zhou, Liang; Liu, Xiaochu; Wang, Xiaogang
2014-01-01
Tablet image tracking has a notable influence on the efficiency and reliability of high-speed drug mass production, and at the same time it has emerged as a difficult problem and a focus of production monitoring in recent years, due to the highly similar shapes and random position distribution of the objects to be searched for. For the purpose of tracking randomly distributed tablets accurately, a calibrated surface of light-intensity reflective energy, describing the shape topology and topography details of an objective tablet, is established by using a surface fitting approach and transitional vector determination. On this basis, the mathematical properties of these surfaces are derived, and an artificial neural network (ANN) is then employed to classify the moving target tablets by recognizing their different surface properties, so that the instantaneous coordinate positions of the drug tablets in one image frame can be determined. By repeating the same pattern recognition on the next image frame, the real-time movements of the objective tablet templates are tracked in sequence. This paper provides reliable references and new research ideas for real-time object tracking in drug production practice. PMID:25143781
Object tracking with stereo vision
NASA Technical Reports Server (NTRS)
Huber, Eric
1994-01-01
A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.
How Many Objects are You Worth? Quantification of the Self-Motion Load on Multiple Object Tracking
Thomas, Laura E.; Seiffert, Adriane E.
2011-01-01
Perhaps walking and chewing gum is effortless, but walking and tracking moving objects is not. Multiple object tracking is impaired by walking from one location to another, suggesting that updating location of the self puts demands on object tracking processes. Here, we quantified the cost of self-motion in terms of the tracking load. Participants in a virtual environment tracked a variable number of targets (1–5) among distractors while either staying in one place or moving along a path that was similar to the objects’ motion. At the end of each trial, participants decided whether a probed dot was a target or distractor. As in our previous work, self-motion significantly impaired performance in tracking multiple targets. Quantifying tracking capacity for each individual under move versus stay conditions further revealed that self-motion during tracking produced a cost to capacity of about 0.8 (±0.2) objects. Tracking your own motion is worth about one object, suggesting that updating the location of the self is similar, but perhaps slightly easier, than updating locations of objects. PMID:21991259
Creating objective and measurable postgraduate year 1 residency graduation requirements.
Starosta, Kaitlin; Davis, Susan L; Kenney, Rachel M; Peters, Michael; To, Long; Kalus, James S
2017-03-15
The process of developing objective and measurable postgraduate year 1 (PGY1) residency graduation requirements and a progress tracking system is described. The PGY1 residency accreditation standard requires that programs establish criteria that must be met by residents for successful completion of the program (i.e., graduation requirements), which should presumably be aligned with helping residents to achieve the purpose of residency training. In addition, programs must track a resident's progress toward fulfillment of residency goals and objectives. Defining graduation requirements and establishing the process for tracking residents' progress are left up to the discretion of the residency program. To help standardize resident performance assessments, leaders of an academic medical center-based PGY1 residency program developed graduation requirement criteria that are objective, measurable, and linked back to residency goals and objectives. A system for tracking resident progress relative to quarterly progress targets was instituted. Leaders also developed a focused, on-the-spot skills assessment termed "the Thunderdome," which was designed for objective evaluation of direct patient care skills. Quarterly data on residents' progress are used to update and customize each resident's training plan. Implementation of this system allowed seamless linkage of the training plan, the progress tracking system, and the specified graduation requirement criteria. PGY1 residency requirements that are objective, that are measurable, and that attempt to identify what skills the resident must demonstrate in order to graduate from the program were developed for use in our residency program. A system for tracking the residents' progress by comparing residents' performance to predetermined quarterly benchmarks was developed. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Brandes, Susanne; Mokhtari, Zeinab; Essig, Fabian; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-02-01
Time-lapse microscopy is an important technique to study the dynamics of various biological processes. The labor-intensive manual analysis of microscopy videos is increasingly replaced by automated segmentation and tracking methods. These methods are often limited to certain cell morphologies and/or cell stainings. In this paper, we present an automated segmentation and tracking framework that does not have these restrictions. In particular, our framework handles highly variable cell shapes and does not rely on any cell stainings. Our segmentation approach is based on a combination of spatial and temporal image variations to detect moving cells in microscopy videos. This method yields a sensitivity of 99% and a precision of 95% in object detection. The tracking of cells consists of different steps, starting from single-cell tracking based on a nearest-neighbor-approach, detection of cell-cell interactions and splitting of cell clusters, and finally combining tracklets using methods from graph theory. The segmentation and tracking framework was applied to synthetic as well as experimental datasets with varying cell densities implying different numbers of cell-cell interactions. We established a validation framework to measure the performance of our tracking technique. The cell tracking accuracy was found to be >99% for all datasets indicating a high accuracy for connecting the detected cells between different time points. Copyright © 2014 Elsevier B.V. All rights reserved.
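The first, nearest-neighbour stage of the tracking pipeline can be sketched as a gated one-to-one linking of detected cell centroids between consecutive frames; cluster splitting and the graph-based tracklet merging are omitted, and the gate distance is an assumed value.

# Nearest-neighbour linking of cell centroids between two consecutive frames (sketch).
import numpy as np

def link_frames(prev_centroids, curr_centroids, max_dist=25.0):
    """Return a list of (prev_index, curr_index) links."""
    links = []
    if len(prev_centroids) == 0 or len(curr_centroids) == 0:
        return links
    dists = np.linalg.norm(prev_centroids[:, None, :] - curr_centroids[None, :, :], axis=2)
    while np.isfinite(dists).any() and dists.min() < max_dist:
        i, j = np.unravel_index(np.argmin(dists), dists.shape)
        links.append((i, j))
        dists[i, :] = np.inf              # enforce one-to-one assignment
        dists[:, j] = np.inf
    return links

prev_c = np.array([[10.0, 12.0], [40.0, 55.0]])
curr_c = np.array([[12.0, 13.0], [80.0, 20.0], [42.0, 57.0]])
print(link_frames(prev_c, curr_c))        # -> [(0, 0), (1, 2)]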
NASA Astrophysics Data System (ADS)
Ehrhart, Matthias; Lienhart, Werner
2017-09-01
The importance of automated prism tracking continues to grow with the rising automation of total station measurements in machine control, monitoring and one-person operation. In this article we summarize and explain the different techniques that are used to coarsely search for a prism, to precisely aim at a prism, and to identify whether the correct prism is being tracked. Along with the state-of-the-art review, we discuss and experimentally evaluate possible improvements based on the image data of an additional wide-angle camera, which is available on many total stations today. In cases in which the total station's fine aiming module loses the prism, the tracked object may still be visible to the wide-angle camera because of its larger field of view. The theodolite angles towards the target can then be derived from its image coordinates, which facilitates a fast reacquisition of the prism. In experimental measurements we demonstrate that our image-based approach for the coarse target search is 4 to 10 times faster than conventional approaches.
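Deriving approximate theodolite angles from wide-angle image coordinates can be sketched with a simple pinhole model, given a calibrated focal length in pixels and a principal point; the calibration values below are hypothetical.

# Image coordinates to approximate horizontal/vertical angles via a pinhole model (sketch).
import math

FOCAL_PX = 800.0            # hypothetical focal length of the wide-angle camera, in pixels
CX, CY = 960.0, 540.0       # hypothetical principal point

def image_to_angles(u, v):
    """Return (horizontal, vertical) angles in degrees relative to the camera axis."""
    hz = math.atan2(u - CX, FOCAL_PX)
    vt = math.atan2(CY - v, FOCAL_PX)   # image rows grow downwards
    return math.degrees(hz), math.degrees(vt)

print(image_to_angles(1200.0, 400.0))   # target right of and above the optical axis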
Probabilistic multi-person localisation and tracking in image sequences
NASA Astrophysics Data System (ADS)
Klinger, T.; Rottensteiner, F.; Heipke, C.
2017-05-01
The localisation and tracking of persons in image sequences is commonly guided by recursive filters. Especially in a multi-object tracking environment, where mutual occlusions are inherent, the predictive model is prone to drift away from the actual target position when context is not taken into account. Further, if the image-based observations are imprecise, the trajectory is prone to be updated towards a wrong position. In this work we address both problems by using a new predictive model based on Gaussian Process Regression, and by using generic object detection as well as instance-specific classification for refined localisation. The predictive model takes into account the motion of every tracked pedestrian in the scene, and the prediction is executed with respect to the velocities of neighbouring persons. In contrast to existing methods, our approach uses a Dynamic Bayesian Network in which the state vector of a recursive Bayes filter, as well as the location of the tracked object in the image, are modelled as unknowns. This allows the detection to be corrected before it is incorporated into the recursive filter. Our method is evaluated on a publicly available benchmark dataset and outperforms related methods in terms of geometric precision and tracking accuracy.
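The predictive-model idea can be illustrated by fitting one Gaussian Process per image coordinate over time with scikit-learn and predicting the next position with its uncertainty; the neighbour-velocity coupling and the Dynamic Bayesian Network of the paper are not reproduced, and the trajectory data are synthetic.

# Gaussian Process Regression prediction of the next pedestrian position (sketch).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.arange(10, dtype=float).reshape(-1, 1)            # past frame indices
xy = np.column_stack([2.0 * t.ravel() + 5.0,             # hypothetical past positions
                      0.5 * t.ravel() ** 1.2 + 40.0])

kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=0.1)
gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, xy[:, d])
       for d in range(2)]

t_next = np.array([[10.0]])
pred = [gp.predict(t_next, return_std=True) for gp in gps]
mean = [p[0][0] for p in pred]                            # predicted x and y
std = [p[1][0] for p in pred]                             # predictive uncertainty
print("predicted position:", mean, "uncertainty:", std)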
The semantic category-based grouping in the Multiple Identity Tracking task.
Wei, Liuqing; Zhang, Xuemin; Li, Zhen; Liu, Jingyao
2018-01-01
In the Multiple Identity Tracking (MIT) task, categorical distinctions between targets and distractors have been found to facilitate tracking (Wei, Zhang, Lyu, & Li in Frontiers in Psychology, 7, 589, 2016). The purpose of this study was to further investigate the reasons for the facilitation effect, through six experiments. The results of Experiments 1-3 excluded the potential explanations of visual distinctiveness, attentional distribution strategy, and a working memory mechanism, respectively. When objects' visual information was preserved and categorical information was removed, the facilitation effect disappeared, suggesting that the visual distinctiveness between targets and distractors was not the main reason for the facilitation effect. Moreover, the facilitation effect was not the result of strategically shifting the attentional distribution, because the targets received more attention than the distractors in all conditions. Additionally, the facilitation effect did not come about because the identities of targets were encoded and stored in visual working memory to assist in the recovery from tracking errors; when working memory was disturbed by the object identities changing during tracking, the facilitation effect still existed. Experiments 4 and 5 showed that observers grouped targets together and segregated them from distractors on the basis of their categorical information. By doing this, observers could largely avoid distractor interference with tracking and improve tracking performance. Finally, Experiment 6 indicated that category-based grouping is not an automatic, but a goal-directed and effortful, strategy. In summary, the present findings show that a semantic category-based target-grouping mechanism exists in the MIT task, which is likely to be the major reason for the tracking facilitation effect.
Yang, Ehwa; Gwak, Jeonghwan; Jeon, Moongu
2017-01-01
Due to the reasonably acceptable performance of state-of-the-art object detectors, tracking-by-detection is a standard strategy for visual multi-object tracking (MOT). In particular, online MOT is more demanding due to its diverse applications in time-critical situations. A main issue in realizing online MOT is how to associate noisy object detection results on a new frame with previously tracked objects. In this work, we propose a multi-object tracking method called CRF-boosting, which utilizes a hybrid data association method based on online hybrid boosting facilitated by a conditional random field (CRF) for establishing online MOT. For data association, the learned CRF is used to generate reliable low-level tracklets, which are then used as the input of the hybrid boosting. While existing data association methods based on boosting algorithms require training data with ground truth information to improve robustness, CRF-boosting ensures sufficient robustness without such information owing to its synergetic cascaded learning procedure. Further, a hierarchical feature association framework is adopted to further improve MOT accuracy. From experimental results on public datasets, we conclude that the benefit of the proposed hybrid approach over other competitive MOT systems is noticeable. PMID:28304366
Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm.
Tombu, Michael; Seiffert, Adriane E
2011-04-01
People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target-distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking--one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone.
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
A mobile agent-based moving objects indexing algorithm in location based service
NASA Astrophysics Data System (ADS)
Fang, Zhixiang; Li, Qingquan; Xu, Hong
2006-10-01
This paper extends the advantages of location based services, specifically their ability to manage and index the positions of moving objects. With this objective in mind, a mobile agent-based moving objects indexing algorithm is proposed to efficiently process indexing requests and to adapt to the limitations of the location based service environment. The prominent feature of this structure is viewing a moving object's behavior as the mobile agent's span; the unique mapping between the geographical position of moving objects and the span point of the mobile agent is built to maintain their close relationship, and it is a significant clue for mobile agent-based moving object indexing to track moving objects.
Image sequence analysis workstation for multipoint motion analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-08-01
This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for locating and tracking more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze frame display, and digital image enhancement; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
Neural network based satellite tracking for deep space applications
NASA Technical Reports Server (NTRS)
Amoozegar, F.; Ruggier, C.
2003-01-01
The objective of this paper is to provide a survey of neural network trends as applied to the tracking of spacecraft in deep space at Ka-band under various weather conditions, and to examine the trade-off between tracking accuracy and communication link performance.
NASA Astrophysics Data System (ADS)
Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.
2016-09-01
Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC) that enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.
Real-time moving objects detection and tracking from airborne infrared camera
NASA Astrophysics Data System (ADS)
Zingoni, Andrea; Diani, Marco; Corsini, Giovanni
2017-10-01
Detecting and tracking moving objects in real time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene and pursuing objects of interest. To fully exploit such potential, versatile solutions are needed, but, in the literature, the majority of them work only under specific conditions regarding the considered scenario, the characteristics of the moving objects or the aircraft movements. In order to overcome these limitations, we propose a novel approach to the problem, based on the use of a cheap inertial navigation system (INS) mounted on the aircraft. To jointly exploit the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame: the detection stage, in which a coarse detection map is computed using a local statistic that is both fast to calculate and robust to noise and to self-deletion of the targeted objects; the registration stage, in which the positions of the detected objects are coherently reported in a common reference frame by exploiting the INS data; and the tracking stage, in which steady objects are rejected, moving objects are tracked, and an estimate of their future position is computed for use in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false alarm rate, and in terms of accuracy in the estimation of the position and velocity of the objects. In addition, for each frame, the detection and tracking map has been generated by the algorithm before the acquisition of the subsequent frame, proving its capability to work in real time.
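The detection stage can be illustrated with a simple local-statistic detector that flags pixels deviating from their local neighbourhood mean by more than k local standard deviations; this is a generic stand-in for the (unspecified) statistic used in the paper, applied to a synthetic IR-like frame.

# Local mean/standard-deviation anomaly detector on a synthetic IR-like frame (sketch).
import numpy as np
from scipy.ndimage import uniform_filter

def local_statistic_detect(frame, win=15, k=4.0):
    mean = uniform_filter(frame, size=win)
    sq_mean = uniform_filter(frame ** 2, size=win)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 1e-9))
    return (frame - mean) > k * std            # boolean detection map

rng = np.random.default_rng(0)
frame = rng.normal(0.0, 1.0, (240, 320))
frame[100:104, 150:154] += 12.0                # small synthetic hot target
print(local_statistic_detect(frame).sum(), "pixels flagged")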
Model-based registration of multi-rigid-body for augmented reality
NASA Astrophysics Data System (ADS)
Ikeda, Sei; Hori, Hajime; Imura, Masataka; Manabe, Yoshitsugu; Chihara, Kunihiro
2009-02-01
Geometric registration between a virtual object and the real space is the most basic problem in augmented reality. Model-based tracking methods allow us to estimate three-dimensional (3-D) position and orientation of a real object by using a textured 3-D model instead of visual marker. However, it is difficult to apply existing model-based tracking methods to the objects that have movable parts such as a display of a mobile phone, because these methods suppose a single, rigid-body model. In this research, we propose a novel model-based registration method for multi rigid-body objects. For each frame, the 3-D models of each rigid part of the object are first rendered according to estimated motion and transformation from the previous frame. Second, control points are determined by detecting the edges of the rendered image and sampling pixels on these edges. Motion and transformation are then simultaneously calculated from distances between the edges and the control points. The validity of the proposed method is demonstrated through experiments using synthetic videos.
Research on infrared small-target tracking technology under complex background
NASA Astrophysics Data System (ADS)
Liu, Lei; Wang, Xin; Chen, Jilu; Pan, Tao
2012-10-01
In this paper, some basic principles and implementation flow charts of a series of target tracking algorithms are described. On this foundation, moving-target tracking software based on OpenCV is developed with the software development platform MFC. Three kinds of tracking algorithms are integrated in this software, including the Kalman filter tracking method and the Camshift tracking method. In order to explain the software clearly, its framework and functions are described in this paper. Finally, the implementation processes and results are analyzed, and the target tracking algorithms are evaluated both subjectively and objectively. This work is significant for the application of infrared target tracking technology.
Real-Time 3D Tracking and Reconstruction on Mobile Phones.
Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D
2015-05-01
We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Due to the region based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use a level set based pose estimation but completely avoid the, typically required, explicit computation of a global distance. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor which helps us resolve the tracking ambiguities inherent to region based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variational regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm and we show that, similar to tracking, the integration of per voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.
Mid-course multi-target tracking using continuous representation
NASA Technical Reports Server (NTRS)
Zak, Michail; Toomarian, Nikzad
1991-01-01
The thrust of this paper is to present a new approach to multi-target tracking for the mid-course stage of the Strategic Defense Initiative (SDI). This approach is based upon a continuum representation of a cluster of flying objects. We assume that the velocities of the flying objects can be embedded into a smooth velocity field. This assumption is based upon the impossibility of encounters between the flying objects in a high-density cluster. Therefore, the problem is reduced to the identification of a moving continuum based upon consecutive time-frame observations. In contradistinction to previous approaches, here each target is considered as the center of a small continuous neighborhood subjected to a local affine transformation, and therefore the target trajectories do not mix; their mixture in the plane of the sensor view is only apparent. The approach is illustrated by an example.
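Estimating the local affine transformation of a small neighborhood of targets between two consecutive time frames can be sketched as a least-squares fit of p' ≈ A p + t to the observed point coordinates; the coordinates below are synthetic.

# Least-squares estimation of a local affine transformation from point correspondences (sketch).
import numpy as np

def local_affine(prev_pts, curr_pts):
    """Least-squares affine (A, t) mapping prev_pts to curr_pts."""
    n = len(prev_pts)
    design = np.hstack([prev_pts, np.ones((n, 1))])      # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(design, curr_pts, rcond=None)
    A, t = params[:2].T, params[2]
    return A, t

prev_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
curr_pts = prev_pts @ np.array([[1.1, 0.05], [-0.05, 1.1]]).T + np.array([2.0, 3.0])
A, t = local_affine(prev_pts, curr_pts)
print(A, t)                                             # recovers the synthetic motion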
Human-like object tracking and gaze estimation with PKD android
Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.
2018-01-01
As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193
Human-like object tracking and gaze estimation with PKD android
NASA Astrophysics Data System (ADS)
Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.
2016-05-01
As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.
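The conversion of the eye-in-head gaze direction and the head orientation into a world-frame gaze ray described in the two records above can be sketched as follows; the yaw-pitch-roll convention and function names are our own illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def rotation_from_ypr(yaw, pitch, roll):
    """Rotation matrix from yaw (z), pitch (y), roll (x) angles in radians."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def gaze_in_world(head_ypr, head_position, eye_dir_head):
    """Combine head pose (e.g. from motion capture) with the eye-in-head
    gaze direction (from the eye tracker) into a world-frame gaze ray."""
    R = rotation_from_ypr(*head_ypr)
    direction = R @ (eye_dir_head / np.linalg.norm(eye_dir_head))
    return head_position, direction   # ray origin and unit direction
```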
A data set for evaluating the performance of multi-class multi-object video tracking
NASA Astrophysics Data System (ADS)
Chakraborty, Avishek; Stamatescu, Victor; Wong, Sebastien C.; Wigley, Grant; Kearney, David
2017-05-01
One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publicly available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow the evaluation of both tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publicly.
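Since the stated purpose of the ground truth is to support the standard Multi Object Tracking (MOT) evaluation, a minimal sketch of the CLEAR MOT accuracy score is shown below; the counts are assumed to have been accumulated over the sequence beforehand, and the example numbers are purely illustrative:

```python
def mota(false_negatives, false_positives, id_switches, num_gt_objects):
    """Multiple Object Tracking Accuracy (CLEAR MOT).

    All arguments are totals accumulated over the whole sequence;
    num_gt_objects is the total number of ground-truth object
    instances across all annotated frames.
    """
    if num_gt_objects == 0:
        raise ValueError("sequence contains no ground-truth objects")
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt_objects

# Illustrative counts only: 120 misses, 80 false alarms, 6 identity
# switches over 20,000 annotated object instances.
print(mota(120, 80, 6, 20000))
```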
MRI-based dynamic tracking of an untethered ferromagnetic microcapsule navigating in liquid
NASA Astrophysics Data System (ADS)
Dahmen, Christian; Belharet, Karim; Folio, David; Ferreira, Antoine; Fatikow, Sergej
2016-04-01
The propulsion of ferromagnetic objects by means of MRI gradients is a promising approach to enable new forms of therapy. In this work, necessary techniques are presented to make this approach work. This includes path planning algorithms working on MRI data, ferromagnetic artifact imaging and a tracking algorithm which delivers position feedback for the ferromagnetic objects, and a propulsion sequence to enable interleaved magnetic propulsion and imaging. Using a dedicated software environment, integrating path-planning methods and real-time tracking, a clinical MRI system is adapted to provide this new functionality for controlled interventional targeted therapeutic applications. Through MRI-based sensing analysis, this article aims to propose a framework to plan a robust pathway to enhance the navigation ability to reach deep locations in the human body. The proposed approaches are validated with different experiments.
Hu, Qijun; He, Songsheng; Wang, Shilong; Liu, Yugang; Zhang, Zutao; He, Leping; Wang, Fubin; Cai, Qijie; Shi, Rendan; Yang, Yuan
2017-06-06
Bus Rapid Transit (BRT) has become an increasingly important part of public transportation in modern cities. Traditional contact-based sensing techniques for health monitoring of BRT viaducts suffer from the drawback that the normal free flow of traffic must be blocked during measurement. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoint matching algorithm based on the consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed, together with the oriented FAST and rotated BRIEF (ORB) keypoint detection algorithm, for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structure vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable.
Hu, Qijun; He, Songsheng; Wang, Shilong; Liu, Yugang; Zhang, Zutao; He, Leping; Wang, Fubin; Cai, Qijie; Shi, Rendan; Yang, Yuan
2017-01-01
Bus Rapid Transit (BRT) has become an increasingly important part of public transportation in modern cities. Traditional contact-based sensing techniques for health monitoring of BRT viaducts suffer from the drawback that the normal free flow of traffic must be blocked during measurement. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoint matching algorithm based on the consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed, together with the oriented FAST and rotated BRIEF (ORB) keypoint detection algorithm, for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structure vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable. PMID:28587275
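The ORB keypoint detection and matching front end mentioned in the two records above can be sketched with standard OpenCV calls as follows; this is only the generic ORB/brute-force matching step, not the authors' improved CMT-based tracker or scaling-factor estimation:

```python
import cv2

def match_orb_keypoints(template_gray, frame_gray, max_matches=50):
    """Detect ORB keypoints in a template and a frame and match them."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    if des_t is None or des_f is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_f), key=lambda m: m.distance)
    # Return matched (template point, frame point) pairs, best first
    return [(kp_t[m.queryIdx].pt, kp_f[m.trainIdx].pt)
            for m in matches[:max_matches]]
```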
Real-time object detection, tracking and occlusion reasoning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Divakaran, Ajay; Yu, Qian; Tamrakar, Amir
A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
Self-Motion Impairs Multiple-Object Tracking
ERIC Educational Resources Information Center
Thomas, Laura E.; Seiffert, Adriane E.
2010-01-01
Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement…
Visual object recognition and tracking
NASA Technical Reports Server (NTRS)
Chang, Chu-Yin (Inventor); English, James D. (Inventor); Tardella, Neil M. (Inventor)
2010-01-01
This invention describes a method for identifying and tracking an object from two-dimensional data pictorially representing said object by an object-tracking system through processing said two-dimensional data using at least one tracker-identifier belonging to the object-tracking system for providing an output signal containing: a) a type of the object, and/or b) a position or an orientation of the object in three-dimensions, and/or c) an articulation or a shape change of said object in said three dimensions.
Action-Driven Visual Object Tracking With Deep Reinforcement Learning.
Yun, Sangdoo; Choi, Jongwon; Yoo, Youngjoon; Yun, Kimin; Choi, Jin Young
2018-06-01
In this paper, we propose an efficient visual tracker, which directly captures a bounding box containing the target object in a video by means of sequential actions learned using deep neural networks. The proposed deep neural network to control tracking actions is pretrained using various training video sequences and fine-tuned during actual tracking for online adaptation to changes of the target and background. The pretraining is done by utilizing deep reinforcement learning (RL) as well as supervised learning. The use of RL enables even partially labeled data to be successfully utilized for semisupervised learning. Through evaluation on the object tracking benchmark data set, the proposed tracker is validated to achieve a competitive performance at three times the speed of existing deep network-based trackers. The fast version of the proposed method, which operates in real time on a graphics processing unit, outperforms the state-of-the-art real-time trackers with an accuracy improvement of more than 8%.
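A minimal sketch of the action-driven idea, in which discrete actions iteratively nudge a bounding box until a "stop" action is emitted, is given below; the action set, step sizes and the choose_action stand-in for the trained network are illustrative assumptions, not the authors' trained model:

```python
def apply_action(box, action, step=0.03, scale=0.05):
    """Apply one discrete action to a bounding box (x, y, w, h).

    Translation steps are a fraction of the box size (values here are
    illustrative, not the paper's).
    """
    x, y, w, h = box
    dx, dy = step * w, step * h
    actions = {
        "left":   (x - dx, y, w, h),
        "right":  (x + dx, y, w, h),
        "up":     (x, y - dy, w, h),
        "down":   (x, y + dy, w, h),
        "grow":   (x - scale * w / 2, y - scale * h / 2,
                   w * (1 + scale), h * (1 + scale)),
        "shrink": (x + scale * w / 2, y + scale * h / 2,
                   w * (1 - scale), h * (1 - scale)),
        "stop":   (x, y, w, h),
    }
    return actions[action]

def track_one_frame(box, choose_action, max_steps=20):
    """Iteratively adjust the box until the policy emits 'stop'.
    choose_action is a stand-in for the trained action network."""
    for _ in range(max_steps):
        action = choose_action(box)
        box = apply_action(box, action)
        if action == "stop":
            break
    return box
```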
Dogra, Debi P; Majumdar, Arun K; Sural, Shamik; Mukherjee, Jayanta; Mukherjee, Suchandra; Singh, Arun
2012-01-01
Hammersmith Infant Neurological Examination (HINE) is a set of tests used for grading neurological development of infants on a scale of 0 to 3. These tests help in assessing neurophysiological development of babies, especially preterm infants born before a gestational age of 36 weeks. Such tests are often conducted in the follow-up clinics of hospitals for grading infants with suspected disabilities. Assessment based on HINE depends on the expertise of the physicians involved in conducting the examinations. It has been noted that some of these tests, especially pulled-to-sit and lateral tilting, are difficult to assess solely on the basis of visual observation. For example, during the pulled-to-sit examination, the examiner needs to observe the relative movement of the head with respect to the torso while pulling the infant by holding the wrists. The examiner may find it difficult to follow the head movement from the coronal view. Video object tracking based automatic or semi-automatic analysis can be helpful in this case. In this paper, we present a video-based method to automate the analysis of the pulled-to-sit examination. In this context, an efficient video object tracking algorithm based on dynamic programming and node pruning is proposed. Pulled-to-sit event detection is handled by the proposed tracking algorithm, which uses a 2-D geometric model of the scene. The algorithm has been tested with normal as well as marker-based videos of the examination recorded at the neuro-development clinic of the SSKM Hospital, Kolkata, India. It is found that the proposed algorithm is capable of estimating the pulled-to-sit score with sensitivity (80%-92%) and specificity (89%-96%).
On the Limits of Infants' Quantification of Small Object Arrays
ERIC Educational Resources Information Center
Feigenson, Lisa; Carey, Susan
2005-01-01
Recent work suggests that infants rely on mechanisms of object-based attention and short-term memory to represent small numbers of objects. Such work shows that infants discriminate arrays containing 1, 2, or 3 objects, but fail with arrays greater than 3 [Feigenson, L., & Carey, S. (2003). Tracking individuals via object-files: Evidence from…
Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study
ERIC Educational Resources Information Center
Bulf, Hermann; Valenza, Eloisa
2013-01-01
Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…
Determination of feature generation methods for PTZ camera object tracking
NASA Astrophysics Data System (ADS)
Doyle, Daniel D.; Black, Jonathan T.
2012-06-01
Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000) and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images such that the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.
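Two of the compared feature generation methods, Mixture-of-Gaussians background subtraction and Farneback dense optical flow, are available directly in OpenCV; a minimal usage sketch (with illustrative parameter values) follows:

```python
import cv2

# Mixture-of-Gaussians background model (illustrative parameters)
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

def moving_object_mask(frame_bgr):
    """Foreground mask of moving pixels from the MoG background model."""
    return subtractor.apply(frame_bgr)

def farneback_flow(prev_gray, next_gray):
    """Dense optical flow between two consecutive greyscale frames.
    Positional arguments after None: pyr_scale, levels, winsize,
    iterations, poly_n, poly_sigma, flags (typical illustrative values)."""
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
```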
NASA Technical Reports Server (NTRS)
Porter, D. W.; Lefler, R. M.
1979-01-01
A generalized hypothesis testing approach is applied to the problem of tracking several objects where several different associations of data with objects are possible. Such problems occur, for instance, when attempting to distinctly track several aircraft maneuvering near each other or when tracking ships at sea. Conceptually, the problem is solved by first associating data with objects in a statistically reasonable fashion and then tracking with a bank of Kalman filters. The objects are assumed to have motion characterized by a fixed but unknown deterministic portion plus a random process portion modeled by a shaping filter. For example, the object might be assumed to have a mean straight-line path about which it maneuvers in a random manner. Several hypothesized associations of data with objects are possible because of ambiguity as to which object the data comes from, false alarm/detection errors, and possible uncertainty in the number of objects being tracked. The statistical likelihood function is computed for each possible hypothesized association of data with objects. Then the generalized likelihood is computed by maximizing the likelihood over parameters that define the deterministic motion of the object.
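The building block behind the bank-of-Kalman-filters stage can be sketched as a single linear Kalman measurement update that also returns the Gaussian log-likelihood of the measurement; the matrices and names below are generic assumptions, not the report's actual models:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman measurement update for a hypothesized track.

    Returns the updated state, covariance and the Gaussian
    log-likelihood of the measurement under this association.
    """
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    _, logdet = np.linalg.slogdet(S)
    log_like = -0.5 * (y @ np.linalg.solve(S, y)
                       + logdet + len(z) * np.log(2.0 * np.pi))
    return x_new, P_new, log_like
```

Summing such log-likelihoods over a hypothesized data association, and maximizing over the parameters of the deterministic motion, gives the generalized likelihood used to rank competing hypotheses.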
Methods and apparatus for extraction and tracking of objects from multi-dimensional sequence data
NASA Technical Reports Server (NTRS)
Hill, Matthew L. (Inventor); Chang, Yuan-Chi (Inventor); Li, Chung-Sheng (Inventor); Castelli, Vittorio (Inventor); Bergman, Lawrence David (Inventor)
2008-01-01
An object tracking technique is provided which, given: (i) a potentially large data set; (ii) a set of dimensions along which the data has been ordered; and (iii) a set of functions for measuring the similarity between data elements, a set of objects is produced. Each of these objects is defined by a list of data elements. Each of the data elements on this list contains the probability that the data element is part of the object. The method produces these lists via an adaptive, knowledge-based search function which directs the search for high-probability data elements. This serves to reduce the number of data element combinations evaluated while preserving the most flexibility in defining the associations of data elements which comprise an object.
Methods and apparatus for extraction and tracking of objects from multi-dimensional sequence data
NASA Technical Reports Server (NTRS)
Hill, Matthew L. (Inventor); Chang, Yuan-Chi (Inventor); Li, Chung-Sheng (Inventor); Castelli, Vittorio (Inventor); Bergman, Lawrence David (Inventor)
2005-01-01
An object tracking technique is provided which, given: (i) a potentially large data set; (ii) a set of dimensions along which the data has been ordered; and (iii) a set of functions for measuring the similarity between data elements, a set of objects is produced. Each of these objects is defined by a list of data elements. Each of the data elements on this list contains the probability that the data element is part of the object. The method produces these lists via an adaptive, knowledge-based search function which directs the search for high-probability data elements. This serves to reduce the number of data element combinations evaluated while preserving the most flexibility in defining the associations of data elements which comprise an object.
Hue distinctiveness overrides category in determining performance in multiple object tracking.
Sun, Mengdan; Zhang, Xuemin; Fan, Lingxia; Hu, Luming
2018-02-01
The visual distinctiveness between targets and distractors can significantly facilitate performance in multiple object tracking (MOT), in which color is a feature that has been commonly used. However, the processing of color can be more than "visual." Color is continuous in chromaticity, while it is commonly grouped into discrete categories (e.g., red, green). Evidence from color perception suggested that color categories may have a unique role in visual tasks independent of its chromatic appearance. Previous MOT studies have not examined the effect of chromatic and categorical distinctiveness on tracking separately. The current study aimed to reveal how chromatic (hue) and categorical distinctiveness of color between the targets and distractors affects tracking performance. With four experiments, we showed that tracking performance was largely facilitated by the increasing hue distance between the target set and the distractor set, suggesting that perceptual grouping was formed based on hue distinctiveness to aid tracking. However, we found no color categorical effect, because tracking performance was not significantly different when the targets and distractors were from the same or different categories. It was concluded that the chromatic distinctiveness of color overrides category in determining tracking performance, suggesting a dominant role of perceptual feature in MOT.
Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm
Tombu, Michael
2014-01-01
People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target–distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking—one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone. PMID:21264704
Real-time classification of vehicles by type within infrared imagery
NASA Astrophysics Data System (ADS)
Kundegorski, Mikolaj E.; Akçay, Samet; Payen de La Garanderie, Grégoire; Breckon, Toby P.
2016-10-01
Real-time classification of vehicles into sub-category types poses a significant challenge within infra-red imagery due to the high levels of intra-class variation in thermal vehicle signatures caused by aspects of design, current operating duration and ambient thermal conditions. Despite these challenges, infra-red sensing offers significant generalized target object detection advantages in terms of all-weather operation and invariance to visual camouflage techniques. This work investigates the accuracy of a number of real-time object classification approaches for this task within the wider context of an existing initial object detection and tracking framework. Specifically we evaluate the use of traditional feature-driven bag of visual words and histogram of oriented gradient classification approaches against modern convolutional neural network architectures. Furthermore, we use classical photogrammetry, within the context of current target detection and classification techniques, as a means of approximating 3D target position within the scene based on this vehicle type classification. Based on photogrammetric estimation of target position, we then illustrate the use of regular Kalman filter based tracking operating on actual 3D vehicle trajectories. Results are presented using a conventional thermal-band infra-red (IR) sensor arrangement where targets are tracked over a range of evaluation scenarios.
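The photogrammetric approximation of 3D target position from a classified detection can be sketched with the pinhole model as follows; the per-class nominal heights and function names are illustrative assumptions rather than values from the paper:

```python
# Assumed nominal heights (metres) per vehicle class -- illustrative values.
CLASS_HEIGHT_M = {"car": 1.5, "van": 2.2, "truck": 3.5}

def estimate_range(focal_px, bbox_height_px, vehicle_class):
    """Pinhole range estimate: Z = f * H_real / h_image."""
    return focal_px * CLASS_HEIGHT_M[vehicle_class] / bbox_height_px

def backproject(u, v, focal_px, cx, cy, Z):
    """Approximate camera-frame 3-D position of pixel (u, v) at range Z,
    given the principal point (cx, cy)."""
    X = (u - cx) * Z / focal_px
    Y = (v - cy) * Z / focal_px
    return X, Y, Z
```

Positions obtained this way can then be fed to a conventional Kalman filter operating on the 3D vehicle trajectories, as the abstract describes.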
Multiple-object tracking while driving: the multiple-vehicle tracking task.
Lochner, Martin J; Trick, Lana M
2014-11-01
Many contend that driving an automobile involves multiple-object tracking. At this point, no one has tested this idea, and it is unclear how multiple-object tracking would coordinate with the other activities involved in driving. To address some of the initial and most basic questions about multiple-object tracking while driving, we modified the tracking task for use in a driving simulator, creating the multiple-vehicle tracking task. In Experiment 1, we employed a dual-task methodology to determine whether there was interference between tracking and driving. Findings suggest that although it is possible to track multiple vehicles while driving, driving reduces tracking performance, and tracking compromises headway and lane position maintenance while driving. Modified change-detection paradigms were used to assess whether there were change localization advantages for tracked targets in multiple-vehicle tracking. When changes occurred during a blanking interval, drivers were more accurate (Experiment 2a) and ~250 ms faster (Experiment 2b) at locating the vehicle that changed when it was a target rather than a distractor in tracking. In a more realistic driving task where drivers had to brake in response to the sudden onset of brake lights in one of the lead vehicles, drivers were more accurate at localizing the vehicle that braked if it was a tracking target, although there was no advantage in terms of braking response time. Overall, results suggest that multiple-object tracking is possible while driving and perhaps even advantageous in some situations, but further research is required to determine whether multiple-object tracking is actually used in day-to-day driving.
Designing and Developing Web-Based Administrative Tools for Program Management
NASA Technical Reports Server (NTRS)
Gutensohn, Michael
2017-01-01
The task assigned for this internship was to develop a new tool for tracking projects, their subsystems, the leads, backups, and other employees assigned to them, as well as all the relevant information related to each employee (WBS (time charge) codes, time distribution, certifications, and assignments). Currently, this data is tracked manually using a number of different spreadsheets and other tools simultaneously by a number of different people; some of these documents are then merged into one large document. This often leads to inconsistencies and loss of data due to human error. By simplifying the process of tracking this data and aggregating it into a single tool, it is possible to significantly decrease the potential for human error and the time spent collecting and checking this information. The main objective of this internship is to develop a web-based tool using Ruby on Rails to serve as a method of easily tracking projects, subsystems, and points of contact, along with employees, their assignments, time distribution, certifications, and contact information. Additionally, this tool must be capable of generating a number of different reports based on the data collected. It was important that this tool deliver all of this information using a readable and intuitive interface.
Li, Jia; Xia, Changqun; Chen, Xiaowu
2017-10-12
Image-based salient object detection (SOD) has been extensively studied in past decades. However, video-based SOD is much less explored due to the lack of large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos. In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects who free-view all videos. From the user data, we find that salient objects in a video can be defined as objects that consistently pop out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object/region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD by using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at the pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are constructed in an unsupervised manner that automatically infers a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. In experiments, the proposed unsupervised approach is compared with 31 state-of-the-art models on the proposed dataset and outperforms 30 of them, including 19 image-based classic (unsupervised or non-deep-learning) models, six image-based deep learning models, and five video-based unsupervised models. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.
Carbide-reinforced metal matrix composite by direct metal deposition
NASA Astrophysics Data System (ADS)
Novichenko, D.; Thivillon, L.; Bertrand, Ph.; Smurov, I.
Direct metal deposition (DMD) is an automated 3D laser cladding technology with co-axial powder injection for industrial applications. The objective of the present work is to demonstrate the possibility of producing metal matrix composite objects in a single-step process. Powders of an Fe-based alloy (16NCD13) and titanium carbide (TiC) are premixed before cladding, and the volume content of the carbide-reinforced phase is varied. Relationships between the main laser cladding parameters and the geometry of the built-up objects (single track, 2D coating) are discussed. On the basis of a parametric study, a laser cladding process map for the deposition of individual tracks was established. The microstructure and composition of the laser-fabricated metal matrix composite objects are examined. Two different types of structure are observed: (a) with undissolved titanium carbides and (b) with precipitated titanium carbides. The mechanism of formation of the diverse precipitated titanium carbides is studied.
Improvements in Space Surveillance Processing for Wide Field of View Optical Sensors
NASA Astrophysics Data System (ADS)
Sydney, P.; Wetterer, C.
2014-09-01
For more than a decade, an autonomous satellite tracking system at the Air Force Maui Optical and Supercomputing (AMOS) observatory has been generating routine astrometric measurements of Earth-orbiting Resident Space Objects (RSOs) using small commercial telescopes and sensors. Recent work has focused on developing an improved processing system, enhancing measurement performance and response while supporting other sensor systems and missions. This paper will outline improved techniques in scheduling, detection, astrometric and photometric measurements, and catalog maintenance. The processing system now integrates with Special Perturbation (SP) based astrodynamics algorithms, allowing covariance-based scheduling and more precise orbital estimates and object identification. A merit-based scheduling algorithm provides a global optimization framework to support diverse collection tasks and missions. The detection algorithms support a range of target tracking and camera acquisition rates. New comprehensive star catalogs allow for more precise astrometric and photometric calibrations including differential photometry for monitoring environmental changes. This paper will also examine measurement performance with varying tracking rates and acquisition parameters.
Robust skin color-based moving object detection for video surveillance
NASA Astrophysics Data System (ADS)
Kaliraj, Kalirajan; Manimaran, Sudha
2016-07-01
Robust skin color-based moving object detection for video surveillance is proposed. The objective of the proposed algorithm is to detect and track the target under complex situations. The proposed framework comprises four stages: preprocessing, skin color-based feature detection, feature classification, and target localization and tracking. In the preprocessing stage, the input image frame is smoothed using an averaging filter and transformed into the YCrCb color space. In skin color detection, skin color regions are detected using Otsu's method of global thresholding. In the feature classification, histograms of both skin and nonskin regions are constructed and the features are classified into foregrounds and backgrounds based on a Bayesian skin color classifier. The foreground skin regions are localized by a connected component labeling process. Finally, the localized foreground skin regions are confirmed as a target by verifying the region properties, and nontarget regions are rejected using the Euler method. At last, the target is tracked by enclosing a bounding box around the target region in all video frames. The experiment was conducted on various publicly available data sets and the performance was evaluated against baseline methods. The results show that the proposed algorithm works well against slowly varying illumination, target rotation, scaling, and fast or abrupt motion changes.
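A minimal OpenCV sketch of the front end described above (smoothing, YCrCb conversion, Otsu thresholding and connected-component labeling) follows; the Bayesian skin classification and Euler-based rejection stages are omitted, and the parameter values are illustrative:

```python
import cv2

def skin_candidate_regions(frame_bgr, min_area=200):
    """Candidate skin regions via YCrCb + Otsu threshold + connected components."""
    blurred = cv2.blur(frame_bgr, (5, 5))                     # smoothing
    ycrcb = cv2.cvtColor(blurred, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[:, :, 1]                                       # Cr channel
    _, mask = cv2.threshold(cr, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):                                     # skip background label 0
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return mask, boxes
```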
Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature
Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat
2014-01-01
It is a challenge to represent the target appearance model for moving object tracking in complex environments. This study presents a novel method in which the appearance model is described by double templates based on a timed motion history image with an HSV color histogram feature (tMHI-HSV). The main components include offline and online template initialization, tMHI-HSV-based candidate patch feature histogram calculation, double templates matching (DTM) for object location, and template updating. Firstly, we initialize the target object region and calculate its HSV color histogram feature as the offline template and the online template. Secondly, the tMHI-HSV is used to segment the motion region and calculate these candidate object patches' color histograms to represent their appearance models. Finally, we utilize the DTM method to track the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle scale variation and pose changes of rigid and nonrigid objects, even under illumination change and occlusion. PMID:24592185
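The HSV colour-histogram comparison at the core of double-template matching can be sketched with OpenCV as follows; the histogram bin counts and the simple averaged Bhattacharyya distance are our own illustrative choices, and the tMHI motion segmentation and template-update policy are not reproduced:

```python
import cv2

def hsv_histogram(patch_bgr):
    """Normalised 2-D Hue-Saturation histogram of an image patch."""
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)
    return hist

def best_candidate(offline_hist, online_hist, candidate_patches):
    """Pick the candidate patch whose histogram is closest to both templates."""
    def distance(patch):
        h = hsv_histogram(patch)
        d_off = cv2.compareHist(offline_hist, h, cv2.HISTCMP_BHATTACHARYYA)
        d_on = cv2.compareHist(online_hist, h, cv2.HISTCMP_BHATTACHARYYA)
        return 0.5 * (d_off + d_on)
    return min(range(len(candidate_patches)),
               key=lambda i: distance(candidate_patches[i]))
```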
ERIC Educational Resources Information Center
Hwu, Fenfang
2013-01-01
The use of script-based tracking to gain insights into the way students learn or process language information can be traced back to the 1980s. Nevertheless, researchers continue to face challenges in collecting and studying this type of data. The objective of this study is to propose data sharing through data repositories as a way to (a) ease…
NASA Technical Reports Server (NTRS)
Mikic, I.; Krucinski, S.; Thomas, J. D.
1998-01-01
This paper presents a method for segmentation and tracking of cardiac structures in ultrasound image sequences. The developed algorithm is based on the active contour framework. This approach requires initial placement of the contour close to the desired position in the image, usually an object outline. Best contour shape and position are then calculated, assuming that at this configuration a global energy function, associated with a contour, attains its minimum. Active contours can be used for tracking by selecting a solution from a previous frame as an initial position in a present frame. Such an approach, however, fails for large displacements of the object of interest. This paper presents a technique that incorporates the information on pixel velocities (optical flow) into the estimate of initial contour to enable tracking of fast-moving objects. The algorithm was tested on several ultrasound image sequences, each covering one complete cardiac cycle. The contour successfully tracked boundaries of mitral valve leaflets, aortic root and endocardial borders of the left ventricle. The algorithm-generated outlines were compared against manual tracings by expert physicians. The automated method resulted in contours that were within the boundaries of intraobserver variability.
Image-based tracking: a new emerging standard
NASA Astrophysics Data System (ADS)
Antonisse, Jim; Randall, Scott
2012-06-01
Automated moving object detection and tracking are increasingly viewed as solutions to the enormous data volumes resulting from emerging wide-area persistent surveillance systems. In a previous paper we described a Motion Imagery Standards Board (MISB) initiative to help address this problem: the specification of a micro-architecture for the automatic extraction of motion indicators and tracks. This paper reports on the development of an extended specification of the plug-and-play tracking micro-architecture, on its status as an emerging standard across DoD, the Intelligence Community, and NATO.
Learned filters for object detection in multi-object visual tracking
NASA Astrophysics Data System (ADS)
Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David
2016-05-01
We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and an unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, where tracking guides the detection process. The object detection provides a probabilistic input image calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.
Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects.
Kang, Ziho; Mandal, Saptarshi; Crutchfield, Jerry; Millan, Angel; McClung, Sarah N
2016-01-01
Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) the visual angle error of the eye trackers prevents the recovery of exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interest (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT), which controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near-optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance.
Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects
Mandal, Saptarshi
2016-01-01
Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) the visual angle error of the eye trackers prevents the recovery of exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interest (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT), which controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near-optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance. PMID:27725830
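A minimal sketch of a rectangular dynamic AOI expanded by the AOI gap tolerance (AGT) and tested against a fixation sample is shown below; the names and the rectangle-only simplification (the records above also consider convex AOIs) are our own assumptions:

```python
def expand_aoi(bbox, agt):
    """Grow a rectangular AOI (x, y, w, h) outward by the AOI gap tolerance."""
    x, y, w, h = bbox
    return (x - agt, y - agt, w + 2 * agt, h + 2 * agt)

def fixation_hits(fixation, aois, agt):
    """Return indices of the (expanded) dynamic AOIs containing a fixation.

    fixation : (fx, fy) gaze coordinates for the current time stamp
    aois     : list of (x, y, w, h) boxes of the moving objects at that time
    """
    fx, fy = fixation
    hits = []
    for i, box in enumerate(aois):
        x, y, w, h = expand_aoi(box, agt)
        if x <= fx <= x + w and y <= fy <= y + h:
            hits.append(i)
    return hits
```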
The what-where trade-off in multiple-identity tracking.
Cohen, Michael A; Pinto, Yair; Howe, Piers D L; Horowitz, Todd S
2011-07-01
Observers are poor at reporting the identities of objects that they have successfully tracked (Pylyshyn, Visual Cognition, 11, 801-822, 2004; Scholl & Pylyshyn, Cognitive Psychology, 38, 259-290, 1999). Consequently, it has been claimed that objects are tracked in a manner that does not encode their identities (Pylyshyn, 2004). Here, we present evidence that disputes this claim. In a series of experiments, we show that attempting to track the identities of objects can decrease an observer's ability to track the objects' locations. This indicates that the mechanisms that track, respectively, the locations and identities of objects draw upon a common resource. Furthermore, we show that this common resource can be voluntarily distributed between the two mechanisms. This is clear evidence that the location- and identity-tracking mechanisms are not entirely dissociable.
Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter
NASA Astrophysics Data System (ADS)
Murphy, T.; Holzinger, M.
2016-09-01
Both SSA and SDA necessitate uncued, partially informed detection and orbit determination efforts for small space objects which often produce only low-strength electro-optical signatures. General frame-to-frame detection and tracking of objects includes methods such as moving target indicators, multiple hypothesis testing, direct track-before-detect methods, and random finite set based multi-object tracking. This paper applies the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary novel contribution of this paper is a detailed analysis of the existing state-of-the-art likelihood functions and of a likelihood function, based on a binary hypothesis, previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5 m Raven-class telescope and a twenty-degree field-of-view high frame rate CMOS sensor. In particular, a data set from an extended pass of the Hitomi Astro-H satellite approximately 3 days after loss of communication and potential breakup is examined.
A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors
Mishra, Abhishek; Ghosh, Rohan; Principe, Jose C.; Thakor, Nitish V.; Kukreja, Sunil L.
2017-01-01
Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event-based sensors are low-power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details at high temporal resolution and, hence, require significantly fewer calculations. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches for object tracking using neuromorphic sensors perform well while the sensor is static or when a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates computation of scene statistics and characterizes the objects in it. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with a maximum accuracy of 92%. PMID:28316563
Horowitz, Todd S.; Kuzmova, Yoana
2011-01-01
The evidence is mixed as to whether the visual system treats objects and holes differently. We used a multiple object tracking task to test the hypothesis that figural objects are easier to track than holes. Observers tracked four of eight items (holes or objects). We used an adaptive algorithm to estimate the speed allowing 75% tracking accuracy. In Experiments 1–5, the distinction between holes and figures was accomplished by pictorial cues, while red-cyan anaglyphs were used to provide the illusion of depth in Experiment 6. We variously used Gaussian pixel noise, photographic scenes, or synthetic textures as backgrounds. Tracking was more difficult when a complex background was visible, as opposed to a blank background. Tracking was easier when disks carried fixed, unique markings. When these factors were controlled for, tracking holes was no more difficult than tracking figures, suggesting that they are equivalent stimuli for tracking purposes. PMID:21334361
Cortical Circuit for Binding Object Identity and Location During Multiple-Object Tracking
Nummenmaa, Lauri; Oksama, Lauri; Glerean, Erico; Hyönä, Jukka
2017-01-01
Abstract Sustained multifocal attention for moving targets requires binding object identities with their locations. The brain mechanisms of identity-location binding during attentive tracking have remained unresolved. In 2 functional magnetic resonance imaging experiments, we measured participants’ hemodynamic activity during attentive tracking of multiple objects with equivalent (multiple-object tracking) versus distinct (multiple identity tracking, MIT) identities. Task load was manipulated parametrically. Both tasks activated large frontoparietal circuits. MIT led to significantly increased activity in frontoparietal and temporal systems subserving object recognition and working memory. These effects were replicated when eye movements were prohibited. MIT was associated with significantly increased functional connectivity between lateral temporal and frontal and parietal regions. We propose that coordinated activity of this network subserves identity-location binding during attentive tracking. PMID:27913430
Towards practical control design using neural computation
NASA Technical Reports Server (NTRS)
Troudet, Terry; Garg, Sanjay; Mattern, Duane; Merrill, Walter
1991-01-01
The objective is to develop neural network based control design techniques which address the issue of performance/control effort tradeoff. Additionally, the control design needs to address the important issue of achieving adequate performance in the presence of actuator nonlinearities such as position and rate limits. These issues are discussed using the example of aircraft flight control. Given a set of pilot input commands, a feedforward net is trained to control the vehicle within the constraints imposed by the actuators. This is achieved by minimizing an objective function which is the sum of the tracking errors, control input rates and control input deflections. A tradeoff between tracking performance and control smoothness is obtained by adaptively varying the weights of the objective function. The neurocontroller performance is evaluated in the presence of actuator dynamics using a simulation of the vehicle. Appropriate selection of the different weights in the objective function resulted in good tracking of the pilot commands and smooth neurocontrol. An extension of the neurocontroller design approach is proposed to enhance its practicality.
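The weighted objective described above can be sketched as a simple function of the tracking errors and control inputs over a horizon; the weights and array shapes are illustrative assumptions, not the values used in the study:

```python
import numpy as np

def control_objective(errors, inputs, w_track=1.0, w_rate=0.1, w_defl=0.01):
    """Weighted sum of tracking-error, control-rate and control-deflection costs.

    errors : (T, n_err) array of tracking errors over the horizon
    inputs : (T, n_u) array of control deflections over the horizon
    """
    rates = np.diff(inputs, axis=0)           # finite-difference control rates
    return (w_track * np.sum(errors ** 2)
            + w_rate * np.sum(rates ** 2)
            + w_defl * np.sum(inputs ** 2))
```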
A Computational Model of Spatial Development
NASA Astrophysics Data System (ADS)
Hiraki, Kazuo; Sashima, Akio; Phillips, Steven
Psychological experiments on children's development of spatial knowledge suggest experience at self-locomotion with visual tracking as important factors. Yet, the mechanism underlying development is unknown. We propose a robot that learns to mentally track a target object (i.e., maintaining a representation of an object's position when outside the field-of-view) as a model for spatial development. Mental tracking is considered as prediction of an object's position given the previous environmental state and motor commands, and the current environment state resulting from movement. Following Jordan & Rumelhart's (1992) forward modeling architecture, the system consists of two components: an inverse model of sensory input to desired motor commands; and a forward model of motor commands to desired sensory input (goals). The robot was tested on the 'three cups' paradigm (where children are required to select the cup containing the hidden object under various movement conditions). Consistent with child development, without the capacity for self-locomotion the robot's errors are self-center based. When given the ability of self-locomotion, the robot responds allocentrically.
Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing
NASA Astrophysics Data System (ADS)
Ou, Meiying; Li, Shihua; Wang, Chaoli
2013-12-01
This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling, and camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interaction. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
NASA Astrophysics Data System (ADS)
DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.
2012-06-01
As the number and diversity of sensing assets available for intelligence, surveillance and reconnaissance (ISR) operations continues to expand, the limited ability of human operators to effectively manage, control and exploit the ISR ensemble is exceeded, leading to reduced operational effectiveness. Automated support both in the processing of voluminous sensor data and sensor asset control can relieve the burden of human operators to support operation of larger ISR ensembles. In dynamic environments it is essential to react quickly to current information to avoid stale, sub-optimal plans. Our approach is to apply the principles of feedback control to ISR operations, "closing the loop" from the sensor collections through automated processing to ISR asset control. Previous work by the authors demonstrated non-myopic multiple platform trajectory control using a receding horizon controller in a closed feedback loop with a multiple hypothesis tracker applied to multi-target search and track simulation scenarios in the ground and space domains. This paper presents extensions in both size and scope of the previous work, demonstrating closed-loop control, involving both platform routing and sensor pointing, of a multisensor, multi-platform ISR ensemble tasked with providing situational awareness and performing search, track and classification of multiple moving ground targets in irregular warfare scenarios. The closed-loop ISR system is fully realized using distributed, asynchronous components that communicate over a network. The closed-loop ISR system has been exercised via a networked simulation test bed against a scenario in the Afghanistan theater implemented using high-fidelity terrain and imagery data. In addition, the system has been applied to space surveillance scenarios requiring tracking of space objects where current deliberative, manually intensive processes for managing sensor assets are insufficiently responsive. Simulation experiment results are presented. The algorithm to jointly optimize sensor schedules against search, track, and classify is based on recent work by Papageorgiou and Raykin on risk-based sensor management. It uses a risk-based objective function and attempts to minimize and balance the risks of misclassifying and losing track on an object. It supports the requirement to generate tasking for metric and feature data concurrently and synergistically, and account for both tracking accuracy and object characterization, jointly, in computing reward and cost for optimizing tasking decisions.
Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods.
Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J
2017-03-03
We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter.
Image-Based Multi-Target Tracking through Multi-Bernoulli Filtering with Interactive Likelihoods
Hoak, Anthony; Medeiros, Henry; Povinelli, Richard J.
2017-01-01
We develop an interactive likelihood (ILH) for sequential Monte Carlo (SMC) methods for image-based multiple target tracking applications. The purpose of the ILH is to improve tracking accuracy by reducing the need for data association. In addition, we integrate a recently developed deep neural network for pedestrian detection along with the ILH with a multi-Bernoulli filter. We evaluate the performance of the multi-Bernoulli filter with the ILH and the pedestrian detector in a number of publicly available datasets (2003 PETS INMOVE, Australian Rules Football League (AFL) and TUD-Stadtmitte) using standard, well-known multi-target tracking metrics (optimal sub-pattern assignment (OSPA) and classification of events, activities and relationships for multi-object trackers (CLEAR MOT)). In all datasets, the ILH term increases the tracking accuracy of the multi-Bernoulli filter. PMID:28273796
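The OSPA metric used for evaluation in the two records above can be sketched as follows, with the optimal assignment computed by SciPy's Hungarian solver; the cutoff c and order p are parameters, and the 2-D point assumption is ours:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=100.0, p=2):
    """OSPA distance between two sets of 2-D points (rows of X and Y)."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return c
    if m > n:                       # ensure m <= n
        X, Y, m, n = Y, X, n, m
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    D = np.minimum(D, c) ** p       # cutoff distances, raise to order p
    rows, cols = linear_sum_assignment(D)
    cost = D[rows, cols].sum() + (c ** p) * (n - m)   # cardinality penalty
    return (cost / n) ** (1.0 / p)
```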
Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.
NASA Astrophysics Data System (ADS)
Pak, A.; Correa, J.; Adams, M.; Clark, D.; Delande, E.; Houssineau, J.; Franco, J.; Frueh, C.
2016-09-01
Recently, the growing number of inactive Resident Space Objects (RSOs), or space debris, has provoked increased interest in the field of Space Situational Awareness (SSA) and various investigations of new methods for orbital object tracking. In comparison with conventional tracking scenarios, state estimation of an orbiting object entails additional challenges, such as orbit determination and orbital state and covariance propagation in the presence of highly nonlinear system dynamics. The sensors which are available for detecting and tracking space debris are prone to multiple clutter measurements. Added to this problem is the fact that it is unknown whether or not a space-debris-type target is present within such sensor measurements. Under these circumstances, traditional single-target filtering solutions such as Kalman filters fail to produce useful trajectory estimates. The recent Random Finite Set (RFS) based Finite Set Statistics (FISST) framework has yielded filters which are more appropriate for such situations. The RFS-based Joint Target Detection and Tracking (JoTT) filter, also known as the Bernoulli filter, is a single-target, multiple-measurement filter capable of dealing with cluttered and time-varying backgrounds as well as modeling target appearance and disappearance in the scene. Therefore, this paper presents the application of the Gaussian mixture-based JoTT filter for processing measurements from the Chilbolton Advanced Meteorological Radar (CAMRa) which contain both defunct and operational satellites. The CAMRa is a fully-steerable radar located in southern England, which was recently modified to be used as a tracking asset in the European Space Agency SSA program. The experiments conducted show promising results regarding the capability of such filters in processing cluttered radar data. The work carried out in this paper was funded by USAF Grant No. FA9550-15-1-0069, Chilean Conicyt Fondecyt grant number 1150930, an EU Erasmus Mundus MSc Scholarship, and the Defense Science and Technology Laboratory (DSTL), U.K.
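The existence-probability recursion of a Bernoulli (JoTT) filter can be sketched as below; the spatial-density integrals are abstracted into a single precomputed likelihood-ratio sum, so this is a simplified illustration of the recursion rather than the Gaussian-mixture implementation used with the CAMRa data:

```python
def predict_existence(r, p_birth, p_survive):
    """Bernoulli filter existence-probability prediction step."""
    return p_birth * (1.0 - r) + p_survive * r

def update_existence(r_pred, p_detect, likelihood_ratio_sum):
    """Bernoulli filter existence-probability update step.

    likelihood_ratio_sum : sum over the received measurements z of
        <predicted spatial density, g(z|x)> / clutter_intensity(z),
    i.e. how strongly the measurements support the hypothesised
    target relative to clutter (assumed precomputed here).
    """
    delta = p_detect * (1.0 - likelihood_ratio_sum)
    return r_pred * (1.0 - delta) / (1.0 - r_pred * delta)
```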
Real-time detecting and tracking ball with OpenCV and Kinect
NASA Astrophysics Data System (ADS)
Osiecki, Tomasz; Jankowski, Stanislaw
2016-09-01
This paper presents a way to detect and track a ball using OpenCV and the Kinect. Object and people recognition and tracking are increasingly popular topics. The described solution makes it possible to detect the ball based on a range set by the user and to capture information about the ball's position in three dimensions. This information can be stored on the computer and used, for example, to display the trajectory of the ball.
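As a rough sketch of this kind of range-based detection, the following OpenCV snippet thresholds a user-set color range in HSV rather than a Kinect depth range (the range values and camera source are assumptions for illustration):

```python
import cv2
import numpy as np

# Assumed HSV range for the ball's color; the paper uses a user-set (depth) range instead.
LOWER = np.array([29, 86, 100])
UPPER = np.array([64, 255, 255])

cap = cv2.VideoCapture(0)          # any camera or video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)            # keep pixels inside the range
    mask = cv2.erode(mask, None, iterations=2)       # clean up small speckles
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)       # assume the largest blob is the ball
        (x, y), radius = cv2.minEnclosingCircle(c)
        if radius > 5:
            cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
    cv2.imshow("ball", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```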
Object-oriented feature-tracking algorithms for SAR images of the marginal ice zone
NASA Technical Reports Server (NTRS)
Daida, Jason; Samadani, Ramin; Vesecky, John F.
1990-01-01
An unsupervised method that chooses and applies the most appropriate tracking algorithm from among different sea-ice tracking algorithms is reported. In contrast to current unsupervised methods, this method chooses and applies an algorithm by partially examining a sequential image pair to draw inferences about what was examined. Based on these inferences the reported method subsequently chooses which algorithm to apply to specific areas of the image pair where that algorithm should work best.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahimian, B.
2015-06-15
Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) Understand the fundamentals of real-time imaging and tracking techniques; (2) Learn about emerging techniques in the field of real-time tracking; (3) Distinguish between the advantages and disadvantages of different tracking modalities; (4) Understand the role of real-time tracking techniques within the clinical delivery work-flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Low, D.
2015-06-15
Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) Understand the fundamentals of real-time imaging and tracking techniques; (2) Learn about emerging techniques in the field of real-time tracking; (3) Distinguish between the advantages and disadvantages of different tracking modalities; (4) Understand the role of real-time tracking techniques within the clinical delivery work-flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berbeco, R.
2015-06-15
Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) Understand the fundamentals of real-time imaging and tracking techniques; (2) Learn about emerging techniques in the field of real-time tracking; (3) Distinguish between the advantages and disadvantages of different tracking modalities; (4) Understand the role of real-time tracking techniques within the clinical delivery work-flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keall, P.
2015-06-15
Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) Understand the fundamentals of real-time imaging and tracking techniques; (2) Learn about emerging techniques in the field of real-time tracking; (3) Distinguish between the advantages and disadvantages of different tracking modalities; (4) Understand the role of real-time tracking techniques within the clinical delivery work-flow.
Active contour-based visual tracking by integrating colors, shapes, and motions.
Hu, Weiming; Zhou, Xue; Li, Wei; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen
2013-05-01
In this paper, we present a framework for active contour-based visual tracking using level sets. The main components of our framework include contour-based tracking initialization, color-based contour evolution, adaptive shape-based contour evolution for non-periodic motions, dynamic shape-based contour evolution for periodic motions, and the handling of abrupt motions. For the initialization of contour-based tracking, we develop an optical flow-based algorithm for automatically initializing contours at the first frame. For the color-based contour evolution, Markov random field theory is used to measure correlations between values of neighboring pixels for posterior probability estimation. For adaptive shape-based contour evolution, the global shape information and the local color information are combined to hierarchically evolve the contour, and a flexible shape updating model is constructed. For the dynamic shape-based contour evolution, a shape mode transition matrix is learnt to characterize the temporal correlations of object shapes. For the handling of abrupt motions, particle swarm optimization is adopted to capture the global motion which is applied to the contour in the current frame to produce an initial contour in the next frame.
Track-to-track association for object matching in an inter-vehicle communication system
NASA Astrophysics Data System (ADS)
Yuan, Ting; Roth, Tobias; Chen, Qi; Breu, Jakob; Bogdanovic, Miro; Weiss, Christian A.
2015-09-01
Autonomous driving poses unique challenges for vehicle environment perception because of the complex driving environments in which an autonomous vehicle operates and in which it must distinguish itself from remote vehicles. Owing to the inherent uncertainty of traffic environments and the incomplete knowledge imposed by sensor limitations, an autonomous driving system using only local onboard sensor information is generally not sufficient for reliable intelligent driving with guaranteed safety. To overcome the limitations of the local (host) vehicle sensing system and to increase the likelihood of correct detections and classifications, collaborative information from cooperative remote vehicles can substantially improve the effectiveness of the vehicle decision-making process. The Dedicated Short Range Communication (DSRC) system provides a powerful inter-vehicle wireless communication channel that enhances the host vehicle's environment-perceiving capability with the aid of information transmitted from remote vehicles. However, there is a major challenge before one can fuse the DSRC-transmitted remote information with the host vehicle's radar observations (in the present case): the remote DSRC data must be correctly associated with the corresponding onboard radar data; namely, an object matching problem. Direct raw data association (i.e., measurement-to-measurement association, M2MA) is straightforward but error-prone, owing to the inherently uncertain nature of the observation data. These uncertainties can lead to serious difficulty in the matching decision, especially with non-stationary data. In this study, we present an object matching algorithm based on track-to-track association (T2TA) and evaluate the proposed approach with prototype vehicles in real traffic scenarios. To fully exploit the potential of the DSRC system, only GPS position data from the remote vehicle are used in the fusion center (at the host vehicle); that is, we try to get what we need from the least amount of information. Additional feature information could help the data association but is not currently considered. Compared with M2MA, the benefits of the T2TA object matching approach are: i) tracks that take into account important statistical information can provide more reliable inference results; ii) the smoothed trajectories formed by the tracks can be used for easier shape matching; iii) each local vehicle can design its own tracker and send only tracks to the fusion center, alleviating communication constraints. A real traffic study with different driving environments, based on a statistical hypothesis test, shows promising object matching results with significant practical implications.
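A minimal, hedged sketch of the track-level gating behind T2TA (not the authors' full algorithm; the state layout, the neglect of cross-covariance, and the gate probability are assumptions): two track estimates are candidates for association when the Mahalanobis distance of their difference passes a chi-square test.

```python
import numpy as np
from scipy.stats import chi2

def t2ta_gate(x_host, P_host, x_remote, P_remote, alpha=0.99):
    """Chi-square gate on the difference of two track estimates.

    x_*: state vectors (e.g. 2D position), P_*: their covariances.
    The cross-covariance between the two tracks is neglected here for simplicity.
    """
    d = np.asarray(x_host) - np.asarray(x_remote)
    S = np.asarray(P_host) + np.asarray(P_remote)   # covariance of the difference
    d2 = float(d @ np.linalg.solve(S, d))           # squared Mahalanobis distance
    gate = chi2.ppf(alpha, df=len(d))
    return d2 <= gate, d2

# Toy example: host radar track vs. remote DSRC/GPS track of the same object.
x_radar, P_radar = np.array([12.3, 4.1]), np.diag([0.5, 0.5])
x_dsrc,  P_dsrc  = np.array([12.9, 3.8]), np.diag([2.0, 2.0])
print(t2ta_gate(x_radar, P_radar, x_dsrc, P_dsrc))
```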
Object tracking using plenoptic image sequences
NASA Astrophysics Data System (ADS)
Kim, Jae Woo; Bae, Seong-Joon; Park, Seongjin; Kim, Do Hyung
2017-05-01
Object tracking is a very important problem in computer vision research. Among the difficulties of object tracking, the partial occlusion problem is one of the most serious and challenging. To address the problem, we propose novel approaches to object tracking on plenoptic image sequences. Our approaches take advantage of the refocusing capability that plenoptic images provide: they take as input the sequences of focal stacks constructed from plenoptic image sequences. The proposed image selection algorithms select the sequence of optimal images that maximize the tracking accuracy from the sequence of focal stacks. A focus measure approach and a confidence measure approach were proposed for image selection, and both were validated in experiments using thirteen plenoptic image sequences that include heavily occluded target objects. The experimental results showed that the proposed approaches were satisfactory compared with conventional 2D object tracking algorithms.
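One simple focus measure that could drive such image selection is the variance of the Laplacian inside the target's bounding box (the measures actually used in the paper may differ); a sketch:

```python
import cv2
import numpy as np

def focus_measure(gray_image, bbox):
    """Variance of the Laplacian inside bbox = (x, y, w, h); higher means sharper."""
    x, y, w, h = bbox
    patch = gray_image[y:y + h, x:x + w]
    return cv2.Laplacian(patch, cv2.CV_64F).var()

def select_best_focused(focal_stack, bbox):
    """Pick the image in the focal stack whose target region is most in focus."""
    scores = [focus_measure(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), bbox)
              for img in focal_stack]
    return int(np.argmax(scores))
```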
Resolving occlusion and segmentation errors in multiple video object tracking
NASA Astrophysics Data System (ADS)
Cheng, Hsu-Yung; Hwang, Jenq-Neng
2009-02-01
In this work, we propose a method to integrate the Kalman filter and adaptive particle sampling for multiple video object tracking. The proposed framework is able to detect occlusion and segmentation error cases and perform adaptive particle sampling for accurate measurement selection. Compared with traditional particle filter based tracking methods, the proposed method generates particles only when necessary. With the concept of adaptive particle sampling, we can avoid the degeneracy problem because the sampling position and range are dynamically determined by parameters that are updated by Kalman filters. There is no need to spend time processing particles with very small weights. The adaptive appearance model for the occluded object uses the prediction results of the Kalman filters to determine the region that should be updated, avoiding the problem of using inadequate information to update the appearance in occlusion cases. The experimental results have shown that a small number of particles are sufficient to achieve high positioning and scaling accuracy. Also, the employment of adaptive appearance substantially improves the positioning and scaling accuracy of the tracking results.
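A minimal sketch of the general idea of sampling particles only around a Kalman prediction (a constant-velocity model; all noise parameters are assumed, and the full framework is much richer than this):

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = np.eye(4) * 0.5          # process noise (assumed)
R = np.eye(2) * 4.0          # measurement noise (assumed)

def kalman_predict(x, P):
    return F @ x, F @ P @ F.T + Q

def kalman_update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

def sample_particles(x_pred, P_pred, n=50):
    """Draw candidate positions only around the predicted state; the spread
    adapts to the predicted covariance, so few particles are needed."""
    cov = H @ P_pred @ H.T
    return np.random.multivariate_normal(H @ x_pred, cov, size=n)
```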
Estimation of contour motion and deformation for nonrigid object tracking
NASA Astrophysics Data System (ADS)
Shao, Jie; Porikli, Fatih; Chellappa, Rama
2007-08-01
We present an algorithm for nonrigid contour tracking in heavily cluttered background scenes. Based on the properties of nonrigid contour movements, a sequential framework for estimating contour motion and deformation is proposed. We solve the nonrigid contour tracking problem by decomposing it into three subproblems: motion estimation, deformation estimation, and shape regulation. First, we employ a particle filter to estimate the global motion parameters of the affine transform between successive frames. Then we generate a probabilistic deformation map to deform the contour. To improve robustness, multiple cues are used for deformation probability estimation. Finally, we use a shape prior model to constrain the deformed contour. This enables us to retrieve the occluded parts of the contours and accurately track them while allowing shape changes specific to the given object types. Our experiments show that the proposed algorithm significantly improves the tracker performance.
NASA Astrophysics Data System (ADS)
Gao, Haibo; Chen, Chao; Ding, Liang; Li, Weihua; Yu, Haitao; Xia, Kerui; Liu, Zhen
2017-11-01
Wheeled mobile robots (WMRs) often suffer from the longitudinal slipping when moving on the loose soil of the surface of the moon during exploration. Longitudinal slip is the main cause of WMRs' delay in trajectory tracking. In this paper, a nonlinear extended state observer (NESO) is introduced to estimate the longitudinal velocity in order to estimate the slip ratio and the derivative of the loss of velocity which are used in modelled disturbance compensation. Owing to the uncertainty and disturbance caused by estimation errors, a multi-objective controller using the mixed H2/H∞ method is employed to ensure the robust stability and performance of the WMR system. The final inputs of the trajectory tracking consist of the feedforward compensation, compensation for the modelled disturbances and designed multi-objective control inputs. Finally, the simulation results demonstrate the effectiveness of the controller, which exhibits a satisfactory tracking performance.
Summary of Aqua, Aura, and Terra High Interest Events
NASA Technical Reports Server (NTRS)
Newman, Lauri
2015-01-01
Single-obs tracking: Sparsely tracked objects are an unfortunate reality of CARA operations. Terra vs. 32081: a new track with bad data was included in the OD solution for the secondary object and the risk became high. CARA and JSpOC discussed tracking and OSAs threw out the bad data; the event no longer presented high risk based on the new OD. Improvement: CARA now sends JSpOC a flag indicating when a single obs is included, so OSAs can evaluate whether a manual update to the OD is required. Missing ASW OCMs: Aura vs. 87178, TCA: 317 at 08:04 UTC. Post-maneuver risk (the conjunction was identified in OO results). CARA confirmed with JSpOC that ASW OCMs should have been received in addition to OO OCMs. JSpOC corrected the manual error in their script that prevented the data from being delivered to CARA, and QA'd their other scripts to ensure this error did not exist elsewhere.
Model-based vision for space applications
NASA Technical Reports Server (NTRS)
Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald
1992-01-01
This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.
Object tracking algorithm based on the color histogram probability distribution
NASA Astrophysics Data System (ADS)
Li, Ning; Lu, Tongwei; Zhang, Yanduo
2018-04-01
To resolve tracking failures resulting from target occlusion and from distractors similar to the target in the background, and to reduce the influence of illumination changes, this paper uses the HSV and YCbCr color channels to correct the updated center of the target and continuously updates an adaptive image threshold to improve target detection. Clustering the initial obstacles into a rough range shortens the threshold range and maximizes the chance of detecting the target. To improve the accuracy of the detector, a Kalman filter is added to estimate the target state region. A direction predictor based on a Markov model is added to estimate the target state under background color interference and to enhance the detector's ability to distinguish similar objects. The experimental results show that the improved algorithm is more accurate and faster.
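For orientation, a bare-bones color-histogram tracker in the same spirit, using hue back-projection and CamShift in OpenCV (this is not the paper's algorithm, which adds adaptive thresholds, a Kalman filter, and a Markov direction predictor):

```python
import cv2
import numpy as np

def init_hist(frame, bbox):
    """Build a hue histogram of the target region given bbox = (x, y, w, h)."""
    x, y, w, h = bbox
    roi = frame[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 60, 32), (180, 255, 255))   # drop low-saturation pixels
    hist = cv2.calcHist([hsv], [0], mask, [180], [0, 180])
    return cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

def track_step(frame, hist, bbox):
    """One tracking step: back-project the histogram and let CamShift move the window."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rotated_box, bbox = cv2.CamShift(backproj, bbox, criteria)
    return rotated_box, bbox
```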
Fast Object Motion Estimation Based on Dynamic Stixels.
Morales, Néstor; Morell, Antonio; Toledo, Jonay; Acosta, Leopoldo
2016-07-28
The stixel world is a simplification of the world in which obstacles are represented as vertical instances, called stixels, standing on a surface assumed to be planar. In this paper, previous approaches for stixel tracking are extended using a two-level scheme. In the first level, stixels are tracked by matching them between frames using a bipartite graph in which edges represent a matching cost function. Then, stixels are clustered into sets representing objects in the environment. These objects are matched based on the number of stixels paired inside them. Furthermore, a faster, but less accurate approach is proposed in which only the second level is used. Several configurations of our method are compared to an existing state-of-the-art approach to show how our methodology outperforms it in several areas, including an improvement in the quality of the depth reconstruction.
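A hedged sketch of the first-level bipartite matching between stixels in consecutive frames, using the Hungarian assignment from SciPy (the cost function here is plain Euclidean distance, not the one defined in the paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_stixels(prev, curr, max_cost=5.0):
    """prev, curr: arrays of stixel descriptors (e.g. [column, disparity]).
    Returns index pairs (i, j) whose matching cost stays below max_cost."""
    cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)       # minimum-cost bipartite matching
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_cost]

prev = np.array([[10.0, 2.1], [42.0, 5.0], [77.0, 1.3]])
curr = np.array([[12.0, 2.0], [40.0, 5.2]])
print(match_stixels(prev, curr))
```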
ERIC Educational Resources Information Center
Rattanarungrot, Sasithorn; White, Martin; Newbury, Paul
2014-01-01
This paper describes the design of our service-oriented architecture to support mobile multiple object tracking augmented reality applications applied to education and learning scenarios. The architecture is composed of a mobile multiple object tracking augmented reality client, a web service framework, and dynamic content providers. Tracking of…
Scalable Conjunction Processing using Spatiotemporally Indexed Ephemeris Data
NASA Astrophysics Data System (ADS)
Budianto-Ho, I.; Johnson, S.; Sivilli, R.; Alberty, C.; Scarberry, R.
2014-09-01
The collision warnings produced by the Joint Space Operations Center (JSpOC) are of critical importance in protecting U.S. and allied spacecraft against destructive collisions and protecting the lives of astronauts during space flight. As the Space Surveillance Network (SSN) improves its sensor capabilities for tracking small and dim space objects, the number of tracked objects increases from thousands to hundreds of thousands of objects, while the number of potential conjunctions increases with the square of the number of tracked objects. Classical filtering techniques such as apogee and perigee filters have proven insufficient. Novel and orders of magnitude faster conjunction analysis algorithms are required to find conjunctions in a timely manner. Stellar Science has developed innovative filtering techniques for satellite conjunction processing using spatiotemporally indexed ephemeris data that efficiently and accurately reduces the number of objects requiring high-fidelity and computationally-intensive conjunction analysis. Two such algorithms, one based on the k-d Tree pioneered in robotics applications and the other based on Spatial Hash Tables used in computer gaming and animation, use, at worst, an initial O(N log N) preprocessing pass (where N is the number of tracked objects) to build large O(N) spatial data structures that substantially reduce the required number of O(N^2) computations, substituting linear memory usage for quadratic processing time. The filters have been implemented as Open Services Gateway initiative (OSGi) plug-ins for the Continuous Anomalous Orbital Situation Discriminator (CAOS-D) conjunction analysis architecture. We have demonstrated the effectiveness, efficiency, and scalability of the techniques using a catalog of 100,000 objects, an analysis window of one day, on a 64-core computer with 1TB shared memory. Each algorithm can process the full catalog in 6 minutes or less, almost a twenty-fold performance improvement over the baseline implementation running on the same machine. We will present an overview of the algorithms and results that demonstrate the scalability of our concepts.
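A toy sketch of the spatial-hash idea for coarse conjunction screening: hash object positions at a common epoch into a coarse 3D grid and compare only objects in the same or neighboring cells (the cell size and random catalog are assumptions; the real system also indexes ephemerides over time):

```python
from collections import defaultdict
from itertools import combinations, product
import numpy as np

def coarse_conjunction_pairs(positions, cell_size=50.0):
    """positions: (N, 3) array of object positions [km] at one epoch.
    Returns candidate index pairs whose grid cells are adjacent, i.e. pairs
    that could be closer than roughly cell_size and deserve high-fidelity analysis."""
    grid = defaultdict(list)
    cells = np.floor(positions / cell_size).astype(int)
    for idx, cell in enumerate(map(tuple, cells)):
        grid[cell].append(idx)

    candidates = set()
    offsets = list(product((-1, 0, 1), repeat=3))
    for cell in grid:
        neighbors = []
        for off in offsets:
            neighbors.extend(grid.get(tuple(np.add(cell, off)), []))
        for i, j in combinations(sorted(set(neighbors)), 2):
            candidates.add((i, j))
    return candidates

rng = np.random.default_rng(0)
pos = rng.uniform(-7000, 7000, size=(1000, 3))
print(len(coarse_conjunction_pairs(pos)), "candidate pairs instead of", 1000 * 999 // 2)
```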
A system for learning statistical motion patterns.
Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve
2006-09-01
Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas
2008-01-01
PURPOSE: This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments, and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning sensor (GPS); then the user establishes the exact position at a specific landmark, like a door. This location initialises indoor navigation, based on an inertial sensor, a step recognition algorithm and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on a touch screen of a small portable tablet PC. The names of rooms, building features and hazards, modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.
Al-Nawashi, Malek; Al-Hazaimeh, Obaida M; Saraee, Mohamad
2017-01-01
Abnormal activity detection plays a crucial role in surveillance applications, and a surveillance system that can perform robustly in an academic environment has become an urgent need. In this paper, we propose a novel framework for an automatic real-time video-based surveillance system which can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment. To develop our system, we have divided the work into three phases: a preprocessing phase, an abnormal human activity detection phase, and a content-based image retrieval phase. For moving object detection, we used the temporal-differencing algorithm and then located the motion regions using the Gaussian function. Furthermore, the shape model based on the OMEGA equation was used as a filter for the detected objects (i.e., human and non-human). For object activity analysis, we evaluated and analyzed the human activities of the detected objects. We classified the human activities into two groups, normal activities and abnormal activities, based on a support vector machine. The machine then provides an automatic warning in case of abnormal human activities. It also embeds a method to retrieve the detected object from the database for object recognition and identification using content-based image retrieval. Finally, a software-based simulation using MATLAB was performed and the results of the conducted experiments showed an excellent surveillance system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment with no human intervention.
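A minimal sketch of the temporal-differencing step used for motion detection (the thresholds are assumed; the OMEGA shape filter, SVM classification, and retrieval stages of the framework are not shown):

```python
import cv2

def moving_regions(prev_gray, curr_gray, thresh=25, min_area=200):
    """Temporal differencing: threshold |I_t - I_{t-1}| and return bounding boxes
    of connected regions large enough to plausibly be a moving object."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```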
ERIC Educational Resources Information Center
Journal of the American Academy of Child & Adolescent Psychiatry, 2007
2007-01-01
Objective: This study tests the efficacy of the Fast Track Program in preventing antisocial behavior and psychiatric disorders among groups varying in initial risk. Method: Schools within four sites (Durham, NC; Nashville, TN; Seattle, WA; and rural central Pennsylvania) were selected as high-risk institutions based on neighborhood crime and…
Target Selection by the Frontal Cortex during Coordinated Saccadic and Smooth Pursuit Eye Movements
ERIC Educational Resources Information Center
Srihasam, Krishna; Bullock, Daniel; Grossberg, Stephen
2009-01-01
Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth-pursuit eye movements. In particular, the saccadic and smooth-pursuit systems interact to often choose the same target, and to maximize its visibility through time. How do…
Making Tracks 1.0: Action Researching an Active Transportation Education Program
ERIC Educational Resources Information Center
Robinson, Daniel; Foran, Andrew; Robinson, Ingrid
2014-01-01
This paper reports on the results of the first cycle of an action research project. The objective of this action research was to examine the implementation of a school-based active transportation education program (Making Tracks). A two-cycle action research design was employed in which elementary school students' (ages 7-9), middle school…
MO-FG-BRD-00: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) Understand the fundamentals of real-time imaging and tracking techniques; (2) Learn about emerging techniques in the field of real-time tracking; (3) Distinguish between the advantages and disadvantages of different tracking modalities; (4) Understand the role of real-time tracking techniques within the clinical delivery work-flow.
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.
Real-time object tracking based on scale-invariant features employing bio-inspired hardware.
Yasukawa, Shinsuke; Okuno, Hirotsugu; Ishii, Kazuo; Yagi, Tetsuya
2016-09-01
We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focus on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering and a configurable filter size. The FPGA is used to pipeline process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video. Copyright © 2016 Elsevier Ltd. All rights reserved.
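As a point of reference for the algorithm being accelerated, detecting and matching SIFT features between two frames with plain OpenCV (a software-only baseline, unlike the bio-inspired hardware pipeline described above) looks roughly like:

```python
import cv2

def sift_matches(frame_a, frame_b, ratio=0.75):
    """Detect SIFT keypoints in both frames and keep matches passing Lowe's ratio test."""
    sift = cv2.SIFT_create()                       # requires OpenCV >= 4.4 (or opencv-contrib)
    kp_a, desc_a = sift.detectAndCompute(frame_a, None)
    kp_b, desc_b = sift.detectAndCompute(frame_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(desc_a, desc_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append((kp_a[pair[0].queryIdx].pt, kp_b[pair[0].trainIdx].pt))
    return good
```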
NASA Astrophysics Data System (ADS)
Kim, Young-Keun; Kim, Kyung-Soo
2014-10-01
Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.
NASA Astrophysics Data System (ADS)
Oku, H.; Ogawa, N.; Ishikawa, M.; Hashimoto, K.
2005-03-01
In this article, a micro-organism tracking system using a high-speed vision system is reported. This system two-dimensionally tracks a freely swimming micro-organism within the field of an optical microscope by moving a chamber of target micro-organisms based on high-speed visual feedback. The system we developed could track a paramecium using various imaging techniques, including bright-field illumination, dark-field illumination, and differential interference contrast, at magnifications of 5 times and 20 times. A maximum tracking duration of 300 s was demonstrated. Also, the system could track an object with a velocity of up to 35,000 μm/s (175 diameters/s), which is significantly faster than swimming micro-organisms.
Uninformative Prior Multiple Target Tracking Using Evidential Particle Filters
NASA Astrophysics Data System (ADS)
Worthy, J. L., III; Holzinger, M. J.
Space situational awareness requires the ability to initialize state estimation from short measurements and the reliable association of observations to support the characterization of the space environment. The electro-optical systems used to observe space objects cannot fully characterize the state of an object given a short, unobservable sequence of measurements. Further, it is difficult to associate these short-arc measurements if many such measurements are generated through the observation of a cluster of satellites, debris from a satellite break-up, or from spurious detections of an object. An optimization based, probabilistic short-arc observation association approach coupled with a Dempster-Shafer based evidential particle filter in a multiple target tracking framework is developed and proposed to address these problems. The optimization based approach is shown in literature to be computationally efficient and can produce probabilities of association, state estimates, and covariances while accounting for systemic errors. Rigorous application of Dempster-Shafer theory is shown to be effective at enabling ignorance to be properly accounted for in estimation by augmenting probability with belief and plausibility. The proposed multiple hypothesis framework will use a non-exclusive hypothesis formulation of Dempster-Shafer theory to assign belief mass to candidate association pairs and generate tracks based on the belief to plausibility ratio. The proposed algorithm is demonstrated using simulated observations of a GEO satellite breakup scenario.
Baigzadehnoe, Barmak; Rahmani, Zahra; Khosravi, Alireza; Rezaie, Behrooz
2017-09-01
In this paper, the position and force tracking control problem of a cooperative robot manipulator system handling a common rigid object with unknown dynamical models and unknown external disturbances is investigated. The universal approximation properties of fuzzy logic systems are employed to estimate the unknown system dynamics. On the other hand, by defining new state variables based on the integral and differential of position and orientation errors of the grasped object, the error system of the coordinated robot manipulators is constructed. Subsequently, by defining an appropriate change of coordinates and using the backstepping design strategy, an adaptive fuzzy backstepping position tracking control scheme is proposed for multi-robot manipulator systems. By utilizing the properties of internal forces, extra terms are also added to the control signals to consider the force tracking problem. Moreover, it is shown that the proposed adaptive fuzzy backstepping position/force control approach ensures that all the signals of the closed-loop system are uniformly ultimately bounded and that the tracking errors of both positions and forces converge to small desired values by proper selection of the design parameters. Finally, the theoretical achievements are tested on two three-link planar robot manipulators cooperatively handling a common object to illustrate the effectiveness of the proposed approach. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features
NASA Astrophysics Data System (ADS)
Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique
2011-12-01
We propose a new multi-target tracking approach which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model, and visual reliability measures of its attributes. These reliability measures make it possible to properly weight the contribution of noisy, erroneous or false data in order to better maintain the integrity of the object dynamics model. Then, a new multi-target tracking algorithm uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using publicly accessible video surveillance benchmarks. The obtained performance is real time and the results are competitive compared with other tracking algorithms, with minimal (or no) reconfiguration effort between different videos.
Accurate mask-based spatially regularized correlation filter for visual tracking
NASA Astrophysics Data System (ADS)
Gu, Xiaodong; Xu, Xinping
2017-01-01
Recently, discriminative correlation filter (DCF)-based trackers have achieved extremely successful results in many competitions and benchmarks. These methods utilize a periodic assumption on the training samples to efficiently learn a classifier. However, this assumption produces unwanted boundary effects, which severely degrade tracking performance. Correlation filters with limited boundaries and spatially regularized DCFs were proposed to reduce boundary effects. However, these methods used a fixed mask or a predesigned weight function, respectively, which is unsuitable for large appearance variation. We propose an accurate mask-based spatially regularized correlation filter for visual tracking. Our augmented objective can reduce the boundary effect even under large appearance variation. In our algorithm, the masking matrix is converted into a regularization function that acts on the correlation filter in the frequency domain, which gives the algorithm fast convergence. Our online tracking algorithm performs favorably against state-of-the-art trackers on the OTB-2015 benchmark in terms of efficiency, accuracy, and robustness.
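For orientation, a bare-bones single-channel correlation filter in the frequency domain (MOSSE-style, without the mask-based spatial regularization that is the paper's contribution; the regularization weight is assumed):

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    """MOSSE-style filter: H* = (G * conj(F)) / (F * conj(F) + lambda),
    where F is the FFT of the grayscale patch and G the FFT of the desired
    response (e.g. a Gaussian peaked at the target center)."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(patch, H_star):
    """Correlate a new patch with the learned filter; the peak gives the target shift."""
    response = np.real(np.fft.ifft2(np.fft.fft2(patch) * H_star))
    return np.unravel_index(np.argmax(response), response.shape)
```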
Enhanced object-based tracking algorithm for convective rain storms and cells
NASA Astrophysics Data System (ADS)
Muñoz, Carlos; Wang, Li-Pen; Willems, Patrick
2018-03-01
This paper proposes a new object-based storm tracking algorithm, based upon TITAN (Thunderstorm Identification, Tracking, Analysis and Nowcasting). TITAN is a widely-used convective storm tracking algorithm but has limitations in handling small-scale yet high-intensity storm entities due to its single-threshold identification approach. It also has difficulties to effectively track fast-moving storms because of the employed matching approach that largely relies on the overlapping areas between successive storm entities. To address these deficiencies, a number of modifications are proposed and tested in this paper. These include a two-stage multi-threshold storm identification, a new formulation for characterizing storm's physical features, and an enhanced matching technique in synergy with an optical-flow storm field tracker, as well as, according to these modifications, a more complex merging and splitting scheme. High-resolution (5-min and 529-m) radar reflectivity data for 18 storm events over Belgium are used to calibrate and evaluate the algorithm. The performance of the proposed algorithm is compared with that of the original TITAN. The results suggest that the proposed algorithm can better isolate and match convective rainfall entities, as well as to provide more reliable and detailed motion estimates. Furthermore, the improvement is found to be more significant for higher rainfall intensities. The new algorithm has the potential to serve as a basis for further applications, such as storm nowcasting and long-term stochastic spatial and temporal rainfall generation.
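A simplified version of a two-threshold storm-cell identification step, using connected-component labelling from SciPy (the thresholds are assumed, and the matching, merging, and splitting stages are omitted):

```python
import numpy as np
from scipy import ndimage

def identify_storm_cells(reflectivity, low_dbz=35.0, high_dbz=45.0, min_pixels=10):
    """Label contiguous regions above a low threshold, then keep only those
    regions that also contain a core exceeding the high threshold."""
    labels, n = ndimage.label(reflectivity >= low_dbz)
    cells = []
    for region_id in range(1, n + 1):
        region = labels == region_id
        if region.sum() >= min_pixels and reflectivity[region].max() >= high_dbz:
            cells.append(region)
    return cells
```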
Multiple objects tracking in fluorescence microscopy.
Kalaidzidis, Yannis
2009-01-01
Many processes in cell biology are connected to the movement of compact entities: intracellular vesicles and even single molecules. The tracking of individual objects is important for understanding cellular dynamics. Here we describe tracking algorithms which were developed in non-biological fields and successfully applied to object detection and tracking in biological applications. The characteristic features of the different algorithms are compared.
ERIC Educational Resources Information Center
Ferrara, Katrina; Hoffman, James E.; O'Hearn, Kirsten; Landau, Barbara
2016-01-01
The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical…
NASA Astrophysics Data System (ADS)
Gambi, J. M.; García del Pino, M. L.; Gschwindl, J.; Weinmüller, E. B.
2017-12-01
This paper deals with the problem of throwing middle-sized low Earth orbit debris objects into the atmosphere via laser ablation. The post-Newtonian equations here provided allow (hypothetical) space-based acquisition, pointing and tracking systems endowed with very narrow laser beams to reach the pointing accuracy presently prescribed. In fact, whatever the orbital elements of these objects may be, these equations will allow the operators to account for the corrections needed to balance the deviations of the line of sight directions due to the curvature of the paths the laser beams are to travel along. To minimize the respective corrections, the systems will have to perform initial positioning manoeuvres, and the shooting point-ahead angles will have to be adapted in real time. The enclosed numerical experiments suggest that neglecting these measures will cause fatal errors, due to differences in the actual locations of the objects comparable to their size.
Moving Object Detection Using a Parallax Shift Vector Algorithm
NASA Astrophysics Data System (ADS)
Gural, Peter S.; Otto, Paul R.; Tedesco, Edward F.
2018-07-01
There are various algorithms currently in use to detect asteroids from ground-based observatories, but they are generally restricted to linear or mildly curved movement of the target object across the field of view. Space-based sensors in high inclination, low Earth orbits can induce significant parallax in a collected sequence of images, especially for objects at the typical distances of asteroids in the inner solar system. This results in a highly nonlinear motion pattern of the asteroid across the sensor, which requires a more sophisticated search pattern for detection processing. Both the classical pattern matching used in ground-based asteroid search and the more sensitive matched filtering and synthetic tracking techniques, can be adapted to account for highly complex parallax motion. A new shift vector generation methodology is discussed along with its impacts on commonly used detection algorithms, processing load, and responsiveness to asteroid track reporting. The matched filter, template generator, and pattern matcher source code for the software described herein are available via GitHub.
Discriminative object tracking via sparse representation and online dictionary learning.
Xie, Yuan; Zhang, Wensheng; Li, Cuihua; Lin, Shuyang; Qu, Yanyun; Zhang, Yinghua
2014-04-01
We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching scheme. The algorithm consists of two parts: local sparse coding with an online updated discriminative dictionary for tracking (SOD part), and keypoint matching refinement for enhancing the tracking performance (KP part). In the SOD part, the local image patches of the target object and background are represented by their sparse codes using an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes the information of both the foreground and the background, may provide more discriminative power. Furthermore, in order to adapt the dictionary to the variation of the foreground and background during tracking, an online learning method is employed to update the dictionary. The KP part utilizes a refined keypoint matching scheme to improve the performance of the SOD part. With the help of sparse representation and the online updated discriminative dictionary, the KP part is more robust than the traditional method in rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.
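A minimal sketch of the sparse-coding component: code a candidate patch over a stacked foreground/background dictionary with scikit-learn and score it by which half reconstructs it better (the online dictionary update and keypoint refinement of the paper are not shown):

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def discriminative_score(patch_vec, D_fg, D_bg, n_nonzero=5):
    """Higher score means the patch is better explained by the foreground dictionary.

    patch_vec: flattened patch features, shape (n_features,)
    D_fg, D_bg: dictionaries whose rows are atoms, shape (n_atoms, n_features)
    """
    D = np.vstack([D_fg, D_bg])                       # joint dictionary
    code = sparse_encode(patch_vec[None, :], D,
                         algorithm="omp", n_nonzero_coefs=n_nonzero)[0]
    recon_fg = code[:len(D_fg)] @ D_fg                # reconstruction from foreground atoms
    recon_bg = code[len(D_fg):] @ D_bg                # reconstruction from background atoms
    err_fg = np.linalg.norm(patch_vec - recon_fg)
    err_bg = np.linalg.norm(patch_vec - recon_bg)
    return err_bg - err_fg
```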
NASA Astrophysics Data System (ADS)
Fujimoto, K.; Yanagisawa, T.; Uetsuhara, M.
Automated detection and tracking of faint objects in optical, or bearing-only, sensor imagery is a topic of immense interest in space surveillance. Robust methods in this realm will lead to better space situational awareness (SSA) while reducing the cost of sensors and optics. They are especially relevant in the search for high area-to-mass ratio (HAMR) objects, as their apparent brightness can change significantly over time. A track-before-detect (TBD) approach has been shown to be suitable for faint, low signal-to-noise ratio (SNR) images of resident space objects (RSOs). TBD does not rely upon the extraction of feature points within the image based on some thresholding criteria, but rather directly takes as input the intensity information from the image file. Not only is all of the available information from the image used, but TBD also avoids the computational intractability of the conventional feature-based line detection (i.e., "string of pearls") approach to track detection for low SNR data. Implementation of TBD rooted in finite set statistics (FISST) theory has been proposed recently by Vo et al. Compared to other TBD methods applied so far to SSA, such as the stacking method or multi-pass multi-period denoising, the FISST approach is statistically rigorous and has been shown to be more computationally efficient, thus paving the path toward on-line processing. In this paper, we intend to apply a multi-Bernoulli filter to actual CCD imagery of RSOs. The multi-Bernoulli filter can explicitly account for the birth and death of multiple targets in a measurement arc. TBD is achieved via a sequential Monte Carlo implementation. Preliminary results with simulated single-target data indicate that a Bernoulli filter can successfully track and detect objects with measurement SNR as low as 2.4. Although the advent of fast-cadence scientific CMOS sensors has made the automation of faint object detection a realistic goal, it is nonetheless a difficult goal, as measurement arcs in space surveillance are often both short and sparse. FISST methodologies have been applied to the general problem of SSA by many authors, but they generally focus on tracking scenarios with long arcs or assume that line detection is tractable. We will instead focus this work on estimating sensor-level kinematics of RSOs for low SNR too-short arc observations. Once said estimate is made available, track association and simultaneous initial orbit determination may be achieved via any number of proposed solutions to the too-short arc problem, such as those incorporating the admissible region. We show that the benefit of combining FISST-based TBD with too-short arc association goes both ways; i.e., the former provides consistent statistics regarding bearing-only measurements, whereas the latter makes better use of the precise dynamical models nominally applicable to RSOs in orbit determination.
Tracker: Image-Processing and Object-Tracking System Developed
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Wright, Theodore W.
1999-01-01
Tracker is an object-tracking and image-processing program designed and developed at the NASA Lewis Research Center to help with the analysis of images generated by microgravity combustion and fluid physics experiments. Experiments are often recorded on film or videotape for analysis later. Tracker automates the process of examining each frame of the recorded experiment, performing image-processing operations to bring out the desired detail, and recording the positions of the objects of interest. It can load sequences of images from disk files or acquire images (via a frame grabber) from film transports, videotape, laser disks, or a live camera. Tracker controls the image source to automatically advance to the next frame. It can employ a large array of image-processing operations to enhance the detail of the acquired images and can analyze an arbitrarily large number of objects simultaneously. Several different tracking algorithms are available, including conventional threshold and correlation-based techniques, and more esoteric procedures such as "snake" tracking and automated recognition of character data in the image. The Tracker software was written to be operated by researchers, thus every attempt was made to make the software as user friendly and self-explanatory as possible. Tracker is used by most of the microgravity combustion and fluid physics experiments performed by Lewis, and by visiting researchers. This includes experiments performed on the space shuttles, Mir, sounding rockets, zero-g research airplanes, drop towers, and ground-based laboratories. This software automates the analysis of the flame or liquid's physical parameters such as position, velocity, acceleration, size, shape, intensity characteristics, color, and centroid, as well as a number of other measurements. It can perform these operations on multiple objects simultaneously. Another key feature of Tracker is that it performs optical character recognition (OCR). This feature is useful in extracting numerical instrumentation data that are embedded in images. All the results are saved in files for further data reduction and graphing. There are currently three Tracking Systems (workstations) operating near the laboratories and offices of Lewis Microgravity Science Division researchers. These systems are used independently by students, scientists, and university-based principal investigators. The researchers bring their tapes or films to the workstation and perform the tracking analysis. The resultant data files generated by the tracking process can then be analyzed on the spot, although most of the time researchers prefer to transfer them via the network to their offices for further analysis or plotting. In addition, many researchers have installed Tracker on computers in their office for desktop analysis of digital image sequences, which can be digitized by the Tracking System or some other means. Tracker has not only provided a capability to efficiently and automatically analyze large volumes of data, saving many hours of tedious work, but has also provided new capabilities to extract valuable information and phenomena that were heretofore undetected and unexploited.
Passive RFID Rotation Dimension Reduction via Aggregation
NASA Astrophysics Data System (ADS)
Matthews, Eric
Radio Frequency IDentification (RFID) has applications in object identification, position, and orientation tracking. RFID technology can be applied in hospitals for patient and equipment tracking, in stores and warehouses for product tracking, in robots for self-localisation, for tracking hazardous materials, or for locating any other desired object. Efficient and accurate algorithms that perform localisation are required to extract meaningful data beyond simple identification. A Received Signal Strength Indicator (RSSI) is the strength of a received radio frequency signal used to localise passive and active RFID tags. Many factors affect RSSI, such as reflections, tag rotation in 3D space, and obstacles blocking line-of-sight. LANDMARC is a statistical method for estimating tag location based on a target tag's similarity to surrounding reference tags. LANDMARC does not take into account the rotation of the target tag. By either aggregating multiple reference tag positions at various rotations, or by determining a rotation value for a newly read tag, we can more accurately perform an expected value calculation based on a comparison to the k most similar training samples via the K-Nearest Neighbours (KNN) algorithm. By choosing the average as the aggregation function, we improve the relative accuracy of single-rotation LANDMARC localisation by 10%, and of any-rotation localisation by 20%.
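A small sketch of the LANDMARC-style estimate described above: find the k reference tags nearest in RSSI space and take a weighted average of their known positions (the array shapes and toy readings are assumptions for illustration):

```python
import numpy as np

def landmarc_estimate(target_rssi, ref_rssi, ref_positions, k=4):
    """target_rssi: (n_readers,) RSSI vector of the target tag.
    ref_rssi: (n_refs, n_readers) RSSI vectors of the reference tags.
    ref_positions: (n_refs, 2) known reference tag positions."""
    dist = np.linalg.norm(ref_rssi - target_rssi, axis=1)    # similarity in signal space
    nearest = np.argsort(dist)[:k]
    weights = 1.0 / (dist[nearest] ** 2 + 1e-9)              # closer in RSSI -> higher weight
    weights /= weights.sum()
    return weights @ ref_positions[nearest]

refs_rssi = np.array([[-50, -62], [-55, -60], [-70, -48], [-65, -52]], float)
refs_pos = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
print(landmarc_estimate(np.array([-53.0, -61.0]), refs_rssi, refs_pos, k=3))
```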
Evidence against a speed limit in multiple-object tracking.
Franconeri, S L; Lin, J Y; Pylyshyn, Z W; Fisher, B; Enns, J T
2008-08-01
Everyday tasks often require us to keep track of multiple objects in dynamic scenes. Past studies show that tracking becomes more difficult as objects move faster. In the present study, we show that this trade-off may not be due to increased speed itself but may, instead, be due to the increased crowding that usually accompanies increases in speed. Here, we isolate changes in speed from variations in crowding, by projecting a tracking display either onto a small area at the center of a hemispheric projection dome or onto the entire dome. Use of the larger display increased retinal image size and object speed by a factor of 4 but did not increase interobject crowding. Results showed that tracking accuracy was equally good in the large-display condition, even when the objects traveled far into the visual periphery. Accuracy was also not reduced when we tested object speeds that limited performance in the small-display condition. These results, along with a reinterpretation of past studies, suggest that we might be able to track multiple moving objects as fast as we can a single moving object, once the effect of object crowding is eliminated.
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.
1998-07-01
An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.
Yang, Yushi
2015-01-01
Background Eye-tracking technology has been used to measure human cognitive processes and has the potential to improve the usability of health information technology (HIT). However, it is still unclear how the eye-tracking method can be integrated with other traditional usability methodologies to achieve its full potential. Objective The objective of this study was to report on HIT evaluation studies that have used eye-tracker technology, and to envision the potential use of eye-tracking technology in future research. Methods We used four reference databases to initially identify 5248 related papers, which resulted in only 9 articles that met our inclusion criteria. Results Eye-tracking technology was useful in finding usability problems in many ways, but is still in its infancy for HIT usability evaluation. Limited types of HITs have been evaluated by eye trackers, and there has been a lack of evaluation research in natural settings. Conclusions More research should be done in natural settings to discover the real contextual-based usability problems of clinical and mobile HITs using eye-tracking technology with more standardized methodologies and guidance. PMID:27026079
Brain Activation during Spatial Updating and Attentive Tracking of Moving Targets
ERIC Educational Resources Information Center
Jahn, Georg; Wendt, Julia; Lotze, Martin; Papenmeier, Frank; Huff, Markus
2012-01-01
Keeping aware of the locations of objects while one is moving requires the updating of spatial representations. As long as the objects are visible, attentional tracking is sufficient, but knowing where objects out of view went in relation to one's own body involves an updating of spatial working memory. Here, multiple object tracking was employed…
An improved KCF tracking algorithm based on multi-feature and multi-scale
NASA Astrophysics Data System (ADS)
Wu, Wei; Wang, Ding; Luo, Xin; Su, Yang; Tian, Weiye
2018-02-01
The purpose of visual tracking is to associate the target object across continuous video frames. In recent years, methods based on the kernel correlation filter have become a research hotspot. However, such algorithms still have problems, such as fast jitter of the video capture equipment and changes in tracking scale. In order to improve the handling of scale changes and the feature description, this paper presents an innovative algorithm based on multi-feature fusion and a multi-scale transform. The experimental results show that our method solves the problem of updating the target model when the target is occluded or its scale changes. The accuracy in the OPE evaluation is 77.0% and 75.4%, and the success rate is 69.7% and 66.4%, on the VOT and OTB datasets respectively. Compared with the best of the existing target-based tracking algorithms, the accuracy of the algorithm is improved by 6.7% and 6.3% respectively, and the success rates are improved by 13.7% and 14.2% respectively.
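The following minimal Python sketch illustrates the correlation-filter principle that KCF-style trackers build on: train a filter against a Gaussian target response in the Fourier domain, then locate the target at the peak of the filter response in the next frame. It is a single-channel, MOSSE-like simplification under illustrative parameters, not the multi-feature, multi-scale method of the paper.

```python
# Minimal correlation-filter sketch (single training patch, no kernel trick or scale search).
import numpy as np

def gaussian_response(shape, sigma=2.0):
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))

def train_filter(patch, response, lam=1e-2):
    F, G = np.fft.fft2(patch), np.fft.fft2(response)
    return G * np.conj(F) / (F * np.conj(F) + lam)   # filter (H*) in the Fourier domain

def detect(h_star, patch):
    resp = np.real(np.fft.ifft2(h_star * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(resp), resp.shape)

# Toy usage: a bright blob shifted by (3, 5) pixels between two frames.
frame0 = np.zeros((64, 64)); frame0[30:34, 30:34] = 1.0
frame1 = np.roll(np.roll(frame0, 3, axis=0), 5, axis=1)
H = train_filter(frame0, gaussian_response(frame0.shape))
print(detect(H, frame1))   # the response peak shifts with the target
```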
Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model
Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal
2016-01-01
In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomous tracking and chasing a moving target. The first main approach in this novel algorithm is the use of a global matching and local tracking approach. In other words, the algorithm initially finds feature correspondences in a way that an improved binary descriptor is developed for global feature matching and an iterative Lucas–Kanade optical flow algorithm is employed for local feature tracking. The second main module is the use of an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, a hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, it is utilized for the first time to deal with outlier features in a visual tracking application), which further maximizes the representation of the target object in which we formulate outlier feature detection as a binary classification problem with the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor with 640 × 512 image resolution and outperforms the most popular state-of-the-art trackers favorably in terms of robustness, efficiency and accuracy. PMID:27589769
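As background for the local feature tracking and outlier rejection stages mentioned above, the sketch below shows a standard forward-backward consistency check around OpenCV's pyramidal Lucas-Kanade tracker: points are tracked forward, re-tracked backward, and kept only when the round trip lands close to the starting point. The window size, threshold, and commented usage are illustrative assumptions; the paper's LGF clustering, dissimilarity measure, and local outlier factor are not reproduced here.

```python
import cv2
import numpy as np

def track_with_fb_check(prev_gray, curr_gray, points, fb_thresh=1.0):
    """Track points forward, re-track backward, and keep only consistent ones."""
    lk = dict(winSize=(21, 21), maxLevel=3)
    fwd, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, points, None, **lk)
    bwd, st2, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, fwd, None, **lk)
    fb_error = np.linalg.norm(points - bwd, axis=2).ravel()
    good = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_error < fb_thresh)
    return fwd[good], points[good]

# Usage (assuming two consecutive grayscale frames and detected corners):
# pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
# tracked, kept = track_with_fb_check(prev_gray, curr_gray, pts)
```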
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity of focusing its attention on the faces of detected pedestrians, collecting snapshot frames of face images, by segmenting and tracking them over time at different resolution. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. As well as the client camera, this sensor is calibrated and the position of the object detected on the image plane reference system is translated into its coordinates referred to the same area map. In the map common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' track and to perform face detection and tracking. The work novelties and strength reside in the cooperative multi-sensor approach, in the high resolution long distance tracking and in the automatic collection of biometric data such as a person face clip for recognition purposes.
NASA Astrophysics Data System (ADS)
Doko, Tomoko; Chen, Wenbo; Higuchi, Hiroyoshi
2016-06-01
Satellite tracking technology has been used to reveal the migration patterns and flyways of migratory birds. In general, bird migration can be classified according to migration status. These statuses include the wintering period, spring migration, breeding period, and autumn migration. To determine the migration status, the periods of these statuses should be individually determined, but there is no objective method to define 'a threshold date' for when an individual bird changes its status. The research objective is to develop an effective and objective method to determine threshold dates of migration status based on satellite-tracked data. The developed method was named the "MATCHED (Migratory Analytical Time Change Easy Detection) method". In order to demonstrate the method, data acquired from satellite-tracked Tundra Swans were used. The MATCHED method is composed of six steps: 1) dataset preparation, 2) time frame creation, 3) automatic identification, 4) visualization of change points, 5) interpretation, and 6) manual correction. Accuracy was tested. In general, the MATCHED method proved powerful for identifying the change points between migration statuses as well as stopovers. Nevertheless, identifying "exact" threshold dates is still challenging. Limitations and applications of this method are discussed.
Feghali, Rosario; Mitiche, Amar
2004-11-01
The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method estimates simultaneously the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine simultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.
Tracking the impact of depression in a perspective-taking task.
Ferguson, Heather J; Cane, James
2017-11-01
Research has identified impairments in Theory of Mind (ToM) abilities in depressed patients, particularly in relation to tasks involving empathetic responses and belief reasoning. We aimed to build on this research by exploring the relationship between depressed mood and cognitive ToM, specifically visual perspective-taking ability. High and low depressed participants were eye-tracked as they completed a perspective-taking task, in which they followed the instructions of a 'director' to move target objects (e.g. a "teapot with spots on") around a grid, in the presence of a temporarily-ambiguous competitor object (e.g. a "teapot with stars on"). Importantly, some of the objects in the grid were occluded from the director's (but not the participant's) view. Results revealed no group-based difference in participants' ability to use perspective cues to identify the target object. All participants were faster to select the target object when the competitor was only available to the participant, compared to when the competitor was mutually available to the participant and director. Eye-tracking measures supported this pattern, revealing that perspective directed participants' visual search immediately upon hearing the ambiguous object's name (e.g. "teapot"). We discuss how these results fit with previous studies that have shown a negative relationship between depression and ToM.
A Standard-Compliant Virtual Meeting System with Active Video Object Tracking
NASA Astrophysics Data System (ADS)
Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting
2002-12-01
This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable, and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video objects extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers place the first segments of the transportation canister around the base of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
Vision-based overlay of a virtual object into real scene for designing room interior
NASA Astrophysics Data System (ADS)
Harasaki, Shunsuke; Saito, Hideo
2001-10-01
In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, interior simulator, in which a virtual (CG) object can be overlaid into a real world space. Interior simulator is developed as an example of an AR application of the proposed method. Using interior simulator, users can visually simulate the location of virtual furniture and articles in the living room so that they can easily design the living room interior without placing real furniture and articles, by viewing from many different locations and orientations in real-time. In our system, two base images of a real world space are captured from two different views for defining a projective coordinate of object 3D space. Then each projective view of a virtual object in the base images is registered interactively. After such coordinate determination, an image sequence of a real world space is captured by a hand-held camera while tracking non-metric measured feature points for overlaying a virtual object. Virtual objects can be overlaid onto the image sequence by taking each relationship between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid into an image sequence of the scene of a living room nearly at video rate (20 frames per second).
A novel framework for objective detection and tracking of TC center from noisy satellite imagery
NASA Astrophysics Data System (ADS)
Johnson, Bibin; Thomas, Sachin; Rani, J. Sheeba
2018-07-01
This paper proposes a novel framework for automatically determining and tracking the center of a tropical cyclone (TC) during its entire life-cycle from the Thermal infrared (TIR) channel data of the geostationary satellite. The proposed method handles meteorological images with noise, missing or partial information due to the seasonal variability and lack of significant spatial or vortex features. To retrieve the cyclone center under these circumstances, a synergistic approach based on objective measures and a Numerical Weather Prediction (NWP) model is proposed. This method employs a spatial gradient scheme to process missing and noisy frames or a spatio-temporal gradient scheme for image sequences that are continuous and contain less noise. The initial estimate of the TC center from the missing imagery is corrected by exploiting an NWP model based post-processing scheme. The validity of the framework is tested on infrared images of different cyclones obtained from various geostationary satellites such as the Meteosat-7, INSAT-3D, Kalpana-1, etc. The computed track is compared with the actual track data obtained from the Joint Typhoon Warning Center (JTWC), and it shows a reduction of mean track error by 11% as compared to other state-of-the-art methods in the presence of missing and noisy frames. The proposed method is also successfully tested for simultaneous retrieval of the TC center from images containing multiple non-overlapping cyclones.
2006-11-01
Asset tracking systems are used in healthcare to find objects--medical devices and other hospital equipment--and to record the physical location of those objects over time. Interest in asset tracking is growing daily, but the technology is still evolving, and so far very few systems have been implemented in hospitals. This situation is likely to change over the next few years, at which point many hospitals will be faced with choosing a system. We evaluated four asset tracking systems from four suppliers: Agility Healthcare Solutions, Ekahau, Radianse, and Versus Technology. We judged the systems' performance for two "levels" of asset tracking. The first level is basic locating--simply determining where in the facility an item can be found. This may be done because the equipment needs routine inspection and preventive maintenance or because it is required for recall purposes; or the equipment may be needed, often urgently, for clinical use. The second level, which is much more involved, is inventory optimization and workflow improvement. This entails analyzing asset utilization based on historical location data to improve the use, distribution, and processing of equipment. None of the evaluated products is ideal for all uses--each has strengths and weaknesses. In many cases, hospitals will have to select a product based on their specific needs. For example, they may need to choose between a supplier whose system is easy to install and a supplier whose tags have a long battery operating life.
Thermal bioaerosol cloud tracking with Bayesian classification
NASA Astrophysics Data System (ADS)
Smith, Christian W.; Dupuis, Julia R.; Schundler, Elizabeth C.; Marinelli, William J.
2017-05-01
The development of a wide area, bioaerosol early warning capability employing existing uncooled thermal imaging systems used for persistent perimeter surveillance is discussed. The capability exploits thermal imagers with other available data streams including meteorological data and employs a recursive Bayesian classifier to detect, track, and classify observed thermal objects with attributes consistent with a bioaerosol plume. Target detection is achieved based on similarity to a phenomenological model which predicts the scene-dependent thermal signature of bioaerosol plumes. Change detection in thermal sensor data is combined with local meteorological data to locate targets with the appropriate thermal characteristics. Target motion is tracked utilizing a Kalman filter and nearly constant velocity motion model for cloud state estimation. Track management is performed using a logic-based upkeep system, and data association is accomplished using a combinatorial optimization technique. Bioaerosol threat classification is determined using a recursive Bayesian classifier to quantify the threat probability of each tracked object. The classifier can accept additional inputs from visible imagers, acoustic sensors, and point biological sensors to improve classification confidence. This capability was successfully demonstrated for bioaerosol simulant releases during field testing at Dugway Proving Grounds. Standoff detection at a range of 700 m was achieved for as little as 500 g of anthrax simulant. Developmental test results will be reviewed for a range of simulant releases, and future development and transition plans for the bioaerosol early warning platform will be discussed.
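The cloud state estimation described above relies on a nearly constant velocity motion model inside a Kalman filter. The sketch below shows a generic filter of that kind; the matrix values, noise levels, and toy measurements are illustrative assumptions, not the fielded system's parameters.

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=0.5, r=2.0):
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)          # state: [x, y, vx, vy]
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)           # only the centroid position is measured
    Q = q * np.eye(4)
    R = r * np.eye(2)
    return F, H, Q, R

def kf_step(x, P, z, F, H, Q, R):
    # Predict with the constant-velocity motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured plume centroid z = [x, y].
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

F, H, Q, R = make_cv_kalman()
x, P = np.array([0., 0., 0., 0.]), np.eye(4) * 10.0
for z in [np.array([1.0, 0.5]), np.array([2.1, 1.1]), np.array([3.0, 1.4])]:
    x, P = kf_step(x, P, z, F, H, Q, R)
print(x)   # estimated position and velocity of the tracked plume
```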
Track-based event recognition in a realistic crowded environment
NASA Astrophysics Data System (ADS)
van Huis, Jasper R.; Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Dijk, Judith; van Rest, Jeroen H.
2014-10-01
Automatic detection of abnormal behavior in CCTV cameras is important to improve the security in crowded environments, such as shopping malls, airports and railway stations. This behavior can be characterized at different time scales, e.g., by small-scale subtle and obvious actions or by large-scale walking patterns and interactions between people. For example, pickpocketing can be recognized by the actual snatch (small scale), when the offender follows the victim, or when the offender interacts with an accomplice before and after the incident (longer time scale). This paper focuses on event recognition by detecting large-scale track-based patterns. Our event recognition method consists of several steps: pedestrian detection, object tracking, track-based feature computation and rule-based event classification. In the experiment, we focused on single track actions (walk, run, loiter, stop, turn) and track interactions (pass, meet, merge, split). The experiment includes a controlled setup, where 10 actors perform these actions. The method is also applied to all tracks that are generated in a crowded shopping mall in a selected time frame. The results show that most of the actions can be detected reliably (on average 90%) at a low false positive rate (1.1%), and that the interactions obtain lower detection rates (70% at 0.3% FP). This method may become one of the components that assist operators to find threatening behavior and enrich the selection of videos that are to be observed.
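A rule-based classification of single-track actions, as mentioned above, can be as simple as thresholding speed and spatial extent computed from a track. The Python sketch below is a hedged illustration of that step; the thresholds, frame rate, and feature set are assumptions for demonstration, not the paper's actual rules.

```python
import numpy as np

def classify_track(points, fps=10.0, stop_thr=0.2, run_thr=2.5, loiter_radius=3.0):
    """points: (N, 2) array of ground-plane positions in metres, one row per frame."""
    steps = np.diff(points, axis=0)
    speeds = np.linalg.norm(steps, axis=1) * fps          # metres per second
    mean_speed = speeds.mean()
    extent = np.linalg.norm(points.max(0) - points.min(0))
    if mean_speed < stop_thr:
        return "stop"
    if mean_speed > run_thr:
        return "run"
    if extent < loiter_radius:                            # moving but staying in place
        return "loiter"
    return "walk"

track = np.cumsum(np.full((50, 2), [0.12, 0.02]), axis=0)  # steady ~1.2 m/s trajectory
print(classify_track(track))                               # -> "walk"
```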
Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J
2014-09-26
This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions.
Gundogdu, Erhan; Ozkan, Huseyin; Alatan, A Aydin
2017-11-01
Correlation filters have been successfully used in visual tracking due to their modeling power and computational efficiency. However, the state-of-the-art correlation filter-based (CFB) tracking algorithms tend to quickly discard the previous poses of the target, since they consider only a single filter in their models. On the contrary, our approach is to register multiple CFB trackers for previous poses and exploit the registered knowledge when an appearance change occurs. To this end, we propose a novel tracking algorithm [of complexity O(D)] based on a large ensemble of CFB trackers. The ensemble [of size O(2^D)] is organized over a binary tree (depth D), and learns the target appearance subspaces such that each constituent tracker becomes an expert of a certain appearance. During tracking, the proposed algorithm combines only the appearance-aware relevant experts to produce boosted tracking decisions. Additionally, we propose a versatile spatial windowing technique to enhance the individual expert trackers. For this purpose, spatial windows are learned for target objects as well as the correlation filters and then the windowed regions are processed for more robust correlations. In our extensive experiments on benchmark datasets, we achieve a substantial performance increase by using the proposed tracking algorithm together with the spatial windowing.
Vision-based algorithms for near-host object detection and multilane sensing
NASA Astrophysics Data System (ADS)
Kenue, Surender K.
1995-01-01
Vision-based sensing can be used for lane sensing, adaptive cruise control, collision warning, and driver performance monitoring functions of intelligent vehicles. Current computer vision algorithms are not robust for handling multiple vehicles in highway scenarios. Several new algorithms are proposed for multi-lane sensing, near-host object detection, vehicle cut-in situations, and specifying regions of interest for object tracking. These algorithms were tested successfully on more than 6000 images taken from real-highway scenes under different daytime lighting conditions.
Assessing the performance of a motion tracking system based on optical joint transform correlation
NASA Astrophysics Data System (ADS)
Elbouz, M.; Alfalou, A.; Brosseau, C.; Ben Haj Yahia, N.; Alam, M. S.
2015-08-01
We present an optimized system specially designed for the tracking and recognition of moving subjects in a confined environment (such as an elderly person remaining at home). In the first step of our study, we use a VanderLugt correlator (VLC) with an adapted pre-processing treatment of the input plane and a post-processing of the correlation plane via a nonlinear function allowing us to make a robust decision. The second step is based on an optical joint transform correlation (JTC)-based system (NZ-NL-correlation JTC) for achieving improved detection and tracking of moving persons in a confined space. The proposed system has been found to have significantly superior discrimination and robustness capabilities, allowing it to detect an unknown target in an input scene and to determine the target's trajectory when this target is in motion. This system offers robust tracking performance of a moving target in several scenarios, such as rotational variation of input faces. Test results obtained using various real life video sequences show that the proposed system is particularly suitable for real-time detection and tracking of moving objects.
A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network
NASA Astrophysics Data System (ADS)
Li, Yiming; Bhanu, Bir
Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.
A similarity retrieval approach for weighted track and ambient field of tropical cyclones
NASA Astrophysics Data System (ADS)
Li, Ying; Xu, Luan; Hu, Bo; Li, Yuejun
2018-03-01
Retrieving historical tropical cyclones (TCs) that have a similar position and hazard intensity to the objective TC is an important means of TC track forecasting and TC disaster assessment. A new similarity retrieval scheme is put forward based on historical TC track data and ambient field data, including ERA-Interim reanalysis and GFS and EC-fine forecasts. It takes account of both TC track similarity and ambient field similarity, and the optimal weight combination is explored subsequently. Results show that both the distance and direction errors of the TC track forecast at the 24-hour timescale follow an approximately U-shaped distribution. They tend to be large when the weight assigned to track similarity is close to 0 or 1.0, while relatively small when the track similarity weight is between 0.2 and 0.7 for the distance error and between 0.3 and 0.6 for the direction error.
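To make the weighted-combination idea concrete, the sketch below scores each historical TC by a weighted sum of a track-similarity term and an ambient-field-similarity term, with the weight w playing the role of the track-similarity weight discussed above. The variable names, normalisation scales, and toy candidate data are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def combined_similarity(track_dist_km, field_rmse, w=0.5,
                        track_scale=500.0, field_scale=5.0):
    """Smaller distances mean higher similarity; both terms are normalised to [0, 1]."""
    s_track = 1.0 - min(track_dist_km / track_scale, 1.0)
    s_field = 1.0 - min(field_rmse / field_scale, 1.0)
    return w * s_track + (1.0 - w) * s_field

# Rank hypothetical historical TCs against the objective TC for a given weight.
candidates = {"TC1998-07": (120.0, 2.1), "TC2004-15": (340.0, 1.2)}
scores = {k: combined_similarity(d, f, w=0.4) for k, (d, f) in candidates.items()}
print(max(scores, key=scores.get))
```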
Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing
2013-01-01
Efficient target tracking algorithms have become a focus of current research on intelligent robots. The main problem of the target tracking process in mobile robots is environmental uncertainty: target states are difficult to estimate, and illumination changes, target shape changes, complex backgrounds, occlusion, and other factors all affect tracking robustness. To further improve the accuracy and reliability of target tracking, we present a novel target tracking algorithm that uses visual saliency and an adaptive support vector machine (ASVM). Furthermore, the algorithm is based on a mixture saliency of image features, including color, brightness, and motion features. The execution process uses visual saliency features, and their common characteristics are expressed as the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm on video sequences where the target objects undergo large changes in pose, scale, and illumination.
Object classification for obstacle avoidance
NASA Astrophysics Data System (ADS)
Regensburger, Uwe; Graefe, Volker
1991-03-01
Object recognition is necessary for any mobile robot operating autonomously in the real world. This paper discusses an object classifier based on a 2-D object model. Obstacle candidates are tracked and analyzed; false alarms generated by the object detector are recognized and rejected. The methods have been implemented on a multi-processor system and tested in real-world experiments. They work reliably under favorable conditions, but problems sometimes occur, e.g., when objects contain many features (edges) or move in front of a structured background.
Tracking Object Existence From an Autonomous Patrol Vehicle
NASA Technical Reports Server (NTRS)
Wolf, Michael; Scharenbroich, Lucas
2011-01-01
An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the uncertainty arising from errors in sensors and upstream processes. However, traditional target tracking methods typically assume a stationary detection volume of interest, whereas in this case, one must make adjustments for being able to see only a small portion of the region of interest and understand when an alert situation has occurred. To track object existence inside and outside the vehicle's sensor range, a probability of existence was defined for each hypothesized object, and this value was updated at every time step in a Bayesian manner based on expected characteristics of the sensor and object and whether that object has been detected in the most recent time step. Then, this value feeds into a sequential probability ratio test (SPRT) to determine the status of the object (suspected, confirmed, or deleted). Alerts are sent upon selected status transitions. Additionally, in order to track objects that move in and out of sensor range and update the probability of existence appropriately, a variable probability of detection has been defined and the hypothesis probability equations have been re-derived to accommodate this change. Unsupervised object tracking is a pervasive issue in automated perception systems. This work could apply to any mobile platform (ground vehicle, sea vessel, air vehicle, or orbiter) that intermittently revisits regions of interest and needs to determine whether anything interesting has changed.
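The probability-of-existence update and its status test can be illustrated with a small Bayesian filter followed by confirm/delete thresholds. The sketch below is a hedged approximation of that logic; the detection and false-alarm probabilities, the thresholds, and the toy detection sequence are assumptions, not the values used in the described system.

```python
# Bayesian probability-of-existence update with a variable probability of detection,
# followed by simple SPRT-style confirm/delete thresholds.
def update_existence(p_exist, detected, p_detect, p_false_alarm=0.05):
    """One Bayesian update of P(object exists) given this time step's detection result."""
    if detected:
        likelihood_exist, likelihood_not = p_detect, p_false_alarm
    else:
        likelihood_exist, likelihood_not = 1.0 - p_detect, 1.0 - p_false_alarm
    num = likelihood_exist * p_exist
    return num / (num + likelihood_not * (1.0 - p_exist))

def sprt_status(p_exist, confirm=0.95, delete=0.05):
    if p_exist >= confirm:
        return "confirmed"
    if p_exist <= delete:
        return "deleted"
    return "suspected"

p = 0.5
for detected, in_range in [(True, True), (True, True), (False, False), (True, True)]:
    # Out of sensor range: probability of detection (and of a false alarm) drops to zero,
    # so the update leaves the existence probability essentially unchanged.
    p = update_existence(p, detected,
                         p_detect=0.8 if in_range else 0.0,
                         p_false_alarm=0.05 if in_range else 0.0)
    print(round(p, 3), sprt_status(p))
```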
Motion-based prediction explains the role of tracking in motion extrapolation.
Khoei, Mina A; Masson, Guillaume S; Perrinet, Laurent U
2013-11-01
During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by blinks of the eye. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion. For instance, it maintains the movement of the eye such as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation to deal with fragmented inputs. In this paper, we have modeled how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that, using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but that it quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we have recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acted at the global level as a gain control mechanism and that we could switch from a smooth regime to a binary tracking behavior where the dot is tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that the tracking behavior deteriorates for sensory noise levels higher than a certain value, where motion coherency and predictability fail to hold longer. In particular, we found that motion-based prediction leads to the emergence of a tracking behavior only when enough information from the trajectory has been accumulated. Then, during tracking, trajectory estimation is robust to blanks even in the presence of relatively high levels of noise. Moreover, we found that tracking is necessary for motion extrapolation; this calls for further experimental work exploring the role of noise in motion extrapolation. Copyright © 2013 Elsevier Ltd. All rights reserved.
Simulation of Telescope Detectivity for Geo Survey and Tracking
NASA Astrophysics Data System (ADS)
Richard, P.
2014-09-01
As the number of space debris objects in Earth orbit increases steadily, the need to survey, track and catalogue them becomes of key importance. In this context, CNES has been using the TAROT Telescopes (Rapid Telescopes for Transient Objects, owned and operated by CNRS) for several years to conduct studies about space surveillance and tracking. Today, two testbeds of services using the TAROT telescopes are running every night: one for GEO situational awareness and the second for debris tracking. In addition to the CNES research activity in the space surveillance and tracking domain, an operational collision avoidance service for LEO and GEO satellites has been in place at CNES for several years. This service, named CAESAR (Conjunction Analysis and Evaluation: Alerts and Recommendations), is used by CNES as well as by external customers. As the optical debris tracking testbed based on the TAROT telescopes is the first step toward an operational provider of GEO measures that could be used by CAESAR, simulations have been done to help choose the sites and types of telescopes that could be added to the GEO survey and debris tracking telescope network. One of the distinctive characteristics of the optical observation of space debris compared to traditional astronomical observation is the need to observe objects at low elevations. The two main reasons for this are the need to observe the GEO belt from non-equatorial sites and the need to observe debris at longitudes far from the telescope longitude. This paper presents the results of simulations of the detectivity for GEO debris of various telescopes and sites, based on models of the GEO belt, the atmosphere and the instruments. One of the conclusions is that clever detection of faint streaks and spread sources by image processing is one of the major keys to improving the detection of debris on the GEO belt.
The research on the mean shift algorithm for target tracking
NASA Astrophysics Data System (ADS)
CAO, Honghong
2017-06-01
The traditional mean shift algorithm for target tracking is effective and highly real-time, but it still has some shortcomings. The traditional mean shift algorithm easily falls into a local optimum during the tracking process, and the effectiveness of the method is weak when the object moves fast. Moreover, because the size of the tracking window never changes, the method fails when the size of the moving object changes; as a result, we propose a new method. We use the particle swarm optimization algorithm to optimize the mean shift algorithm for target tracking; meanwhile, SIFT (scale-invariant feature transform) and an affine transformation make the size of the tracking window adaptive. Finally, we evaluate the method with comparative experiments. The experimental results indicate that the proposed method can effectively track the object while adapting the size of the tracking window.
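For reference, the baseline being improved above is the classic colour-histogram mean-shift tracker. The sketch below shows a minimal OpenCV version of that baseline under illustrative parameters; the PSO optimization and the SIFT/affine window adaptation proposed in the paper are not reproduced, and the file name and initial window in the commented usage are placeholders.

```python
import cv2

def track_mean_shift(video_path, init_window):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    x, y, w, h = init_window
    # Build a hue histogram of the initial target region as the appearance model.
    hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    window = (x, y, w, h)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        # Mean shift moves the fixed-size window towards the mode of the back-projection.
        _, window = cv2.meanShift(back_proj, window, term)
        yield window
    cap.release()

# for box in track_mean_shift("sequence.avi", (200, 150, 60, 80)):
#     print(box)
```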
Ego-Motion and Tracking for Continuous Object Learning: A Brief Survey
2017-09-01
ARL-TR-8167 • SEP 2017. US Army Research Laboratory. Ego-Motion and Tracking for Continuous Object Learning: A Brief Survey, by Jason Owens and Philip Osteen.
Locator-Checker-Scaler Object Tracking Using Spatially Ordered and Weighted Patch Descriptor.
Kim, Han-Ul; Kim, Chang-Su
2017-08-01
In this paper, we propose a simple yet effective object descriptor and a novel tracking algorithm to track a target object accurately. For the object description, we divide the bounding box of a target object into multiple patches and describe them with color and gradient histograms. Then, we determine the foreground weight of each patch to alleviate the impacts of background information in the bounding box. To this end, we perform random walk with restart (RWR) simulation. We then concatenate the weighted patch descriptors to yield the spatially ordered and weighted patch (SOWP) descriptor. For the object tracking, we incorporate the proposed SOWP descriptor into a novel tracking algorithm, which has three components: locator, checker, and scaler (LCS). The locator and the scaler estimate the center location and the size of a target, respectively. The checker determines whether it is safe to adjust the target scale in a current frame. These three components cooperate with one another to achieve robust tracking. Experimental results demonstrate that the proposed LCS tracker achieves excellent performance on recent benchmarks.
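The random walk with restart (RWR) step referred to above can be computed in closed form from a patch affinity graph. The sketch below illustrates that computation on a toy 3x3 patch grid; the graph construction (4-neighbour grid with unit affinities) and the restart placement are assumptions for demonstration, not the paper's exact formulation.

```python
import numpy as np

def rwr_weights(affinity, restart, c=0.85):
    """Closed-form stationary RWR distribution: w = (1 - c) (I - c P)^(-1) r."""
    P = affinity / affinity.sum(axis=0, keepdims=True)   # column-stochastic transitions
    n = P.shape[0]
    return (1 - c) * np.linalg.solve(np.eye(n) - c * P, restart)

# Toy 3x3 patch grid: affinities between 4-neighbours, restart mass on the centre patch.
n = 9
A = np.zeros((n, n))
for i in range(3):
    for j in range(3):
        for di, dj in [(0, 1), (1, 0)]:
            ni, nj = i + di, j + dj
            if ni < 3 and nj < 3:
                a, b = 3 * i + j, 3 * ni + nj
                A[a, b] = A[b, a] = 1.0
r = np.zeros(n); r[4] = 1.0
print(np.round(rwr_weights(A, r), 3).reshape(3, 3))   # the centre patch gets the largest weight
```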
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Young-Keun, E-mail: ykkim@handong.edu; Kim, Kyung-Soo
Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.
Obstacle penetrating dynamic radar imaging system
Romero, Carlos E [Livermore, CA; Zumstein, James E [Livermore, CA; Chang, John T [Danville, CA; Leach, Jr Richard R. [Castro Valley, CA
2006-12-12
An obstacle penetrating dynamic radar imaging system for the detection, tracking, and imaging of an individual, animal, or object comprising a multiplicity of low power ultra wideband radar units that produce a set of return radar signals from the individual, animal, or object, and a processing system for said set of return radar signals for detection, tracking, and imaging of the individual, animal, or object. The system provides a radar video system for detecting and tracking an individual, animal, or object by producing a set of return radar signals from the individual, animal, or object with a multiplicity of low power ultra wideband radar units, and processing said set of return radar signals for detecting and tracking of the individual, animal, or object.
Attention Modulates Spatial Precision in Multiple-Object Tracking.
Srivastava, Nisheeth; Vul, Ed
2016-01-01
We present a computational model of multiple-object tracking that makes trial-level predictions about the allocation of visual attention and the effect of this allocation on observers' ability to track multiple objects simultaneously. This model follows the intuition that increased attention to a location increases the spatial resolution of its internal representation. Using a combination of empirical and computational experiments, we demonstrate the existence of a tight coupling between cognitive and perceptual resources in this task: Low-level tracking of objects generates bottom-up predictions of error likelihood, and high-level attention allocation selectively reduces error probabilities in attended locations while increasing it at non-attended locations. Whereas earlier models of multiple-object tracking have predicted the big picture relationship between stimulus complexity and response accuracy, our approach makes accurate predictions of both the macro-scale effect of target number and velocity on tracking difficulty and micro-scale variations in difficulty across individual trials and targets arising from the idiosyncratic within-trial interactions of targets and distractors. Copyright © 2016 Cognitive Science Society, Inc.
Integrity Determination for Image Rendering Vision Navigation
2016-03-01
identifying an object within a scene, tracking a SIFT feature between frames or matching images and/or features for stereo vision applications. This... object level, either in 2-D or 3-D, versus individual features. There is a breadth of information, largely from the machine vision community...matching or image rendering image correspondence approach is based upon using either 2-D or 3-D object models or templates to perform object detection or
Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus
NASA Astrophysics Data System (ADS)
Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.
2014-09-01
There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving and non-geostationary objects as drifting in the field of view. We would like to achieve high sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" and one that can run in real time on a typical PC. The technique is tailored for searching objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers while identifying inliers. In the pre-processing phase, candidate blobs were selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17-inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center is used to assess the performance of the algorithm. In the second application, a visible imager operated in sidereal mode observes geostationary objects as moving, stars as fixed except for field rotation, and non-geostationary objects as drifting. RANSAC-MT is used to detect the drifter. In this set of data, the drifting space object was detected at a distance of 13,800 km. The AFRL/RH set of data, collected in the stare mode, contained the signature of two geostationary satellites. The signature of a moving object was simulated and added to the sequence of frames to determine the sensitivity in magnitude. The performance compares well with the more intensive TBD algorithms reported in the literature.
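The core idea of fitting a constant-velocity model to candidate detections while rejecting clutter as outliers can be sketched with a small RANSAC loop over (time, x, y) triples. The example below is an illustrative approximation under assumed thresholds and toy data, not the RANSAC-MT implementation itself.

```python
import numpy as np

def ransac_constant_velocity(cands, n_iter=500, tol=1.5, min_inliers=5, seed=0):
    """cands: (N, 3) array of candidate detections as rows (t, x, y)."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        i, j = rng.choice(len(cands), size=2, replace=False)
        (t1, *p1), (t2, *p2) = cands[i], cands[j]
        if t1 == t2:
            continue
        v = (np.array(p2) - np.array(p1)) / (t2 - t1)        # candidate velocity
        pred = np.array(p1) + np.outer(cands[:, 0] - t1, v)   # predicted positions per frame
        err = np.linalg.norm(cands[:, 1:] - pred, axis=1)
        inliers = np.where(err < tol)[0]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (np.array(p1), v, t1)
    if best_inliers is not None and len(best_inliers) >= min_inliers:
        return best_model, best_inliers
    return None, np.array([], int)

# Toy data: a drifter moving at (0.8, -0.3) px/frame plus random clutter blobs.
rng = np.random.default_rng(1)
t = np.arange(10)
track = np.column_stack([t, 5 + 0.8 * t, 40 - 0.3 * t])
clutter = np.column_stack([rng.integers(0, 10, 30), rng.uniform(0, 64, (30, 2))])
model, inliers = ransac_constant_velocity(np.vstack([track, clutter]))
print(len(inliers))   # most of the 10 true detections are recovered as inliers
```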
A software-based tool for video motion tracking in the surgical skills assessment landscape.
Ganni, Sandeep; Botden, Sanne M B I; Chmarra, Magdalena; Goossens, Richard H M; Jakimowicz, Jack J
2018-01-16
The use of motion tracking has been shown to provide an objective assessment in surgical skills training. Current systems, however, require the use of additional equipment or specialised laparoscopic instruments and cameras to extract the data. The aim of this study was to determine the possibility of using a software-based solution to extract the data. 6 expert and 23 novice participants performed a basic laparoscopic cholecystectomy procedure in the operating room. The recorded videos were analysed using Kinovea 0.8.15, and the following parameters were calculated: the path length, average instrument movement and number of sudden or extreme movements. The analysed data showed that experts had a significantly shorter path length (median 127 cm vs. 187 cm, p = 0.01), smaller average movements (median 0.40 cm vs. 0.32 cm, p = 0.002) and fewer sudden movements (median 14.00 vs. 21.61, p = 0.001) than their novice counterparts. The use of software-based video motion tracking of laparoscopic cholecystectomy is a simple and viable method enabling objective assessment of surgical performance. It provides clear discrimination between expert and novice performance.
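Metrics of the kind reported above can be computed from an exported trajectory of instrument-tip positions. The sketch below is a hedged illustration of that post-processing; the frame rate, the "sudden movement" threshold, and the synthetic trajectory are assumptions, and Kinovea itself only exports the coordinates rather than running this code.

```python
import numpy as np

def motion_metrics(xy_cm, sudden_factor=3.0):
    """xy_cm: (N, 2) instrument-tip positions in centimetres, one row per frame."""
    steps = np.linalg.norm(np.diff(xy_cm, axis=0), axis=1)
    path_length = steps.sum()
    average_movement = steps.mean()
    # Count frames whose displacement greatly exceeds the typical step size.
    sudden_moves = int(np.sum(steps > sudden_factor * (steps.mean() + 1e-9)))
    return path_length, average_movement, sudden_moves

traj = np.cumsum(np.random.default_rng(2).normal(0, 0.3, size=(500, 2)), axis=0)
print(motion_metrics(traj))
```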
Neumann, M; Breton, E; Cuvillon, L; Pan, L; Lorenz, C H; de Mathelin, M
2012-01-01
In this paper, an original workflow is presented for MR image plane alignment based on tracking in real-time MR images. A test device consisting of two resonant micro-coils and a passive marker is proposed for detection using image-based algorithms. Micro-coils allow for automated initialization of the object detection in dedicated low flip angle projection images; then the passive marker is tracked in clinical real-time MR images, with alternation between two oblique orthogonal image planes along the test device axis; in case the passive marker is lost in real-time images, the workflow is reinitialized. The proposed workflow was designed to minimize dedicated acquisition time to a single dedicated acquisition in the ideal case (no reinitialization required). First experiments have shown promising results for test-device tracking precision, with a mean position error of 0.79 mm and a mean orientation error of 0.24°.
Matching Real and Synthetic Panoramic Images Using a Variant of Geometric Hashing
NASA Astrophysics Data System (ADS)
Li-Chee-Ming, J.; Armenakis, C.
2017-05-01
This work demonstrates an approach to automatically initialize a visual model-based tracker, and recover from lost tracking, without prior camera pose information. These approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object's texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image from a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images. The corresponding features are to be used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.
Wang, Baofeng; Qi, Zhiquan; Chen, Sizhong; Liu, Zhaodu; Ma, Guocheng
2017-01-01
Vision-based vehicle detection is an important issue for advanced driver assistance systems. In this paper, we present an improved multi-vehicle detection and tracking method using cascade Adaboost and an Adaptive Kalman filter (AKF) with target identity awareness. A cascade Adaboost classifier using Haar-like features was built for vehicle detection, followed by a more comprehensive verification process which could refine the vehicle hypothesis in terms of both location and dimension. In vehicle tracking, each vehicle was tracked with an independent identity by an Adaptive Kalman filter in collaboration with a data association approach. The AKF adaptively adjusted the measurement and process noise covariance through on-line stochastic modelling to compensate for dynamics changes. The data association correctly assigned different detections to tracks using the global nearest neighbour (GNN) algorithm while considering the local validation. During tracking, a temporal context based track management was proposed to decide whether to initiate, maintain or terminate the tracks of different objects, thus suppressing sparse false alarms and compensating for temporary detection failures. Finally, the proposed method was tested on various challenging real roads, and the experimental results showed that the vehicle detection performance was greatly improved with higher accuracy and robustness.
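The detection stage described above is a boosted Haar cascade applied at multiple scales. The sketch below shows the generic OpenCV form of that stage; the cascade file name is a placeholder for a vehicle cascade trained offline, the parameters are illustrative, and the verification and AKF tracking stages of the paper are not reproduced here.

```python
import cv2

def detect_vehicles(frame_bgr, cascade_path="vehicle_cascade.xml"):
    cascade = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)
    # Multi-scale sliding-window detection with the boosted Haar cascade.
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                     minSize=(24, 24))
    return [(x, y, w, h) for (x, y, w, h) in boxes]

# frame = cv2.imread("road_frame.png")
# for box in detect_vehicles(frame):
#     print(box)
```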
Localized Detection of Abandoned Luggage
NASA Astrophysics Data System (ADS)
Chang, Jing-Ying; Liao, Huei-Hung; Chen, Liang-Gee
2010-12-01
Abandoned luggage represents a potential threat to public safety. Identifying objects as luggage, identifying the owners of such objects, and identifying whether owners have left luggage behind are the three main problems requiring solution. This paper proposes two techniques: "foreground-mask sampling" to detect luggage with arbitrary appearance and "selective tracking" to locate and track owners based solely on the neighborhood of the luggage. Experimental results demonstrate that once an owner abandons luggage and leaves the scene, the alarm fires within a few seconds. The average processing speed of the approach is 17.37 frames per second, which is sufficient for real world applications.
Serrano-Gotarredona, Rafael; Oster, Matthias; Lichtsteiner, Patrick; Linares-Barranco, Alejandro; Paz-Vicente, Rafael; Gomez-Rodriguez, Francisco; Camunas-Mesa, Luis; Berner, Raphael; Rivas-Perez, Manuel; Delbruck, Tobi; Liu, Shih-Chii; Douglas, Rodney; Hafliger, Philipp; Jimenez-Moreno, Gabriel; Civit Ballcels, Anton; Serrano-Gotarredona, Teresa; Acosta-Jimenez, Antonio J; Linares-Barranco, Bernabé
2009-09-01
This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. It has four custom mixed-signal AER chips, five custom digital AER interface components, 45k neurons (spiking cells), up to 5M synapses, performs 12G synaptic operations per second, and achieves millisecond object recognition and tracking latencies.
Compressed normalized block difference for object tracking
NASA Astrophysics Data System (ADS)
Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge
2018-04-01
Feature extraction is very important for robust and real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, all existing compressive trackers have been based on compressed Haar-like features, and how to compress other, more powerful high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature was proposed. To resist noise more effectively than the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature can be obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on compressed Haar-like features, in terms of AUC, SR and precision.
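A minimal sketch of the compression step described above, under the assumption that a normalized block difference is computed as (a - b)/(a + b) over pairs of block means and then projected with a sparse random Gaussian measurement matrix; the block size, sparsity and output dimension are illustrative choices, not the paper's values.

```python
import numpy as np

def sparse_gaussian_matrix(m, n, density=0.1, rng=None):
    """Sparse random Gaussian projection: most entries are zero, the rest ~ N(0, 1)."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random((m, n)) < density
    return np.where(mask, rng.standard_normal((m, n)), 0.0)

def block_difference_features(img, block=4):
    """Normalized differences between mean intensities of all block pairs,
    a stand-in for the NBD idea of extending NPD's two pixels to two blocks."""
    h, w = img.shape
    means = img[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3)).ravel()
    a, b = means[:, None], means[None, :]
    d = (a - b) / (a + b + 1e-6)
    iu = np.triu_indices(len(means), k=1)
    return d[iu]                                  # high-dimensional feature vector

rng = np.random.default_rng(0)
patch = rng.random((32, 32))                      # stand-in for an image patch
f = block_difference_features(patch)              # high-dimensional NBD-like features
Phi = sparse_gaussian_matrix(50, f.size)          # compressive measurement matrix
compressed = Phi @ f                              # low-dimensional CNBD-like feature
```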
Multi person detection and tracking based on hierarchical level-set method
NASA Astrophysics Data System (ADS)
Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid
2018-04-01
In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to effectively detect objects. The persons are tracked in each frame of the sequence by minimizing an energy functional that combines color, texture and shape information. These features are encoded in a covariance matrix used as a region descriptor. The method is fully automated, without the need to manually specify the initial contour of the level set. It is based on combined person detection and background subtraction methods. The edge-based term is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries and inhibit region fusion. The computational cost of the level set is reduced by using a narrow-band technique. Experiments on challenging video sequences show the effectiveness of the proposed method.
An experimental comparison of online object-tracking algorithms
NASA Astrophysics Data System (ADS)
Wang, Qing; Chen, Feng; Xu, Wenli; Yang, Ming-Hsuan
2011-09-01
This paper reviews and evaluates several state-of-the-art online object tracking algorithms. Notwithstanding decades of efforts, object tracking remains a challenging problem due to factors such as illumination, pose, scale, deformation, motion blur, noise, and occlusion. To account for appearance change, most recent tracking algorithms focus on robust object representations and effective state prediction. In this paper, we analyze the components of each tracking method and identify their key roles in dealing with specific challenges, thereby shedding light on how to choose and design algorithms for different situations. We compare state-of-the-art online tracking methods including the IVT, VRT, FragT, BoostT, SemiT, BeSemiT, L1T, MILT, VTD and TLD algorithms on numerous challenging sequences, and evaluate them with different performance metrics. The qualitative and quantitative comparative results demonstrate the strengths and weaknesses of these algorithms.
Investigation on microfluidic particles manipulation by holographic 3D tracking strategies
NASA Astrophysics Data System (ADS)
Cacace, Teresa; Paturzo, Melania; Memmolo, Pasquale; Vassalli, Massimo; Fraldi, Massimiliano; Mensitieri, Giuseppe; Ferraro, Pietro
2017-06-01
We demonstrate a 3D holographic tracking method to investigate particle motion in a microfluidic channel, both when the particles are unperturbed and when their migration is induced through microfluidic manipulation. Digital holography (DH) in microscopy is a full-field, label-free imaging technique able to provide quantitative phase contrast. The employed 3D tracking method is articulated in steps. First, displacements along the optical axis are assessed by numerical refocusing criteria; in particular, an automatic refocusing method based on a contrast criterion is implemented to recover the particles' axial positions. Then, the transverse position of the in-focus object is evaluated through quantitative phase-map segmentation and a centroid-based 2D tracking strategy. DH is thus suggested as a powerful approach for controlling the manipulation of particles and biological samples, as well as a possible aid to the precise design and implementation of advanced lab-on-chip microfluidic devices.
The role of "rescue saccades" in tracking objects through occlusions.
Zelinsky, Gregory J; Todor, Andrei
2010-12-29
We hypothesize that our ability to track objects through occlusions is mediated by timely assistance from gaze in the form of "rescue saccades"-eye movements to tracked objects that are in danger of being lost due to impending occlusion. Observers tracked 2-4 target sharks (out of 9) for 20 s as they swam through a rendered 3D underwater scene. Targets were either allowed to enter into occlusions (occlusion trials) or not (no occlusion trials). Tracking accuracy with 2-3 targets was ≥ 92% regardless of target occlusion but dropped to 74% on occlusion trials with four targets (no occlusion trials remained accurate; 83%). This pattern was mirrored in the frequency of rescue saccades. Rescue saccades accompanied approximately 50% of the Track 2-3 target occlusions, but only 34% of the Track 4 occlusions. Their frequency also decreased with increasing distance between a target and the nearest other object, suggesting that it is the potential for target confusion that summons a rescue saccade, not occlusion itself. These findings provide evidence for a tracking system that monitors for events that might cause track loss (e.g., occlusions) and requests help from the oculomotor system to resolve these momentary crises. As the number of crises increases with the number of targets, some requests for help go unsatisfied, resulting in degraded tracking.
Multiple Objects Fusion Tracker Using a Matching Network for Adaptively Represented Instance Pairs
Oh, Sang-Il; Kang, Hang-Bong
2017-01-01
Multiple-object tracking is affected by various sources of distortion, such as occlusion, illumination variations and motion changes. Overcoming these distortions by tracking on RGB frames alone, e.g., by shifting, has limitations because of the appearance distortions present in RGB frames. To overcome these distortions, we propose a multiple-object fusion tracker (MOFT), which uses a combination of 3D point clouds and corresponding RGB frames. The MOFT uses a matching function, initialized on large-scale external sequences, to determine which candidates in the current frame match the target object in the previous frame. After tracking over a few frames, the initialized matching function is fine-tuned according to the appearance models of the target objects. The fine-tuning process of the matching function is constructed in a structured form with diverse matching-function branches. In general multiple-object tracking situations, the scale of targets in a scene varies with the distance between the target objects and the sensors. If target objects at various scales are represented with the same strategy, information is lost for some representations of the target objects. In this paper, the output map of a convolutional layer from a pre-trained convolutional neural network is used to adaptively represent instances without information loss. In addition, MOFT fuses the tracking results obtained from each modality at the decision level, using basic belief assignment to compensate for the tracking failures of each modality, rather than fusing modalities by selectively using the features of each modality. Experimental results indicate that the proposed tracker provides state-of-the-art performance on the multiple object tracking (MOT) and KITTI benchmarks. PMID:28420194
NASA Astrophysics Data System (ADS)
Leckebusch, G. C.; Befort, D. J.; Kruschke, T.
2016-12-01
Although only ca. 12% of the global insured losses of natural disasters occurred in Asia, there are two major reasons to be concerned about risks in Asia: a) the fraction of loss events was substantially higher, at 39%, of which 94% were due to atmospheric processes; b) Asia, and especially China, is undergoing rapid transitions, and the insurance market in particular is growing quickly. In order to allow for the estimation of potential future (loss) impacts in East Asia, in this study we further developed a feature tracking system based on extreme wind speed occurrences, originally developed for extra-tropical cyclones (Leckebusch et al., 2008), and applied it to tropical cyclones. In principle, wind fields are identified and tracked once a coherent exceedance of local percentile thresholds is identified. The focus on severe wind impact allows an objective link between the strength of a cyclone and its potential damages over land. The wind tracking is designed to be applicable also to coarse-gridded AOGCM simulations. In the presented configuration, the wind tracking algorithm is applied to the Japanese reanalysis (JRA55), and TC identification is based on 850 hPa wind speeds (6 h resolution) from 1979 to 2014 over the western North Pacific region. For validation, the IBTrACS Best Track archive version v03r8 is used. Out of all 904 observed tracks, about 62% can be matched to at least one windstorm event identified in JRA55. It is found that the relative number of matched best tracks increases with maximum intensity. Thus, a positive matching (hit rate) of above 98% for Violent Typhoons (VTY), above 90% for Very Strong Typhoons (VSTY), about 75% for Typhoons (TY), and still some 50% for less intense TCs (TD, TS, STS) is found. This result is extremely encouraging for applying this technique to AOGCM outputs and deriving information about affected regions and intensity-frequency distributions potentially changed under future climate conditions.
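A toy sketch of the percentile-exceedance tracking idea (not the Leckebusch et al. algorithm itself): grid cells exceeding a local threshold are grouped into coherent objects and matched across time steps by spatial overlap. The grid, thresholds and wind fields below are synthetic.

```python
import numpy as np
from scipy import ndimage

def exceedance_objects(wind, local_threshold):
    """Label spatially coherent regions where wind exceeds the local percentile threshold."""
    labels, n = ndimage.label(wind > local_threshold)
    return labels, n

def track_by_overlap(labels_t0, labels_t1):
    """Match windstorm objects in consecutive time steps by spatial overlap."""
    matches = {}
    for obj in range(1, labels_t0.max() + 1):
        overlap = labels_t1[labels_t0 == obj]
        overlap = overlap[overlap > 0]
        if overlap.size:
            matches[obj] = int(np.bincount(overlap).argmax())   # most-overlapping successor
    return matches

rng = np.random.default_rng(1)
q98 = np.full((40, 60), 20.0)                # stand-in for local 98th-percentile thresholds [m/s]
wind0 = rng.gamma(2.0, 5.0, size=(40, 60))   # synthetic wind snapshot
wind1 = np.roll(wind0, shift=3, axis=1)      # same field advected eastward one time step later
l0, _ = exceedance_objects(wind0, q98)
l1, _ = exceedance_objects(wind1, q98)
print(track_by_overlap(l0, l1))              # object id at t0 -> matched id at t1
```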
Grouping and trajectory storage in multiple object tracking: impairments due to common item motions.
Suganuma, Mutsumi; Yokosawa, Kazuhiko
2006-01-01
In our natural viewing, we notice that objects change their locations across space and time. However, there has been relatively little consideration of the role of motion information in the construction and maintenance of object representations. We investigated this question in the context of the multiple object tracking (MOT) paradigm, wherein observers must keep track of target objects as they move randomly amid featurally identical distractors. In three experiments, we observed impairments in tracking ability when the motions of the target and distractor items shared particular properties. Specifically, we observed impairments when the target and distractor items were in a chasing relationship or moved in a uniform direction. Surprisingly, tracking ability was impaired by these manipulations even when observers failed to notice them. Our results suggest that differentiable trajectory information is an important factor in successful performance of MOT tasks. More generally, these results suggest that various types of common motion can serve as cues to form more global object representations even in the absence of other grouping cues.
NASA Astrophysics Data System (ADS)
Shao, Rongjun; Qiu, Lirong; Yang, Jiamiao; Zhao, Weiqian; Zhang, Xin
2013-12-01
We have proposed a component-parameter measuring method based on differential confocal focusing theory. In order to improve the positioning precision of the laser differential confocal component parameters measurement system (LDDCPMS), this paper presents a data processing method based on tracking the light spot. To reduce the error caused by movement of the light spot while collecting the axial intensity signal, an image centroiding algorithm is used to find and track the center of the Airy disk in the images collected by the laser differential confocal system. To weaken the influence of higher-harmonic noise during the measurement, a Gaussian filter is used to process the axial intensity signal. Finally, the zero point corresponding to the focus of the objective in the differential confocal system is obtained by linear fitting of the differential confocal axial intensity data. Preliminary experiments indicate that the method based on tracking the light spot can accurately collect the axial intensity response signal of the virtual pinhole and improve the anti-interference ability of the system, thereby improving the system's positioning accuracy.
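A simplified sketch of the processing steps described above, assuming the Airy-disk images and the two detector responses are available as arrays: an intensity-weighted centroid tracks the spot, and the Gaussian-filtered differential signal is fitted linearly around its zero crossing. All parameters are illustrative, not the LDDCPMS implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, center_of_mass

def spot_centroid(frame):
    """Intensity-weighted centroid of the (background-subtracted) Airy-disk image."""
    f = frame - frame.min()
    return center_of_mass(f)                         # (row, col), sub-pixel position

def differential_zero(z, intensity_a, intensity_b, sigma=2.0):
    """Smooth the two axial responses, form the differential signal and
    locate its zero crossing by a local linear fit."""
    d = gaussian_filter1d(intensity_a, sigma) - gaussian_filter1d(intensity_b, sigma)
    k = int(np.argmin(np.abs(d)))                    # sample nearest to zero
    lo, hi = max(k - 3, 0), min(k + 4, len(z))
    slope, intercept = np.polyfit(z[lo:hi], d[lo:hi], 1)
    return -intercept / slope                        # axial position of the focus
```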
NASA Astrophysics Data System (ADS)
Zimmer, P.; McGraw, J. T.; Ackermann, M. R.
There is considerable interest in the capability to discover and monitor small objects (d < 20 cm) in geosynchronous (GEO) and near-GEO orbital regimes using small, ground-based optical telescopes (D < 0.5 m). The threat of such objects is clear. Small telescopes have an unrivaled cost advantage and, under ideal lighting and sky conditions, have the capability of detecting faint objects. This combination of conditions, however, is relatively rare, making routine and persistent surveillance more challenging. In a truly geostationary orbit, a small object is easy to detect because its apparent rate of motion is nearly zero for a ground-based observer, and signal accumulation occurs as it would for more traditional sidereal-tracked astronomical observations. In this regime, though, small objects are not expected to be in controlled or predictable orbits, thus a range of inclinations and eccentricities is possible. This results in a range of apparent angular rates and directions that must be surveilled. This firmly establishes this task as uncued or blind surveillance. Detections in this case are subject to what is commonly called “trailing loss,” where the signal from the object does not accumulate in a fixed detection element, resulting in far lower sensitivity than for a similar object optimally tracked. We review some of the limits of detecting these objects under less than ideal observing conditions, subject further to the current limitations based on technological and operational realities. We demonstrate progress towards this goal using telescopes much smaller than normally considered viable for this task using novel detection and analysis techniques.
Temporal and Location Based RFID Event Data Management and Processing
NASA Astrophysics Data System (ADS)
Wang, Fusheng; Liu, Peiya
Advances in sensor and RFID technology provide significant new power for humans to sense, understand and manage the world. RFID provides fast data collection with precise identification of objects with unique IDs and without line of sight, so it can be used for identifying, locating, tracking and monitoring physical objects. Despite these benefits, RFID poses many challenges for data processing and management. RFID data are temporal and history-oriented, multi-dimensional, and carry implicit semantics. Moreover, RFID applications are heterogeneous. RFID data management or data warehouse systems need to support generic and expressive data modeling for tracking and monitoring physical objects, and provide automated data interpretation and processing. We develop a powerful temporal and location-oriented data model for modeling and querying RFID data, and a declarative event- and rule-based framework for automated complex RFID event processing. The approach is general and can easily be adapted to different RFID-enabled applications, thus significantly reducing the cost of RFID data integration.
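A minimal illustration of the kind of temporal, location-oriented processing described above (not the authors' data model or rule language): raw reads are collapsed into per-tag stay intervals, and a simple declarative-style rule flags tags that have not been seen for a timeout period. The field names and the timeout value are assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class RfidRead:
    tag_id: str
    reader_id: str     # stands in for a location
    ts: float          # seconds

def location_history(reads: List[RfidRead]) -> Dict[str, List[Tuple[str, float, float]]]:
    """Collapse raw reads into temporal 'stay' intervals per tag: (location, t_enter, t_leave)."""
    stays: Dict[str, List[Tuple[str, float, float]]] = {}
    for r in sorted(reads, key=lambda r: r.ts):
        h = stays.setdefault(r.tag_id, [])
        if h and h[-1][0] == r.reader_id:
            h[-1] = (r.reader_id, h[-1][1], r.ts)    # extend the current stay
        else:
            h.append((r.reader_id, r.ts, r.ts))      # enter a new location
    return stays

def missing_objects(stays, now, timeout=60.0):
    """Rule: flag any tag not seen by any reader for longer than `timeout` seconds."""
    return [tag for tag, h in stays.items() if now - h[-1][2] > timeout]
```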
Improved Space Surveillance Network (SSN) Scheduling using Artificial Intelligence Techniques
NASA Astrophysics Data System (ADS)
Stottler, D.
There are close to 20,000 cataloged manmade objects in space, the large majority of which are not active, functioning satellites. These are tracked by phased array and mechanical radars and ground and space-based optical telescopes, collectively known as the Space Surveillance Network (SSN). A better SSN schedule of observations could, using exactly the same legacy sensor resources, improve space catalog accuracy through more complementary tracking, provide better responsiveness to real-time changes, better track small debris in low earth orbit (LEO) through efficient use of applicable sensors, efficiently track deep space (DS) frequent revisit objects, handle increased numbers of objects and new types of sensors, and take advantage of future improved communication and control to globally optimize the SSN schedule. We have developed a scheduling algorithm that takes as input the space catalog and the associated covariance matrices and produces a globally optimized schedule for each sensor site as to what objects to observe and when. This algorithm is able to schedule more observations with the same sensor resources and have those observations be more complementary, in terms of the precision with which each orbit metric is known, to produce a satellite observation schedule that, when executed, minimizes the covariances across the entire space object catalog. If used operationally, the results would be significantly increased accuracy of the space catalog and fewer lost objects, using the same set of sensor resources. This approach inherently can also trade off fewer high-priority tasks against more lower-priority tasks, when there is benefit in doing so. Currently, the project has completed a prototyping and feasibility study, using open-source data on the SSN's sensors, that showed a significant reduction in orbit metric covariances. The algorithm techniques and results will be discussed along with future directions for the research.
Object acquisition and tracking for space-based surveillance
NASA Astrophysics Data System (ADS)
1991-11-01
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase 1) and N00014-89-C-0015 (Phase 2). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time dependent, object dependent, and data dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
Object acquisition and tracking for space-based surveillance. Final report, Dec 88-May 90
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-11-27
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase I) and N00014-89-C-0015 (Phase II). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time-dependent, object-dependent, and data-dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
2009-09-23
CAPE CANAVERAL, Fla. – Approaching rain clouds at dawn hover over Central Florida's east coast, effectively causing the scrub of the Space Tracking and Surveillance System - Demonstrator spacecraft from Launch Pad 17-B at Cape Canaveral Air Force Station. STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detection, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 24. Photo credit: NASA/Jack Pfaller
2009-08-27
CAPE CANAVERAL, Fla. – The enclosed Space Tracking and Surveillance System – Demonstrators, or STSS-Demo, spacecraft arrives on Cape Canaveral Air Force Station's Launch Pad 17-B. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jack Pfaller
Nonlinear Motion Tracking by Deep Learning Architecture
NASA Astrophysics Data System (ADS)
Verma, Arnav; Samaiya, Devesh; Gupta, Karunesh K.
2018-03-01
In the world of Artificial Intelligence, object motion tracking is one of the major problems. Extensive research is being carried out to track people in crowds. This paper presents a unique technique for nonlinear motion tracking in the absence of prior knowledge of the nature of the nonlinear path that the tracked object may follow. We achieve this by first obtaining the centroid of the object and then using the centroid as the current example for a recurrent neural network trained using real-time recurrent learning. We have tweaked the standard algorithm slightly, accumulating the gradient over a few previous iterations instead of using just the current iteration as is the norm. We show that, for a single object, such a recurrent neural network is highly capable of approximating the nonlinearity of its path.
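The sketch below illustrates only the gradient-accumulation tweak mentioned above, on a deliberately simplified linear centroid predictor rather than the paper's RTRL-trained recurrent network; the history length, learning rate and accumulation window are illustrative assumptions.

```python
import numpy as np

class OnlineCentroidPredictor:
    """Toy online predictor of the next object centroid from the last p centroids.
    It illustrates one idea only: gradients are accumulated over `accum` iterations
    before a weight update, instead of updating on every step."""
    def __init__(self, p=4, lr=1e-2, accum=3):
        self.W = np.zeros((2, 2 * p))      # maps stacked past centroids -> next centroid
        self.p, self.lr, self.accum = p, lr, accum
        self.grad = np.zeros_like(self.W)
        self.count = 0

    def step(self, history, target):
        """history: list of at least p (x, y) centroids; target: observed next centroid."""
        x = np.concatenate(history[-self.p:])          # shape (2p,)
        pred = self.W @ x
        err = pred - np.asarray(target, float)
        self.grad += np.outer(err, x)                  # accumulate dL/dW for this step
        self.count += 1
        if self.count == self.accum:                   # apply the averaged gradient
            self.W -= self.lr * self.grad / self.accum
            self.grad[:] = 0.0
            self.count = 0
        return pred
```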
NASA Technical Reports Server (NTRS)
Siapkaras, A.
1977-01-01
A computational method to deal with the multidimensional nature of tracking and/or monitoring tasks is developed. Operator-centered variables, including the operator's perception of the task, are considered. Matrix ratings are defined based on multidimensional scaling techniques and multivariate analysis. The method consists of two distinct steps: (1) determining the mathematical space of subjective judgements of a certain individual (or group of evaluators) for a given set of tasks and experimental conditions; and (2) relating this space to both the task variables and the objective performance criteria used. Results for a variety of second-order tracking tasks with smoothed noise-driven inputs indicate that: (1) many of the internally perceived task variables form a nonorthogonal set; and (2) the structure of the subjective space varies among groups of individuals according to the degree of familiarity they have with such tasks.
Noise reduction in urban LRT networks by combining track based solutions.
Vogiatzis, Konstantinos; Vanhonacker, Patrick
2016-10-15
The overall objective of the Quiet-Track project is to provide step-changing track-based noise mitigation and maintenance schemes for railway rolling noise in LRT (Light Rail Transit) networks. WP 4 in particular focuses on the combination of existing track-based solutions to yield a global performance of at least 6 dB(A). The validation was carried out using a track section in the network of Athens Metro Line 1 with an existing outside concrete slab track (RHEDA track) where high airborne rolling noise was observed. The procedure for the selection of mitigation measures is based on numerical simulations, combining the WRNOISE and IMMI software tools for noise prediction with experimental determination of the required track and vehicle parameters (e.g., rail and wheel roughness). The availability of a detailed rolling noise calculation procedure allows for the detailed design of measures and the ranking of individual measures. It achieves this by including the modelling of the wheel/rail source intensity and of the noise propagation, with the ability to evaluate the effect of modifications at source level (e.g., grinding, rail dampers, wheel dampers, change in resiliency of wheels and/or rail fixation) and of modifications in the propagation path (absorption at the track base, noise barriers, screening). A relevant combination of existing solutions was selected as a function of the simulation results. Three distinct existing solutions were designed in detail, aiming at high rolling noise attenuation without affecting the normal operation of the metro system: Action 1: implementation of sound absorbing precast elements (panel type) on the track bed, Action 2: implementation of an absorbing noise barrier with a height of 1.10-1.20 m above rail level, and Action 3: installation of rail dampers. The selected solutions were implemented on site and the global performance was measured step by step for comparison with simulations. Copyright © 2015 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
O'Hearn, Kirsten; Hoffman, James E.; Landau, Barbara
2010-01-01
The ability to track moving objects, a crucial skill for mature performance on everyday spatial tasks, has been hypothesized to require a specialized mechanism that may be available in infancy (i.e. indexes). Consistent with the idea of specialization, our previous work showed that object tracking was more impaired than a matched spatial memory…
Assessing Multiple Object Tracking in Young Children Using a Game
ERIC Educational Resources Information Center
Ryokai, Kimiko; Farzin, Faraz; Kaltman, Eric; Niemeyer, Greg
2013-01-01
Visual tracking of multiple objects in a complex scene is a critical survival skill. When we attempt to safely cross a busy street, follow a ball's position during a sporting event, or monitor children in a busy playground, we rely on our brain's capacity to selectively attend to and track the position of specific objects in a dynamic scene. This…
An object-based approach to weather analysis and its applications
NASA Astrophysics Data System (ADS)
Troemel, Silke; Diederich, Malte; Horvath, Akos; Simmer, Clemens; Kumjian, Matthew
2013-04-01
The research group 'Object-based Analysis and SEamless prediction' (OASE) within the Hans Ertel Centre for Weather Research programme (HErZ) pursues an object-based approach to weather analysis. The object-based tracking approach adopts the Lagrange perspective by identifying and following the development of convective events over the course of their lifetime. Prerequisites of the object-based analysis are a highly resolved observational database and a tracking algorithm. A near real-time radar and satellite remote sensing-driven 3D observation-microphysics composite covering Germany, currently under development, contains gridded observations and estimated microphysical quantities. A 3D scale-space tracking identifies convective rain events in the dual composite and monitors their development over the course of their lifetime. The OASE group exploits the object-based approach in several fields of application: (1) for a better understanding and analysis of precipitation processes responsible for extreme weather events, (2) in nowcasting, (3) as a novel approach for validation of meso-γ atmospheric models, and (4) in data assimilation. Results from the different fields of application will be presented. The basic idea of the object-based approach is to identify a small set of radar- and satellite-derived descriptors that characterize the temporal development of the precipitation systems that constitute the objects. So-called proxies of the precipitation process are, e.g., the temporal change of the bright band, vertically extensive columns of enhanced differential reflectivity (ZDR), or the cloud-top temperatures and heights identified in the 4D field of ground-based radar reflectivities and satellite retrievals generated by a cell during its lifetime. They quantify (micro-)physical differences among rain events and relate to the precipitation yield. For example, analyses of the informative content of ZDR columns as precursors of storm evolution will be presented to demonstrate the use of such system-oriented predictors for nowcasting. Columns of differential reflectivity ZDR measured by polarimetric weather radars are prominent signatures associated with thunderstorm updrafts. Since greater vertical velocities can loft larger drops and water-coated ice particles to higher altitudes above the environmental freezing level, the integrated ZDR column above the freezing level increases with increasing updraft intensity. Validation of atmospheric models concerning precipitation representation or prediction is usually confined to comparisons of precipitation fields or their temporal and spatial statistics. A comparison of the rain rates alone, however, does not immediately explain discrepancies between models and observations, because similar rain rates might be produced by different processes. Within the event-based approach for model validation, both observed and modeled rain events are analyzed by means of proxies of the precipitation process. Both sets of descriptors represent the basis for model validation, since different leading descriptors - in a statistical sense - hint at process formulations potentially responsible for model failures.
Intelligence-aided multitarget tracking for urban operations - a case study: counter terrorism
NASA Astrophysics Data System (ADS)
Sathyan, T.; Bharadwaj, K.; Sinha, A.; Kirubarajan, T.
2006-05-01
In this paper, we present a framework for tracking multiple mobile targets in an urban environment based on data from multiple sources of information, and for evaluating the threat these targets pose to assets of interest (AOI). The motivating scenario is one where we have to track many targets, each with different (unknown) destinations and/or intents. The tracking algorithm is aided by information about the urban environment (e.g., road maps, buildings, hideouts), and strategic and intelligence data. The tracking algorithm needs to be dynamic in that it has to handle a time-varying number of targets and the ever-changing urban environment depending on the locations of the moving objects and AOI. Our solution uses the variable structure interacting multiple model (VS-IMM) estimator, which has been shown to be effective in tracking targets based on road map information. Intelligence information is represented as target class information and incorporated through a combined likelihood calculation within the VS-IMM estimator. In addition, we develop a model to calculate the probability that a particular target can attack a given AOI. This model for the calculation of the probability of attack is based on the target kinematic and class information. Simulation results are presented to demonstrate the operation of the proposed framework on a representative scenario.
Accuracy analysis for triangulation and tracking based on time-multiplexed structured light.
Wagner, Benjamin; Stüber, Patrick; Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris
2014-08-01
The authors' research group is currently developing a new optical head tracking system for intracranial radiosurgery. This tracking system utilizes infrared laser light to measure features of the soft tissue on the patient's forehead. These features are intended to offer highly accurate registration with respect to the rigid skull structure by means of compensating for the soft tissue. In this context, the system also has to be able to quickly generate accurate reconstructions of the skin surface. For this purpose, the authors have developed a laser scanning device which uses time-multiplexed structured light to triangulate surface points. The accuracy of the authors' laser scanning device is analyzed and compared for different triangulation methods. These methods are given by the Linear-Eigen method and a nonlinear least squares method. Since Microsoft's Kinect camera represents an alternative for fast surface reconstruction, the authors' results are also compared to the triangulation accuracy of the Kinect device. Moreover, the authors' laser scanning device was used for tracking of a rigid object to determine how this process is influenced by the remaining triangulation errors. For this experiment, the scanning device was mounted to the end-effector of a robot to be able to calculate a ground truth for the tracking. The analysis of the triangulation accuracy of the authors' laser scanning device revealed a root mean square (RMS) error of 0.16 mm. In comparison, the analysis of the triangulation accuracy of the Kinect device revealed a RMS error of 0.89 mm. It turned out that the remaining triangulation errors only cause small inaccuracies for the tracking of a rigid object. Here, the tracking accuracy was given by a RMS translational error of 0.33 mm and a RMS rotational error of 0.12°. This paper shows that time-multiplexed structured light can be used to generate highly accurate reconstructions of surfaces. Furthermore, the reconstructed point sets can be used for high-accuracy tracking of objects, meeting the strict requirements of intracranial radiosurgery.
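For reference, a standard Linear-Eigen (DLT) triangulation and the reprojection error that a nonlinear least-squares refinement would minimize can be sketched as follows; the projection matrices and pixel coordinates are assumed to come from the device calibration, and this is not the authors' implementation.

```python
import numpy as np

def triangulate_linear_eigen(P1, P2, x1, x2):
    """Linear-Eigen triangulation: stack the DLT constraints from two views and take
    the right singular vector with the smallest singular value as the 3D point.
    P1, P2: 3x4 projection matrices; x1, x2: 2D pixel coordinates of the same point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def reprojection_rms(P_list, x_list, X):
    """RMS reprojection error, the quantity a nonlinear least-squares refinement minimizes."""
    Xh = np.append(X, 1.0)
    errs = []
    for P, x in zip(P_list, x_list):
        proj = P @ Xh
        errs.append(np.linalg.norm(proj[:2] / proj[2] - np.asarray(x, float)))
    return float(np.sqrt(np.mean(np.square(errs))))
```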
NASA Technical Reports Server (NTRS)
Phillips, Veronica J.
2017-01-01
This STI record is for a fact sheet on the Space Object Query Tool being created by the MDC. When planning launches, NASA must first factor in the tens of thousands of objects already in orbit around the Earth. The number of human-made objects orbiting Earth, including nonfunctional spacecraft, abandoned launch vehicle stages, mission-related debris and fragmentation debris, has grown steadily since Sputnik 1 was launched in 1957. Currently, the U.S. Department of Defense's Joint Space Operations Center, or JSpOC, tracks over 15,000 distinct objects and provides data for more than 40,000 objects via its Space-Track program, found at space-track.org.
NASA Astrophysics Data System (ADS)
Warren, Ryan Duwain
Three primary objectives were defined for this work. The first objective was to determine, assess, and compare the performance, heat transfer characteristics, economics, and feasibility of real-world stationary and dual-axis tracking grid-connected photovoltaic (PV) systems in the Upper Midwest. This objective was achieved by installing two grid-connected PV systems with different mounting schemes in central Iowa, implementing extensive data acquisition systems, monitoring operation of the PV systems for one full year, and performing detailed experimental performance and economic studies. The two PV systems that were installed, monitored, and analyzed included a 4.59 kWp roof-mounted stationary system oriented for maximum annual energy production, and a 1.02 kWp pole-mounted actively controlled dual-axis tracking system. The second objective was to demonstrate the actual use and performance of real-world stationary and dual-axis tracking grid-connected PV systems used for building energy generation applications. This objective was achieved by offering the installed PV systems to the public for demonstration purposes and through the development of three computer-based tools: a software interface that has the ability to display real-time and historical performance and meteorological data of both systems side-by-side, a software interface that shows real-time and historical video and photographs of each system, and a calculator that can predict performance and economics of stationary and dual-axis tracking grid-connected PV systems at various locations in the United States. The final objective was to disseminate this work to social, professional, scientific, and academic communities in a way that is applicable, objective, accurate, accessible, and comprehensible. This final objective will be addressed by publishing the results of this work and making the computer-based tools available on a public website (www.energy.iastate.edu/Renewable/solar). Detailed experimental performance analyses were performed for both systems; results were quantified and compared between systems, focusing on measures of solar resource, energy generation, power production, and efficiency. This work also presents heat transfer characteristics of both arrays and quantifies the effects of operating temperature on PV system performance in terms of overall heat transfer coefficients and temperature coefficients for power. To assess potential performance of PV in the Upper Midwest, models were built to predict performance of the PV systems operating at lower temperatures. Economic analyses were performed for both systems focusing on measures of life-cycle cost, payback period, internal rate of return, and average incremental cost of solar energy. The potential economic feasibility of grid-connected stationary PV systems used for building energy generation in the Upper Midwest was assessed under assumptions of higher utility energy costs, lower initial installed costs, and different metering agreements. The annual average daily solar insolation seen by the stationary and dual-axis tracking systems was found to be 4.37 and 5.95 kWh/m2, respectively. In terms of energy generation, the tracking system outperformed the stationary system on annual, monthly, and often daily bases; normalized annual energy generation for the tracking and stationary systems was found to be 1,779 and 1,264 kWh/kWp, respectively.
The annual average conversion efficiencies of the tracking and stationary systems were found to be approximately 11 and 10.7 percent, respectively. Annual performance ratio values of the tracking and stationary systems were found to be 0.819 and 0.792, respectively. The net present values of both systems under all assumed discount rates were determined to be negative. Further, neither system was found to have a payback period less than the assumed system life of 25 years. The rates of return of the stationary and tracking systems were found to be -3.3 and -4.9 percent, respectively. Furthermore, the average incremental cost of energy provided by the stationary and dual-axis tracking systems over their assumed useful life is projected to be 0.31 and 0.37 dollars per kWh, respectively. Results of this study suggest that grid-connected PV systems used for building energy generation in the Upper Midwest are not yet economically feasible when compared to a range of alternative investments; however, PV systems could show feasibility under more favorable economic scenarios. Throughout the year of monitoring, array operating temperatures ranged from -24.7°C (-12.4°F) to 61.7°C (143.1°F) for the stationary system and -23.9°C (-11°F) to 52.7°C (126.9°F) for the dual-axis tracking system during periods of system operation. The hourly average overall heat transfer coefficients for solar irradiance levels greater than 200 W/m2 for the stationary and dual-axis tracking systems were found to be 20.8 and 29.4 W/m2°C, respectively. The experimental temperature coefficients for power for the stationary and dual-axis tracking systems at a solar irradiance level of 1,000 W/m2 were -0.30 and -0.38 %/°C, respectively. Simulations of the stationary and dual-axis tracking systems operating at lower temperatures suggest that annual conversion efficiencies could potentially be increased by up to 4.3 and 4.6 percent, respectively.
Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K
2013-03-20
Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.
Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.
Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing
2017-08-23
Object tracking is a crucial research subfield in computer vision with wide applications in navigation, robotics, military systems and other areas. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved, and on the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, View Feature Histogram (VFH) feature extraction, object model construction, and searching for and matching a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter has the advantage of high efficiency while the adaptive particle filter has the advantages of high robustness and high precision when tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
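A hedged sketch of one bootstrap particle-filter cycle for estimating a sphere centroid in 3D, roughly in the spirit of the adaptive particle filter mentioned above (the adaptation scheme itself is not reproduced); the motion and measurement noise levels and the resampling threshold are illustrative assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, z, motion_std=0.05, meas_std=0.1,
                         rng=np.random.default_rng()):
    """One predict/update/resample cycle of a bootstrap particle filter for a 3D position.
    particles: (N, 3) positions, weights: (N,), z: observed sphere centroid (3,)."""
    # predict: random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # update: Gaussian likelihood of the LiDAR-derived centroid measurement
    d2 = np.sum((particles - np.asarray(z, float)) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights = weights / (weights.sum() + 1e-300)
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```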
Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor
Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing
2017-01-01
Object tracking is a crucial research subfield in computer vision with wide applications in navigation, robotics, military systems and other areas. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved, and on the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, View Feature Histogram (VFH) feature extraction, object model construction, and searching for and matching a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter has the advantage of high efficiency while the adaptive particle filter has the advantages of high robustness and high precision when tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields. PMID:28832520
Lloréns, Roberto; Noé, Enrique; Naranjo, Valery; Borrego, Adrián; Latorre, Jorge; Alcañiz, Mariano
2015-01-01
Motion tracking systems are commonly used in virtual reality-based interventions to detect movements in the real world and transfer them to the virtual environment. There are different tracking solutions based on different physical principles, which mainly define their performance parameters. However, special requirements have to be considered for rehabilitation purposes. This paper studies and compares the accuracy and jitter of three tracking solutions (optical, electromagnetic, and skeleton tracking) in a practical scenario and analyzes the subjective perceptions of 19 healthy subjects, 22 stroke survivors, and 14 physical therapists. The optical tracking system provided the best accuracy (1.074 ± 0.417 cm) while the electromagnetic device provided the most inaccurate results (11.027 ± 2.364 cm). However, this tracking solution provided the best jitter values (0.324 ± 0.093 cm), in contrast to the skeleton tracking, which had the worst results (1.522 ± 0.858 cm). Healthy individuals and professionals preferred the skeleton tracking solution rather than the optical and electromagnetic solution (in that order). Individuals with stroke chose the optical solution over the other options. Our results show that subjective perceptions and preferences are far from being constant among different populations, thus suggesting that these considerations, together with the performance parameters, should be also taken into account when designing a rehabilitation system. PMID:25808765
An object detection and tracking system for unmanned surface vehicles
NASA Astrophysics Data System (ADS)
Yang, Jian; Xiao, Yang; Fang, Zhiwen; Zhang, Naiwen; Wang, Li; Li, Tao
2017-10-01
Object detection and tracking are critical parts of unmanned surface vehicles (USVs) for achieving automatic obstacle avoidance. Off-the-shelf object detection methods have achieved impressive accuracy on public datasets, though they still meet bottlenecks in practice, such as high time consumption and low detection quality. In this paper, we propose a novel system for USVs which is able to locate objects more accurately while being fast and stable at the same time. Firstly, we employ Faster R-CNN to acquire several initial raw bounding boxes. Secondly, the image is segmented into a few superpixels. For each initial box, the superpixels inside are grouped into a whole according to a combination strategy, and a new box is then generated as the circumscribed bounding box of the resulting superpixel. Thirdly, we use KCF to track these objects over several frames; Faster R-CNN is then used again to re-detect objects inside the tracked boxes to prevent tracking failure and to remove empty boxes. Finally, we use Faster R-CNN to detect objects in the next image and refine the object boxes by repeating the second module of our system. The experimental results demonstrate that our system is fast, robust and accurate, and can be applied to USVs in practice.
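A simplified sketch of the detect-then-track loop described above, assuming opencv-contrib-python for the KCF tracker and using a dummy stand-in for the Faster R-CNN detector (the superpixel refinement step is omitted); the box format and re-detection interval are illustrative.

```python
import cv2

def detect_objects(frame):
    """Stand-in for the Faster R-CNN detector (hypothetical); any detector that
    returns (x, y, w, h) boxes can be plugged in here."""
    h, w = frame.shape[:2]
    return [(w // 4, h // 4, w // 4, h // 4)]       # dummy box for illustration only

def track_video(frames, redetect_every=10):
    """Detect, then follow each box with a KCF tracker; re-run the detector
    periodically to drop lost tracks and pick up new objects."""
    trackers = []
    for i, frame in enumerate(frames):
        if i % redetect_every == 0:                 # (re-)detection step
            trackers = []
            for box in detect_objects(frame):
                t = cv2.TrackerKCF_create()         # requires opencv-contrib-python
                t.init(frame, tuple(int(v) for v in box))
                trackers.append(t)
        boxes = []
        for t in trackers:
            ok, box = t.update(frame)               # KCF tracking step
            if ok:
                boxes.append(box)
        yield boxes
```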
Method and apparatus for imaging through 3-dimensional tracking of protons
NASA Technical Reports Server (NTRS)
Ryan, James M. (Inventor); Macri, John R. (Inventor); McConnell, Mark L. (Inventor)
2001-01-01
A method and apparatus for creating density images of an object through the 3-dimensional tracking of protons that have passed through the object are provided. More specifically, the 3-dimensional tracking of the protons is accomplished by gathering and analyzing images of the ionization tracks of the protons in a closely packed stack of scintillating fibers.
NASA Astrophysics Data System (ADS)
Scherr, Rachel E.; Harrer, Benedikt W.; Close, Hunter G.; Daane, Abigail R.; DeWater, Lezlie S.; Robertson, Amy D.; Seeley, Lane; Vokos, Stamatis
2016-02-01
Energy is a crosscutting concept in science and features prominently in national science education documents. In the Next Generation Science Standards, the primary conceptual learning goal is for learners to conserve energy as they track the transfers and transformations of energy within, into, or out of the system of interest in complex physical processes. As part of tracking energy transfers among objects, learners should (i) distinguish energy from matter, including recognizing that energy flow does not uniformly align with the movement of matter, and should (ii) identify specific mechanisms by which energy is transferred among objects, such as mechanical work and thermal conduction. As part of tracking energy transformations within objects, learners should (iii) associate specific forms with specific models and indicators (e.g., kinetic energy with speed and/or coordinated motion of molecules, thermal energy with random molecular motion and/or temperature) and (iv) identify specific mechanisms by which energy is converted from one form to another, such as incandescence and metabolism. Eventually, we may hope for learners to be able to optimize systems to maximize some energy transfers and transformations and minimize others, subject to constraints based in both imputed mechanism (e.g., objects must have motion energy in order for gravitational energy to change) and the second law of thermodynamics (e.g., heating is irreversible). We hypothesize that a subsequent goal of energy learning—innovating to meet socially relevant needs—depends crucially on the extent to which these goals have been met.
NASA Astrophysics Data System (ADS)
Utzmann, Jens; Flohrer, Tim; Schildknecht, Thomas; Wagner, Axel; Silha, Jiri; Willemsen, Philip; Teston, Frederic
This paper presents the capabilities of a Space-Based Space Surveillance (SBSS) demonstration mission for Space Surveillance and Tracking (SST) based on a micro-satellite platform. The results have been produced in the framework of ESA's "Assessment Study for Space Based Space Surveillance Demonstration Mission" performed by the Airbus Defence and Space consortium. Space Surveillance and Tracking is part of Space Situational Awareness (SSA) and covers the detection, tracking and cataloguing of space debris and satellites. Derived SST services comprise a catalogue of these man-made objects, collision warning, detection and characterisation of in-orbit fragmentations, sub-catalogue debris characterisation, etc. The assessment of SBSS in an SST system architecture has shown that both an operational SBSS and a well-designed space-based demonstrator can already provide substantial performance in terms of surveillance and tracking of beyond-LEO objects. In particular, the early deployment of a demonstrator, made possible by using standard equipment, could boost initial operating capability and create a self-maintained object catalogue. Furthermore, unique statistical information about small-size LEO debris (mm size) can be collected in-situ. Unlike classical technology demonstration missions, the primary goal is the demonstration and optimisation of the functional elements in a complex end-to-end chain (mission planning, observation strategies, data acquisition, processing and fusion, etc.) until the final products can be offered to the users. Past and current missions by the US (SBV, SBSS) and Canada (Sapphire, NEOSSat) also underline the advantages of space-based space surveillance. The presented SBSS system concept takes the ESA SST System Requirements (derived within the ESA SSA Preparatory Program) into account and aims at fulfilling SST core requirements in a stand-alone manner. Additionally, requirements for detection and characterisation of small-sized LEO debris are considered. The evaluation of the concept has shown that a corresponding solution can be implemented with low technological effort and risk. The paper presents details of the system concept, candidate micro-satellite platforms, the observation strategy and the results of performance simulations for space debris coverage and cataloguing accuracy.
Learning Collaborative Sparse Representation for Grayscale-Thermal Tracking.
Li, Chenglong; Cheng, Hui; Hu, Shiyi; Liu, Xiaobai; Tang, Jin; Lin, Liang
2016-09-27
Integrating multiple different yet complementary feature representations has been proven to be an effective way of boosting tracking performance. This paper investigates how to perform robust object tracking in challenging scenarios by adaptively incorporating information from grayscale and thermal videos, and proposes a novel collaborative algorithm for online tracking. In particular, an adaptive fusion scheme is proposed based on collaborative sparse representation in a Bayesian filtering framework. We jointly optimize the sparse codes and the reliability weights of the different modalities in an online way. In addition, this work contributes a comprehensive video benchmark, which includes 50 grayscale-thermal sequences and their ground-truth annotations for tracking purposes. The videos are highly diverse, and the annotations were completed by a single person to guarantee consistency. Extensive experiments against other state-of-the-art trackers with both grayscale and grayscale-thermal inputs demonstrate the effectiveness of the proposed tracking approach. Through analyzing the quantitative results, we also provide basic insights and potential future research directions in grayscale-thermal tracking.
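As a rough stand-in for the adaptive fusion idea (a least-squares reconstruction replaces the paper's sparse coding), the sketch below derives per-modality reliability weights from reconstruction residuals against template dictionaries; the dictionary shapes and the sigma parameter are assumptions.

```python
import numpy as np

def modality_weights(candidate_gray, candidate_thermal, D_gray, D_thermal, sigma=0.1):
    """Adaptive per-modality reliability weights from reconstruction residuals.
    D_* are template dictionaries (columns = target templates); a least-squares
    reconstruction stands in for the sparse coding used in the paper."""
    weights = []
    for y, D in ((candidate_gray, D_gray), (candidate_thermal, D_thermal)):
        c, *_ = np.linalg.lstsq(D, y, rcond=None)
        residual = np.linalg.norm(y - D @ c)
        weights.append(np.exp(-residual ** 2 / sigma ** 2))   # smaller residual -> larger weight
    w = np.asarray(weights)
    return w / (w.sum() + 1e-12)
```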
Real-time acquisition and tracking system with multiple Kalman filters
NASA Astrophysics Data System (ADS)
Beard, Gary C.; McCarter, Timothy G.; Spodeck, Walter; Fletcher, James E.
1994-07-01
The design of a real-time, ground-based, infrared tracking system with proven field success in tracking boost vehicles through burnout is presented with emphasis on the software design. The system was originally developed to deliver relative angular positions during boost, and thrust termination time to a sensor fusion station in real-time. Autonomous target acquisition and angle-only tracking features were developed to ensure success under stressing conditions. A unique feature of the system is the incorporation of multiple copies of a Kalman filter tracking algorithm running in parallel in order to minimize run-time. The system is capable of updating the state vector for an object at measurement rates approaching 90 Hz. This paper will address the top-level software design, details of the algorithms employed, system performance history in the field, and possible future upgrades.
Li, Mengfei; Hansen, Christian; Rose, Georg
2017-09-01
Electromagnetic tracking systems (EMTS) have achieved a high level of acceptance in clinical settings, e.g., to support tracking of medical instruments in image-guided interventions. However, tracking errors caused by movable metallic medical instruments and electronic devices are a critical problem which prevents the wider application of EMTS for clinical applications. We plan to introduce a method to dynamically reduce tracking errors caused by metallic objects in proximity to the magnetic sensor coil of the EMTS. We propose a method using ramp waveform excitation based on modeling the conductive distorter as a resistance-inductance circuit. Additionally, a fast data acquisition method is presented to speed up the refresh rate. With the current approach, the sensor's positioning mean error is estimated to be 3.4, 1.3 and 0.7 mm, corresponding to a distance between the sensor and center of the transmitter coils' array of up to 200, 150 and 100 mm, respectively. The sensor pose error caused by different medical instruments placed in proximity was reduced by the proposed method to a level lower than 0.5 mm in position and [Formula: see text] in orientation. By applying the newly developed fast data acquisition method, we achieved a system refresh rate up to approximately 12.7 frames per second. Our software-based approach can be integrated into existing medical EMTS seamlessly with no change in hardware. It improves the tracking accuracy of clinical EMTS when there is a metallic object placed near the sensor coil and has the potential to improve the safety and outcome of image-guided interventions.
A-Track: Detecting Moving Objects in FITS images
NASA Astrophysics Data System (ADS)
Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.
2017-04-01
A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.
Autonomous Space Object Catalogue Construction and Upkeep Using Sensor Control Theory
NASA Astrophysics Data System (ADS)
Moretti, N.; Rutten, M.; Bessell, T.; Morreale, B.
The capability to track objects in space is critical to safeguard domestic and international space assets. Infrequent measurement opportunities, complex dynamics and partial observability of the orbital state make the tracking of resident space objects nontrivial. It is not uncommon for human operators to intervene with space tracking systems, particularly in scheduling sensors. This paper details the development of a system that maintains a catalogue of geostationary objects by dynamically tasking sensors in real time and managing the uncertainty of object states. As the number of objects in space grows, the potential for collision grows exponentially. Being able to provide accurate assessments to operators regarding costly collision avoidance manoeuvres is paramount, and the accuracy of those assessments depends strongly on how object states are estimated. The system represents object state and uncertainty using particles and utilises a particle filter for state estimation. Particle filters capture the model and measurement uncertainty accurately, allowing for a more comprehensive representation of the state’s probability density function. Additionally, the number of objects in space is growing disproportionately to the number of sensors used to track them. Maintaining precise positions for all objects places large loads on sensors, limiting the time available to search for new objects or track high-priority objects. Rather than precisely tracking all objects, our system manages the uncertainty in orbital state for each object independently. The uncertainty is allowed to grow, and sensor data is only requested when the uncertainty must be reduced, for example when object uncertainties overlap and lead to data association issues, or when the uncertainty grows beyond a field of view. These control laws are formulated into a cost function, which is optimised in real time to task sensors. By controlling an optical telescope, the system has been able to construct and maintain a catalogue of approximately 100 geostationary objects.
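A toy version of the uncertainty-driven tasking rule might look like the sketch below: the particle cloud's angular covariance is compared with the telescope field of view and an optional absolute uncertainty budget. The names, column layout and thresholds are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def needs_observation(particles, fov_halfwidth_rad, max_trace=None):
    """Decide whether a catalogued object should be tasked to a sensor.

    particles: (N, 6) samples of the orbital state; only the angular components
    (assumed here to be columns 0 and 1) are checked against the field of view.
    """
    ang = particles[:, :2]
    cov = np.cov(ang, rowvar=False)
    # request data when the 3-sigma angular scatter approaches the field of view,
    # or when an optional absolute uncertainty budget is exceeded
    spread = 3.0 * np.sqrt(np.max(np.linalg.eigvalsh(cov)))
    if spread > fov_halfwidth_rad:
        return True
    return max_trace is not None and np.trace(cov) > max_trace
```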
Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan
2014-10-01
It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there is not a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter the drift problem. As a result of self-taught learning, misaligned samples are likely to be added and to degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
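The core ingredient, a very sparse random measurement matrix, is easy to sketch. The snippet below builds an Achlioptas-style projection with entries in {+√s, 0, −√s} and uses it to compress a high-dimensional feature vector; the dimensions and sparsity parameter are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_measurement_matrix(n_compressed, n_features, s=None):
    """Very sparse random projection with entries in {+sqrt(s), 0, -sqrt(s)}.

    With s on the order of n_features / 4 most entries are zero, so the projection
    is cheap, while the Johnson-Lindenstrauss property approximately preserves
    the structure of the feature space.
    """
    s = s or max(2, n_features // 4)
    probs = [1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)]
    return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                      size=(n_compressed, n_features), p=probs)

# compress a multiscale feature vector of one candidate window
R = sparse_measurement_matrix(n_compressed=50, n_features=10000)
features = rng.random(10000)
compressed = R @ features   # 50-dimensional representation fed to the naive Bayes classifier
```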
Short- and medium-range 3D sensing for space applications
NASA Astrophysics Data System (ADS)
Beraldin, J. A.; Blais, Francois; Rioux, Marc; Cournoyer, Luc; Laurin, Denis G.; MacLean, Steve G.
1997-07-01
This paper focuses on the characteristics and performance of a laser range scanner (LARS) with short- and medium-range 3D sensing capabilities for space applications. This versatile laser range scanner is a precision measurement tool intended to complement the current Canadian Space Vision System (CSVS). Together, these vision systems are intended to be used during the construction of the International Space Station (ISS). Integration of the LARS with the CSVS will allow 3D surveying of a robotic work-site, identification of known objects from registered range and intensity images, and object detection and tracking relative to the orbiter and ISS. The data supplied by the improved CSVS will be invaluable in Orbiter rendezvous and in assisting the Orbiter/ISS Remote Manipulator System operators. The major advantages of the LARS over conventional video-based imaging are its ability to operate with sunlight shining directly into the scanner and its immunity to spurious reflections and shadows, which occur frequently in space. Because the LARS is equipped with two high-speed galvanometers to steer the laser beam, any spatial location within the field of view of the camera can be addressed. This level of versatility enables the LARS to operate in two basic scan pattern modes: (1) variable scan resolution mode and (2) raster scan mode. In the variable resolution mode, the LARS can search and track targets and geometrical features on objects located within a field of view of 30 degrees X 30 degrees and with corresponding ranges from about 0.5 m to 2000 m. This flexibility allows implementations of practical search-and-track strategies based on the use of Lissajous patterns for multiple targets. The tracking mode can reach a refresh rate of up to 137 Hz. The raster mode is used primarily for the measurement of registered range and intensity information of large stationary objects. It allows, among other things, target-based measurements, feature-based measurements, and image-based measurements such as differential inspection in 3D space and surface reflectance monitoring. The digitizing and modeling of human subjects, cargo payloads, and environments are also possible with the LARS. A number of examples illustrating the many capabilities of the LARS are presented in this paper.
Li, Yuankun; Xu, Tingfa; Deng, Honggao; Shi, Guokai; Guo, Jie
2018-02-23
Although correlation filter (CF)-based visual tracking algorithms have achieved appealing results, there are still some problems to be solved. When the target object goes through long-term occlusions or scale variation, the correlation model used in existing CF-based algorithms will inevitably learn some non-target or partial-target information. In order to avoid model contamination and enhance the adaptability of model updating, we introduce a keypoint matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN) to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker has achieved satisfactory performance in a wide range of challenging tracking scenarios.
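The matching-score-controlled update can be written in a few lines. The sketch below freezes the model below a score threshold and otherwise scales the learning rate by the score; the score definition, threshold and base rate are hypothetical choices made for illustration, not the paper's exact values.

```python
import numpy as np

def adaptive_learning_rate(match_score, base_lr=0.015, threshold=0.4):
    """Scale the CF model learning rate by a keypoint matching score.

    match_score in [0, 1]: fraction of template keypoints re-found in the current
    frame (hypothetical definition). Below `threshold` the update is frozen so an
    occluded or badly scaled target does not contaminate the correlation model.
    """
    if match_score < threshold:
        return 0.0
    return base_lr * match_score

def update_model(model, new_estimate, match_score):
    lr = adaptive_learning_rate(match_score)
    # standard linear-interpolation model update used by most CF trackers
    return (1.0 - lr) * model + lr * new_estimate
```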
Architectural Design for European SST System
NASA Astrophysics Data System (ADS)
Utzmann, Jens; Wagner, Axel; Blanchet, Guillaume; Assemat, Francois; Vial, Sophie; Dehecq, Bernard; Fernandez Sanchez, Jaime; Garcia Espinosa, Jose Ramon; Agueda Mate, Alberto; Bartsch, Guido; Schildknecht, Thomas; Lindman, Niklas; Fletcher, Emmet; Martin, Luis; Moulin, Serge
2013-08-01
The paper presents the results of a detailed design, evaluation and trade-off of a potential European Space Surveillance and Tracking (SST) system architecture. The results have been produced in study phase 1 of the on-going "CO-II SSA Architectural Design" project performed by the Astrium consortium as part of ESA's Space Situational Awareness Programme and are the baseline for further detailing and consolidation in study phase 2. The sensor network is comprised of both ground- and space-based assets and aims at being fully compliant with the ESA SST System Requirements. The proposed ground sensors include a surveillance radar, an optical surveillance system and a tracking network (radar and optical). A space-based telescope system provides significant performance and robustness for the surveillance and tracking of beyond-LEO target objects.
Tricarico, Christopher; Peters, Robert; Som, Avik; Javaherian, Kavon
2017-01-01
Background Medication adherence remains a difficult problem to both assess and improve in patients. It is a multifactorial problem that goes beyond the commonly cited reason of forgetfulness. To date, eHealth (also known as mHealth and telehealth) interventions to improve medication adherence have largely been successful in improving adherence. However, interventions to date have used time- and cost-intensive strategies or focused solely on medication reminding, leaving much room for improvement in using a modality as flexible as eHealth. Objective Our objective was to develop and implement a fully automated short message service (SMS)-based medication adherence system, EpxMedTracking, that reminds patients to take their medications, explores reasons for missed doses, and alerts providers to help address problems of medication adherence in real time. Methods EpxMedTracking is a fully automated bidirectional SMS-based messaging system with provider involvement that was developed and implemented through Epharmix, Inc. Researchers analyzed 11 weeks of de-identified data from patients cared for by multiple provider groups in routine community practice for feasibility and functionality. Patients included were those in the care of a provider purchasing the EpxMedTracking tool from Epharmix and were enrolled from a clinic by their providers. The primary outcomes assessed were the rate of engagement with the system, reasons for missing doses, and self-reported medication adherence. Results Of the 25 patients studied over the 11 weeks, 3 never responded and subsequently opted out or were deleted by their provider. No other patients opted out or were deleted during the study period. Across the 11 weeks of the study period, the overall weekly engagement rate was 85.9%. There were 109 total reported missed doses including “I forgot” at 33 events (30.3%), “I felt better” at 29 events (26.6%), “out of meds” at 20 events (18.4%), “I felt sick” at 19 events (17.4%), and “other” at 3 events (2.8%). We also noted an increase in self-reported medication adherence in patients using the EpxMedTracking system. Conclusions EpxMedTracking is an effective tool for tracking self-reported medication adherence over time. It uniquely identifies actionable reasons for missing doses for subsequent provider intervention in real time based on patient feedback. Patients enrolled on EpxMedTracking also self-report higher rates of medication adherence over time while on the system. PMID:28506954
Liang, Zhibing; Liu, Fuxian; Gao, Jiale
2018-01-01
For non-ellipsoidal extended target and group target tracking (NETT and NGTT), using an ellipsoid to approximate the target extension may not be accurate enough because of the lack of shape and orientation information. In consideration of this, we model a non-ellipsoidal extended target or target group as a combination of multiple ellipsoidal sub-objects, each represented by a random matrix. Based on these models, an improved gamma Gaussian inverse Wishart probability hypothesis density (GGIW-PHD) filter is proposed to estimate the measurement rates, kinematic states, and extension states of the sub-objects for each extended target or target group. For maneuvering NETT and NGTT, a multi-model (MM) approach-based GGIW-PHD (MM-GGIW-PHD) filter is proposed. The common and the individual dynamics of the sub-objects belonging to the same extended target or target group are described by combining the overall maneuver model with the sub-object models. For the merging of updating components, an improved merging criterion and a new merging method are derived. A specific implementation of the prediction partition with a pseudo-likelihood method is presented. Two scenarios for non-maneuvering and maneuvering NETT and NGTT are simulated. The results demonstrate the effectiveness of the proposed algorithms.
The Kinect as an interventional tracking system
NASA Astrophysics Data System (ADS)
Wang, Xiang L.; Stolka, Philipp J.; Boctor, Emad; Hager, Gregory; Choti, Michael
2012-02-01
This work explores the suitability of low-cost sensors for "serious" medical applications, such as tracking of interventional tools in the OR, for simulation, and for education. Although such tracking - i.e. the acquisition of pose data e.g. for ultrasound probes, tissue manipulation tools, needles, but also tissue, bone etc. - is well established, it relies mostly on external devices such as optical or electromagnetic trackers, both of which mandate the use of special markers or sensors attached to each single entity whose pose is to be recorded, and also require their calibration to the tracked entity, i.e. the determination of the geometric relationship between the marker's and the object's intrinsic coordinate frames. The Microsoft Kinect sensor is a recently introduced device for full-body tracking in the gaming market, but it was quickly hacked - due to its wide range of tightly integrated sensors (RGB camera, IR depth and greyscale camera, microphones, accelerometers, and basic actuation) - and used beyond this area. As its field of view and its accuracy are within reasonable usability limits, we describe a medical needle-tracking system for interventional applications based on the Kinect sensor, standard biopsy needles, and no necessary attachments, thus saving both cost and time. Its twin cameras are used as a stereo pair to detect needle-shaped objects, reconstruct their pose in four degrees of freedom, and provide information about the most likely candidate.
Modifying the ECC-based grouping-proof RFID system to increase inpatient medication safety.
Ko, Wen-Tsai; Chiou, Shin-Yan; Lu, Erl-Huei; Chang, Henry Ker-Chang
2014-09-01
RFID technology is increasingly used in applications that require tracking, identification, and authentication. It attaches RFID-readable tags to objects for identification and execution of specific RFID-enabled applications. Recently, research has focused on the use of grouping proofs for preserving privacy in RFID applications, wherein two or more tags must be scanned simultaneously to generate a proof. In 2010, a privacy-preserving grouping-proof protocol for RFID based on ECC public-key cryptography was proposed but was shown to be vulnerable to tracking attacks. A proposed enhancement protocol was also shown to have defects that prevented proper execution. In 2012, Lin et al. proposed a more efficient RFID ECC-based grouping-proof protocol to promote inpatient medication safety. However, we found this protocol is also vulnerable to tracking and impersonation attacks. We then propose a secure privacy-preserving RFID grouping-proof protocol for inpatient medication safety and demonstrate its resistance to such attacks.
Machine vision application in animal trajectory tracking.
Koniar, Dušan; Hargaš, Libor; Loncová, Zuzana; Duchoň, František; Beňo, Peter
2016-04-01
This article was motivated by physicians' demand for technical support, based on machine vision tools, for research on pathologies of the gastrointestinal tract [10]. The proposed solution should be a less expensive alternative to existing RF (radio frequency) methods. The objective of the whole experiment was to evaluate the amount of animal motion as a function of the degree of pathology (gastric ulcer). In the theoretical part of the article, several methods of animal trajectory tracking are presented: two differential methods based on background subtraction, thresholding methods based on global and local thresholds, and color matching with a chosen template containing the searched spectrum of colors. The methods were tested offline on five video samples. Each sample showed a moving guinea pig confined in a cage under various lighting conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
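A background-subtraction variant of this kind of trajectory tracking can be sketched with OpenCV: a statistical model of the empty cage is maintained and the largest moving blob in each frame is taken to be the animal. The file name and parameter values are hypothetical, and this is only one of the method families listed above, not the authors' exact pipeline.

```python
import cv2

cap = cv2.VideoCapture("guinea_pig_sample.avi")   # hypothetical file name
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)
trajectory = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, None, iterations=2)  # suppress pixel noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        blob = max(contours, key=cv2.contourArea)     # assume the animal is the largest mover
        m = cv2.moments(blob)
        if m["m00"] > 0:
            trajectory.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))  # blob centroid
cap.release()
```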
NASA Astrophysics Data System (ADS)
Zou, Yanbiao; Chen, Tao
2018-06-01
To address the problem of low welding precision caused by the poor real-time tracking performance of common welding robots, a novel seam tracking system with excellent real-time tracking performance and high accuracy is designed based on a morphological image processing method and the continuous convolution operator tracker (CCOT) object tracking algorithm. The system consists of a six-axis welding robot, a line laser sensor, and an industrial computer. This work also studies the measurement principle involved in the designed system. Through the CCOT algorithm, the weld feature points are determined in real time from the noisy images acquired during the welding process, and the 3D coordinate values of these points are obtained according to the measurement principle to control the movement of the robot and the torch in real time. Experimental results show that the sensor has a frequency of 50 Hz. The welding torch runs smoothly even under strong arc light and splash interference. The tracking error can be kept within ±0.2 mm, and the minimal distance between the laser stripe and the welding molten pool can reach 15 mm, which can fully satisfy actual welding requirements.
Awareness-based game-theoretic space resource management
NASA Astrophysics Data System (ADS)
Chen, Genshe; Chen, Huimin; Pham, Khanh; Blasch, Erik; Cruz, Jose B., Jr.
2009-05-01
Over recent decades, the space environment has become more complex, with a significant increase in space debris and a greater density of spacecraft, which poses great difficulties for efficient and reliable space operations. In this paper we present a Hierarchical Sensor Management (HSM) method for space operations that (a) accommodates awareness modeling and updating and (b) supports collaborative search and tracking of space objects. The basic approach is described as follows. First, partition the relevant region of interest into distinct cells. Second, initialize and model the dynamics of each cell with awareness and object covariance according to prior information. Third, explicitly assign sensing resources to objects with user-specified requirements. Note that when an object responds intelligently to the sensing event, the sensor assigned to observe an intelligent object may switch from time to time between a strong, active signal mode and a passive mode to maximize the total amount of information obtained over a multi-step time horizon and avoid risks. Fourth, if all explicitly specified requirements are satisfied and there are still sensing resources available, we assign the additional sensing resources to objects without explicitly specified requirements via an information-based approach. Finally, sensor scheduling is applied to each sensor-object or sensor-cell pair according to the object type. We demonstrate our method on a realistic space resource management scenario using NASA's General Mission Analysis Tool (GMAT) for space object search and track with multiple space-borne observers.
A mathematical model for computer image tracking.
Legters, G R; Young, T Y
1982-06-01
A mathematical model using an operator formulation for a moving object in a sequence of images is presented. Time-varying translation and rotation operators are derived to describe the motion. A variational estimation algorithm is developed to track the dynamic parameters of the operators. The occlusion problem is alleviated by using a predictive Kalman filter to keep the tracking on course during severe occlusion. The tracking algorithm (variational estimation in conjunction with Kalman filter) is implemented to track moving objects with occasional occlusion in computer-simulated binary images.
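The occlusion-handling idea, prediction carries the track when no measurement is available, can be expressed generically. In this sketch, `kf_predict` and `kf_update` are placeholder callables standing in for any Kalman filter implementation (such as the one above); the interface is an illustrative assumption.

```python
def track_with_occlusion(kf_predict, kf_update, measurements):
    """Run predict/update, but fall back to prediction alone when a frame gives
    no usable measurement (e.g. the object is severely occluded).

    kf_predict() -> predicted state; kf_update(z) -> corrected state;
    measurements is a sequence in which an occluded frame is represented by None.
    """
    states = []
    for z in measurements:
        state = kf_predict()          # always propagate the motion model
        if z is not None:
            state = kf_update(z)      # correct only when the object was actually observed
        states.append(state)
    return states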
An Incentive-based Online Optimization Framework for Distribution Grids
Zhou, Xinyang; Dall'Anese, Emiliano; Chen, Lijun; ...
2017-10-09
This article formulates a time-varying social-welfare maximization problem for distribution grids with distributed energy resources (DERs) and develops online distributed algorithms to identify (and track) its solutions. In the considered setting, network operator and DER-owners pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. The proposed algorithm affords an online implementation to enable tracking of the solutions in the presence of time-varying operational conditions and changing optimization objectives. It involves a strategy where the network operator collects voltage measurements throughout the feeder to build incentive signals for the DER-owners in real time; DERs then adjust the generated/consumed powers in order to avoid the violation of the voltage constraints while maximizing given objectives. Stability of the proposed schemes is analytically established and numerically corroborated.
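A single-node caricature of this incentive loop can be written as one online primal-dual step: the operator updates dual prices from the measured voltage violation, and the DER takes a projected gradient step on its own cost plus the price signal. This is a loose sketch under simplifying assumptions (unit voltage sensitivity, fixed device limits), not the paper's algorithm; all names and constants are hypothetical.

```python
import numpy as np

def online_incentive_step(p, lam_over, lam_under, v, v_max, v_min, grad_cost, alpha=0.05):
    """One simplified online primal-dual iteration for a single DER."""
    # operator side: dual (incentive) update from the measured voltage v
    lam_over = max(0.0, lam_over + alpha * (v - v_max))
    lam_under = max(0.0, lam_under + alpha * (v_min - v))
    incentive = lam_over - lam_under
    # DER side: projected gradient step on its cost plus the incentive signal,
    # clipped to (hypothetical) device power limits of +/- 5 kW
    p = np.clip(p - alpha * (grad_cost(p) + incentive), -5.0, 5.0)
    return p, lam_over, lam_under
```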
Xing, Junliang; Ai, Haizhou; Liu, Liwei; Lao, Shihong
2011-06-01
Multiple object tracking (MOT) is a very challenging task yet of fundamental importance for many practical applications. In this paper, we focus on the problem of tracking multiple players in sports video which is even more difficult due to the abrupt movements of players and their complex interactions. To handle the difficulties in this problem, we present a new MOT algorithm which contributes both in the observation modeling level and in the tracking strategy level. For the observation modeling, we develop a progressive observation modeling process that is able to provide strong tracking observations and greatly facilitate the tracking task. For the tracking strategy, we propose a dual-mode two-way Bayesian inference approach which dynamically switches between an offline general model and an online dedicated model to deal with single isolated object tracking and multiple occluded object tracking integrally by forward filtering and backward smoothing. Extensive experiments on different kinds of sports videos, including football, basketball, as well as hockey, demonstrate the effectiveness and efficiency of the proposed method.
Robust online tracking via adaptive samples selection with saliency detection
NASA Astrophysics Data System (ADS)
Yan, Jia; Chen, Xi; Zhu, QiuPing
2013-12-01
Online tracking has been shown to be successful in tracking previously unknown objects. However, there are two important factors that lead to the drift problem in online tracking: one is how to select correctly labeled samples even when the target locations are inaccurate, and the other is how to handle confusors that have features similar to the target. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To avoid degrading the classifiers with misaligned samples, we introduce a saliency detection method into our tracking problem. Saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as the negative samples, we propose a reasonable selection criterion, in which both the saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.
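Spectral residual saliency is compact enough to sketch directly. The function below follows the classic Hou and Zhang recipe (log-amplitude spectrum minus its local average, recombined with the original phase); the resize dimensions and smoothing parameters are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def spectral_residual_saliency(gray):
    """Saliency map via spectral residual analysis of a grayscale image."""
    small = cv2.resize(gray, (64, 64)).astype(np.float32)
    f = np.fft.fft2(small)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - cv2.blur(log_amp, (3, 3))          # remove the smooth part of the spectrum
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency, (9, 9), 2.5)
    saliency = cv2.resize(saliency, (gray.shape[1], gray.shape[0]))
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
```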
Wang, Baofeng; Qi, Zhiquan; Chen, Sizhong; Liu, Zhaodu; Ma, Guocheng
2017-01-01
Vision-based vehicle detection is an important issue for advanced driver assistance systems. In this paper, we present an improved multi-vehicle detection and tracking method using cascade Adaboost and an adaptive Kalman filter (AKF) with target identity awareness. A cascade Adaboost classifier using Haar-like features was built for vehicle detection, followed by a more comprehensive verification process that refines the vehicle hypothesis in terms of both location and dimension. In vehicle tracking, each vehicle was tracked with an independent identity by an adaptive Kalman filter in collaboration with a data association approach. The AKF adaptively adjusts the measurement and process noise covariances through online stochastic modelling to compensate for dynamics changes. The data association correctly assigns detections to tracks using the global nearest neighbour (GNN) algorithm while considering local validation. During tracking, a temporal-context-based track management was proposed to decide whether to initiate, maintain or terminate the tracks of different objects, thus suppressing sparse false alarms and compensating for temporary detection failures. Finally, the proposed method was tested on various challenging real roads, and the experimental results showed that the vehicle detection performance was greatly improved, with higher accuracy and robustness. PMID:28296902
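GNN association can be illustrated with a centre-distance cost matrix solved by the Hungarian algorithm, with a simple distance gate standing in for the validation step. This is a hedged sketch, not the paper's exact cost or gating definition; the gate value is hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gnn_associate(track_boxes, det_boxes, gate=80.0):
    """Global-nearest-neighbour association between track predictions and detections.

    Boxes are (x, y, w, h); costs are Euclidean distances between box centres.
    Pairs farther apart than `gate` pixels are rejected after the optimal assignment.
    """
    if not track_boxes or not det_boxes:
        return []
    tc = np.array([[x + w / 2, y + h / 2] for x, y, w, h in track_boxes])
    dc = np.array([[x + w / 2, y + h / 2] for x, y, w, h in det_boxes])
    cost = np.linalg.norm(tc[:, None, :] - dc[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)           # globally optimal assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
```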
Content Validation of Athletic Therapy Clinical Presentations in Canada
ERIC Educational Resources Information Center
Lafave, Mark R.; Yeo, Michelle; Westbrook, Khatija; Valdez, Dennis; Eubank, Breda; McAllister, Jenelle
2016-01-01
Context: Competency-based education requires strong planning and a vehicle to deliver and track students' progress across their undergraduate programs. Clinical presentations (CPs) are proposed as 1 method to deliver a competency-based curriculum in a Canadian undergraduate athletic therapy program. Objective: Validation of 253 CPs. Setting:…
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object being in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santhanam, A; Min, Y; Beron, P
Purpose: Patient safety hazards such as a wrong patient/site getting treated can lead to catastrophic results. The purpose of this project is to automatically detect potential patient safety hazards during the radiotherapy setup and alert the therapist before the treatment is initiated. Methods: We employed a set of co-located and co-registered 3D cameras placed inside the treatment room. Each camera provided a point cloud of fraxels (fragment pixels with 3D depth information). Each of the cameras was calibrated using a custom-built calibration target to provide 3D information with less than 2 mm error in the 500 mm neighborhood around the isocenter. To identify potential patient safety hazards, the treatment room components and the patient's body needed to be identified and tracked in real time. For feature recognition purposes, we used graph-cut based feature recognition with principal component analysis (PCA) based feature-to-object correlation to segment the objects in real time. Changes in an object's position were tracked using the CamShift algorithm. The 3D object information was then stored for each classified object (e.g. gantry, couch). A deep learning framework was then used to analyze all the classified objects in both 2D and 3D and to fine-tune a convolutional network for object recognition. The number of network layers was optimized to identify the tracked objects with >95% accuracy. Results: Our systematic analyses showed that the system was effectively able to recognize wrong patient setups and wrong patient accessories. The combined usage of 2D camera information (color + depth) enabled a topology-preserving approach to verify patient safety hazards in an automatic manner, even in scenarios where the depth information is only partially available. Conclusion: By utilizing the 3D cameras inside the treatment room and deep learning based image classification, potential patient safety hazards can be effectively avoided.
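Position tracking with CamShift, as referenced above, is readily sketched with OpenCV: a hue histogram of the initial object region drives a back-projection that CamShift follows from frame to frame. The region-of-interest handling and parameter values here are illustrative assumptions, not the authors' configuration.

```python
import cv2

def make_camshift_tracker(first_frame, roi):
    """Track a segmented object (e.g. couch or gantry) with CamShift on a hue histogram.

    roi = (x, y, w, h) from the initial segmentation; returns a per-frame update function.
    """
    x, y, w, h = roi
    hsv = cv2.cvtColor(first_frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    state = {"window": roi}

    def update(frame):
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, state["window"] = cv2.CamShift(backproj, state["window"], term)
        return state["window"]          # updated (x, y, w, h) of the tracked object

    return update
```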
RAPTOR-scan: Identifying and Tracking Objects Through Thousands of Sky Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidoff, Sherri; Wozniak, Przemyslaw
2004-09-28
The RAPTOR-scan system mines data for optical transients associated with gamma-ray bursts and is used to create a catalog for the RAPTOR telescope system. RAPTOR-scan can detect and track individual astronomical objects across data sets containing millions of observed points. Accurately identifying a real object over many optical images (clustering the individual appearances) is necessary in order to analyze object light curves. To achieve this, RAPTOR telescope observations are sent in real time to a database. Each morning, a program based on the DBSCAN algorithm clusters the observations and labels each one with an object identifier. Once clustering is complete, the analysis program may be used to query the database and produce light curves, maps of the sky field, or other informative displays. Although RAPTOR-scan was designed for the RAPTOR optical telescope system, it is a general tool designed to identify objects in a collection of astronomical data and facilitate quick data analysis. RAPTOR-scan will be released as free software under the GNU General Public License.
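The clustering step can be illustrated with scikit-learn's DBSCAN: detections of the same object across many images fall within a small angular radius of each other and end up with the same cluster label. The file name, column layout and matching radius are hypothetical, and the flat Euclidean distance on RA/Dec ignores the cos(dec) correction a real catalogue builder would apply.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Columns assumed to be (ra_deg, dec_deg, mjd, magnitude); eps is a ~2 arcsecond radius.
observations = np.loadtxt("raptor_detections.csv", delimiter=",")   # hypothetical file
labels = DBSCAN(eps=2.0 / 3600.0, min_samples=3).fit_predict(observations[:, :2])

# Each non-negative label corresponds to one catalogued object; its light curve is the
# time-ordered magnitudes of the member detections (label -1 marks unclustered noise).
for obj_id in set(labels) - {-1}:
    members = observations[labels == obj_id]
    light_curve = members[np.argsort(members[:, 2])][:, [2, 3]]
```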
Bicycle Guidelines and Crash Rates on Cycle Tracks in the United States
Morency, Patrick; Miranda-Moreno, Luis F.; Willett, Walter C.; Dennerlein, Jack T.
2013-01-01
Objectives. We studied state-adopted bicycle guidelines to determine whether cycle tracks (physically separated, bicycle-exclusive paths adjacent to sidewalks) were recommended, whether they were built, and their crash rate. Methods. We analyzed and compared US bicycle facility guidelines published between 1972 and 1999. We identified 19 cycle tracks in the United States and collected extensive data on cycle track design, usage, and crash history from local communities. We used bicycle counts and crash data to estimate crash rates. Results. A bicycle facility guideline written in 1972 endorsed cycle tracks but American Association of State Highway and Transportation Officials (AASHTO) guidelines (1974–1999) discouraged or did not include cycle tracks and did not cite research about crash rates on cycle tracks. For the 19 US cycle tracks we examined, the overall crash rate was 2.3 (95% confidence interval = 1.7, 3.0) per 1 million bicycle kilometers. Conclusions. AASHTO bicycle guidelines are not explicitly based on rigorous or up-to-date research. Our results show that the risk of bicycle–vehicle crashes is lower on US cycle tracks than published crash rates on roadways. This study and previous investigations support building cycle tracks. PMID:23678920
Autonomous Flight Safety System - Phase III
NASA Technical Reports Server (NTRS)
2008-01-01
The Autonomous Flight Safety System (AFSS) is a joint KSC and Wallops Flight Facility project that uses tracking and attitude data from onboard Global Positioning System (GPS) and inertial measurement unit (IMU) sensors and configurable rule-based algorithms to make flight termination decisions. AFSS objectives are to increase launch capabilities by permitting launches from locations without range safety infrastructure, reduce costs by eliminating some downrange tracking and communication assets, and reduce the reaction time for flight termination decisions.
Electrically tunable lens speeds up 3D orbital tracking
Annibale, Paolo; Dvornikov, Alexander; Gratton, Enrico
2015-01-01
3D orbital particle tracking is a versatile and effective microscopy technique that allows following fast-moving fluorescent objects within living cells and reconstructing complex 3D shapes using laser scanning microscopes. We demonstrated notable improvements in the range, speed and accuracy of 3D orbital particle tracking by replacing commonly used piezoelectric stages with an Electrically Tunable Lens (ETL) that eliminates mechanical movement of objective lenses. This allowed tracking and reconstructing the shape of structures extending 500 microns in the axial direction. Using the ETL, we tracked at high speed fluorescently labeled genomic loci within the nucleus of living cells with an unprecedented temporal resolution of 8 ms using a 1.42 NA oil-immersion objective. The presented technology is cost effective and allows easy upgrade of scanning microscopes for fast 3D orbital tracking. PMID:26114037
NASA Technical Reports Server (NTRS)
Agurok, Llya
2013-01-01
The Hyperspectral Imager-Tracker (HIT) is a technique for visualization and tracking of low-contrast, fast-moving objects. The HIT architecture is based on an innovative and only recently developed concept in imaging optics. This innovative architecture will give the Light Prescriptions Innovators (LPI) HIT the possibility of simultaneously collecting spectral band images (the hyperspectral cube) and IR images, and of operating with high light-gathering power and high magnification for multiple fast-moving objects. Adaptive spectral filtering algorithms will efficiently increase the contrast of low-contrast scenes. The most hazardous parts of a space mission are the first stage of a launch and the last 10 kilometers of the landing trajectory. In general, a close watch on spacecraft operation is required at distances up to 70 km. Tracking at such distances is usually associated with the use of radar, but its milliradian angular resolution translates to 100-m spatial resolution at a 70-km distance. With sufficient power, radar can track a spacecraft as a whole object, but will not provide detail in the case of an accident, particularly for small debris in the one-meter range, which can only be achieved optically. It will be important to track the debris, which could disintegrate further into more debris, all the way to the ground. Such fragmentation could cause ballistic predictions, based on observations using high-resolution but narrow-field optics for only the first few seconds of the event, to be inaccurate. No existing optical imager architecture satisfies NASA requirements. The HIT was developed for space vehicle tracking, in-flight inspection, and, in the case of an accident, a detailed recording of the event. The system is a combination of five subsystems: (1) a roving fovea telescope with a wide 30° field of regard; (2) narrow, high-resolution fovea field optics; (3) a Coude optics system for telescope output beam stabilization; (4) a hyperspectral-multispectral imaging assembly; and (5) image analysis software with an effective adaptive spectral filtering algorithm for real-time contrast enhancement.
NASA Astrophysics Data System (ADS)
Li, Chengcheng; Li, Yuefeng; Wang, Guanglin
2017-07-01
The work presented in this paper seeks to address the tracking problem for uncertain continuous nonlinear systems with external disturbances. The objective is to obtain a model that uses a reference-based output feedback tracking control law. The control scheme is based on neural networks and a linear difference inclusion (LDI) model, and a PDC structure and H∞ performance criterion are used to attenuate external disturbances. The stability of the whole closed-loop model is investigated using the well-known quadratic Lyapunov function. The key principles of the proposed approach are as follows: neural networks are first used to approximate nonlinearities, to enable a nonlinear system to then be represented as a linearised LDI model. An LMI (linear matrix inequality) formula is obtained for uncertain and disturbed linear systems. This formula enables a solution to be obtained through an interior point optimisation method for some nonlinear output tracking control problems. Finally, simulations and comparisons are provided on two practical examples to illustrate the validity and effectiveness of the proposed method.
MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.
Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram
2015-11-01
We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.
Multiple-Object Tracking in Children: The "Catch the Spies" Task
ERIC Educational Resources Information Center
Trick, L.M.; Jaspers-Fayer, F.; Sethi, N.
2005-01-01
Multiple-object tracking involves simultaneously tracking positions of a number of target-items as they move among distractors. The standard version of the task poses special challenges for children, demanding extended concentration and the ability to distinguish targets from identical-looking distractors, and may thus underestimate children's…
2009-09-25
CAPE CANAVERAL, Fla. – The United Launch Alliance Delta II rocket carrying the Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft leaps into the sky from Launch Pad 17-B at Cape Canaveral Air Force Station. STSS-Demo was launched at 8:20:22 a.m. EDT by NASA for the U.S. Missile Defense Agency. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Sandra Joseph- Kevin O'Connell
2009-09-25
CAPE CANAVERAL, Fla. – The United Launch Alliance Delta II rocket with Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft leaps from Launch Pad 17-B at Cape Canaveral Air Force Station amid clouds of smoke. STSS-Demo was launched at 8:20:22 a.m. EDT by NASA for the U.S. Missile Defense Agency. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Sandra Joseph- Kevin O'Connell
2009-09-25
CAPE CANAVERAL, Fla. – The United Launch Alliance Delta II rocket with Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft aboard races into the sky leaving a trail of fire and smoke after liftoff from Launch Pad 17-B at Cape Canaveral Air Force Station. It was launched by NASA for the U.S. Missile Defense Agency at 8:20:22 a.m. EDT. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Alan Ault
2009-09-23
CAPE CANAVERAL, Fla. – The mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station rolls back to reveal the United Launch Alliance Delta II rocket that will launch the Space Tracking and Surveillance System - Demonstrator into orbit. It is being launched by NASA for the Missile Defense System. The hour-long launch window opens at 8 a.m. EDT today. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Dimitri Gerondidakis
2009-08-27
CAPE CANAVERAL, Fla. – The enclosed Space Tracking and Surveillance System – Demonstrator, or STSS-Demo, spacecraft leaves the Astrotech payload processing facility on its way to Cape Canaveral Air Force Station's Launch Pad 17-B. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jack Pfaller
Computational tools and lattice design for the PEP-II B-Factory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Irwin, J.; Nosochkov, Y.
1997-02-01
Several accelerator codes were used to design the PEP-II lattices, ranging from matrix-based codes, such as MAD and DIMAD, to symplectic-integrator codes, such as TRACY and DESPOT. In addition to element-by-element tracking, we constructed maps to determine aberration strengths. Furthermore, we have developed a fast and reliable method (nPB tracking) to track particles with a one-turn map. This new technique allows us to evaluate performance of the lattices on the entire tune-plane. Recently, we designed and implemented an object-oriented code in C++ called LEGO which integrates and expands upon TRACY and DESPOT. © 1997 American Institute of Physics.
2009-09-23
CAPE CANAVERAL, Fla. – The mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station has been rolled back to reveal the United Launch Alliance Delta II rocket ready to launch the Space Tracking and Surveillance System - Demonstrator into orbit. It is being launched by NASA for the Missile Defense System. The hour-long launch window opens at 8 a.m. EDT today. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Dimitri Gerondidakis
2009-09-25
CAPE CANAVERAL, Fla. –The United Launch Alliance Delta II rocket with Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft leaps from Launch Pad 17-B at Cape Canaveral Air Force Station amid clouds of smoke. STSS-Demo was launched at 8:20:22 a.m. EDT by NASA for the U.S. Missile Defense Agency. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Tony Gray-Tim Powers
2009-09-25
CAPE CANAVERAL, Fla. – The United Launch Alliance Delta II rocket carrying the Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft rises from a mantle of smoke as it lifts off from Launch Pad 17-B at Cape Canaveral Air Force Station. STSS-Demo was launched at 8:20:22 a.m. EDT by NASA for the U.S. Missile Defense Agency. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Sandra Joseph- Kevin O'Connell
2009-09-23
CAPE CANAVERAL, Fla. – The mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station has been rolled back to reveal the United Launch Alliance Delta II rocket that will launch the Space Tracking and Surveillance System - Demonstrator into orbit. It is being launched by NASA for the Missile Defense System. The hour-long launch window opens at 8 a.m. EDT today. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Dimitri Gerondidakis
2009-09-25
CAPE CANAVERAL, Fla. – The Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft lifts off through a cloud of smoke from Launch Pad 17-B at Cape Canaveral Air Force Station aboard a United Launch Alliance Delta II rocket. It was launched by NASA for the U.S. Missile Defense Agency. Launch was at 8:20:22 a.m. EDT. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Alan Ault
2009-09-25
CAPE CANAVERAL, Fla. – Under a cloud-streaked sky, the Space Tracking and Surveillance System – Demonstrator, or STSS-Demo, waits through the countdown to liftoff from Launch Pad 17-B at Cape Canaveral Air Force Station aboard a United Launch Alliance Delta II rocket. STSS-Demo is being launched by NASA for the U.S. Missile Defense Agency. Liftoff is at 8:20 a.m. EDT. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Jack Pfaller
2009-09-25
CAPE CANAVERAL, Fla. – Under a cloud-streaked sky, the Space Tracking and Surveillance System – Demonstrator, or STSS-Demo, waits through the countdown to liftoff from Launch Pad 17-B at Cape Canaveral Air Force Station aboard a United Launch Alliance Delta II rocket. STSS-Demo is being launched by NASA for the U.S. Missile Defense Agency. Liftoff was at 8:20:22 a.m. EDT. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Jack Pfaller
2009-09-23
CAPE CANAVERAL, Fla. – On Launch Pad 17-B at Cape Canaveral Air Force Station in Florida, the Space Tracking and Surveillance System - Demonstrator spacecraft is bathed in light under a dark, cloudy sky. Rain over Central Florida's east coast caused the scrub of the launch. STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detection, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 24. Photo credit: NASA/Jack Pfaller
2009-08-27
CAPE CANAVERAL, Fla. – The enclosed Space Tracking and Surveillance System – Demonstrator, or STSS-Demo, spacecraft is being lifted into the mobile service tower on Cape Canaveral Air Force Station's Launch Pad 17-B. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jack Pfaller
2009-09-25
CAPE CANAVERAL, Fla. – The Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft lifts off through a cloud of smoke from Launch Pad 17-B at Cape Canaveral Air Force Station aboard a United Launch Alliance Delta II rocket. It was launched by NASA for the U.S. Missile Defense Agency. Launch was at 8:20:22 a.m. EDT. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Jack Pfaller
2009-09-23
CAPE CANAVERAL, Fla. – On Launch Pad 17-B at Cape Canaveral Air Force Station in Florida, the Space Tracking and Surveillance System Demonstrator spacecraft waits for launch under dark, cloudy sky. Rain over Central Florida's east coast caused the scrub of the launch. STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detection, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 24. Photo credit: NASA/Jack Pfaller
Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka
2014-02-21
We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot currently be used as a professional tracking system.
NASA Astrophysics Data System (ADS)
Befort, Daniel J.; Kruschke, Tim; Leckebusch, Gregor C.
2017-04-01
Tropical cyclones over East Asia have huge socio-economic impacts due to their strong wind fields and large rainfall amounts. In particular, the most severe events are associated with huge economic losses; e.g., Typhoon Herb in 1996 is associated with overall losses exceeding US$ 5 billion (Munich Re, 2016). In this study, an objective tracking algorithm is applied to JRA55 reanalysis data from 1979 to 2014 over the Western North Pacific. For this purpose, a purely wind-based algorithm, formerly used to identify extra-tropical wind storms, has been further developed. The algorithm is based on the exceedance of the local 98th percentile to define strong wind fields in gridded climate data. To be detected as a tropical cyclone candidate, the following criteria must be fulfilled: 1) the wind storm must exist for at least eight 6-hourly time steps and 2) the wind field must exceed a minimum size of 130,000 km2 at each time step. The use of wind information is motivated by the focus on damage-related events; however, a pre-selection based on the affected region is necessary to remove events of extra-tropical nature. Using IBTrACS Best Tracks for validation, it is found that about 62% of all detected tropical cyclone events in the JRA55 reanalysis can be matched to an observed best track. As expected, the relative number of matched tracks increases with the wind intensity of the event, with a hit rate of about 98% for Violent Typhoons, above 90% for Very Strong Typhoons and about 75% for Typhoons. Overall, these results are encouraging, as the parameters used to detect tropical cyclones in JRA55, e.g. the minimum area, are also suitable for detecting TCs in most CMIP5 simulations and will thus allow estimates of potential future changes.
Image-based systems for space surveillance: from images to collision avoidance
NASA Astrophysics Data System (ADS)
Pyanet, Marine; Martin, Bernard; Fau, Nicolas; Vial, Sophie; Chalte, Chantal; Beraud, Pascal; Fuss, Philippe; Le Goff, Roland
2011-11-01
In many space systems, imaging is a core technology for fulfilling the mission requirements. Depending on the application, the needs and the constraints are different, and imaging systems can offer a large variety of configurations in terms of wavelength, resolution, field-of-view, focal length or sensitivity. Adequate image processing algorithms allow the extraction of the needed information and the interpretation of images. As a prime contractor for many major civil or military projects, Astrium ST is very involved in the proposition, development and realization of new image-based techniques and systems for space-related purposes. Among the different applications, space surveillance is a major stake for the future of space transportation. Indeed, studies show that the number of debris in orbit is growing exponentially and the already existing population of small and medium debris is a concrete threat to operational satellites. This paper presents Astrium ST activities regarding space surveillance for space situational awareness (SSA) and space traffic management (STM). Among other possible SSA architectures, the relevance of a ground-based optical station network is investigated. The objective is to detect and track space debris and maintain an exhaustive, accurate and up-to-date catalogue in order to assess collision risk for satellites and space vehicles. The system is composed of different types of optical stations dedicated to specific functions (survey, passive tracking, active tracking), distributed around the globe. To support these investigations, two in-house operational breadboards were implemented and are operated for survey and tracking purposes. This paper focuses on Astrium ST's end-to-end optical survey concept. For the detection of new debris, a network of wide field-of-view survey stations is considered: those stations are able to detect small objects, and the associated image processing (detection and tracking) allows a preliminary restitution of their orbit.
Method of center localization for objects containing concentric arcs
NASA Astrophysics Data System (ADS)
Kuznetsova, Elena G.; Shvets, Evgeny A.; Nikolaev, Dmitry P.
2015-02-01
This paper proposes a method for automatic center location of objects containing concentric arcs. The method utilizes structure tensor analysis and voting scheme optimized with Fast Hough Transform. Two applications of the proposed method are considered: (i) wheel tracking in video-based system for automatic vehicle classification and (ii) tree growth rings analysis on a tree cross cut image.
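As a rough illustration of the voting idea described above (not the paper's optimized Fast Hough Transform implementation), the sketch below lets every strong-gradient pixel vote along its gradient line; for concentric arcs those lines intersect near the common center. The gradient threshold and the line-rasterization step count are arbitrary placeholders.

    import numpy as np

    def locate_center(image, grad_thresh=0.1, n_steps=200):
        # Each strong-gradient pixel votes along its gradient line, because the
        # gradients of circular edges point toward or away from the center.
        # (The paper accelerates this voting with a Fast Hough Transform.)
        gy, gx = np.gradient(image.astype(float))
        mag = np.hypot(gx, gy)
        acc = np.zeros_like(mag)
        ys, xs = np.nonzero(mag > grad_thresh * mag.max())
        h, w = image.shape
        for y, x in zip(ys, xs):
            dx, dy = gx[y, x] / mag[y, x], gy[y, x] / mag[y, x]
            for t in np.linspace(-max(h, w), max(h, w), n_steps):
                cx, cy = int(round(x + t * dx)), int(round(y + t * dy))
                if 0 <= cx < w and 0 <= cy < h:
                    acc[cy, cx] += 1
        cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
        return cx, cy

A production implementation would accumulate the votes analytically per line rather than rasterizing each line explicitly, which is where the Fast Hough Transform optimization comes in.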
Algorithms for detection of objects in image sequences captured from an airborne imaging system
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak
1995-01-01
This research was initiated as a part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to the independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model based tracking algorithm. The position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified and possible solutions to build a practical working system are investigated.
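For readers wanting a concrete picture of the tracking stage, the following is a minimal constant-velocity Kalman filter sketch; the paper itself uses an extended Kalman filter to estimate position and velocity in world coordinates, so treat this as a simplified, image-plane stand-in with illustrative noise parameters.

    import numpy as np

    class ConstantVelocityKF:
        # Minimal linear Kalman filter over the state [px, py, vx, vy];
        # the paper's world-coordinate EKF would replace the linear H with a
        # nonlinear camera projection model.
        def __init__(self, dt=1.0, q=1e-2, r=1.0):
            self.x = np.zeros(4)
            self.P = np.eye(4) * 1e3
            self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
            self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
            self.Q = np.eye(4) * q          # process noise (placeholder value)
            self.R = np.eye(2) * r          # measurement noise (placeholder value)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]

        def update(self, z):
            y = np.asarray(z, dtype=float) - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2]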
Mark Tracking: Position/orientation measurements using 4-circle mark and its tracking experiments
NASA Technical Reports Server (NTRS)
Kanda, Shinji; Okabayashi, Keijyu; Maruyama, Tsugito; Uchiyama, Takashi
1994-01-01
Future space robots require position and orientation tracking with visual feedback control to track and capture floating objects and satellites. We developed a four-circle mark that is useful for this purpose. With this mark, four geometric center positions as feature points can be extracted from the mark by simple image processing. We also developed a position and orientation measurement method that uses the four feature points in our mark. The mark gave good enough image measurement accuracy to let space robots approach and contact objects. A visual feedback control system using this mark enabled a robot arm to track a target object accurately. The control system was able to tolerate a time delay of 2 seconds.
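A minimal sketch of how a pose could be recovered from the four extracted circle centers, assuming the mark geometry and camera intrinsics are known. The square layout and dimensions below are hypothetical placeholders, and OpenCV's general PnP solver stands in for the paper's dedicated measurement method.

    import numpy as np
    import cv2

    # Hypothetical mark geometry: centers of the four circles in the mark's own
    # coordinate frame (metres); the actual layout in the paper may differ.
    MARK_POINTS = np.array([[-0.05, -0.05, 0.0],
                            [ 0.05, -0.05, 0.0],
                            [ 0.05,  0.05, 0.0],
                            [-0.05,  0.05, 0.0]], dtype=np.float64)

    def mark_pose(image_points, camera_matrix, dist_coeffs=None):
        # image_points: 4x2 array of the extracted circle centers in pixels,
        # ordered consistently with MARK_POINTS.
        if dist_coeffs is None:
            dist_coeffs = np.zeros(5)
        ok, rvec, tvec = cv2.solvePnP(MARK_POINTS,
                                      np.asarray(image_points, dtype=np.float64),
                                      camera_matrix, dist_coeffs)
        R, _ = cv2.Rodrigues(rvec)   # rotation of the mark in the camera frame
        return ok, R, tvec

The resulting rotation and translation could then feed a visual feedback control loop of the kind evaluated in the paper.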
Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments
NASA Astrophysics Data System (ADS)
Pretto, N.; Poiesi, F.
2017-11-01
We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use Google Cardboard as an HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.
Object-oriented model-driven control
NASA Technical Reports Server (NTRS)
Drysdale, A.; Mcroberts, M.; Sager, J.; Wheeler, R.
1994-01-01
A monitoring and control subsystem architecture has been developed that capitalizes on the use of model-driven monitoring and predictive control, knowledge-based data representation, and artificial reasoning in an operator support mode. We have developed an object-oriented model of a Controlled Ecological Life Support System (CELSS). The model, based on the NASA Kennedy Space Center CELSS breadboard data, tracks carbon, hydrogen, oxygen, carbon dioxide, and water. It estimates and tracks resource-related parameters such as mass, energy, and manpower, and measurements such as the growing area required for balance. We are developing an interface with the breadboard systems that is compatible with artificial reasoning. Initial work is being done on the use of expert systems and user interface development. This paper presents an approach to defining universally applicable CELSS monitoring and control issues, and implementing appropriate monitoring and control capability for a particular instance: the KSC CELSS Breadboard Facility.
Phenomenal permanence and the development of predictive tracking in infancy.
Bertenthal, Bennett I; Longo, Matthew R; Kenny, Sarah
2007-01-01
The perceived spatiotemporal continuity of objects depends on the way they appear and disappear as they move in the spatial layout. This study investigated whether infants' predictive tracking of a briefly occluded object is sensitive to the manner by which the object disappears and reappears. Five-, 7-, and 9-month-old infants were shown a ball rolling across a visual scene and briefly disappearing via kinetic occlusion, instantaneous disappearance, implosion, or virtual occlusion. Three different measures converged to show that predictive tracking increased with age and that infants were most likely to anticipate the reappearance of the ball following kinetic occlusion. These results suggest that infants' knowledge of the permanence and nonpermanence of objects is embodied in their predictive tracking.
Objective assessment of operator performance during ultrasound-guided procedures.
Tabriz, David M; Street, Mandie; Pilgram, Thomas K; Duncan, James R
2011-09-01
Simulation permits objective assessment of operator performance in a controlled and safe environment. Image-guided procedures often require accurate needle placement, and we designed a system to monitor how ultrasound guidance is used during needle advancement toward a target. The results were correlated with other estimates of operator skill. The simulator consisted of a tissue phantom, ultrasound unit, and electromagnetic tracking system. Operators were asked to guide a needle toward a visible point target. Performance was video-recorded and synchronized with the electromagnetic tracking data. A series of algorithms based on motor control theory and human information processing were used to convert raw tracking data into different performance indices. Scoring algorithms converted the tracking data into efficiency, quality, task difficulty, and targeting scores that were aggregated to create performance indices. After initial feasibility testing, a standardized assessment was developed. Operators (N = 12) with a broad spectrum of skill and experience were enrolled and tested. Overall scores were based on performance during ten simulated procedures. Prior clinical experience was used to independently estimate operator skill. When summed, the performance indices correlated well with estimated skill. Operators with minimal or no prior experience scored markedly lower than experienced operators. The overall score tended to increase with the operator's clinical experience. Operator experience was linked to decreased variation in multiple aspects of performance. The aggregated results of multiple trials provided the best correlation between estimated skill and performance. A metric for the operator's ability to keep the needle aimed at the target discriminated between operators with different levels of experience. This study used a highly focused task model, standardized assessment, and objective data analysis to assess performance during simulated ultrasound-guided needle placement. The performance indices were closely related to operator experience.
Target tracking and surveillance by fusing stereo and RFID information
NASA Astrophysics Data System (ADS)
Raza, Rana H.; Stockman, George C.
2012-06-01
Ensuring security in high risk areas such as an airport is an important but complex problem. Effectively tracking personnel, containers, and machines is a crucial task. Moreover, security and safety require understanding the interaction of persons and objects. Computer vision (CV) has been a classic tool; however, variable lighting, imaging, and random occlusions present difficulties for real-time surveillance, resulting in erroneous object detection and trajectories. Determining object ID via CV at any instant of time in a crowded area is computationally prohibitive, yet the trajectories of personnel and objects should be known in real time. Radio Frequency Identification (RFID) can be used to reliably identify target objects and can even locate targets at coarse spatial resolution, while CV provides fuzzy features for target ID at finer resolution. Our research demonstrates benefits obtained when most objects are "cooperative" by being RFID tagged. Fusion provides a method to simplify the correspondence problem in 3D space. A surveillance system can query for unique object ID as well as tag ID information, such as target height, texture, shape and color, which can greatly enhance scene analysis. We extend geometry-based tracking so that intermittent information on ID and location can be used in determining a set of trajectories of N targets over T time steps. We show that partial target information obtained through RFID can reduce computation time (by 99.9% in some cases) and also increase the likelihood of producing correct trajectories. We conclude that real-time decision-making should be possible if the surveillance system can integrate information effectively between the sensor level and activity understanding level.
High resolution imaging of a subsonic projectile using automated mirrors with large aperture
NASA Astrophysics Data System (ADS)
Tateno, Y.; Ishii, M.; Oku, H.
2017-02-01
Visual tracking of high-speed projectiles is required for studying the aerodynamics around such objects. One solution to this problem is a tracking method based on the so-called 1 ms Auto Pan-Tilt (1ms-APT) system that we proposed in previous work, which consists of rotational mirrors and a high-speed image processing system. However, the images obtained with that system did not have high enough resolution to realize detailed measurement of the projectiles because of the size of the mirrors. In this study, we propose a new system consisting of enlarged mirrors for tracking high-speed projectiles so as to achieve higher-resolution imaging, and we confirmed the effectiveness of the system via an experiment in which a projectile flying at subsonic speed was tracked.
Object tracking based on harmony search: comparative study
NASA Astrophysics Data System (ADS)
Gao, Ming-Liang; He, Xiao-Hai; Luo, Dai-Sheng; Yu, Yan-Mei
2012-10-01
Visual tracking can be treated as an optimization problem. A new meta-heuristic optimization algorithm, Harmony Search (HS), was first applied to visual tracking by Fourie et al. As those authors point out, many questions still require ongoing research. Our work is a continuation of Fourie's study, with four prominent improved variants of HS, namely Improved Harmony Search (IHS), Global-best Harmony Search (GHS), Self-adaptive Harmony Search (SHS) and Differential Harmony Search (DHS), adopted into the tracking system. Their performance is tested and analyzed on multiple challenging video sequences. Experimental results show that IHS is best, with DHS ranking second among the four improved trackers when the iteration number is small. However, the differences between all four trackers diminish gradually as the number of iterations increases.
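To make the optimization view of tracking concrete, here is a sketch of the baseline Harmony Search loop on which the four variants above build (they mainly change how the hmcr and par parameters are set or how new harmonies are generated). The cost function would typically measure appearance dissimilarity between a candidate window and the target template; all parameter values shown are illustrative.

    import numpy as np

    def harmony_search(cost, bounds, hms=20, hmcr=0.9, par=0.3, iters=200, rng=None):
        # cost: maps a candidate vector (e.g. an (x, y) window position) to a
        # scalar to minimize. bounds: per-dimension (low, high) search limits,
        # e.g. a window around the previous object position.
        rng = rng or np.random.default_rng()
        bounds = np.asarray(bounds, dtype=float)
        dim = len(bounds)
        memory = rng.uniform(bounds[:, 0], bounds[:, 1], size=(hms, dim))
        scores = np.array([cost(h) for h in memory])
        for _ in range(iters):
            new = np.empty(dim)
            for d in range(dim):
                if rng.random() < hmcr:                    # memory consideration
                    new[d] = memory[rng.integers(hms), d]
                    if rng.random() < par:                 # pitch adjustment
                        new[d] += rng.normal(0.0, 0.05 * (bounds[d, 1] - bounds[d, 0]))
                else:                                      # random selection
                    new[d] = rng.uniform(bounds[d, 0], bounds[d, 1])
            new = np.clip(new, bounds[:, 0], bounds[:, 1])
            s = cost(new)
            worst = np.argmax(scores)
            if s < scores[worst]:                          # replace the worst harmony
                memory[worst], scores[worst] = new, s
        return memory[np.argmin(scores)], scores.min()

For tracking, cost(h) could return, for instance, one minus the normalized correlation between the target template and the image patch centered at h.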
NASA Technical Reports Server (NTRS)
Kreifeldt, J. G.; Parkin, L.; Wempe, T. E.; Huff, E. F.
1975-01-01
Perceived orderliness in the ground tracks of five A/C during their simulated flights was studied. Dynamically developing ground tracks for five A/C from 21 separate runs were reproduced from computer storage and displayed on CRTs to professional pilots and controllers for their evaluations and preferences under several criteria. The ground tracks were developed in 20 seconds, as opposed to the 5 minutes of simulated flight, using speedup techniques for display. Metric and nonmetric multidimensional scaling techniques are being used to analyze the subjective responses in an effort to: (1) determine the meaningfulness of basing decisions on such complex subjective criteria; (2) compare pilot/controller perceptual spaces; (3) determine the dimensionality of the subjects' perceptual spaces; and thereby (4) determine objective measures suitable for comparing alternative traffic management simulations.
Dynamics and control of robot for capturing objects in space
NASA Astrophysics Data System (ADS)
Huang, Panfeng
Space robots are expected to perform intricate tasks in future space services, such as satellite maintenance, refueling, and replacing orbital replacement units (ORUs). To realize these missions, capturing operations cannot be avoided. Such operations encounter challenges because space robots have unique characteristics not found in ground-based robots, such as dynamic singularities, dynamic coupling between the manipulator and the space base, limited energy supply, and operation without a fixed base. In addition, contacts and impacts may be unavoidable during the capturing operation. Therefore, the dynamics and control problems of space robots capturing objects are significant research topics if the robots are to be deployed for space services. A typical servicing operation mainly includes three phases: capturing the object, berthing and docking the object, and then repairing the target. This thesis therefore focuses on resolving challenging problems during capturing the object, berthing and docking, and related operations. In this thesis, I study and analyze the dynamics and control problems of a space robot capturing objects. This work has potential impact in space robotic applications. I first study the contact and impact dynamics of the space robot and objects, focusing on analyzing the impact dynamics and mapping the relationship between impact influence and speed. Then, I develop the fundamental theory for planning a minimum-collision trajectory for the space robot and designing the configuration of the space robot at the moment of capture. To compensate for the attitude of the space base during the capturing approach operation, a new balance control concept is developed that can effectively balance the attitude of the space base using the dynamic couplings. This balance control concept helps in understanding the nature of space dynamic coupling and can be readily applied to compensate for or minimize the disturbance to the space base. After capturing the object, the space robot must complete two tasks: berthing the object, and re-orienting the attitude of the whole robot system for communication and power supply. I propose a method to accomplish these two tasks simultaneously using manipulator motion only. The ultimate goal of space services is to realize capture and manipulation autonomously. I therefore propose an effective approach based on learning human skill to track and capture objects automatically in space. With human-teaching demonstration, the space robot is able to learn and abstract human tracking and capturing skill using an efficient neural-network learning architecture that combines flexible Cascade Neural Networks with Node Decoupled Extended Kalman Filtering (CNN-NDEKF). The simulation results attest that this approach is useful and feasible for tracking trajectory planning and capturing by the space robot. Finally, I propose a novel approach based on Genetic Algorithms (GAs) to optimize the approach trajectory of space robots in order to realize effective and stable operations. I complete the minimum-torque path planning to save the limited energy available in space, and design the minimum-jerk trajectory for the stabilization of the space manipulator and its space base. These optimization algorithms are important and useful for the application of space robots.
Real-time Human Activity Recognition
NASA Astrophysics Data System (ADS)
Albukhary, N.; Mustafah, Y. M.
2017-11-01
The traditional Closed-circuit Television (CCTV) system requires a human to monitor the CCTV 24/7, which is inefficient and costly. Therefore, there is a need for a system which can recognize human activity effectively in real time. This paper concentrates on recognizing simple activities such as walking, running, sitting, standing and landing by using image processing techniques. Firstly, object detection is done by using background subtraction to detect moving objects. Then, object tracking and object classification are constructed so that different persons can be differentiated by using feature detection. Geometrical attributes of the tracked object, namely the centroid and aspect ratio of the identified track, are manipulated so that simple activities can be detected.
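A minimal sketch of the detection-and-classification pipeline described above, using OpenCV's MOG2 background subtractor and a toy rule on blob aspect ratio and vertical centroid motion. The video file name, area threshold and classification thresholds are placeholders, not values from the paper, and a single-person scene is assumed for simplicity.

    import cv2

    def classify_activity(aspect_ratio, dy):
        # Toy rule on the blob's width/height ratio and vertical centroid motion;
        # the thresholds are illustrative only.
        if aspect_ratio > 1.2:
            return "sitting/lying"
        return "running" if abs(dy) > 5 else "walking/standing"

    cap = cv2.VideoCapture("input.avi")            # hypothetical input file
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    prev_cy = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
        # OpenCV 4.x return signature assumed (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 500:           # ignore small noise blobs
                continue
            x, y, w, h = cv2.boundingRect(c)
            cy = y + h / 2
            dy = 0 if prev_cy is None else cy - prev_cy
            prev_cy = cy
            print(classify_activity(w / h, dy))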
Detection and Tracking of Moving Objects with Real-Time Onboard Vision System
NASA Astrophysics Data System (ADS)
Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.
2017-05-01
Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is developing a set of algorithms which can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their position. The results can be used to create onboard vision systems for aircraft, including small and unmanned aircraft.
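The first of the three parts, estimation and compensation of geometric transformations, can be approximated with a feature-based homography, as sketched below; ORB features and the RANSAC threshold are assumptions of this sketch, and the actual algorithms in the paper may differ.

    import cv2
    import numpy as np

    orb = cv2.ORB_create(1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def motion_mask(prev_gray, curr_gray):
        # Estimate camera motion between frames with a homography, warp the
        # previous frame onto the current one, then difference the aligned
        # frames: residual differences indicate independently moving objects.
        k1, d1 = orb.detectAndCompute(prev_gray, None)
        k2, d2 = orb.detectAndCompute(curr_gray, None)
        matches = matcher.match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        h, w = curr_gray.shape
        warped = cv2.warpPerspective(prev_gray, H, (w, h))
        diff = cv2.absdiff(curr_gray, warped)
        _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
        return mask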
Grasping rigid objects in zero-g
NASA Astrophysics Data System (ADS)
Anderson, Greg D.
1993-12-01
The extra vehicular activity helper/retriever (EVAHR) is a prototype for an autonomous free- flying robotic astronaut helper. The ability to grasp a moving object is a fundamental skill required for any autonomous free-flyer. This paper discusses an algorithm that couples resolved acceleration control with potential field based obstacle avoidance to enable a manipulator to track and capture a rigid object in (imperfect) zero-g while avoiding joint limits, singular configurations, and unintentional impacts between the manipulator and the environment.
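For illustration, the obstacle-avoidance part can be pictured as gradient descent on a classic attractive/repulsive potential, as in the sketch below. The gains, the influence distance d0 and the step size are arbitrary placeholders, and the coupling with resolved acceleration control described in the paper is not shown.

    import numpy as np

    def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=0.3, step=0.01):
        # One gradient-descent step on an attractive/repulsive potential.
        # pos, goal: 3-vectors (metres); obstacles: iterable of 3-vectors.
        pos, goal = np.asarray(pos, float), np.asarray(goal, float)
        force = k_att * (goal - pos)                    # attraction toward the target
        for obs in obstacles:
            d_vec = pos - np.asarray(obs, float)
            d = np.linalg.norm(d_vec)
            if 1e-9 < d < d0:                           # repulsion only near obstacles
                force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * d_vec
        return pos + step * force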
ERIC Educational Resources Information Center
Keane, Brian P.; Mettler, Everett; Tsoi, Vicky; Kellman, Philip J.
2011-01-01
Multiple object tracking (MOT) is an attentional task wherein observers attempt to track multiple targets among moving distractors. Contour interpolation is a perceptual process that fills-in nonvisible edges on the basis of how surrounding edges (inducers) are spatiotemporally related. In five experiments, we explored the automaticity of…
Object tracking via background subtraction for monitoring illegal activity in crossroad
NASA Astrophysics Data System (ADS)
Ghimire, Deepak; Jeong, Sunghwan; Park, Sang Hyun; Lee, Joonwhoan
2016-07-01
In the field of intelligent transportation systems, a great number of vision-based techniques have been proposed to prevent pedestrians from being hit by vehicles. This paper presents a system that can perform pedestrian and vehicle detection and monitoring of illegal activity at zebra crossings. At a zebra crossing, according to the traffic light status, a driver or pedestrian should be warned early if they make any illegal moves, so that a collision can be fully avoided. In this research, we first detect the traffic light status for pedestrians and monitor the crossroad for vehicle and pedestrian moves. Background-subtraction-based object detection and tracking is performed to detect pedestrians and vehicles at crossroads. Shadow removal, blob segmentation, trajectory analysis, etc., are used to improve the object detection and classification performance. We demonstrate the experiment on several video sequences recorded at different times and in different environments, such as daytime and nighttime, and sunny and rainy conditions. Our experimental results show that such a simple and efficient technique can be used successfully as a traffic surveillance system to prevent accidents at zebra crossings.
Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S
2015-02-09
A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.
Gaia-GBOT asteroid finding programme (gbot.obspm.fr)
NASA Astrophysics Data System (ADS)
Bouquillon, Sébastien; Altmann, Martin; Taris, Francois; Barache, Christophe; Carlucci, Teddy; Tanga, Paolo; Thuillot, William; Marchant, Jon; Steele, Iain; Lister, Tim; Berthier, Jerome; Carry, Benoit; David, Pedro; Cellino, Alberto; Hestroffer, Daniel J.; Andrei, Alexandre Humberto; Smart, Ricky
2016-10-01
The Ground Based Optical Tracking group (GBOT) consists of about ten scientists involved in the Gaia mission by ESA. Its main task is the optical tracking of the Gaia satellite itself [1]. This novel tracking method, in addition to the standard radiometric ones, is necessary to ensure that the Gaia mission goal in terms of astrometric precision level is reached for all objects. This optical tracking is based on daily observations performed throughout the mission using the optical CCDs of ESO's VST in Chile, of the Liverpool Telescope in La Palma and of the two LCOGT Faulkes Telescopes in Hawaii and Australia. Each night, GBOT attempts to obtain a sequence of frames covering a 20 min total period close to the Gaia meridian transit time. In each sequence, Gaia is seen as a faint moving object (Rmag ~ 21, speed > 1"/min) and its daily astrometric accuracy has to be better than 0.02" to meet the Gaia mission requirements. The GBOT Astrometric Reduction Pipeline (GARP) [2] has been specifically developed to reach this precision. More recently, a secondary task has been assigned to GBOT, which consists of detecting and analysing Solar System Objects (SSOs) serendipitously recorded in the GBOT data. Indeed, since Gaia oscillates around the Sun-Earth L2 point, the fields of GBOT observations are near the Ecliptic and roughly located opposite to the Sun, which is advantageous for SSO observations and studies. In particular, these SSO data can potentially be very useful to help in the determination of their absolute magnitudes, with important applications to the scientific exploitation of the WISE and Gaia missions. For these reasons, an automatic SSO detection system has been created to identify moving objects in GBOT sequences of observations. Since the beginning of 2015, this SSO detection system, added to GARP for performing high-precision astrometry for SSOs, has been fully operational. To date, around 9000 asteroids have been detected. The mean delay between the time of observation and the submission of the SSO reduction results to the MPC is less than 12 hours, allowing rapid follow-up of new objects. [1] Altmann et al. 2014, SPIE, 9149. [2] Bouquillon et al. 2014, SPIE, 9152.
Software for Analyzing Sequences of Flow-Related Images
NASA Technical Reports Server (NTRS)
Klimek, Robert; Wright, Ted
2004-01-01
Spotlight is a computer program for analysis of sequences of images generated in combustion and fluid physics experiments. Spotlight can perform analysis of a single image in an interactive mode or a sequence of images in an automated fashion. The primary type of analysis is tracking of positions of objects over sequences of frames. Features and objects that are typically tracked include flame fronts, particles, droplets, and fluid interfaces. Spotlight automates the analysis of object parameters, such as centroid position, velocity, acceleration, size, shape, intensity, and color. Images can be processed to enhance them before statistical and measurement operations are performed. An unlimited number of objects can be analyzed simultaneously. Spotlight saves results of analyses in a text file that can be exported to other programs for graphing or further analysis. Spotlight is a graphical-user-interface-based program that at present can be executed on Microsoft Windows and Linux operating systems. A version that runs on Macintosh computers is being considered.
NASA Technical Reports Server (NTRS)
Aller, R. O.
1985-01-01
The Tracking and Data Relay Satellite System (TDRSS) represents the principal element of a new space-based tracking and communication network which will support NASA spaceflight missions in low Earth orbit. In its complete configuration, the TDRSS network will include a space segment consisting of three highly specialized communication satellites in geosynchronous orbit, and a ground segment consisting of an earth terminal and associated data handling and control facilities. The objective of the TDRSS network is to provide communication and data relay services between earth-orbiting spacecraft and their ground-based mission control and data handling centers. The first TDRSS spacecraft has now been in service for two years. The present paper is concerned with the TDRSS experience from the perspective of the various programmatic and economic considerations that relate to the program.
A Lyapunov-based Approach for Time-Coordinated 3D Path-Following of Multiple Quadrotors in SO(3)
2012-12-10
January 2006. [22] T. Lee, "Robust adaptive geometric tracking controls on SO(3) with an application to the attitude dynamics of a quadrotor UAV," 2011. ... in the presence of time-varying communication networks and spatial and temporal constraints. The objective is to enable n quadrotors to track predefined ... developing control laws to solve the Time-Coordinated 3D Path-Following task for multiple quadrotor UAVs in the presence of time-varying communication
NASA Astrophysics Data System (ADS)
Petrochenko, Andrey; Konyakhin, Igor
2017-06-01
With the development of robotics, a variety of systems for three-dimensional reconstruction and mapping based on image sets received from optical sensors have become increasingly popular. The main objective of technical and robot vision is the detection, tracking and classification of objects in the space in which these systems and robots operate [15,16,18]. Two-dimensional images sometimes do not contain sufficient information to address such problems: constructing a map of the surrounding area for route planning; identifying objects and tracking their relative position and movement; and selecting objects and their attributes to complement the knowledge base. Three-dimensional reconstruction of the surrounding space provides information on the relative positions of objects, their shape and their surface texture. Systems trained on the basis of three-dimensional reconstruction can compare two-dimensional images against the three-dimensional model, which allows the recognition of volumetric objects in flat images. The problem of the relative orientation of industrial robots, with the ability to build three-dimensional scenes of controlled surfaces, is becoming increasingly relevant.
Real-time tracking of visually attended objects in virtual environments and its application to LOD.
Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon
2009-01-01
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting high computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
Robust multiperson tracking from a mobile platform.
Ess, Andreas; Leibe, Bastian; Schindler, Konrad; van Gool, Luc
2009-10-01
In this paper, we address the problem of multiperson tracking in busy pedestrian zones using a stereo rig mounted on a mobile platform. The complexity of the problem calls for an integrated solution that extracts as much visual information as possible and combines it through cognitive feedback cycles. We propose such an approach, which jointly estimates camera position, stereo depth, object detection, and tracking. The interplay between those components is represented by a graphical model. Since the model has to incorporate object-object interactions and temporal links to past frames, direct inference is intractable. We, therefore, propose a two-stage procedure: for each frame, we first solve a simplified version of the model (disregarding interactions and temporal continuity) to estimate the scene geometry and an overcomplete set of object detections. Conditioned on these results, we then address object interactions, tracking, and prediction in a second step. The approach is experimentally evaluated on several long and difficult video sequences from busy inner-city locations. Our results show that the proposed integration makes it possible to deliver robust tracking performance in scenes of realistic complexity.
Infrared tag and track technique
Partin, Judy K.; Stone, Mark L.; Slater, John; Davidson, James R.
2007-12-04
A method of covertly tagging an object for later tracking includes providing a material capable of at least one of being applied to the object and being included in the object, which material includes deuterium; and performing at least one of applying the material to the object and including the material in the object, in a manner in which the appearance of the object is not changed to the naked eye.
A review of vision-based motion analysis in sport.
Barris, Sian; Button, Chris
2008-01-01
Efforts at player motion tracking have traditionally involved a range of data collection techniques from live observation to post-event video analysis where player movement patterns are manually recorded and categorized to determine performance effectiveness. Due to the considerable time required to manually collect and analyse such data, research has tended to focus only on small numbers of players within predefined playing areas. Whilst notational analysis is a convenient, practical and typically inexpensive technique, the validity and reliability of the process can vary depending on a number of factors, including how many observers are used, their experience, and the quality of their viewing perspective. Undoubtedly the application of automated tracking technology to team sports has been hampered because of inadequate video and computational facilities available at sports venues. However, the complex nature of movement inherent to many physical activities also represents a significant hurdle to overcome. Athletes tend to exhibit quick and agile movements, with many unpredictable changes in direction and also frequent collisions with other players. Each of these characteristics of player behaviour violate the assumptions of smooth movement on which computer tracking algorithms are typically based. Systems such as TRAKUS, SoccerMan, TRAKPERFORMANCE, Pfinder and Prozone all provide extrinsic feedback information to coaches and athletes. However, commercial tracking systems still require a fair amount of operator intervention to process the data after capture and are often limited by the restricted capture environments that can be used and the necessity for individuals to wear tracking devices. Whilst some online tracking systems alleviate the requirements of manual tracking, to our knowledge a completely automated system suitable for sports performance is not yet commercially available. Automatic motion tracking has been used successfully in other domains outside of elite sport performance, notably for surveillance in the military and security industry where automatic recognition of moving objects is achievable because identification of the objects is not necessary. The current challenge is to obtain appropriate video sequences that can robustly identify and label people over time, in a cluttered environment containing multiple interacting people. This problem is often compounded by the quality of video capture, the relative size and occlusion frequency of people, and also changes in illumination. Potential applications of an automated motion detection system are offered, such as: planning tactics and strategies; measuring team organisation; providing meaningful kinematic feedback; and objective measures of intervention effectiveness in team sports, which could benefit coaches, players, and sports scientists.
Model-based occluded object recognition using Petri nets
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Hura, Gurdeep S.
1998-09-01
This paper discusses the use of Petri nets to model the process of object matching between an image and a model under different 2D geometric transformations. Such matching finds applications in sensor-based robot control, flexible manufacturing systems, industrial inspection, etc. A description approach for object structure is presented based on its topological structure relation, called the Point-Line Relation Structure (PLRS). It is shown how Petri nets can be used to model the matching process, and an optimal or near-optimal matching can be obtained by tracking the reachability graph of the net. The experimental results show that objects can be successfully identified and located under 2D transformations such as translations, rotations, scale changes, and distortions due to partial occlusion.
76 FR 31968 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-02
.... Proposed Project: SAMHSA SOAR Web-Based Data Form--NEW In 2009 the Substance Abuse and Mental Health... in all states. SOAR's primary objective is to improve the allowance rate for Social Security... Center under SAMHSA's direction developed a web-based data form that case managers can use to track the...
76 FR 51044 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-17
.... Project: SAMHSA SOAR Web-Based Data Form--NEW In 2009 the Substance Abuse and Mental Health Services... states. SOAR's primary objective is to improve the allowance rate for Social Security Administration (SSA... SAMHSA's direction developed a web-based data form that case managers can use to track the progress of...
Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery
Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng
2016-01-01
Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness. PMID:27023564
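A rough sketch of the hybrid descriptor plus linear SVM stage described above, assuming scikit-image, SciPy and scikit-learn are available. The fixed patch size, the 8x8 low-frequency DCT block and the SVM regularization value are placeholders rather than the paper's settings.

    import numpy as np
    from skimage.feature import hog
    from scipy.fft import dctn
    from sklearn.svm import LinearSVC

    def hybrid_descriptor(patch, n_dct=64):
        # Concatenate HOG with low-frequency DCT coefficients of a candidate blob
        # patch (grayscale, already resized to a fixed size, e.g. 64x128).
        h = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        d = dctn(patch.astype(float), norm="ortho")
        d = d[:8, :8].ravel()[:n_dct]        # keep only the low-frequency block
        return np.concatenate([h, d])

    def train_classifier(patches, labels):
        # patches: list of equally sized grayscale blobs; labels: 1 = pedestrian, 0 = clutter.
        X = np.stack([hybrid_descriptor(p) for p in patches])
        return LinearSVC(C=1.0).fit(X, labels)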
Neural coding in barrel cortex during whisker-guided locomotion
Sofroniew, Nicholas James; Vlasov, Yurii A; Hires, Samuel Andrew; Freeman, Jeremy; Svoboda, Karel
2015-01-01
Animals seek out relevant information by moving through a dynamic world, but sensory systems are usually studied under highly constrained and passive conditions that may not probe important dimensions of the neural code. Here, we explored neural coding in the barrel cortex of head-fixed mice that tracked walls with their whiskers in tactile virtual reality. Optogenetic manipulations revealed that barrel cortex plays a role in wall-tracking. Closed-loop optogenetic control of layer 4 neurons can substitute for whisker-object contact to guide behavior resembling wall tracking. We measured neural activity using two-photon calcium imaging and extracellular recordings. Neurons were tuned to the distance between the animal snout and the contralateral wall, with monotonic, unimodal, and multimodal tuning curves. This rich representation of object location in the barrel cortex could not be predicted based on simple stimulus-response relationships involving individual whiskers and likely emerges within cortical circuits. DOI: http://dx.doi.org/10.7554/eLife.12559.001 PMID:26701910
NASA Astrophysics Data System (ADS)
Huang, Xiaomeng; Hu, Chenqi; Huang, Xing; Chu, Yang; Tseng, Yu-heng; Zhang, Guang Jun; Lin, Yanluan
2018-01-01
Mesoscale convective systems (MCSs) are important components of tropical weather systems and the climate system. Long-term data of MCS are of great significance in weather and climate research. Using long-term (1985-2008) global satellite infrared (IR) data, we developed a novel objective automatic tracking algorithm, which combines a Kalman filter (KF) with the conventional area-overlapping method, to generate a comprehensive MCS dataset. The new algorithm can effectively track small and fast-moving MCSs and thus obtain more realistic and complete tracking results than previous studies. A few examples are provided to illustrate the potential application of the dataset with a focus on the diurnal variations of MCSs over land and ocean regions. We find that the MCSs occurring over land tend to initiate in the afternoon with greater intensity, but the oceanic MCSs are more likely to initiate in the early morning with weaker intensity. A double peak in the maximum spatial coverage is noted over the western Pacific, especially over the southwestern Pacific during the austral summer. Oceanic MCSs also persist for approximately 1 h longer than their continental counterparts.
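A simplified sketch of the association step that combines area overlap with a motion prediction, in the spirit of the algorithm described above. Here a plain constant-velocity extrapolation stands in for the Kalman filter, and the track/detection dictionary fields and the gate value are assumptions of this sketch.

    import numpy as np

    def iou(a, b):
        # Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes.
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
        return inter / area if area > 0 else 0.0

    def associate(tracks, detections, gate=3.0):
        # Match tracked MCSs to newly detected cloud systems: first by box
        # overlap, then, if there is none (small or fast-moving systems), by the
        # distance between the track's predicted centroid and each detection.
        # tracks: {id: {"centroid", "velocity", "box"}}; detections: list of
        # {"centroid", "box"}. Greedy sketch: a detection may be claimed twice.
        links = {}
        for tid, trk in tracks.items():
            pred = trk["centroid"] + trk["velocity"]        # constant-velocity prediction
            best, best_score = None, 0.0
            for j, det in enumerate(detections):
                score = iou(trk["box"], det["box"])
                if score > best_score:
                    best, best_score = j, score
            if best is None:                                # fall back to predicted position
                dists = [np.linalg.norm(pred - det["centroid"]) for det in detections]
                if dists and min(dists) < gate:
                    best = int(np.argmin(dists))
            if best is not None:
                links[tid] = best
        return links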
Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I
2014-01-01
Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as “multiple object tracking,” observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300
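The following toy simulation conveys the comparison discussed above: identical objects are tracked either from remembered positions alone or from positions plus a velocity-based extrapolation, with noisy observations and greedy correspondence. It is only a caricature of the probabilistic models in the paper; all noise levels, counts and the greedy assignment rule are illustrative.

    import numpy as np

    def simulate(extrapolate, n_objects=4, steps=100, obs_noise=0.5, seed=0):
        rng = np.random.default_rng(seed)
        pos = rng.uniform(0, 10, (n_objects, 2))
        vel = rng.normal(0, 0.3, (n_objects, 2))
        belief, prev_belief = pos.copy(), pos.copy()
        correct = 0
        for _ in range(steps):
            vel += rng.normal(0, 0.05, vel.shape)             # headings drift over time
            pos += vel
            obs = pos + rng.normal(0, obs_noise, pos.shape)   # noisy, unlabeled observations
            guess = belief + (belief - prev_belief) if extrapolate else belief
            taken, new_belief = set(), belief.copy()
            # greedy correspondence: tracks with the closest observation claim it first
            order = np.argsort([np.min(np.linalg.norm(obs - g, axis=1)) for g in guess])
            for i in order:
                dists = np.linalg.norm(obs - guess[i], axis=1)
                dists[list(taken)] = np.inf
                j = int(np.argmin(dists))
                taken.add(j)
                correct += int(j == i)                        # did track i keep its own object?
                new_belief[i] = obs[j]
            prev_belief, belief = belief, new_belief
        return correct / (n_objects * steps)

    print("memory only  :", simulate(extrapolate=False))
    print("extrapolation:", simulate(extrapolate=True))

Depending on the noise settings, the extrapolating variant need not outperform the memory-only one, which is the kind of outcome the paper reports for noisy inputs.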
Dynamic Denoising of Tracking Sequences
Michailovich, Oleg; Tannenbaum, Allen
2009-01-01
In this paper, we describe an approach to the problem of simultaneously enhancing image sequences and tracking the objects of interest represented by the latter. The enhancement part of the algorithm is based on Bayesian wavelet denoising, which has been chosen due to its exceptional ability to incorporate diverse a priori information into the process of image recovery. In particular, we demonstrate that, in dynamic settings, useful statistical priors can come both from some reasonable assumptions on the properties of the image to be enhanced as well as from the images that have already been observed before the current scene. Using such priors forms the main contribution of the present paper which is the proposal of the dynamic denoising as a tool for simultaneously enhancing and tracking image sequences. Within the proposed framework, the previous observations of a dynamic scene are employed to enhance its present observation. The mechanism that allows the fusion of the information within successive image frames is Bayesian estimation, while transferring the useful information between the images is governed by a Kalman filter that is used for both prediction and estimation of the dynamics of tracked objects. Therefore, in this methodology, the processes of target tracking and image enhancement “collaborate” in an interlacing manner, rather than being applied separately. The dynamic denoising is demonstrated on several examples of SAR imagery. The results demonstrated in this paper indicate a number of advantages of the proposed dynamic denoising over “static” approaches, in which the tracking images are enhanced independently of each other. PMID:18482881
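As background for the wavelet part of the method, here is a minimal soft-threshold wavelet denoiser using PyWavelets; it applies a fixed universal threshold to a single frame and omits the Bayesian priors and the Kalman-filter coupling across frames that give the paper's dynamic scheme its advantage.

    import numpy as np
    import pywt

    def wavelet_denoise(frame, wavelet="db4", level=3):
        # Soft-threshold wavelet denoising of one image frame. The noise level is
        # estimated from the finest diagonal subband (median absolute deviation)
        # and a universal threshold is applied to all detail coefficients.
        coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thresh = sigma * np.sqrt(2 * np.log(frame.size))
        new_coeffs = [coeffs[0]]
        for detail in coeffs[1:]:
            new_coeffs.append(tuple(pywt.threshold(d, thresh, mode="soft") for d in detail))
        return pywt.waverec2(new_coeffs, wavelet)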
Utku, Semih; Özcanhan, Mehmet Hilal; Unluturk, Mehmet Suleyman
2016-04-01
Patient delivery time is no longer considered the only critical factor in ambulance services. Presently, five clinical performance indicators are used to decide patient satisfaction. Unfortunately, the emergency ambulance services in rapidly growing metropolitan areas do not meet current satisfaction expectations, because of human errors in the management of the objects onboard the ambulances. However, human involvement in the information management of emergency interventions can be reduced by electronic tracking of personnel, assets, consumables and drugs (PACD) carried in the ambulances. Electronic tracking needs the support of automation software, which should be integrated into the overall hospital information system. Our work presents a complete solution based on a centralized database supported by radio frequency identification (RFID) and Bluetooth low energy (BLE) identification and tracking technologies. Each object in an ambulance is identified and tracked by the best-suited technology. The automated identification and tracking reduces manual paper documentation and frees the personnel to better focus on medical activities. The presence and amounts of the PACD are automatically monitored, with warnings about their depletion, non-presence or maintenance dates. The computerized two-way hospital-ambulance communication link provides information sharing and instantaneous feedback for better and faster diagnosis decisions. A fully implemented system is presented, with detailed hardware and software descriptions. The benefits and the clinical outcomes of the proposed system are discussed, which lead to improved personnel efficiency and more effective interventions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Effects of Vocabulary Size on Online Lexical Processing by Preschoolers.
Law, Franzo; Edwards, Jan R
This study was designed to investigate the relationship between vocabulary size and the speed and accuracy of lexical processing in preschoolers between the ages of 30-46 months using an automatic eye tracking task based on the looking-while-listening paradigm (Fernald, Zangl, Portillo, & Marchman, 2008) and mispronunciation paradigm (White & Morgan, 2008). Children's eye gaze patterns were tracked while they looked at two pictures (one familiar object, one unfamiliar object) on a computer screen and simultaneously heard one of three kinds of auditory stimuli: correct pronunciations of the familiar object's name, one-feature mispronunciations of the familiar object's name, or a nonword. The results showed that children with larger expressive vocabularies, relative to children with smaller expressive vocabularies, were more likely to look to a familiar object upon hearing a correct pronunciation and to an unfamiliar object upon hearing a novel word. Results also showed that children with larger expressive vocabularies were more sensitive to mispronunciations; they were more likely to look toward the unfamiliar object rather than the familiar object upon hearing a one-feature mispronunciation of a familiar object-name. These results suggest that children with smaller vocabularies, relative to their larger-vocabulary age peers, are at a disadvantage for learning new words, as well as for processing familiar words.
Li, Songpo; Zhang, Xiaoli; Webb, Jeremy D
2017-12-01
The goal of this paper is to achieve a novel 3-D-gaze-based human-robot-interaction modality, with which a user with motion impairment can intuitively express what tasks he/she wants the robot to do by directly looking at the object of interest in the real world. Toward this goal, we investigate 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. Looking at a specific object reflects what a person is thinking related to that object, and the gaze location contains essential information for object manipulation. A novel gaze vector method is developed to accurately estimate the 3-D coordinates of the object being looked at in real environments, and a novel interpretation framework that mimics human visuomotor functions is designed to increase the control capability of gaze in object grasping tasks. High tracking accuracy was achieved using the gaze vector method. Participants successfully controlled a robotic arm for object grasping by directly looking at the target object. Human 3-D gaze can be effectively employed as an intuitive interaction modality for robotic object manipulation. It is the first time that 3-D gaze is utilized in a real environment to command a robot for a practical application. Three-dimensional gaze tracking is promising as an intuitive alternative for human-robot interaction especially for disabled and elderly people who cannot handle the conventional interaction modalities.
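To illustrate what estimating a 3-D gaze point can involve, the sketch below triangulates the point of regard from two eye rays as the midpoint of their common perpendicular. This is a generic two-ray construction, not necessarily the gaze vector method developed in the paper; the eye origins and directions are assumed to come from a head-mounted eye tracker, expressed in a common world frame.

    import numpy as np

    def gaze_point_3d(o_left, d_left, o_right, d_right):
        # Closest-approach midpoint of two rays: origins o_*, directions d_*.
        o_l, o_r = np.asarray(o_left, float), np.asarray(o_right, float)
        d_l = np.asarray(d_left, float); d_l /= np.linalg.norm(d_l)
        d_r = np.asarray(d_right, float); d_r /= np.linalg.norm(d_r)
        w0 = o_l - o_r
        a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
        d, e = d_l @ w0, d_r @ w0
        denom = a * c - b * b
        if abs(denom) < 1e-9:                 # nearly parallel rays: pick a point on one ray
            s, t = 0.0, e
        else:
            s = (b * e - c * d) / denom
            t = (a * e - b * d) / denom
        return 0.5 * ((o_l + s * d_l) + (o_r + t * d_r))

The estimated 3-D point could then be matched against known object locations to decide which object the user intends the robot to grasp.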
Advanced Engineering Technology for Measuring Performance.
Rutherford, Drew N; D'Angelo, Anne-Lise D; Law, Katherine E; Pugh, Carla M
2015-08-01
The demand for competency-based assessments in surgical training is growing. Use of advanced engineering technology for clinical skills assessment allows for objective measures of hands-on performance. Clinical performance can be assessed in several ways via quantification of an assessee's hand movements (motion tracking), direction of visual attention (eye tracking), levels of stress (physiologic marker measurements), and location and pressure of palpation (force measurements). Innovations in video recording technology and qualitative analysis tools allow for a combination of observer- and technology-based assessments. Overall the goal is to create better assessments of surgical performance with robust validity evidence. Copyright © 2015 Elsevier Inc. All rights reserved.
Eye Movements during Multiple Object Tracking: Where Do Participants Look?
ERIC Educational Resources Information Center
Fehd, Hilda M.; Seiffert, Adriane E.
2008-01-01
This experiment investigated where participants tend to look while keeping track of multiple objects, much like the eye movements a viewer might make when watching a sports game. While eye movements were recorded, participants tracked either 1 or 3 of 8 red dots that moved randomly within a square box on a black background. Results indicated that…
Another Way of Tracking Moving Objects Using Short Video Clips
ERIC Educational Resources Information Center
Vera, Francisco; Romanque, Cristian
2009-01-01
Physics teachers have long employed video clips to study moving objects in their classrooms and instructional labs. A number of approaches exist, both free and commercial, for tracking the coordinates of a point using video. The main characteristics of the method described in this paper are: it is simple to use; coordinates can be tracked using…
MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems
NASA Astrophysics Data System (ADS)
Kopecky, Ken; Winer, Eliot
2014-06-01
Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
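A minimal sketch of the aggregation idea follows, assuming a simple rigid transform per tracking system and a name-keyed object table. The class names and interfaces are illustrative and are not the MetaTracker API.

```python
# Hedged sketch of the aggregation idea: several tracking systems report named
# objects in their own frames; a server transforms each report into a shared
# world frame and exposes one unified object table.
import numpy as np

class TrackerSource:
    def __init__(self, name, R, t):
        self.name = name
        self.R = np.asarray(R, float)   # rotation from sensor frame to world frame
        self.t = np.asarray(t, float)   # sensor origin expressed in the world frame

    def to_world(self, p_sensor):
        return self.R @ np.asarray(p_sensor, float) + self.t

class MetaServer:
    def __init__(self):
        self.objects = {}               # object name -> (world position, source, time)

    def update(self, source, name, p_sensor, timestamp):
        self.objects[name] = (source.to_world(p_sensor), source.name, timestamp)

    def query(self, name):
        return self.objects.get(name)

optical = TrackerSource("optical", np.eye(3), [0.0, 0.0, 0.0])
inertial = TrackerSource("inertial", np.eye(3), [5.0, 0.0, 0.0])   # adjacent room

server = MetaServer()
server.update(optical, "rifle_prop", [1.0, 2.0, 0.5], 0.01)
server.update(inertial, "rifle_prop", [-3.9, 2.0, 0.5], 0.05)      # seamless hand-off
print(server.query("rifle_prop"))
```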
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazur, Thomas R., E-mail: tmazur@radonc.wustl.edu, E-mail: hli@radonc.wustl.edu; Fischer-Valuck, Benjamin W.; Wang, Yuhe
Purpose: To first demonstrate the viability of applying an image processing technique for tracking regions on low-contrast cine-MR images acquired during image-guided radiation therapy, and then outline a scheme that uses tracking data for optimizing gating results in a patient-specific manner. Methods: A first-generation MR-IGRT system—treating patients since January 2014—integrates a 0.35 T MR scanner into an annular gantry consisting of three independent Co-60 sources. Obtaining adequate frame rates for capturing relevant patient motion across large fields-of-view currently requires coarse in-plane spatial resolution. This study (1) initially investigated the feasibility of rapidly tracking dense pixel correspondences across single sagittal-plane images (with both moderate signal-to-noise ratio and spatial resolution) using a matching objective for highly descriptive vectors, called scale-invariant feature transform (SIFT) descriptors, associated with all pixels and describing intensity gradients in local regions around each pixel. (2) To track features more accurately, harmonic analysis was then applied to all pixel trajectories within a region of interest across a short training period. In particular, the procedure adjusts the motion of outlying trajectories whose relative spectral power within a frequency bandwidth consistent with respiration (or another form of periodic motion) does not exceed a threshold value that is manually specified following the training period. To evaluate the tracking reliability after applying this correction, conventional metrics—including Dice similarity coefficients (DSCs), mean tracking errors (MTEs), and Hausdorff distances (HDs)—were used to compare target segmentations obtained via tracking to manually delineated segmentations. Upon confirming the viability of this descriptor-based procedure for reliably tracking features, the study (3) outlines a scheme for optimizing gating parameters—including relative target position and a tolerable margin about this position—derived from a probability density function that is constructed using tracking results obtained just prior to treatment. Results: The feasibility of applying the matching objective for SIFT descriptors toward pixel-by-pixel tracking on cine-MR acquisitions was first demonstrated retrospectively for 19 treatments (spanning various sites). Both with and without motion correction based on harmonic analysis, sub-pixel MTEs were obtained. A mean DSC value of 0.916 ± 0.001 across all patients was obtained without motion correction, with DSC values exceeding 0.85 for all patients considered. While most patients show accurate tracking without motion correction, harmonic analysis does yield a substantial gain in accuracy (defined using HDs) for three particularly challenging subjects. An application of tracking toward a gating optimization procedure was then demonstrated that should allow a physician to balance beam-on time and tissue sparing in a patient-specific manner by tuning several intuitive parameters. Conclusions: Tracking results show high fidelity in assessing intrafractional motion observed on cine-MR acquisitions. Incorporating harmonic analysis during a training period improves the robustness of the tracking for challenging targets. The concomitant gating optimization procedure should allow physicians to quantitatively assess gating effectiveness quickly just prior to treatment in a patient-specific manner.
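Two of the ingredients mentioned above lend themselves to a compact illustration: the Dice similarity coefficient used for evaluation and the relative spectral power of a pixel trajectory within a respiration-consistent band. The sketch below is a generic rendering of those quantities, not the clinical software; the frame rate and band limits are assumed values.

```python
# Illustrative sketch of two evaluation/correction ingredients:
# (1) Dice similarity between a tracked and a manual segmentation mask, and
# (2) the fraction of a pixel trajectory's spectral power inside a respiration band,
#     used to flag outlying trajectories.
import numpy as np

def dice(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def relative_band_power(trajectory, fps, band=(0.1, 0.5)):
    """Fraction of (detrended) spectral power inside the respiration band [Hz]."""
    x = np.asarray(trajectory, float) - np.mean(trajectory)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / power[1:].sum()    # ignore the DC bin

print(dice(np.ones((8, 8)), np.ones((8, 8))))         # 1.0 for identical masks

fps = 4.0                                             # assumed cine frame rate
t = np.arange(0, 60, 1.0 / fps)
breathing = 3.0 * np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.random.randn(t.size)
print(relative_band_power(breathing, fps))            # close to 1.0
```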
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the upper segment of the transportation canister is moved toward the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft, at left. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-20
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers observe as the SV1-SV2 spacecraft is lifted for weighing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-20
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the SV1-SV2 spacecraft is ready to be weighed. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the upper segment of the transportation canister is lifted to be placed on the top of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-27
CAPE CANAVERAL, Fla. – The enclosed Space Tracking and Surveillance System – Demonstrators, or STSS-Demo, spacecraft moves out of the Astrotech payload processing facility. It is being moved to Cape Canaveral Air Force Station's Launch Pad 17-B. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jack Pfaller
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers maneuver one of the second-row segments of the transportation canister that will be placed around the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the SV1 spacecraft is lowered onto the SV2 for mating. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the SV1 spacecraft is lowered onto the SV2 for mating. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-20
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the SV1-SV2 spacecraft sits on the rotation stand after weighing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-20
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers begin center of gravity testing, weighing and balancing on the SV1-SV2 spacecraft. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the upper segment of the transportation canister is moved toward the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft, at bottom left. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers place the second row of segments of the transportation canister around the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the SV1 spacecraft is lowered toward the SV2 for mating. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers attach the upper segment of the transportation canister to the lower segments around the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers check the mating of the SV1 spacecraft onto the SV2. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-09-23
CAPE CANAVERAL, Fla. – The mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station has been rolled back as the countdown proceeds to launch of the United Launch Alliance Delta II rocket with the Space Tracking and Surveillance System - Demonstrator spacecraft aboard. It is being launched by NASA for the Missile Defense Agency. The hour-long launch window opens at 8 a.m. EDT today. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Dimitri Gerondidakis
2009-08-20
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the SV1-SV2 spacecraft is lifted for weighing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the SV1 and SV2 spacecraft are ready for mating for launch. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. The spacecraft is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft is under a protective cover before being encased in the transportation canister. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-09-25
CAPE CANAVERAL, Fla. – The United Launch Alliance Delta II rocket with Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft aboard races into the sky leaving a trail of fire and smoke after liftoff from Launch Pad 17-B at Cape Canaveral Air Force Station. It was launched by NASA for the U.S. Missile Defense Agency. Launch was at 8:20:22 a.m. EDT. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Jack Pfaller
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers prepare to lift the SV1 and mate it to the SV2 spacecraft for the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. The spacecraft is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
Technique for identifying, tracing, or tracking objects in image data
Anderson, Robert J [Albuquerque, NM; Rothganger, Fredrick [Albuquerque, NM
2012-08-28
A technique for computer vision uses a polygon contour to trace an object. The technique includes rendering a polygon contour superimposed over a first frame of image data. The polygon contour is iteratively refined to more accurately trace the object within the first frame after each iteration. The refinement includes computing image energies along lengths of contour lines of the polygon contour and adjusting positions of the contour lines based at least in part on the image energies.
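A toy rendering of the contour-refinement idea follows, assuming gradient magnitude as the image energy and a fixed step along each edge normal. It is not the patented algorithm; the synthetic image and step sizes are illustrative.

```python
# Toy sketch: sample image-gradient magnitude ("image energy") along each
# polygon edge, then nudge the edge along its normal toward higher energy.
import numpy as np

def edge_energy(grad_mag, p0, p1, n_samples=20):
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = np.outer(1 - ts, p0) + np.outer(ts, p1)
    ij = np.clip(np.round(pts).astype(int), 0, np.array(grad_mag.shape) - 1)
    return grad_mag[ij[:, 0], ij[:, 1]].mean()

def refine(contour, grad_mag, step=1.0, iterations=10):
    contour = contour.astype(float)
    for _ in range(iterations):
        new = contour.copy()
        for i in range(len(contour)):
            p0, p1 = contour[i], contour[(i + 1) % len(contour)]
            tangent = p1 - p0
            normal = np.array([-tangent[1], tangent[0]])
            normal /= np.linalg.norm(normal) + 1e-9
            plus = edge_energy(grad_mag, p0 + step * normal, p1 + step * normal)
            minus = edge_energy(grad_mag, p0 - step * normal, p1 - step * normal)
            shift = step * normal if plus > minus else -step * normal
            new[i] += 0.5 * shift                        # move both endpoints
            new[(i + 1) % len(contour)] += 0.5 * shift
        contour = new
    return contour

# Synthetic image with a bright square; start from a slightly-off polygon
img = np.zeros((100, 100)); img[30:70, 30:70] = 1.0
gy, gx = np.gradient(img); grad_mag = np.hypot(gx, gy)
square = np.array([[35, 35], [35, 65], [65, 65], [65, 35]], float)
print(refine(square, grad_mag)[:2])
```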
Object width modulates object-based attentional selection.
Nah, Joseph C; Neppi-Modona, Marco; Strother, Lars; Behrmann, Marlene; Shomstein, Sarah
2018-04-24
Visual input typically includes a myriad of objects, some of which are selected for further processing. While these objects vary in shape and size, most evidence supporting object-based guidance of attention is drawn from paradigms employing two identical objects. Importantly, object size is a readily perceived stimulus dimension, and whether it modulates the distribution of attention remains an open question. Across four experiments, the size of the objects in the display was manipulated in a modified version of the two-rectangle paradigm. In Experiment 1, two identical parallel rectangles of two sizes (thin or thick) were presented. Experiments 2-4 employed identical trapezoids (each having a thin and thick end), inverted in orientation. In the experiments, one end of an object was cued and participants performed either a T/L discrimination or a simple target-detection task. Combined results show that, in addition to the standard object-based attentional advantage, there was a further attentional benefit for processing information contained in the thick versus thin end of objects. Additionally, eye-tracking measures demonstrated increased saccade precision towards thick object ends, suggesting that Fitts's Law may play a role in object-based attentional shifts. Taken together, these results suggest that object-based attentional selection is modulated by object width.
Normal aging delays and compromises early multifocal visual attention during object tracking.
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2013-02-01
Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.
Real-time edge tracking using a tactile sensor
NASA Technical Reports Server (NTRS)
Berger, Alan D.; Volpe, Richard; Khosla, Pradeep K.
1989-01-01
Object recognition through the use of input from multiple sensors is an important aspect of an autonomous manipulation system. In tactile object recognition, it is necessary to determine the location and orientation of object edges and surfaces. A controller is proposed that utilizes a tactile sensor in the feedback loop of a manipulator to track along edges. In the control system, the data from the tactile sensor is first processed to find edges. The parameters of these edges are then used to generate a control signal to a hybrid controller. Theory is presented for tactile edge detection and an edge tracking controller. In addition, experimental verification of the edge tracking controller is presented.
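The abstract does not detail the edge-detection step, but a simple way to recover an edge's location and orientation from a tactile array is to fit a line through the activated cells. The sketch below does this with a principal-axis fit on a synthetic 8x8 contact pattern; the threshold and array size are assumptions, not the paper's parameters.

```python
# Hedged sketch (not the paper's controller): estimate the location and
# orientation of an edge from the activated taxels of a tactile array.
import numpy as np

def edge_from_taxels(pressure, threshold=0.5):
    rows, cols = np.nonzero(pressure > threshold)
    pts = np.column_stack([rows, cols]).astype(float)
    centroid = pts.mean(axis=0)
    # dominant direction of the contact region = edge direction
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    direction = vt[0]
    angle = np.degrees(np.arctan2(direction[1], direction[0]))
    return centroid, angle

# Synthetic 8x8 tactile image with a diagonal edge pressed into it
tactile = np.zeros((8, 8))
for i in range(8):
    tactile[i, i] = 1.0
# centroid near (3.5, 3.5); angle ~45 degrees (direction sign is arbitrary)
print(edge_from_taxels(tactile))
```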
Electrical localization of weakly electric fish using neural networks
NASA Astrophysics Data System (ADS)
Kiar, Greg; Mamatjan, Yasin; Jun, James; Maler, Len; Adler, Andy
2013-04-01
Weakly Electric Fish (WEF) emit an Electric Organ Discharge (EOD), which travels through the surrounding water and enables WEF to locate nearby objects or to communicate between individuals. Previous tracking of WEF has been conducted using infrared (IR) cameras and subsequent image processing. The limitation of visual tracking is its relatively low frame rate and its lack of reliability when the view is obstructed. Thus, there is a need for reliable monitoring of WEF location and behaviour. The objective of this study is to provide an alternative and non-invasive means of tracking WEF in real time using neural networks (NN). This study was carried out in three stages. The first stage was to recreate voltage distributions by simulating the WEF using EIDORS and finite element method (FEM) modelling. The second stage was to validate the model using phantom data acquired from an Electrical Impedance Tomography (EIT) based system, including a phantom fish and tank. In the third stage, measurement data were acquired using a restrained WEF within a tank. We trained the NN on the voltage distributions for different locations of the WEF. With networks trained on the acquired data, we tracked new locations of the WEF and observed the movement patterns. The results showed a strong correlation between expected and calculated values of WEF position in one dimension, yielding a spatial resolution within 1 cm and a temporal resolution 10 times higher than that of IR cameras. Thus, the developed approach could be used as a practical method to non-invasively monitor WEF in real time.
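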
An algorithm to track laboratory zebrafish shoals.
Feijó, Gregory de Oliveira; Sangalli, Vicenzo Abichequer; da Silva, Isaac Newton Lima; Pinho, Márcio Sarroglia
2018-05-01
In this paper, a semi-automatic multi-object tracking method to track a group of unmarked zebrafish is proposed. This method can handle partial occlusion cases, maintaining the correct identity of each individual. For every object, we extracted a set of geometric features to be used in the two main stages of the algorithm. The first stage selects the best candidate, based both on the blobs identified in the image and on the estimate generated by a Kalman filter instance. In the second stage, if the same candidate blob is selected by two or more instances, a blob-partitioning algorithm takes place in order to split this blob and reestablish the instances' identities. If the algorithm cannot determine the identity of a blob, a manual intervention is required. This procedure was compared against a manually labeled ground truth on four video sequences with different numbers of fish and spatial resolutions. The performance of the proposed method is then compared against two well-known zebrafish tracking methods found in the literature: one that handles occlusion scenarios and one that only tracks fish that are not in occlusion. Based on the data set used, the proposed method outperforms the first method in correctly separating fish in occlusion, improving on it in at least 8.15% of the cases. As for the second method, the proposed method outperformed it in some of the tested videos, especially those with lower image quality, because the second method requires high-spatial-resolution images, which the proposed method does not. Yet, the proposed method was able to separate fish involved in occlusion and correctly assign their identities in up to 87.85% of the cases, without accounting for user intervention. Copyright © 2018 Elsevier Ltd. All rights reserved.
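A minimal sketch of the candidate-selection stage follows, assuming a constant-velocity Kalman filter per fish and a fixed distance gate. The noise settings and gate value are illustrative, not the authors' parameters.

```python
# Minimal sketch of per-fish candidate selection: predict with a
# constant-velocity Kalman filter, pick the nearest blob within a gate.
import numpy as np

DT = 1.0
F = np.array([[1, 0, DT, 0], [0, 1, 0, DT], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.1 * np.eye(4)     # process noise (assumed)
R = 2.0 * np.eye(2)     # measurement noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    return x, (np.eye(4) - K @ H) @ P

def pick_blob(x_pred, blob_centroids, gate=15.0):
    d = np.linalg.norm(blob_centroids - x_pred[:2], axis=1)
    i = int(np.argmin(d))
    return i if d[i] < gate else None      # None -> split blob / ask the user

x = np.array([10.0, 10.0, 1.0, 0.5]); P = np.eye(4)
x, P = predict(x, P)
blobs = np.array([[11.2, 10.4], [40.0, 5.0]])
chosen = pick_blob(x, blobs)
if chosen is not None:
    x, P = update(x, P, blobs[chosen])
print(chosen, x[:2])
```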
Screening for Dyslexia Using Eye Tracking during Reading.
Nilsson Benfatto, Mattias; Öqvist Seimyr, Gustaf; Ygge, Jan; Pansell, Tony; Rydberg, Agneta; Jacobson, Christer
2016-01-01
Dyslexia is a neurodevelopmental reading disability estimated to affect 5-10% of the population. While there is yet no full understanding of the cause of dyslexia, or agreement on its precise definition, it is certain that many individuals suffer persistent problems in learning to read for no apparent reason. Although it is generally agreed that early intervention is the best form of support for children with dyslexia, there is still a lack of efficient and objective means to help identify those at risk during the early years of school. Here we show that it is possible to identify 9-10 year old individuals at risk of persistent reading difficulties by using eye tracking during reading to probe the processes that underlie reading ability. In contrast to current screening methods, which rely on oral or written tests, eye tracking does not depend on the subject to produce some overt verbal response and thus provides a natural means to objectively assess the reading process as it unfolds in real-time. Our study is based on a sample of 97 high-risk subjects with early identified word decoding difficulties and a control group of 88 low-risk subjects. These subjects were selected from a larger population of 2165 school children attending second grade. Using predictive modeling and statistical resampling techniques, we develop classification models from eye tracking records less than one minute in duration and show that the models are able to differentiate high-risk subjects from low-risk subjects with high accuracy. Although dyslexia is fundamentally a language-based learning disability, our results suggest that eye movements in reading can be highly predictive of individual reading ability and that eye tracking can be an efficient means to identify children at risk of long-term reading difficulties.
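The exact features and classifier used in the study are not reproduced here; the sketch below shows the general recipe of cross-validated classification of high- versus low-risk readers from summary eye-movement features, using synthetic data with the study's group sizes and hypothetical feature names.

```python
# Generic sketch of the screening idea: cross-validated classification of
# readers from summary eye-movement features (synthetic data, assumed features).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 185                                    # 97 high-risk + 88 low-risk subjects
y = np.array([1] * 97 + [0] * 88)
# Hypothetical features: mean fixation duration, fixations per word, regression rate
X = rng.normal(size=(n, 3)) + y[:, None] * np.array([0.8, 0.6, 0.7])

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())                       # cross-validated accuracy
```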
Color image processing and object tracking workstation
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Paulick, Michael J.
1992-01-01
A system is described for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16 mm film projector, a lens system, a video camera, an S-VHS tape deck, a frame grabber, and some storage and output devices. Both the projector and the tape deck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or tape-deck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.
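The original workstation drove a film projector and frame grabber under custom software; the sketch below mimics the same grab, process, locate, and store loop with modern OpenCV calls on a hypothetical video file, as a rough stand-in rather than a reconstruction of the NASA Lewis software.

```python
# Simplified stand-in for the automatic tracking loop: read a frame, threshold,
# find the largest object, record its centroid, repeat to the last frame.
import cv2

coords = []
cap = cv2.VideoCapture("flame_front.avi")        # hypothetical input clip
while True:
    ok, frame = cap.read()                       # "grab a frame"
    if not ok:
        break                                    # last frame reached
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    res = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = res[0] if len(res) == 2 else res[1]   # OpenCV 4 vs 3 return values
    if contours:
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] > 0:                         # centroid of the tracked object
            coords.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
cap.release()
print(len(coords), "frames tracked")
```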
Lapierre, Mark; Howe, Piers D. L.; Cropper, Simon J.
2013-01-01
Many tasks involve tracking multiple moving objects, or stimuli. Some require that individuals adapt to changing or unfamiliar conditions to be able to track well. This study explores processes involved in such adaptation through an investigation of the interaction of attention and memory during tracking. Previous research has shown that during tracking, attention operates independently to some degree in the left and right visual hemifields, due to putative anatomical constraints. It has been suggested that the degree of independence is related to the relative dominance of processes of attention versus processes of memory. Here we show that when individuals are trained to track a unique pattern of movement in one hemifield, that learning can be transferred to the opposite hemifield, without any evidence of hemifield independence. However, learning is not influenced by an explicit strategy of memorisation of brief periods of recognisable movement. The findings lend support to a role for implicit memory in overcoming putative anatomical constraints on the dynamic, distributed spatial allocation of attention involved in tracking multiple objects. PMID:24349555
A benchmark for comparison of cell tracking algorithms
Maška, Martin; Ulman, Vladimír; Svoboda, David; Matula, Pavel; Matula, Petr; Ederra, Cristina; Urbiola, Ainhoa; España, Tomás; Venkatesan, Subramanian; Balak, Deepak M.W.; Karas, Pavel; Bolcková, Tereza; Štreitová, Markéta; Carthel, Craig; Coraluppi, Stefano; Harder, Nathalie; Rohr, Karl; Magnusson, Klas E. G.; Jaldén, Joakim; Blau, Helen M.; Dzyubachyk, Oleh; Křížek, Pavel; Hagen, Guy M.; Pastor-Escuredo, David; Jimenez-Carretero, Daniel; Ledesma-Carbayo, Maria J.; Muñoz-Barrutia, Arrate; Meijering, Erik; Kozubek, Michal; Ortiz-de-Solorzano, Carlos
2014-01-01
Motivation: Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2013 Cell Tracking Challenge. In this article, we present the logistics, datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. Results: The main contributions of the challenge include the creation of a comprehensive video dataset repository and the definition of objective measures for comparison and ranking of the algorithms. With this benchmark, six algorithms covering a variety of segmentation and tracking paradigms have been compared and ranked based on their performance on both synthetic and real datasets. Given the diversity of the datasets, we do not declare a single winner of the challenge. Instead, we present and discuss the results for each individual dataset separately. Availability and implementation: The challenge Web site (http://www.codesolorzano.com/celltrackingchallenge) provides access to the training and competition datasets, along with the ground truth of the training videos. It also provides access to Windows and Linux executable files of the evaluation software and most of the algorithms that competed in the challenge. Contact: codesolorzano@unav.es Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24526711
An intelligent, free-flying robot
NASA Technical Reports Server (NTRS)
Reuter, G. J.; Hess, C. W.; Rhoades, D. E.; Mcfadin, L. W.; Healey, K. J.; Erickson, J. D.
1988-01-01
The ground-based demonstration of EVA Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out, (2) searches for and acquires the target, (3) plans and executes a rendezvous while continuously tracking the target, (4) avoids stationary and moving obstacles, (5) reaches for and grapples the target, (6) returns to transfer the object, and (7) returns to base.
An intelligent, free-flying robot
NASA Technical Reports Server (NTRS)
Reuter, G. J.; Hess, C. W.; Rhoades, D. E.; Mcfadin, L. W.; Healey, K. J.; Erickson, J. D.; Phinney, Dale E.
1989-01-01
The ground based demonstration of the extensive extravehicular activity (EVA) Retriever, a voice-supervised, intelligent, free flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out; (2) searches for and acquires the target; (3) plans and executes a rendezvous while continuously tracking the target; (4) avoids stationary and moving obstacles; (5) reaches for and grapples the target; (6) returns to transfer the object; and (7) returns to base.
Exhausting Attentional Tracking Resources with a Single Fast-Moving Object
ERIC Educational Resources Information Center
Holcombe, Alex O.; Chen, Wei-Ying
2012-01-01
Driving on a busy road, eluding a group of predators, or playing a team sport involves keeping track of multiple moving objects. In typical laboratory tasks, the number of visual targets that humans can track is about four. Three types of theories have been advanced to explain this limit. The fixed-limit theory posits a set number of attentional…
NASA Astrophysics Data System (ADS)
Ladd, D.; Reeves, R.; Rumi, E.; Trethewey, M.; Fortescue, M.; Appleby, G.; Wilkinson, M.; Sherwood, R.; Ash, A.; Cooper, C.; Rayfield, P.
The Science and Technology Facilities Council (STFC), Control Loop Concepts Limited (CL2), the Natural Environment Research Council (NERC) and the Defence Science and Technology Laboratory (DSTL) have recently participated in a campaign of satellite observations, with both radar and optical sensors, in order to demonstrate an initial network concept that enhances the value of coordinated observations. STFC and CL2 have developed a Space Surveillance and Tracking (SST) server/client architecture to slave one sensor to another. The concept originated to enable the Chilbolton radar (an S-band radar on a 25 m diameter fully-steerable dish antenna called CASTR – Chilbolton Advanced Satellite Tracking Radar), which does not have an auto-track function, to follow an object based on position data streamed from another cueing sensor. The original motivation for this was to enable tracking during re-entry of ATV-5, a highly manoeuvrable ISS re-supply vessel. The architecture has been designed to be extensible and allows the interfacing of both optical and radar sensors which may be geographically separated. Connectivity between the sensors is TCP/IP over the internet. The data transferred between the sensors are translated into an Earth-centred frame of reference to accommodate the difference in location, and time-stamping and filtering are applied to cope with latency. The server can accept connections from multiple clients, and the operator can switch between the different clients. This architecture is inherently robust and will enable graceful degradation should parts of the system be unavailable. A demonstration was conducted in 2016 whereby a small telescope connected to an agile mount (an EO tracker known as COATS – Chilbolton Optical Advanced Tracking System), located 50 m away from the radar at Chilbolton, autonomously tracked several objects and fed the look-angle data into a client. CASTR, slaved to COATS through the server, followed and successfully detected the objects. In 2017, the baseline was extended to 135 km by developing a client for the SLR (satellite laser ranger) telescope at the Space Geodesy Facility, Herstmonceux. Trials have already demonstrated that CASTR can accurately track the object using the position data being fed from the SLR.
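One step such an architecture needs is translating a cueing sensor's look angles into an Earth-centred frame that a distant sensor can use. The sketch below shows that conversion (azimuth/elevation to an ECEF line-of-sight unit vector) with illustrative site coordinates; it is not the STFC/CL2 server code.

```python
# Hedged sketch of the frame-translation step: convert a cueing sensor's
# azimuth/elevation into an Earth-centred (ECEF) unit line-of-sight vector.
import numpy as np

def enu_to_ecef_matrix(lat_deg, lon_deg):
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([
        [-np.sin(lon), -np.sin(lat) * np.cos(lon), np.cos(lat) * np.cos(lon)],
        [ np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat) * np.sin(lon)],
        [         0.0,               np.cos(lat),               np.sin(lat)],
    ])  # columns are the local East, North, Up axes expressed in ECEF

def look_angles_to_ecef_los(az_deg, el_deg, lat_deg, lon_deg):
    az, el = np.radians([az_deg, el_deg])
    enu = np.array([np.sin(az) * np.cos(el),      # East
                    np.cos(az) * np.cos(el),      # North
                    np.sin(el)])                  # Up
    return enu_to_ecef_matrix(lat_deg, lon_deg) @ enu

# Example cue from a site at ~51 N, ~1.4 W (illustrative coordinates only)
print(look_angles_to_ecef_los(az_deg=120.0, el_deg=35.0, lat_deg=51.1, lon_deg=-1.4))
```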
Wang, Dandan; Zong, Qun; Tian, Bailing; Shao, Shikai; Zhang, Xiuyun; Zhao, Xinyi
2018-02-01
The distributed finite-time formation tracking control problem for multiple unmanned helicopters is investigated in this paper. The control objective is to maintain the positions of follower helicopters in formation in the presence of external disturbances. The helicopter model is divided into a second-order outer-loop subsystem and a second-order inner-loop subsystem based on multiple-time-scale features. Using the radial basis function neural network (RBFNN) technique, we first propose a novel finite-time multivariable neural network disturbance observer (FMNNDO) to estimate the external disturbance and model uncertainty, where the neural network (NN) approximation errors can be dynamically compensated by an adaptive law. Next, based on the FMNNDO, a distributed finite-time formation tracking controller and a finite-time attitude tracking controller are designed using the nonsingular fast terminal sliding mode (NFTSM) method. In order to estimate the second derivative of the virtual desired attitude signal, a novel finite-time sliding mode integral filter is designed. Finally, Lyapunov analysis and the multiple-time-scale principle ensure that the control goal is achieved in finite time. The effectiveness of the proposed FMNNDO and controllers is then verified by numerical simulations. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
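For readers unfamiliar with the NFTSM method, one commonly used nonsingular fast terminal sliding surface takes the form below; the paper's exact surface, gains, and exponents are not reproduced here.

```latex
% One commonly used nonsingular fast terminal sliding surface (illustrative):
\[
  s = e + k_1\,|e|^{\gamma_1}\,\mathrm{sign}(e) + k_2\,|\dot e|^{\gamma_2}\,\mathrm{sign}(\dot e),
  \qquad k_1,\,k_2 > 0,\quad 1 < \gamma_2 < 2,\quad \gamma_1 > \gamma_2,
\]
% where $e$ is the tracking error; driving $s \to 0$ gives finite-time convergence
% of $e$ while avoiding the singularity of conventional terminal sliding surfaces.
```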
Development of ecological indicator guilds for land management
Krzysik, A.J.; Balbach, H.E.; Duda, J.J.; Emlen, J.M.; Freeman, D.C.; Graham, J.H.; Kovacic, D.A.; Smith, L.M.; Zak, J.C.
2005-01-01
Agency land-use must be efficiently and cost-effectively monitored to assess conditions and trends in ecosystem processes and natural resources relevant to mission requirements and legal mandates. Ecological Indicators represent important land management tools for tracking ecological changes and preventing irreversible environmental damage in disturbed landscapes. The overall objective of the research was to develop both individual and integrated sets (i.e., statistically derived guilds) of Ecological Indicators to: quantify habitat conditions and trends, track and monitor ecological changes, provide early warning or threshold detection, and provide guidance for land managers. The derivation of Ecological Indicators was based on statistical criteria, ecosystem relevance, reliability and robustness, economy and ease of use for land managers, multi-scale performance, and stress response criteria. The basis for the development of statistically based Ecological Indicators was the identification of ecosystem metrics that analytically tracked a landscape disturbance gradient.
Comparison of Learning Styles of Pharmacy Students and Faculty Members
Crawford, Stephanie Y.; Alhreish, Suhail K.
2012-01-01
Objectives. To compare dominant learning styles of pharmacy students and faculty members and between faculty members in different tracks. Methods. Gregorc Style Delineator (GSD) and Zubin’s Pharmacists’ Inventory of Learning Styles (PILS) were administered to students and faculty members at an urban, Midwestern college of pharmacy. Results. Based on responses from 299 students (classes of 2008, 2009, and 2010) and 59 faculty members, GSD styles were concrete sequential (48%), abstract sequential (18%), abstract random (13%), concrete random (13%), and multimodal (8%). With PILS, dominant styles were assimilator (47%) and converger (30%). There were no significant differences between faculty members and student learning styles nor across pharmacy student class years (p>0.05). Learning styles differed between men and women across both instruments (p<0.01), and between faculty members in tenure and clinical tracks for the GSD styles (p=0.01). Conclusion. Learning styles differed among respondents based on gender and faculty track. PMID:23275657
Consistently Sampled Correlation Filters with Space Anisotropic Regularization for Visual Tracking
Shi, Guokai; Xu, Tingfa; Luo, Jiqiang; Li, Yuankun
2017-01-01
Most existing correlation filter-based tracking algorithms, which use fixed patches and cyclic shifts as training and detection measures, assume that the training samples are reliable and ignore the inconsistencies between training samples and detection samples. We propose to construct and study a consistently sampled correlation filter with space anisotropic regularization (CSSAR) to solve these two problems simultaneously. Our approach constructs a spatiotemporally consistent sample strategy to alleviate the redundancies in training samples caused by the cyclical shifts, eliminate the inconsistencies between training samples and detection samples, and introduce space anisotropic regularization to constrain the correlation filter for alleviating drift caused by occlusion. Moreover, an optimization strategy based on the Gauss-Seidel method was developed for obtaining robust and efficient online learning. Both qualitative and quantitative evaluations demonstrate that our tracker outperforms state-of-the-art trackers in object tracking benchmarks (OTBs). PMID:29231876
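For context, a minimal single-channel correlation filter trained and applied in the Fourier domain (in the spirit of MOSSE/ridge-regression trackers) looks as follows; the CSSAR sampling strategy and space-anisotropic regularization themselves are not reproduced in this sketch.

```python
# Minimal single-channel correlation filter in the Fourier domain:
# train on one patch against a Gaussian label, then locate the target's shift.
import numpy as np

def gaussian_label(shape, sigma=2.0):
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    return np.roll(np.roll(g, -(h // 2), axis=0), -(w // 2), axis=1)  # peak at (0, 0)

def train_filter(patch, label, lam=1e-2):
    F = np.fft.fft2(patch)
    G = np.fft.fft2(label)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)   # conjugate filter H*

def detect(H_conj, patch):
    Z = np.fft.fft2(patch)
    response = np.real(np.fft.ifft2(Z * H_conj))
    h, w = response.shape
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    dy = dy - h if dy > h // 2 else dy                 # signed displacement
    dx = dx - w if dx > w // 2 else dx
    return dy, dx

rng = np.random.default_rng(1)
template = rng.normal(size=(64, 64))
H_conj = train_filter(template, gaussian_label(template.shape))
shifted = np.roll(np.roll(template, 3, axis=0), 5, axis=1)
print(detect(H_conj, shifted))                         # approximately (3, 5)
```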
ERIC Educational Resources Information Center
Doumas, Diana M.; Nelson, Kinsey; DeYoung, Amanda; Renteria, Camryn Conrad
2014-01-01
This study evaluated the effectiveness of a web-based personalized feedback program using an objective measure of alcohol-related consequences. Participants were assigned to either the intervention group or an assessment-only control group during university orientation. Sanctions received for campus alcohol policy violations were tracked over the…
GEO Optical Data Association with Concurrent Metric and Photometric Information
NASA Astrophysics Data System (ADS)
Dao, P.; Monet, D.
Data association in a congested area of the GEO belt with occasional visits by non-resident objects can be treated as a Multi-Target-Tracking (MTT) problem. For a stationary sensor surveilling the GEO belt, geosynchronous and near-GEO objects are not completely motionless in the earth-fixed frame and can be observed as moving targets. In some clusters, metric or positional information is insufficiently accurate or up-to-date to associate the measurements. In the presence of measurements with uncertain origin, star tracks (residuals) and other sensor artifacts, heuristic techniques based on hard-decision assignment do not perform adequately. In the MTT community, Bar-Shalom [2009 Bar-Shalom] was the first to introduce the use of measurements to update the state of the target of interest in the tracking filter, e.g. the Kalman filter. Following Bar-Shalom's idea, we use the Probabilistic Data Association Filter (PDAF); however, to make use of all information obtainable in the measurement of three-axis-stabilized GEO satellites, we combine photometric with metric measurements to update the filter. Therefore, our technique, Concurrent Spatio-Temporal and Brightness (COSTB), has the stand-alone ability of associating a track with its identity for resident objects. That is possible because the light curve of a stabilized GEO satellite changes minimally from night to night. We exercised COSTB on camera-cadence data to associate measurements, correct mistags and detect non-residents in a simulated near-real-time cadence. Data on GEO clusters were used.
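A toy sketch of folding photometric information into the association weights follows, assuming a Gaussian positional likelihood and a simple light-curve correlation term; the weighting scheme and clutter handling below are simplified stand-ins, not the COSTB implementation.

```python
# Toy sketch: association weights for one tracked GEO object, combining a
# Gaussian positional (metric) likelihood with a light-curve (photometric)
# similarity, in the spirit of probabilistic data association.
import numpy as np

def association_weights(z_pred, S, detections, lightcurve_ref, lightcurve_obs,
                        p_detect=0.9, clutter_density=1e-4):
    S_inv = np.linalg.inv(S)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(S)))
    likelihoods = []
    for z, lc in zip(detections, lightcurve_obs):
        d = z - z_pred
        metric = norm * np.exp(-0.5 * d @ S_inv @ d)
        # photometric term: correlation between reference and observed brightness
        photo = max(np.corrcoef(lightcurve_ref, lc)[0, 1], 0.0)
        likelihoods.append(metric * photo)
    likelihoods = np.array(likelihoods) * p_detect
    beta0 = clutter_density * (1.0 - p_detect)          # "none of the above"
    beta = np.append(likelihoods, beta0)
    return beta / beta.sum()

z_pred = np.array([0.0, 0.0]); S = np.diag([1.0, 1.0])
detections = [np.array([0.3, -0.2]), np.array([2.5, 2.0])]   # second is far away
ref = np.array([1.0, 1.2, 1.5, 1.1, 0.9])                    # reference light curve
obs = [ref + 0.05, np.array([1.4, 0.8, 0.9, 1.3, 1.1])]
print(association_weights(z_pred, S, detections, ref, obs))
```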
Image-based topology for sensor gridlocking and association
NASA Astrophysics Data System (ADS)
Stanek, Clay J.; Javidi, Bahram; Yanni, Philip
2002-07-01
Correlation engines have been evolving since the implementation of radar. In modern sensor fusion architectures, correlation and gridlock filtering are required to produce common, continuous, and unambiguous tracks of all objects in the surveillance area. The objective is to provide a unified picture of the theatre or area of interest to battlefield decision makers, ultimately enabling them to make better inferences for future action and eliminate fratricide by reducing ambiguities. Here, correlation refers to association, which in this context is track-to-track association. A related process, gridlock filtering or gridlocking, refers to the reduction in navigation errors and sensor misalignment errors so that one sensor's track data can be accurately transformed into another sensor's coordinate system. As platforms gain multiple sensors, the correlation and gridlocking of tracks become significantly more difficult. Much of the existing correlation technology revolves around various interpretations of the generalized Bayesian decision rule: choose the action that minimizes conditional risk. One implementation of this principle equates the risk minimization statement to the comparison of ratios of a priori probability distributions to thresholds. The binary decision problem phrased in terms of likelihood ratios is also known as the famed Neyman-Pearson hypothesis test. Using another restatement of the principle for a symmetric loss function, risk minimization leads to a decision that maximizes the a posteriori probability distribution. Even for deterministic decision rules, situations can arise in correlation where there are ambiguities. For these situations, a common algorithm used is a sparse assignment technique such as the Munkres or JVC algorithm. Furthermore, associated tracks may be combined with the hope of reducing the positional uncertainty of a target or object identified by an existing track from the information of several fused/correlated tracks. Gridlocking is typically accomplished with some type of least-squares algorithm, such as the Kalman filtering technique, which attempts to locate the best bias error vector estimate from a set of correlated/fused track pairs. Here, we will introduce a new approach to this longstanding problem by adapting many of the familiar concepts from pattern recognition, ones certainly familiar to target recognition applications. Furthermore, we will show how this technique can lend itself to specialized processing, such as that available through an optical or hybrid correlator.
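The conventional baseline sketched in this paragraph, gating plus a Munkres/JVC-style assignment, can be illustrated compactly; the example below uses Mahalanobis distances and SciPy's linear_sum_assignment as a stand-in for the assignment step, with invented track positions.

```python
# Illustration of the conventional baseline: build a cost matrix of Mahalanobis
# distances between tracks from two sensors, solve the assignment, apply a gate.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks_a, tracks_b, cov, gate=9.21):   # gate: chi-square, 2 dof, 99%
    cov_inv = np.linalg.inv(cov)
    cost = np.zeros((len(tracks_a), len(tracks_b)))
    for i, a in enumerate(tracks_a):
        for j, b in enumerate(tracks_b):
            d = a - b
            cost[i, j] = d @ cov_inv @ d
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]

sensor_a = np.array([[0.0, 0.0], [10.0, 5.0], [20.0, -3.0]])
sensor_b = np.array([[10.4, 5.2], [0.3, -0.1], [55.0, 40.0]])   # last one spurious
print(associate(sensor_a, sensor_b, cov=np.diag([1.0, 1.0])))
# -> [(0, 1), (1, 0)]; the spurious track is left unassociated
```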
The Deployment of Visual Attention
2006-03-01
Kalal, Zdenek; Mikolajczyk, Krystian; Matas, Jiri
2012-07-01
This paper investigates long-term tracking of unknown objects in a video stream. The object is defined by its location and extent in a single frame. In every frame that follows, the task is to determine the object's location and extent or indicate that the object is not present. We propose a novel tracking framework (TLD) that explicitly decomposes the long-term tracking task into tracking, learning, and detection. The tracker follows the object from frame to frame. The detector localizes all appearances that have been observed so far and corrects the tracker if necessary. The learning estimates the detector's errors and updates it to avoid these errors in the future. We study how to identify the detector's errors and learn from them. We develop a novel learning method (P-N learning) which estimates the errors by a pair of "experts": (1) P-expert estimates missed detections, and (2) N-expert estimates false alarms. The learning process is modeled as a discrete dynamical system and the conditions under which the learning guarantees improvement are found. We describe our real-time implementation of the TLD framework and the P-N learning. We carry out an extensive quantitative evaluation which shows a significant improvement over state-of-the-art approaches.
Investigation of kinematic features for dismount detection and tracking
NASA Astrophysics Data System (ADS)
Narayanaswami, Ranga; Tyurina, Anastasia; Diel, David; Mehra, Raman K.; Chinn, Janice M.
2012-05-01
With recent changes in threats and methods of warfighting and the use of unmanned aircraft, ISR (Intelligence, Surveillance, and Reconnaissance) activities have become critical to the military's efforts to maintain situational awareness and neutralize the enemy's activities. The identification and tracking of dismounts from surveillance video is an important step in this direction. Our approach combines advanced, ultrafast registration techniques for identifying moving objects with a classification algorithm based on both static and kinematic features of those objects. Our objective was to push the acceptable resolution beyond the capability of industry-standard feature extraction methods such as SIFT (Scale-Invariant Feature Transform) and the related SURF (Speeded-Up Robust Features). Both of these methods operate on single-frame images. We exploited the temporal component of the video signal to develop kinematic features. Of particular interest were the easily distinguishable frequencies characteristic of bipedal human versus quadrupedal animal motion. We examine the limits of performance and the frame rates and resolution required for discriminating humans, animals, and vehicles. A few seconds of video at an acceptable frame rate allows us to lower the resolution requirements for individual frames by as much as a factor of five, which translates into a corresponding increase in the acceptable standoff distance between the sensor and the object of interest.
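As a minimal illustration of such a kinematic feature, the sketch below estimates the dominant oscillation frequency of a tracked object from a per-frame signal with an FFT. The choice of the vertical centroid position as the signal, the 15 fps frame rate, and the synthetic 2 Hz example are assumptions for illustration only.

```python
import numpy as np

def dominant_gait_frequency(signal, frame_rate):
    """Estimate the dominant oscillation frequency (Hz) of a tracked object.

    signal:     1-D array of a kinematic quantity sampled once per frame,
                e.g. the vertical position of the object's centroid.
    frame_rate: video frame rate in frames per second.
    """
    centered = signal - np.mean(signal)           # remove the DC component
    spectrum = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / frame_rate)
    return freqs[np.argmax(spectrum[1:]) + 1]     # skip the zero-frequency bin

# Example: a roughly 2 Hz bipedal bounce sampled at 15 fps for 4 seconds.
t = np.arange(0, 4, 1 / 15)
bounce = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(dominant_gait_frequency(bounce, frame_rate=15))   # close to 2.0
```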
2015-03-27
The Air Force Institute of Technology has spent the last seven years conducting research on orbit identification and characterization of space objects through the use of commercial-off-the-shelf hardware systems controlled via custom software routines, referred to simply as TeleTrak.
Zangenehpour, Sohail; Strauss, Jillian; Miranda-Moreno, Luis F; Saunier, Nicolas
2016-01-01
Cities in North America have been building bicycle infrastructure, in particular cycle tracks, with the intention of promoting urban cycling and improving cyclist safety. These facilities have been built and expanded, but very little research has been done to investigate the safety impacts of cycle tracks, in particular at intersections, where cyclists interact with turning motor vehicles. Some safety research has looked at injury data, and most studies have reached the conclusion that cycle tracks have positive effects on cyclist safety. The objective of this work is to investigate the safety effects of cycle tracks at signalized intersections using a case-control study. For this purpose, a video-based method is proposed for analyzing the post-encroachment time as a surrogate measure of the severity of the interactions between cyclists and turning vehicles travelling in the same direction. Using the city of Montreal as the case study, a sample of intersections with and without cycle tracks on the right and left sides of the road was carefully selected, accounting for intersection geometry and traffic volumes. More than 90 h of video were collected from 23 intersections and processed to obtain cyclist and motor-vehicle trajectories and interactions. After cyclist and motor-vehicle interactions were defined, ordered logit models with random effects were developed to evaluate the safety effects of cycle tracks at intersections. Based on the data extracted from the recorded videos, it was found that intersection approaches with cycle tracks on the right are safer than intersection approaches with no cycle track. However, intersections with cycle tracks on the left, compared to no cycle tracks, seem to be significantly safer. Results also indicate that the likelihood of a cyclist being involved in a dangerous interaction increases with increasing turning-vehicle flow and decreases as the size of the cyclist group arriving at the intersection increases. The results highlight the important role of cycle tracks and the factors that increase or decrease cyclist safety. These results, however, need to be confirmed using longer periods of video data. Copyright © 2015 Elsevier Ltd. All rights reserved.
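A minimal sketch of the post-encroachment-time surrogate mentioned above is given below; it assumes each trajectory is supplied as timestamps plus a boolean mask marking samples inside a shared conflict zone, and it returns zero for overlapping occupancy, both of which are simplifications of the video-analysis pipeline actually used.

```python
import numpy as np

def post_encroachment_time(times_a, in_zone_a, times_b, in_zone_b):
    """Post-encroachment time (PET) between two road users.

    times_*:   1-D arrays of timestamps (s) for each trajectory sample.
    in_zone_*: boolean arrays marking the samples inside the conflict zone.
    Returns the gap between the first user leaving the zone and the second
    arriving, 0.0 for overlapping occupancy, or None if either user never
    enters the zone.
    """
    if not in_zone_a.any() or not in_zone_b.any():
        return None
    enter_a, exit_a = times_a[in_zone_a].min(), times_a[in_zone_a].max()
    enter_b, exit_b = times_b[in_zone_b].min(), times_b[in_zone_b].max()
    if exit_a <= enter_b:        # user A clears the zone before user B arrives
        return enter_b - exit_a
    if exit_b <= enter_a:        # user B clears the zone before user A arrives
        return enter_a - exit_b
    return 0.0                   # simultaneous occupancy: a potential collision
```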
Space surveillance satellite catalog maintenance
NASA Astrophysics Data System (ADS)
Jackson, Phoebe A.
1990-04-01
The United States Space Command (USSPACECOM) is a Unified Command of the Department of Defense with headquarters at Peterson Air Force Base, Colorado Springs, CO. One of the responsibilities of USSPACECOM is to detect, track, identify, and maintain a catalog of all manmade objects in earth orbit. This satellite catalog is the most important tool for space surveillance. The purpose of this paper is threefold: first, to identify why the command does the job of satellite catalog maintenance; second, to describe what the satellite catalog is and how it is maintained; and third, to identify the questions that must be addressed if this command is to track small space object debris. This paper's underlying rationale is to describe our catalog maintenance services so that the members of our community can use them with assurance.
Small Orbital Stereo Tracking Camera Technology Development
NASA Technical Reports Server (NTRS)
Bryan, Tom; Macleod, Todd; Gagliano, Larry
2015-01-01
On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness capability needed to safeguard orbital assets and crew; this gap poses a major risk of MOD damage to the ISS and to exploration vehicles. In 2015 this technology was added to the roadmap of NASA's Office of the Chief Technologist. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be known in order to design the proper level of MOD impact shielding and the proper mission design restrictions, and the debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations imposed on a secondary payload hosted on another vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in the proximity of the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of objects of various sizes against ground RADAR tracking and small-OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and on military targeting cameras, and the use of twin cameras provides stereo images for ranging and mission redundancy. When the cameras are pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars, which move upward in the background.
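Stereo ranging of the kind described above reduces, in the simplest pinhole model, to triangulation from the disparity between the two cameras. The sketch below uses that model with assumed baseline, focal length, and disparity values; the flight sensor's actual spot-tracking and calibration chain is not represented.

```python
def stereo_range(baseline_m, focal_px, disparity_px):
    """Range to a tracked object from a two-camera stereo pair (pinhole model).

    baseline_m:   separation between the two cameras (meters).
    focal_px:     focal length expressed in pixels.
    disparity_px: shift of the object's centroid between the left and
                  right images (pixels).
    """
    if disparity_px <= 0:
        raise ValueError("object must be resolved in both cameras")
    return baseline_m * focal_px / disparity_px

# Assumed values: a 1 m baseline, a 10,000 px focal length, and a 0.5 px
# disparity give a 20 km range estimate.
print(stereo_range(1.0, 10_000, 0.5))   # 20000.0 (meters)
```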
The role of visual attention in multiple object tracking: evidence from ERPs.
Doran, Matthew M; Hoffman, James E
2010-01-01
We examined the role of visual attention in the multiple object tracking (MOT) task by measuring the amplitude of the N1 component of the event-related potential (ERP) to probe flashes presented on targets, distractors, or empty background areas. We found evidence that visual attention enhances targets and suppresses distractors (Experiments 1 and 3). However, we also found that when the tracking load was light (two targets and two distractors), accurate tracking could be carried out without any apparent contribution from the visual attention system (Experiment 2). Our results suggest that attentional selection during MOT is flexibly determined by task demands as well as tracking load and that visual attention may not always be necessary for accurate tracking.
Catalogue Creation for Space Situational Awareness with Optical Sensors
NASA Astrophysics Data System (ADS)
Hobson, T.; Clarkson, I.; Bessell, T.; Rutten, M.; Gordon, N.; Moretti, N.; Morreale, B.
2016-09-01
In order to safeguard the continued use of space-based technologies, effective monitoring and tracking of man-made resident space objects (RSOs) is paramount. The diverse characteristics, behaviours, and trajectories of RSOs make space surveillance a challenging application of the tracking and surveillance discipline. When surveillance systems are faced with non-canonical scenarios, it is common for human operators to intervene while researchers adapt and extend traditional tracking techniques in search of a solution. A complementary strategy for improving the robustness of space surveillance systems is to place greater emphasis on anticipating uncertainty: namely, to give the system the intelligence necessary to autonomously react to unforeseen events and to act intelligently and appropriately on tenuous information rather than discard it. In this paper we build on our 2015 campaign and describe the progression of a low-cost intelligent space surveillance system capable of autonomously cataloguing and maintaining track of RSOs. It currently exploits robotic electro-optical sensors, high-fidelity state estimation and propagation, and constrained initial orbit determination (IOD) to intelligently and adaptively manage its sensors in order to maintain an accurate catalogue of RSOs. In a step towards fully autonomous cataloguing, the system has been tasked with maintaining surveillance of a portion of the geosynchronous (GEO) belt. Using a combination of survey and track-refinement modes, the system is capable of maintaining tracks of known RSOs and initiating tracks on previously unknown objects. Uniquely, owing to the use of high-fidelity representations of a target's state uncertainty, as few as two images of a previously unknown RSO may be used to subsequently initiate autonomous search and reacquisition. To achieve this capability, particularly within the congested environment of the GEO belt, we use a constrained admissible region (CAR) to generate a plausible estimate of the unknown RSO's state probability density function and disambiguate measurements using a particle-based joint probabilistic data association (JPDA) method. Additionally, the use of alternative CAR generation methods, incorporating catalogue-based priors, is explored and tested. We also present the findings of two field trials of an experimental system that incorporates these techniques. The results demonstrate that such a system is capable of autonomously searching for an RSO that was briefly observed days earlier in a GEO survey and discriminating it from the measurements of other previously catalogued RSOs.
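The admissible-region idea can be illustrated in a stripped-down form: a single angles-only observation fixes a line of sight, so candidate object positions lie along a range interval on that line. The sketch below samples such range hypotheses; the energy and eccentricity constraints that define a true constrained admissible region, and the catalogue-based priors mentioned above, are omitted.

```python
import numpy as np

def range_hypotheses(observer_pos, unit_los, r_min_km, r_max_km, n=200):
    """Turn one angles-only observation into a cloud of position hypotheses.

    A single optical measurement fixes the line of sight but not the range,
    so the object's position is constrained to a segment of that line.
    observer_pos: observer position (km, inertial frame), shape (3,).
    unit_los:     unit line-of-sight vector toward the object, shape (3,).
    Returns an (n, 3) array of candidate positions along the line of sight.
    """
    ranges = np.linspace(r_min_km, r_max_km, n)
    return observer_pos + ranges[:, None] * unit_los

# Example with assumed values: a ground station looking out toward the GEO belt.
station = np.array([6371.0, 0.0, 0.0])
los = np.array([1.0, 0.0, 0.0])
candidates = range_hypotheses(station, los, 34000.0, 38000.0)
print(candidates.shape)   # (200, 3)
```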
Rhythmic Sampling within and between Objects despite Sustained Attention at a Cued Location
Fiebelkorn, Ian C.; Saalmann, Yuri B.; Kastner, Sabine
2013-01-01
The brain directs its limited processing resources through various selection mechanisms, broadly referred to as attention. The present study investigated the temporal dynamics of two such selection mechanisms: space- and object-based selection. Previous evidence has demonstrated that preferential processing resulting from a spatial cue (i.e., space-based selection) spreads to uncued locations if those locations are part of the same object (i.e., resulting in object-based selection). But little is known about the relationship between these fundamental selection mechanisms. Here, we used human behavioral data to determine how space- and object-based selection simultaneously evolve under conditions that promote sustained attention at a cued location, varying the cue-to-target interval from 300-1100 ms. We tracked visual-target detection at a cued location (i.e., space-based selection), at an uncued location that was part of the same object (i.e., object-based selection), and at an uncued location that was part of a different object (i.e., in the absence of space- and object-based selection). The data demonstrate that even under static conditions, there is a moment-to-moment reweighting of attentional priorities based on object properties. This reweighting is revealed through rhythmic patterns of visual-target detection both within (at 8 Hz) and between (at 4 Hz) objects. PMID:24316204
NASA Astrophysics Data System (ADS)
Tüchler, Lukas; Meyer, Vera
2013-04-01
The new radar- and lightning-data based automatic cell identification, tracking, and nowcasting tool A-TNT (Austrian Thunderstorm Nowcasting Tool), developed at ZAMG, has been applied to investigate the appearance of thunderstorms on the European scale. Based on the ec-TRAM method [1], the algorithm identifies and monitors regions of intense precipitation and lightning activity separately by analyzing sequential two-dimensional intensity maps of radar precipitation rate or lightning density, respectively. Each data source is processed by a stand-alone identification, tracking, and nowcasting procedure, and the two tracking results are combined into a "main" cell in a final step. This approach allows the outputs derived from the two data sources to complement each other, giving a more comprehensive picture of the current storm situation. It thus becomes possible to distinguish between pure precipitation cells and thunderstorms, to observe regions where one data source is unavailable or only poorly available, and to compensate for occasional data failures. Consequently, the combined cell tracks are expected to be more consistent and the cell tracking more robust. The input data for radar-cell tracking on the European scale is the OPERA radar composite, which is provided every 15 minutes on a 2 km x 2 km grid and indicates the location and intensity of precipitation over Europe. For the lightning-cell tracking, the lightning-detection data of the EUCLID network are mapped onto the OPERA grid; every five minutes, flash density maps of the recorded strokes are created and analyzed. This study presents a detailed investigation of the quality of the identification and tracking results obtained from radar and lightning data, and shows the improvements in the robustness and reliability of the cell tracking achieved by combining both data sources. Analyses of cell tracks and selected storm parameters such as frequency, longevity, and area give insight into the occurrence, appearance, and impact of different severe precipitation events. These studies are performed in support of the project HAREN (Hazard Assessment based on Rainfall European Nowcasts, funded by the EC Directorate General for Humanitarian Aid and Civil Protection), whose objective is to improve warnings for precipitation-induced hazards at the local scale all over Europe. REFERENCES: [1] Meyer, V. K., H. Höller, and H. D. Betz, 2012: Automated thunderstorm tracking and nowcasting: utilization of three-dimensional lightning and radar data. Manuscript accepted for publication in ACPD.
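A bare-bones version of the identification and frame-to-frame matching steps is sketched below: contiguous regions above an intensity threshold are labelled as cells and matched between consecutive maps by nearest centroid. The threshold, the greedy matching rule, and the distance gate are illustrative assumptions and do not reproduce the ec-TRAM method itself.

```python
import numpy as np
from scipy import ndimage

def identify_cells(intensity, threshold):
    """Label contiguous regions above a threshold and return their centroids.

    intensity: 2-D array, e.g. radar rain rate or lightning density on a grid.
    Returns an (n_cells, 2) array of centroids in grid coordinates.
    """
    labels, n = ndimage.label(intensity >= threshold)
    return np.array(ndimage.center_of_mass(intensity, labels, range(1, n + 1)))

def match_cells(prev_centroids, curr_centroids, max_dist=5.0):
    """Greedy nearest-neighbour matching of cells between consecutive maps."""
    matches, used = [], set()
    for i, p in enumerate(prev_centroids):
        if len(curr_centroids) == 0:
            break
        d = np.linalg.norm(curr_centroids - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist and j not in used:
            matches.append((i, j))   # cell i in the previous map continues as cell j
            used.add(j)
    return matches
```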
Vision-based augmented reality system
NASA Astrophysics Data System (ADS)
Chen, Jing; Wang, Yongtian; Shi, Qi; Yan, Dayuan
2003-04-01
The most promising aspect of augmented reality lies in its ability to integrate the virtual world of the computer with the real world of the user, so that users can interact with real-world subjects and objects directly. This paper presents an experimental augmented reality system with a video see-through head-mounted device that displays virtual objects as if they were lying on the table together with real objects. In order to overlay virtual objects on the real world at the right position and orientation, accurate calibration and registration are most important. A vision-based method is used to estimate the CCD camera's external parameters by tracking four known points of different colors. It achieves sufficient accuracy for non-critical applications such as gaming and annotation.
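Estimating the camera's external parameters from four known points is, in essence, a perspective-n-point problem. The paper's own calibration procedure is not detailed here, so the sketch below uses OpenCV's solvePnP with assumed fiducial coordinates, pixel detections, and intrinsics purely as an illustration.

```python
import numpy as np
import cv2

# Known 3-D positions of the four colored fiducial points (assumed values, meters).
object_pts = np.array([[0.0, 0.0, 0.0],
                       [0.2, 0.0, 0.0],
                       [0.2, 0.2, 0.0],
                       [0.0, 0.2, 0.0]], dtype=np.float64)

# Their detected pixel locations in the current video frame (assumed values).
image_pts = np.array([[320.0, 240.0],
                      [420.0, 238.0],
                      [423.0, 338.0],
                      [318.0, 341.0]], dtype=np.float64)

# Intrinsic camera matrix with an assumed focal length and principal point (pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Estimate the camera's external parameters (rotation and translation) relative
# to the fiducial points; lens distortion is ignored in this sketch.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 rotation matrix
print(ok, R.shape, tvec.ravel())
```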
Detection of dominant flow and abnormal events in surveillance video
NASA Astrophysics Data System (ADS)
Kwak, Sooyeong; Byun, Hyeran
2011-02-01
We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method and a feature-based approach, so it does not detect each moving object individually. It identifies the dominant flow in crowded environments without individual object tracking using a latent Dirichlet allocation model, and it can also automatically detect and localize an abnormally moving object in real-life video. Performance tests were carried out on several real-life databases, and the results show that the proposed algorithm can efficiently detect abnormally moving objects in real time. The proposed algorithm can be applied to any situation in which abnormal directions or abnormal speeds are to be detected, regardless of the direction of motion.
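One way to realize the dominant-flow idea is to treat each video clip as a bag of quantized motion "words" and fit a topic model. The sketch below does this with scikit-learn's LatentDirichletAllocation on a synthetic count matrix; the feature quantization, the number of topics, and the abnormality scoring are assumptions, not the authors' exact formulation.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

def dominant_flow_topics(flow_words, n_topics=3):
    """Fit a topic model to quantized motion features.

    flow_words: (n_clips, n_words) count matrix; each clip is a "document"
    and each word is a quantized (grid cell, flow direction) motion feature.
    Returns the fitted model and the per-topic word distributions, which
    describe the dominant flows; clips that fit the model poorly can then
    be flagged as containing abnormal motion.
    """
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(flow_words)
    topics = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
    return lda, topics

# Example with random counts standing in for real motion words.
rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(50, 64))
model, topics = dominant_flow_topics(counts)
clip_mixtures = model.transform(counts)     # per-clip topic proportions
print(topics.shape, clip_mixtures.shape)    # (3, 64) (50, 3)
```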