Human-like object tracking and gaze estimation with PKD android
Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.
2018-01-01
As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on the Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information, facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues from humans. PMID:29416193
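The core of the gaze estimation described above is composing the eye-in-head gaze direction (from the eye tracker) with the head orientation (from motion capture). A minimal sketch in Python, assuming a yaw-only head rotation for simplicity; the paper's full 3-D formulation and calibration are not given here:

```python
import math

def rot_yaw(theta):
    """3x3 rotation matrix about the vertical (yaw) axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def gaze_direction(head_yaw, eye_dir):
    """World-frame gaze = head rotation applied to the eye-in-head direction.

    head_yaw: head orientation (radians), e.g. from motion capture.
    eye_dir:  gaze vector in the head frame, e.g. from the eye tracker.
    Returns a unit gaze vector in world coordinates.
    """
    g = mat_vec(rot_yaw(head_yaw), eye_dir)
    n = math.sqrt(sum(x * x for x in g))
    return [x / n for x in g]
```

For example, an eye looking straight ahead combined with a 90-degree head turn yields a gaze 90 degrees off the world forward axis.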
Track Everything: Limiting Prior Knowledge in Online Multi-Object Recognition.
Wong, Sebastien C; Stamatescu, Victor; Gatt, Adam; Kearney, David; Lee, Ivan; McDonnell, Mark D
2017-10-01
This paper addresses the problem of online tracking and classification of multiple objects in an image sequence. Our proposed solution is to first track all objects in the scene without relying on object-specific prior knowledge, which in other systems can take the form of hand-crafted features or user-based track initialization. We then classify the tracked objects with a fast-learning image classifier that is based on a shallow convolutional neural network architecture, and we demonstrate that object recognition improves when this is combined with object state information from the tracking algorithm. We argue that by transferring the use of prior knowledge from the detection and tracking stages to the classification stage, we can design a robust, general-purpose object recognition system with the ability to detect and track a variety of object types. We describe our biologically inspired implementation, which adaptively learns the shape and motion of tracked objects, and apply it to the Neovision2 Tower benchmark data set, which contains multiple object types. An experimental evaluation demonstrates that our approach is competitive with state-of-the-art video object recognition systems that do make use of object-specific prior knowledge in detection and tracking, while providing additional practical advantages by virtue of its generality.
Visual object recognition and tracking
NASA Technical Reports Server (NTRS)
Chang, Chu-Yin (Inventor); English, James D. (Inventor); Tardella, Neil M. (Inventor)
2010-01-01
This invention describes a method for identifying and tracking an object from two-dimensional data pictorially representing said object by an object-tracking system through processing said two-dimensional data using at least one tracker-identifier belonging to the object-tracking system for providing an output signal containing: a) a type of the object, and/or b) a position or an orientation of the object in three dimensions, and/or c) an articulation or a shape change of said object in said three dimensions.
A data set for evaluating the performance of multi-class multi-object video tracking
NASA Astrophysics Data System (ADS)
Chakraborty, Avishek; Stamatescu, Victor; Wong, Sebastien C.; Wigley, Grant; Kearney, David
2017-05-01
One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publicly available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow the evaluation of both tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contain the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publicly.
Object tracking with stereo vision
NASA Technical Reports Server (NTRS)
Huber, Eric
1994-01-01
A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.
Computer-aided target tracking in motion analysis studies
NASA Astrophysics Data System (ADS)
Burdick, Dominic C.; Marcuse, M. L.; Mislan, J. D.
1990-08-01
Motion analysis studies require the precise tracking of reference objects in sequential scenes. In a typical situation, events of interest are captured at high frame rates using special cameras, and selected objects or targets are tracked on a frame-by-frame basis to provide the necessary data for motion reconstruction. Tracking is usually done using manual methods, which are slow and prone to error. A computer-based image analysis system has been developed that performs tracking automatically. The objective of this work was to eliminate the bottleneck due to manual methods in high-volume tracking applications such as the analysis of crash test films for the automotive industry. The system has proven to be successful in tracking standard fiducial targets and other objects in crash test scenes. Over 95 percent of target positions that could be located using manual methods can be tracked by the system, with a significant improvement in throughput over manual methods. Future work will focus on the tracking of clusters of targets and on tracking deformable objects such as airbags.
Obstacle penetrating dynamic radar imaging system
Romero, Carlos E. [Livermore, CA]; Zumstein, James E. [Livermore, CA]; Chang, John T. [Danville, CA]; Leach, Richard R., Jr. [Castro Valley, CA]
2006-12-12
An obstacle penetrating dynamic radar imaging system for the detection, tracking, and imaging of an individual, animal, or object comprising a multiplicity of low power ultra wideband radar units that produce a set of return radar signals from the individual, animal, or object, and a processing system for said set of return radar signals for detection, tracking, and imaging of the individual, animal, or object. The system provides a radar video system for detecting and tracking an individual, animal, or object by producing a set of return radar signals from the individual, animal, or object with a multiplicity of low power ultra wideband radar units, and processing said set of return radar signals for detecting and tracking of the individual, animal, or object.
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity to focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image plane reference system is translated into its coordinates referred to the same area map. In the map common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' track and to perform face detection and tracking. The novelty and strength of the work reside in the cooperative multi-sensor approach, in the high resolution long distance tracking and in the automatic collection of biometric data such as a person's face clip for recognition purposes.
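The translation of image-plane detections into the shared map reference frame can be illustrated with a planar homography applied to pixel coordinates. A minimal sketch; the 3x3 matrix `H` stands in for the offline camera calibration, whose actual values the paper does not give:

```python
def apply_homography(H, x, y):
    """Map an image-plane point (x, y) to ground-map coordinates.

    H is a 3x3 planar homography (illustrative placeholder for a
    calibrated camera-to-map transform). Homogeneous coordinates are
    used: [u, v, w]^T = H [x, y, 1]^T, then divide by w.
    """
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v
```

With both cameras calibrated against the same map plane, detections from each sensor land in one common coordinate system where data fusion can proceed.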
Correlation and 3D-tracking of objects by pointing sensors
Griesmeyer, J. Michael
2017-04-04
A method and system for tracking at least one object using a plurality of pointing sensors and a tracking system are disclosed herein. In a general embodiment, the tracking system is configured to receive a series of observation data relative to the at least one object over a time base for each of the plurality of pointing sensors. The observation data may include sensor position data, pointing vector data and observation error data. The tracking system may further determine a triangulation point using a magnitude of a shortest line connecting a line of sight value from each of the series of observation data from each of the plurality of sensors to the at least one object, and perform correlation processing on the observation data and triangulation point to determine if at least two of the plurality of sensors are tracking the same object. Observation data may also be branched, associated and pruned using new incoming observation data.
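The triangulation step described above, finding the shortest line connecting two sensors' lines of sight, is standard skew-line geometry. A minimal sketch, assuming each observation is a sensor position plus a pointing direction (the patent's exact formulation and error handling may differ):

```python
import math

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment joining two lines of sight.

    p1, p2: sensor positions; d1, d2: pointing (direction) vectors.
    Returns (triangulation_point, miss_distance); a small miss distance
    suggests the two sensors are tracking the same object.
    """
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel lines of sight
    t = (b * e - c * d) / denom    # parameter of closest point on line 1
    s = (a * e - b * d) / denom    # parameter of closest point on line 2
    q1 = [p1[i] + t * d1[i] for i in range(3)]
    q2 = [p2[i] + s * d2[i] for i in range(3)]
    mid = [(q1[i] + q2[i]) / 2.0 for i in range(3)]
    miss = math.sqrt(sum((q1[i] - q2[i]) ** 2 for i in range(3)))
    return mid, miss
```

Correlation processing can then threshold `miss` against the observation error data to decide whether two tracks belong to one object.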
Real-time object detection, tracking and occlusion reasoning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Divakaran, Ajay; Yu, Qian; Tamrakar, Amir
A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Dongkyu, E-mail: akein@gist.ac.kr; Khalil, Hossam; Jo, Youngjoon
2016-06-28
An image-based tracking system using a laser scanning vibrometer is developed for vibration measurement of a rotating object. Unlike a conventional system, the proposed one can be used where a position or velocity sensor, such as an encoder, cannot be attached to the object. An image processing algorithm is introduced to detect a landmark and the laser beam based on their colors. Then, through a feedback control system, the laser beam can track the rotating object.
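The feedback loop that keeps the beam on the landmark can be pictured as a simple proportional controller driving the beam position toward the detected landmark position each frame. The gain and the image-coordinate interface below are illustrative assumptions, not taken from the paper:

```python
def steer(beam_xy, landmark_xy, gain=0.5):
    """One proportional feedback step: nudge the laser beam toward the
    landmark detected by color in the camera image.

    gain is an illustrative proportional constant (0 < gain <= 1).
    """
    ex = landmark_xy[0] - beam_xy[0]   # image-plane error in x
    ey = landmark_xy[1] - beam_xy[1]   # image-plane error in y
    return (beam_xy[0] + gain * ex, beam_xy[1] + gain * ey)
```

Iterating this step drives the error geometrically toward zero, so the beam converges on a fixed landmark; in practice the landmark moves each frame and the loop continually chases it.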
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. High visual quality and precise localization of the tracked target are desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise. The reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step applies super-resolution imaging to the frame sequence; this step is done by cropping either several frames or all of the frames. The second step tracks the resulting super-resolution images. Super-resolution is a technique to obtain high-resolution images from low-resolution images. In this research, a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation time. The method used for tracking is Camshift. The advantage of Camshift is its simple calculation based on the HSV color histogram, which copes with conditions in which the color of the object varies. The computational complexity and large memory requirements of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and good lighting conditions.
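The core of Camshift is a mean-shift iteration: repeatedly move the search window to the centroid of the back-projected color-probability map inside it. A simplified sketch with a fixed window size (real Camshift also adapts the window from the zeroth moment; the probability map here stands in for an HSV hue-histogram back-projection):

```python
def mean_shift(prob, window, iters=10):
    """Move `window` = (x, y, w, h) to the centroid of `prob` inside it.

    prob is a 2D list of per-pixel likelihoods (rows are y, columns x).
    Iterates until the window stops moving or `iters` is reached.
    """
    x, y, w, h = window
    for _ in range(iters):
        m00 = m10 = m01 = 0.0
        for j in range(y, min(y + h, len(prob))):
            for i in range(x, min(x + w, len(prob[0]))):
                p = prob[j][i]
                m00 += p           # zeroth moment (total mass)
                m10 += i * p       # first moment in x
                m01 += j * p       # first moment in y
        if m00 == 0:
            break                  # no target mass inside the window
        cx, cy = m10 / m00, m01 / m00
        nx = max(int(round(cx - w / 2.0)), 0)
        ny = max(int(round(cy - h / 2.0)), 0)
        if (nx, ny) == (x, y):
            break                  # converged
        x, y = nx, ny
    return x, y, w, h
```

Run on super-resolved frames, the probability map is sharper for small objects, which is precisely the gain the two-step solution above is after.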
MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems
NASA Astrophysics Data System (ADS)
Kopecky, Ken; Winer, Eliot
2014-06-01
Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
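The fusion that the server program performs, presenting many tracking systems as one, can be pictured as merging per-system reports keyed by object name and keeping the freshest report for each named object. The data layout below is an illustrative assumption, not MetaTracker's actual API:

```python
def merge_tracking_data(sources):
    """Combine per-system reports into a single unified view.

    Each source maps object name -> (timestamp, position). When the same
    named object is reported by several tracking systems (e.g. while it
    transitions between their areas), the most recent report wins.
    """
    merged = {}
    for source in sources:
        for name, (t, pos) in source.items():
            if name not in merged or t > merged[name][0]:
                merged[name] = (t, pos)
    return merged
```

A client application then queries the merged view by name, never needing to know which underlying tracker produced the data.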
An object detection and tracking system for unmanned surface vehicles
NASA Astrophysics Data System (ADS)
Yang, Jian; Xiao, Yang; Fang, Zhiwen; Zhang, Naiwen; Wang, Li; Li, Tao
2017-10-01
Object detection and tracking are critical parts of unmanned surface vehicles (USVs) for achieving automatic obstacle avoidance. Off-the-shelf object detection methods have achieved impressive accuracy on public datasets, though they still meet bottlenecks in practice, such as high time consumption and low detection quality. In this paper, we propose a novel system for USVs, which is able to locate objects more accurately while being fast and stable simultaneously. Firstly, we employ Faster R-CNN to acquire several initial raw bounding boxes. Secondly, the image is segmented into a few superpixels. For each initial box, the superpixels inside are grouped into a whole according to a combination strategy, and a new box is thereafter generated as the circumscribed bounding box of the final superpixel. Thirdly, we utilize KCF to track these objects over several frames; Faster R-CNN is again used to re-detect objects inside tracked boxes to prevent tracking failure as well as to remove empty boxes. Finally, we utilize Faster R-CNN to detect objects in the next image and refine object boxes by repeating the second module of our system. The experimental results demonstrate that our system is fast, robust and accurate, and can be applied to USVs in practice.
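The box-refinement module above, generating the circumscribed bounding box of the grouped superpixels, reduces to a min/max over their pixel coordinates. A minimal sketch (superpixels represented simply as lists of pixel coordinates; the paper's combination strategy for choosing which superpixels to group is not reproduced here):

```python
def circumscribed_box(superpixels):
    """Tightest axis-aligned box enclosing the grouped superpixels.

    superpixels: list of superpixels, each a list of (x, y) pixel
    coordinates that fell inside an initial detection box.
    Returns (x_min, y_min, x_max, y_max).
    """
    xs = [x for sp in superpixels for x, _ in sp]
    ys = [y for sp in superpixels for _, y in sp]
    return min(xs), min(ys), max(xs), max(ys)
```

The refined box hugs the object's true extent more closely than the raw detector output, which is what improves localization accuracy.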
Accurate object tracking system by integrating texture and depth cues
NASA Astrophysics Data System (ADS)
Chen, Ju-Chin; Lin, Yu-Hang
2016-03-01
A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, in order to reduce the increased risk of drifting for textureless depth templates, an update mechanism is proposed to select more precise tracking results and avoid incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system provides the best success rate and more accurate tracking results than other well-known algorithms.
Detection and Tracking of Moving Objects with Real-Time Onboard Vision System
NASA Astrophysics Data System (ADS)
Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.
2017-05-01
Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their position. The results can be applied to creating onboard vision systems for aircraft, including small and unmanned aircraft.
Autonomous Space Object Catalogue Construction and Upkeep Using Sensor Control Theory
NASA Astrophysics Data System (ADS)
Moretti, N.; Rutten, M.; Bessell, T.; Morreale, B.
The capability to track objects in space is critical to safeguard domestic and international space assets. Infrequent measurement opportunities, complex dynamics and partial observability of orbital state make the tracking of resident space objects nontrivial. It is not uncommon for human operators to intervene with space tracking systems, particularly in scheduling sensors. This paper details the development of a system that maintains a catalogue of geostationary objects by dynamically tasking sensors in real time to manage the uncertainty of object states. As the number of objects in space grows, the potential for collision grows exponentially. Being able to provide accurate assessments to operators regarding costly collision avoidance manoeuvres is paramount, and their accuracy is highly dependent on how object states are estimated. The system represents object state and uncertainty using particles and utilises a particle filter for state estimation. Particle filters capture the model and measurement uncertainty accurately, allowing for a more comprehensive representation of the state's probability density function. Additionally, the number of objects in space is growing disproportionately to the number of sensors used to track them. Maintaining precise positions for all objects places large loads on sensors, limiting the time available to search for new objects or track high-priority objects. Rather than precisely tracking all objects, our system manages the uncertainty in orbital state for each object independently. The uncertainty is allowed to grow, and sensor data is only requested when the uncertainty must be reduced, for example when object uncertainties overlap, leading to data association issues, or when the uncertainty grows beyond a field of view. These control laws are formulated into a cost function, which is optimised in real time to task sensors.
By controlling an optical telescope, the system has been able to construct and maintain a catalogue of approximately 100 geostationary objects.
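The uncertainty-driven control law can be caricatured as a threshold rule: request sensor time only for objects whose state uncertainty has grown too large, most uncertain first. The scalar uncertainty, threshold, and data layout below are illustrative assumptions, not the paper's actual cost function:

```python
def tasking_schedule(objects, uncertainty_limit=100.0):
    """Decide which catalogue objects need a sensor observation.

    objects: list of dicts with "id" and "sigma" (a scalar positional
    uncertainty, e.g. particle spread in km; illustrative).
    Returns object ids whose uncertainty exceeds the limit, ordered
    most-uncertain first, so the sensor reduces the worst cases first.
    """
    due = [(o["sigma"], o["id"]) for o in objects
           if o["sigma"] > uncertainty_limit]
    return [obj_id for _, obj_id in sorted(due, reverse=True)]
```

Objects below the threshold cost no sensor time at all, which is what frees the telescope to search for new objects.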
Unsupervised markerless 3-DOF motion tracking in real time using a single low-budget camera.
Quesada, Luis; León, Alejandro J
2012-10-01
Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the widespread use of commercial applications based on motion tracking. In this paper, we present a novel three-degrees-of-freedom motion tracking system that needs no knowledge of the target object and that only requires a single low-budget camera of the kind installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion blurred, given that it is opaque, evenly colored, contrasts sufficiently with the background in each frame, and does not rotate. Our system is also able to determine the most relevant object to track on the screen. Our proposal does not impose additional constraints, therefore it allows a market-wide implementation of applications that require the estimation of the three position degrees of freedom of an object.
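For an evenly colored object that contrasts with the background, the three position degrees of freedom can be sketched as: image-plane position from the centroid of foreground pixels, and depth from apparent size via the pinhole model. This is a hypothetical illustration of the idea, not the paper's algorithm; the focal length and real object width are assumed calibration values:

```python
def estimate_3dof(mask, f=500.0, real_width=0.1):
    """Estimate (x, y, z) of a contrasting object from a binary mask.

    mask: 2D list of 0/1 foreground flags (rows are y, columns x).
    f: assumed focal length in pixels; real_width: assumed true object
    width in metres. Depth uses the pinhole relation z = f * W / w,
    where w is the apparent width in pixels.
    """
    pts = [(i, j) for j, row in enumerate(mask)
           for i, v in enumerate(row) if v]
    if not pts:
        return None                       # object not visible this frame
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    width = max(p[0] for p in pts) - min(p[0] for p in pts) + 1
    return cx, cy, f * real_width / width
```

The "does not rotate" constraint matters here: apparent width only maps to depth when the object presents a constant profile to the camera.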
Monocular Stereo Measurement Using High-Speed Catadioptric Tracking
Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku
2017-01-01
This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
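The 3D measurement from the virtual left and right views follows the standard pinhole stereo relation: depth is focal length times baseline over disparity. A minimal sketch (rectified views assumed; the system's actual mirror geometry and calibration are more involved):

```python
def depth_from_disparity(xl, xr, f, baseline):
    """Pinhole stereo depth Z = f * B / d for a rectified pair.

    xl, xr: horizontal image coordinates of the same point in the
    left and right views (pixels); f: focal length (pixels);
    baseline: separation of the two virtual viewpoints (metres).
    """
    d = xl - xr                      # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: point at infinity "
                         "or mismatched correspondence")
    return f * baseline / d
```

The "sufficient parallax" the paper emphasizes is exactly a large enough baseline B: depth error grows as disparity shrinks, so a wider virtual camera separation yields more accurate 3D measurement.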
Color Image Processing and Object Tracking System
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.
1996-01-01
This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on a x-y-z micro-positioning stage, an S-VHS tapedeck, an Hi8 tapedeck, video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.
AN/FSY-3 Space Fence System Support of Conjunction Assessment
NASA Astrophysics Data System (ADS)
Koltiska, M.; Du, H.; Prochoda, D.; Kelly, K.
2016-09-01
The Space Fence System is a ground-based space surveillance radar system designed to detect and track all objects in Low Earth Orbit the size of a softball or larger. The system detects many objects that are not currently in the catalog of satellites and space debris that is maintained by the US Air Force. In addition, it will also be capable of tracking many of the deep space objects in the catalog. By providing daily updates of the orbits of these new objects along with updates of most of the objects in the catalog, it will enhance Space Situational Awareness and significantly improve our ability to predict close approaches, aka conjunctions, of objects in space. With this additional capacity for tracking objects in space the Space Surveillance Network has significantly more resources for monitoring orbital debris, especially for debris that could collide with active satellites and other debris.
Vision-based object detection and recognition system for intelligent vehicles
NASA Astrophysics Data System (ADS)
Ran, Bin; Liu, Henry X.; Martono, Wilfung
1999-01-01
Recently, a proactive crash mitigation system has been proposed to enhance the crash avoidance and survivability of intelligent vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as system component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects by using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems: the vehicle detection and recognition sub-system and the traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames. The analysis of the single image frame is performed every ten full-size images. The information model obtains information related to the object, such as time to collision for the object vehicle and relative distance from the traffic signs. Experimental results demonstrated a robust and accurate system for real-time object detection and recognition over thousands of image frames.
Cortical Circuit for Binding Object Identity and Location During Multiple-Object Tracking
Nummenmaa, Lauri; Oksama, Lauri; Glerean, Erico; Hyönä, Jukka
2017-01-01
Abstract Sustained multifocal attention for moving targets requires binding object identities with their locations. The brain mechanisms of identity-location binding during attentive tracking have remained unresolved. In 2 functional magnetic resonance imaging experiments, we measured participants’ hemodynamic activity during attentive tracking of multiple objects with equivalent (multiple-object tracking) versus distinct (multiple identity tracking, MIT) identities. Task load was manipulated parametrically. Both tasks activated large frontoparietal circuits. MIT led to significantly increased activity in frontoparietal and temporal systems subserving object recognition and working memory. These effects were replicated when eye movements were prohibited. MIT was associated with significantly increased functional connectivity between lateral temporal and frontal and parietal regions. We propose that coordinated activity of this network subserves identity-location binding during attentive tracking. PMID:27913430
Creating objective and measurable postgraduate year 1 residency graduation requirements.
Starosta, Kaitlin; Davis, Susan L; Kenney, Rachel M; Peters, Michael; To, Long; Kalus, James S
2017-03-15
The process of developing objective and measurable postgraduate year 1 (PGY1) residency graduation requirements and a progress tracking system is described. The PGY1 residency accreditation standard requires that programs establish criteria that must be met by residents for successful completion of the program (i.e., graduation requirements), which should presumably be aligned with helping residents to achieve the purpose of residency training. In addition, programs must track a resident's progress toward fulfillment of residency goals and objectives. Defining graduation requirements and establishing the process for tracking residents' progress are left up to the discretion of the residency program. To help standardize resident performance assessments, leaders of an academic medical center-based PGY1 residency program developed graduation requirement criteria that are objective, measurable, and linked back to residency goals and objectives. A system for tracking resident progress relative to quarterly progress targets was instituted. Leaders also developed a focused, on-the-spot skills assessment termed "the Thunderdome," which was designed for objective evaluation of direct patient care skills. Quarterly data on residents' progress are used to update and customize each resident's training plan. Implementation of this system allowed seamless linkage of the training plan, the progress tracking system, and the specified graduation requirement criteria. PGY1 residency requirements that are objective, that are measurable, and that attempt to identify what skills the resident must demonstrate in order to graduate from the program were developed for use in our residency program. A system for tracking the residents' progress by comparing residents' performance to predetermined quarterly benchmarks was developed. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Rodríguez-Canosa, Gonzalo; Giner, Jaime del Cerro; Barrientos, Antonio
2014-01-01
The detection and tracking of mobile objects (DATMO) is progressively gaining importance for security and surveillance applications. This article proposes a set of new algorithms and procedures for detecting and tracking mobile objects by robots that work collaboratively as part of a multirobot system. These surveillance algorithms are conceived to work with data provided by long-distance range sensors and are intended for highly reliable object detection in wide outdoor environments. Contrary to most common approaches, in which detection and tracking are done by an integrated procedure, the approach proposed here relies on a modular structure, in which detection and tracking are carried out independently, and the latter might accept input data from different detection algorithms. Two movement detection algorithms have been developed for the detection of dynamic objects using both static and mobile robots. The solution to the overall problem is based on the use of a Kalman filter to predict the next state of each tracked object. Additionally, new tracking algorithms capable of combining dynamic object lists coming from one or more sources complete the solution. The complementary performance of the separated modular structure for detection and identification is evaluated and, finally, a selection of test examples is discussed. PMID:24526305
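The Kalman-filter prediction step described above can be sketched for a single tracked object with a constant-velocity model; the state layout, noise covariances, and measurement below are illustrative assumptions, not parameters from the article.

```python
import numpy as np

# Constant-velocity Kalman filter sketch for one tracked object.
# State x = [px, py, vx, vy]; noise values are assumed for illustration.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # position-only measurement
Q = np.eye(4) * 0.01                        # process noise (assumed)
R = np.eye(2) * 0.1                         # measurement noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

x = np.array([0.0, 0.0, 1.0, 0.5])          # start at origin, known velocity
P = np.eye(4)
x, P = predict(x, P)                        # predicted next state
x, P = update(x, P, np.array([1.0, 0.5]))   # measurement agrees with motion
```

The predict step is what produces each object's "next state" for matching new detections; the update step folds the matched detection back into the track.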
Multi-object tracking of human spermatozoa
NASA Astrophysics Data System (ADS)
Sørensen, Lauge; Østergaard, Jakob; Johansen, Peter; de Bruijne, Marleen
2008-03-01
We propose a system for tracking of human spermatozoa in phase-contrast microscopy image sequences. One of the main aims of a computer-aided sperm analysis (CASA) system is to automatically assess sperm quality based on spermatozoa motility variables. In our case, the problem of assessing sperm quality is cast as a multi-object tracking problem, where the objects being tracked are the spermatozoa. The system combines a particle filter and Kalman filters for robust motion estimation of the spermatozoa tracks. Further, the combinatorial aspect of assigning observations to labels in the particle filter is formulated as a linear assignment problem solved using the Hungarian algorithm on a rectangular cost matrix, making the algorithm capable of handling missing or spurious observations. The costs are calculated using hidden Markov models that express the plausibility of an observation being the next position in the track history of the particle labels. Observations are extracted using a scale-space blob detector utilizing the fact that the spermatozoa appear as bright blobs in a phase-contrast microscope. The output of the system is the complete motion track of each of the spermatozoa. Based on these tracks, different CASA motility variables can be computed, for example curvilinear velocity or straight-line velocity. The performance of the system is tested on three different phase-contrast image sequences of varying complexity, both by visual inspection of the estimated spermatozoa tracks and by measuring the mean squared error (MSE) between the estimated spermatozoa tracks and manually annotated tracks, showing good agreement.
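The linear assignment step described above can be illustrated on a small rectangular cost matrix. The paper solves it with the Hungarian algorithm; the brute-force sketch below (assumed cost values, exhaustive search) only demonstrates the same objective of a minimum-cost matching of labels to observations.

```python
from itertools import permutations

# Brute-force linear assignment sketch: each particle label gets at most one
# observation so that total cost is minimal. Only feasible for tiny matrices;
# a real system would use the Hungarian algorithm on the rectangular matrix.
def assign(cost):
    """cost[i][j] = cost of assigning observation j to label i."""
    n_labels, n_obs = len(cost), len(cost[0])
    best, best_perm = float("inf"), None
    for perm in permutations(range(n_obs), n_labels):
        total = sum(cost[i][perm[i]] for i in range(n_labels))
        if total < best:
            best, best_perm = total, perm
    return best, list(best_perm)

# Two labels, three candidate observations (one stays unmatched, which is how
# spurious detections are tolerated).
cost = [[4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0]]
total, match = assign(cost)
```

In the paper the costs come from hidden Markov models scoring how plausible each observation is as the next position of each label's track.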
Horowitz, Todd S.; Kuzmova, Yoana
2011-01-01
The evidence is mixed as to whether the visual system treats objects and holes differently. We used a multiple object tracking task to test the hypothesis that figural objects are easier to track than holes. Observers tracked four of eight items (holes or objects). We used an adaptive algorithm to estimate the speed allowing 75% tracking accuracy. In Experiments 1–5, the distinction between holes and figures was accomplished by pictorial cues, while red-cyan anaglyphs were used to provide the illusion of depth in Experiment 6. We variously used Gaussian pixel noise, photographic scenes, or synthetic textures as backgrounds. Tracking was more difficult when a complex background was visible, as opposed to a blank background. Tracking was easier when disks carried fixed, unique markings. When these factors were controlled for, tracking holes was no more difficult than tracking figures, suggesting that they are equivalent stimuli for tracking purposes. PMID:21334361
Real-time Human Activity Recognition
NASA Astrophysics Data System (ADS)
Albukhary, N.; Mustafah, Y. M.
2017-11-01
The traditional Closed-circuit Television (CCTV) system requires humans to monitor the video feed 24/7, which is inefficient and costly. Therefore, there is a need for a system that can recognize human activity effectively in real time. This paper concentrates on recognizing simple activities such as walking, running, sitting, standing and landing by using image processing techniques. Firstly, object detection is done by using background subtraction to detect moving objects. Then, object tracking and object classification are constructed so that different persons can be differentiated by using feature detection. Geometrical attributes of each tracked object, namely its centroid and aspect ratio, are manipulated so that simple activities can be detected.
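A minimal sketch of the geometric-attribute step described above, assuming one bounding box per tracked person; the aspect-ratio and speed thresholds are invented for illustration and are not values from the paper.

```python
# Classify a simple activity from a tracked object's bounding box.
# box = (x, y, w, h); prev_centroid is the centroid in the previous frame.
# Thresholds below are illustrative assumptions.
def classify(box, prev_centroid, fps=25.0):
    x, y, w, h = box
    centroid = (x + w / 2.0, y + h / 2.0)
    aspect = w / float(h)
    speed = ((centroid[0] - prev_centroid[0]) ** 2 +
             (centroid[1] - prev_centroid[1]) ** 2) ** 0.5 * fps
    if aspect > 1.2:            # wider than tall: person is lying down
        activity = "lying"
    elif speed > 120:           # pixels per second (assumed threshold)
        activity = "running"
    elif speed > 10:
        activity = "walking"
    else:
        activity = "standing"
    return centroid, activity

centroid, activity = classify((100, 50, 30, 90), (110.0, 95.0))
# centroid is (115.0, 95.0); the large frame-to-frame motion reads as running
```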
Mark Tracking: Position/orientation measurements using 4-circle mark and its tracking experiments
NASA Technical Reports Server (NTRS)
Kanda, Shinji; Okabayashi, Keijyu; Maruyama, Tsugito; Uchiyama, Takashi
1994-01-01
Future space robots require position and orientation tracking with visual feedback control to track and capture floating objects and satellites. We developed a four-circle mark that is useful for this purpose. With this mark, four geometric center positions as feature points can be extracted from the mark by simple image processing. We also developed a position and orientation measurement method that uses the four feature points in our mark. The mark gave good enough image measurement accuracy to let space robots approach and contact objects. A visual feedback control system using this mark enabled a robot arm to track a target object accurately. The control system was able to tolerate a time delay of 2 seconds.
NASA Technical Reports Server (NTRS)
Wilcox, Brian H.; Tso, Kam S.; Litwin, Todd E.; Hayati, Samad A.; Bon, Bruce B.
1991-01-01
Experimental robotic system semiautomatically grasps rotating object, stops rotation, and pulls object to rest in fixture. Based on combination of advanced techniques for sensing and control, constructed to test concepts for robotic recapture of spinning artificial satellites. Potential terrestrial applications for technology developed with help of system include tracking and grasping of industrial parts on conveyor belts, tracking of vehicles and animals, and soft grasping of moving objects in general.
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which can leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
Multi-view video segmentation and tracking for video surveillance
NASA Astrophysics Data System (ADS)
Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj
2009-05-01
Tracking moving objects is a critical step for smart video surveillance systems. Despite the increased complexity, multiple-camera systems exhibit the undoubted advantages of covering wide areas and handling the occurrence of occlusions by exploiting the different viewpoints. The technical problems in multiple-camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple un-calibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify objects in each view separately for inconsistencies. Correspondent objects are extracted through a homography transform from one view to the other and vice versa. Having found the correspondent objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and mapped frame) a set of descriptors is extracted to find the best match between two views based on region-descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance and disappearance of objects are resolved using information from all views. This method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
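The homography step used above to find correspondent objects between views can be sketched as point transfer in homogeneous coordinates; the matrix `H` below is an illustrative translation, not calibration data from the paper.

```python
import numpy as np

# Map a point from view 1 into view 2 with a 3x3 homography H:
# lift to homogeneous coordinates, apply H, then de-homogenize.
def map_point(H, pt):
    p = np.array([pt[0], pt[1], 1.0])
    q = H @ p
    return q[0] / q[2], q[1] / q[2]

# Illustrative homography: a pure translation by (10, 5), chosen so the
# mapping is easy to verify by eye. A real H is estimated from matched
# features between the two overlapping views.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 5.0],
              [0.0, 0.0, 1.0]])
```

Applying the same transfer to every pixel of a region produces the "region map" of one view inside the other, which is then compared by descriptor similarity.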
The role of "rescue saccades" in tracking objects through occlusions.
Zelinsky, Gregory J; Todor, Andrei
2010-12-29
We hypothesize that our ability to track objects through occlusions is mediated by timely assistance from gaze in the form of "rescue saccades": eye movements to tracked objects that are in danger of being lost due to impending occlusion. Observers tracked 2-4 target sharks (out of 9) for 20 s as they swam through a rendered 3D underwater scene. Targets were either allowed to enter into occlusions (occlusion trials) or not (no-occlusion trials). Tracking accuracy with 2-3 targets was ≥ 92% regardless of target occlusion but dropped to 74% on occlusion trials with four targets (no-occlusion trials remained accurate: 83%). This pattern was mirrored in the frequency of rescue saccades. Rescue saccades accompanied approximately 50% of the Track 2-3 target occlusions, but only 34% of the Track 4 occlusions. Their frequency also decreased with increasing distance between a target and the nearest other object, suggesting that it is the potential for target confusion that summons a rescue saccade, not occlusion itself. These findings provide evidence for a tracking system that monitors for events that might cause track loss (e.g., occlusions) and requests help from the oculomotor system to resolve these momentary crises. As the number of crises increases with the number of targets, some requests for help go unsatisfied, resulting in degraded tracking.
Color image processing and object tracking workstation
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Paulick, Michael J.
1992-01-01
A system is described for automatic and semiautomatic tracking of objects on film or video tape which was developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16 mm film projector, a lens system, a video camera, an S-VHS tapedeck, a frame grabber, and some storage and output devices. Both the projector and tapedeck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or the tapedeck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.
Hardware accelerator design for tracking in smart camera
NASA Astrophysics Data System (ADS)
Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil
2011-10-01
Smart cameras are important components in video analysis. For video analysis, smart cameras need to detect interesting moving objects, track such objects from frame to frame, and perform analysis of object tracks in real time. Therefore, the use of real-time tracking is prominent in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (like PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-approach-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution grayscale video.
Real-time edge tracking using a tactile sensor
NASA Technical Reports Server (NTRS)
Berger, Alan D.; Volpe, Richard; Khosla, Pradeep K.
1989-01-01
Object recognition through the use of input from multiple sensors is an important aspect of an autonomous manipulation system. In tactile object recognition, it is necessary to determine the location and orientation of object edges and surfaces. A controller is proposed that utilizes a tactile sensor in the feedback loop of a manipulator to track along edges. In the control system, the data from the tactile sensor is first processed to find edges. The parameters of these edges are then used to generate a control signal to a hybrid controller. Theory is presented for tactile edge detection and an edge tracking controller. In addition, experimental verification of the edge tracking controller is presented.
Security Applications Of Computer Motion Detection
NASA Astrophysics Data System (ADS)
Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry
1987-05-01
An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed; the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, and mean and median tests, and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
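The simple frame-differencing stage mentioned above can be sketched in a few lines; the threshold and the synthetic frames are assumptions for illustration.

```python
import numpy as np

# Frame differencing: flag pixels whose absolute intensity change between
# consecutive frames exceeds a threshold. Threshold value is assumed.
def motion_mask(prev, curr, thresh=25):
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

prev = np.zeros((4, 4), dtype=np.uint8)   # empty background frame
curr = prev.copy()
curr[1:3, 1:3] = 200                      # a small bright object appears
mask = motion_mask(prev, curr)            # True only where the object moved in
```

In practice this mask feeds the later tracking and identification stages; as the abstract notes, differencing alone is too noisy in cluttered scenes, which is why the statistical tests and the expert system follow it.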
Real time eye tracking using Kalman extended spatio-temporal context learning
NASA Astrophysics Data System (ADS)
Munir, Farzeen; Minhas, Fayyaz ul Amir Asfar; Jalil, Abdul; Jeon, Moongu
2017-06-01
Real time eye tracking has numerous applications in human computer interaction such as a mouse cursor control in a computer system. It is useful for persons with muscular or motion impairments. However, tracking the movement of the eye is complicated by occlusion due to blinking, head movement, screen glare, rapid eye movements, etc. In this work, we present the algorithmic and construction details of a real time eye tracking system. Our proposed system is an extension of Spatio-Temporal context learning through Kalman Filtering. Spatio-Temporal Context Learning offers state of the art accuracy in general object tracking but its performance suffers due to object occlusion. Addition of the Kalman filter allows the proposed method to model the dynamics of the motion of the eye and provide robust eye tracking in cases of occlusion. We demonstrate the effectiveness of this tracking technique by controlling the computer cursor in real time by eye movements.
Tracking target objects orbiting earth using satellite-based telescopes
De Vries, Willem H; Olivier, Scot S; Pertica, Alexander J
2014-10-14
A system for tracking objects that are in earth orbit via a constellation or network of satellites having imaging devices is provided. An object tracking system includes a ground controller and, for each satellite in the constellation, an onboard controller. The ground controller receives ephemeris information for a target object and directs that ephemeris information be transmitted to the satellites. Each onboard controller receives ephemeris information for a target object, collects images of the target object based on the expected location of the target object at an expected time, identifies actual locations of the target object from the collected images, and identifies a next expected location at a next expected time based on the identified actual locations of the target object. The onboard controller processes the collected image to identify the actual location of the target object and transmits the actual location information to the ground controller.
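The onboard prediction of the next expected location can be sketched under a simplifying assumption of constant-velocity extrapolation from the last two identified actual locations; a real system would propagate full orbital ephemerides, so this only illustrates the bookkeeping.

```python
# Predict the next expected image-plane location of a target object from its
# last two identified actual locations and their timestamps.
# Constant-velocity extrapolation is an illustrative simplification.
def next_expected(loc0, t0, loc1, t1, t2):
    vx = (loc1[0] - loc0[0]) / (t1 - t0)
    vy = (loc1[1] - loc0[1]) / (t1 - t0)
    return (loc1[0] + vx * (t2 - t1), loc1[1] + vy * (t2 - t1))

# Seen at (0, 0) at t=0 and (2, 1) at t=1; where to point the imager at t=3?
pred = next_expected((0.0, 0.0), 0.0, (2.0, 1.0), 1.0, 3.0)
```

The predicted location tells the onboard controller where (and when) to collect the next image, closing the collect-identify-predict loop the abstract describes.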
Kumar, Aditya; Shi, Ruijie; Kumar, Rajeeva; Dokucu, Mustafa
2013-04-09
Control system and method for controlling an integrated gasification combined cycle (IGCC) plant are provided. The system may include a controller coupled to a dynamic model of the plant to process a prediction of plant performance and determine a control strategy for the IGCC plant over a time horizon subject to plant constraints. The control strategy may include control functionality to meet a tracking objective and control functionality to meet an optimization objective. The control strategy may be configured to prioritize the tracking objective over the optimization objective based on a coordinate transformation, such as an orthogonal or quasi-orthogonal projection. A plurality of plant control knobs may be set in accordance with the control strategy to generate a sequence of coordinated multivariable control inputs to meet the tracking objective and the optimization objective subject to the prioritization resulting from the coordinate transformation.
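The prioritization-by-projection idea described above can be sketched as follows: the optimization objective's input adjustment is projected onto the null space of the tracking objective's sensitivity matrix, so pursuing the secondary objective cannot disturb the tracking targets. The matrices and adjustment vectors are illustrative, not plant data, and this is only one plausible reading of the coordinate transformation the patent describes.

```python
import numpy as np

# Prioritize tracking over optimization via a null-space projection.
# J_track: sensitivity of the tracking objective to the control knobs.
# u_track: input meeting the tracking target; u_opt: optimization adjustment.
def prioritized_input(J_track, u_track, u_opt):
    # Null-space projector of J_track: P = I - J^+ J
    P = np.eye(J_track.shape[1]) - np.linalg.pinv(J_track) @ J_track
    return u_track + P @ u_opt

J = np.array([[1.0, 0.0, 0.0]])     # tracking objective senses only knob 0
u_t = np.array([0.5, 0.0, 0.0])     # input that meets the tracking target
u_o = np.array([0.2, 0.3, -0.1])    # adjustment suggested by optimization
u = prioritized_input(J, u_t, u_o)  # knob 0 keeps its tracking value
```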
Detecting impossible changes in infancy: a three-system account
Wang, Su-hua; Baillargeon, Renée
2012-01-01
Can infants detect that an object has magically disappeared, broken apart or changed color while briefly hidden? Recent research suggests that infants detect some but not other ‘impossible’ changes; and that various contextual manipulations can induce infants to detect changes they would not otherwise detect. We present an account that includes three systems: a physical-reasoning, an object-tracking, and an object-representation system. What impossible changes infants detect depends on what object information is included in the physical-reasoning system; this information becomes subject to a principle of persistence, which states that objects can undergo no spontaneous or uncaused change. What contextual manipulations induce infants to detect impossible changes depends on complex interplays between the physical-reasoning system and the object-tracking and object-representation systems. PMID:18078778
NASA Astrophysics Data System (ADS)
Li, Shengbo Eben; Li, Guofa; Yu, Jiaying; Liu, Chang; Cheng, Bo; Wang, Jianqiang; Li, Keqiang
2018-01-01
Detection and tracking of objects in the side-near-field has attracted much attention for the development of advanced driver assistance systems. This paper presents a cost-effective approach to track moving objects around vehicles using linearly arrayed ultrasonic sensors. To understand the detection characteristics of a single sensor, an empirical detection model was developed considering the shapes and surface materials of various detected objects. Eight sensors were arrayed linearly to expand the detection range for further application in traffic environment recognition. Two types of tracking algorithms for the sensor array, an Extended Kalman filter (EKF) and an Unscented Kalman filter (UKF), were designed for dynamic object tracking. The ultrasonic sensor array was designed to have two types of fire sequences: mutual firing or serial firing. The effectiveness of the designed algorithms was verified in two typical driving scenarios: passing intersections with traffic sign poles or street lights, and overtaking another vehicle. Experimental results showed that both the EKF and the UKF gave more precise tracking positions and smaller RMSE (root mean square error) than a traditional triangular positioning method. These results also encourage the application of cost-effective ultrasonic sensors for near-field environment perception in autonomous driving systems.
Determination of feature generation methods for PTZ camera object tracking
NASA Astrophysics Data System (ADS)
Doyle, Daniel D.; Black, Jonathan T.
2012-06-01
Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase the learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex, with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000) and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images from a stationary camera. Further testing is performed on a sequence of images in which the PTZ camera moves in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.
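The nearest-neighbour matching used with the trained feature generators above can be illustrated with Lowe's ratio test: a query descriptor is matched only if its nearest training descriptor is much closer than the second nearest. The paper accelerates this search with FLANN; the brute-force sketch below uses toy descriptor vectors, not real SIFT/SURF output.

```python
# Brute-force descriptor matching with Lowe's ratio test.
# query/train are lists of descriptor vectors (toy 2-D vectors here).
def match(query, train, ratio=0.7):
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((sum((a - b) ** 2 for a, b in zip(q, t)) ** 0.5, ti)
                       for ti, t in enumerate(train))
        # Accept only if the best match is clearly better than the runner-up.
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

query = [(0.0, 1.0), (5.0, 5.0)]
train = [(0.0, 1.1), (4.0, 4.0), (5.0, 4.9)]
found = match(query, train)
```

The ratio test is what suppresses ambiguous matches; FLANN replaces the exhaustive distance computation with an approximate nearest-neighbour index but applies the same acceptance criterion.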
Real-time optical multiple object recognition and tracking system and method
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)
1987-01-01
The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time, space and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
High-performance object tracking and fixation with an online neural estimator.
Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian
2007-02-01
Vision-based target tracking and fixation to keep objects that move in three dimensions in view is important for many tasks in several fields including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. In line with this, this paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimum sensory information. The procedure allows the designer to take into account the physical (Lagrangian dynamics) properties of the vision system in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system's dynamics into account, yet needs no knowledge of dynamic nonlinearities and joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee the uniform ultimate bounds of the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation tests in the presence of severe target motion changes.
NASA Astrophysics Data System (ADS)
Bouaynaya, N.; Schonfeld, Dan
2005-03-01
Many real-world applications in computer vision and multimedia, such as augmented reality and environmental imaging, require an elastic, accurate contour around a tracked object. In the first part of the paper we introduce a novel tracking algorithm that combines a motion estimation technique with the Bayesian Importance Sampling framework. We use Adaptive Block Matching (ABM) as the motion estimation technique. We construct the proposal density from the estimated motion vector. The resulting algorithm requires a small number of particles for efficient tracking. The tracking is adaptive to different categories of motion even with poor a priori knowledge of the system dynamics. In particular, off-line learning is not needed. A parametric representation of the object is used for tracking purposes. In the second part of the paper, we refine the tracking output from a parametric sample to an elastic contour around the object. We use a 1D active contour model based on a dynamic programming scheme to refine the output of the tracker. To improve the convergence of the active contour, we perform the optimization over a set of randomly perturbed initial conditions. Our experiments are applied to head tracking. We report promising tracking results in complex environments.
Doublet Pulse Coherent Laser Radar for Tracking of Resident Space Objects
2014-09-01
Prasad, Narasimha S.; Rudd, Van; Shald, Scott; ...
...based laser systems can be limited by the effects of tumbling, extremely accurate Doppler measurement is possible using a doublet coherent laser radar.
Li, Mingjie; Zhou, Ping; Wang, Hong; ...
2017-09-19
As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. In this correspondence, an optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at the set-point tracking objective of pulp quality, the economic objective, and the specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times is employed to construct the subprocess model of the state process model for the HC refining system, and then a Wiener-type model is obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structures are determined by the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes both the set-point tracking objective of pulp quality and SE consumption is proposed, which uses the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective of pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. Simulation results demonstrate that the proposed methods give the HC refining system better set-point tracking performance for pulp quality when these predictive controllers are employed. In addition, when the optimal predictive controllers are oriented toward the comprehensive economic objective and the SE consumption objective, they significantly reduce energy consumption.
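The Pareto-optimality test underlying the NSGA-II stage described above can be sketched for two minimized objectives (say, pulp-quality tracking error and SE consumption); the candidate values are illustrative.

```python
# Pareto dominance for minimization: a dominates b if a is at least as good
# in every objective and strictly better in at least one.
def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

# The non-dominated front: candidates no other candidate dominates.
def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (tracking error, SE consumption) pairs for four candidate controllers.
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(candidates)   # (3.0, 4.0) is dominated by (2.0, 3.0)
```

NSGA-II layers this dominance test into repeated non-dominated sorting plus crowding-distance selection; the front above is the "Pareto optimal set" from which a final operating trade-off is chosen.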
Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera
NASA Astrophysics Data System (ADS)
Dziri, Aziz; Duranton, Marc; Chapuis, Roland
2016-07-01
Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.
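The abstract does not spell out the pipeline's internals; as a generic sketch of one core step, frame-to-frame data association by intersection-over-union (a common choice on fixed smart cameras, assumed here rather than taken from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_detections(tracks, detections, thresh=0.3):
    """Greedily pair existing track boxes with new detections by IoU,
    highest-overlap pairs first; unmatched boxes are left for the
    occlusion / track-management logic."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    used_t, used_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score < thresh:
            break
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches
```

For example, `match_detections([(0, 0, 10, 10), (20, 20, 30, 30)], [(21, 21, 31, 31), (1, 1, 11, 11)])` pairs track 0 with detection 1 and track 1 with detection 0.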
Real-time optical holographic tracking of multiple objects
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Liu, Hua-Kuang
1989-01-01
A coherent optical correlation technique for real-time simultaneous tracking of several different objects making independent movements is described, and experimental results are presented. An evaluation of this system compared with digital computing systems is made. The real-time processing capability is obtained through the use of a liquid crystal television spatial light modulator and a dichromated gelatin multifocus hololens. A coded reference beam is utilized in the separation of the output correlation plane associated with each input target so that independent tracking can be achieved.
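The correlator described here is optical, but the underlying principle, locating a target at the peak of a correlation surface, can be sketched digitally. A minimal (deliberately brute-force) digital analogue, with a hypothetical toy scene:

```python
def correlate_2d(image, template):
    """Slide a template over an image and return the (row, col) offset
    with the highest correlation score (sum of elementwise products)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = sum(
                image[r + i][c + j] * template[i][j]
                for i in range(th) for j in range(tw))
            if best is None or score > best:
                best, best_pos = score, (r, c)
    return best_pos

# A bright 2x2 target embedded at offset (1, 2) in a dark scene.
scene = [[0, 0, 0, 0, 0],
         [0, 0, 9, 9, 0],
         [0, 0, 9, 9, 0],
         [0, 0, 0, 0, 0]]
target = [[9, 9],
          [9, 9]]
print(correlate_2d(scene, target))  # → (1, 2)
```

The optical system computes essentially this correlation at the speed of light for several templates at once; the coded reference beam plays the role of separating each template's output plane.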
An inexpensive programmable illumination microscope with active feedback.
Tompkins, Nathan; Fraden, Seth
2016-02-01
We have developed a programmable illumination system capable of tracking and illuminating numerous objects simultaneously using only low-cost and reused optical components. The active feedback control software allows for a closed-loop system that tracks and perturbs objects of interest automatically. Our system uses a static stage where the objects of interest are tracked computationally as they move across the field of view allowing for a large number of simultaneous experiments. An algorithmically determined illumination pattern can be applied anywhere in the field of view with simultaneous imaging and perturbation using different colors of light to enable spatially and temporally structured illumination. Our system consists of a consumer projector, camera, 35-mm camera lens, and a small number of other optical and scaffolding components. The entire apparatus can be assembled for under $4,000.
Object tracking with adaptive HOG detector and adaptive Rao-Blackwellised particle filter
NASA Astrophysics Data System (ADS)
Rosa, Stefano; Paleari, Marco; Ariano, Paolo; Bona, Basilio
2012-01-01
Scenarios for a manned mission to the Moon or Mars call for astronaut teams to be accompanied by semiautonomous robots. A prerequisite for human-robot interaction is the capability of successfully tracking humans and objects in the environment. In this paper we present a system for real-time visual object tracking in 2D images for mobile robotic systems. The proposed algorithm is able to specialize to individual objects and to adapt to substantial changes in illumination and object appearance during tracking. The algorithm is composed of two main blocks: a detector based on Histogram of Oriented Gradient (HOG) descriptors and linear Support Vector Machines (SVM), and a tracker implemented by an adaptive Rao-Blackwellised particle filter (RBPF). The SVM is re-trained online on new samples taken from previous predicted positions. We use the effective sample size to decide when the classifier needs to be re-trained. Position hypotheses for the tracked object are the result of a clustering procedure applied to the set of particles. The algorithm has been tested on challenging video sequences presenting strong changes in object appearance, illumination, and occlusion. Experimental tests show that the presented method is able to achieve near real-time performance with a precision of about 7 pixels on standard video sequences of dimensions 320 × 240.
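The effective-sample-size criterion mentioned above is standard for particle filters; a minimal sketch under the usual definition ESS = 1/Σw², with the retraining threshold fraction chosen arbitrarily here, not taken from the paper:

```python
def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized importance weights.

    Ranges from 1 (all weight on one particle, fully degenerate)
    to len(weights) (uniform weights, no degeneracy).
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    return 1.0 / sum(w * w for w in norm)

def needs_retraining(weights, fraction=0.5):
    """Flag degeneracy when ESS falls below a fraction of the particle count."""
    return effective_sample_size(weights) < fraction * len(weights)
```

With uniform weights over 10 particles the ESS is 10 and no retraining is triggered; a near-degenerate weight vector drives the ESS toward 1 and trips the threshold.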
Qin, Lei; Snoussi, Hichem; Abdallah, Fahed
2014-01-01
We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences. PMID:24865883
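The region covariance descriptor referenced above has a standard definition: the covariance matrix of per-pixel feature vectors over the candidate region. A plain sketch of that computation (the feature choice is illustrative, not the paper's adaptively extracted pool):

```python
def covariance_descriptor(features):
    """Sample covariance matrix of a list of per-pixel feature vectors.

    `features` holds equal-length vectors, e.g. (x, y, intensity,
    gradient magnitude) for every pixel inside the candidate region.
    The resulting d x d matrix is the region covariance descriptor.
    """
    n = len(features)
    d = len(features[0])
    mean = [sum(f[k] for f in features) / n for k in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in features:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (f[i] - mean[i]) * (f[j] - mean[j])
    return [[cov[i][j] / (n - 1) for j in range(d)] for i in range(d)]
```

Because the descriptor lives on the manifold of symmetric positive-definite matrices, distances between regions are usually measured with a Riemannian metric rather than elementwise differences; that step is omitted here.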
Remote gaze tracking system for 3D environments.
Liu, Congcong; Herrup, Karl; Shi, Bertram E.
2017-07-01
Eye tracking systems are typically divided into two categories: remote and mobile. Remote systems, where the eye tracker is located near the object being viewed by the subject, have the advantage of being less intrusive, but are typically used for tracking gaze points on fixed two dimensional (2D) computer screens. Mobile systems such as eye tracking glasses, where the eye tracker is attached to the subject, are more intrusive, but are better suited for cases where subjects are viewing objects in the three dimensional (3D) environment. In this paper, we describe how remote gaze tracking systems developed for 2D computer screens can be used to track gaze points in a 3D environment. The system is non-intrusive. It compensates for small head movements by the user, so that the head need not be stabilized by a chin rest or bite bar. The system maps the 3D gaze points of the user onto 2D images from a scene camera and is also located remotely from the subject. Measurement results from this system indicate that it is able to estimate gaze points in the scene camera to within one degree over a wide range of head positions.
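Mapping 3D gaze points onto the 2D scene-camera image, as described above, amounts to a camera projection. A minimal pinhole-model sketch (the normalized focal length and principal point are assumptions for illustration, not the system's calibration):

```python
def project_to_scene_camera(point, f=1.0, cx=0.0, cy=0.0):
    """Pinhole projection of a 3D gaze point, expressed in scene-camera
    coordinates with Z pointing forward, onto the image plane.

    f is the focal length and (cx, cy) the principal point, both in
    normalized image units here.
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("gaze point must be in front of the camera")
    return (f * x / z + cx, f * y / z + cy)

# A gaze point 2 m ahead, 1 m right, 2 m up projects to (0.5, 1.0).
print(project_to_scene_camera((1.0, 2.0, 2.0)))  # → (0.5, 1.0)
```

A full system would precede this with a rigid transform from head or world coordinates into camera coordinates and would include lens-distortion terms; both are omitted for clarity.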
Parallel computation of level set method for 500 Hz visual servo control
NASA Astrophysics Data System (ADS)
Fei, Xianfeng; Igarashi, Yasunobu; Hashimoto, Koichi
2008-11-01
We propose a 2D microorganism tracking system using a parallel level set method and a column parallel vision system (CPV). This system keeps a single microorganism in the middle of the visual field under a microscope by visual servoing an automated stage. We propose a new energy function for the level set method. This function constrains an amount of light intensity inside the detected object contour to control the number of the detected objects. This algorithm is implemented in CPV system and computational time for each frame is 2 [ms], approximately. A tracking experiment for about 25 s is demonstrated. Also we demonstrate a single paramecium can be kept tracking even if other paramecia appear in the visual field and contact with the tracked paramecium.
Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning
NASA Astrophysics Data System (ADS)
Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.
2018-04-01
At present, intelligent video analysis technology has been widely used in various fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of images still suffers from some unavoidable problems. Pixel-based target tracking cannot reflect the real position information of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking based on the target's space coordinate system, obtained by converting the 2-D coordinates of the target into 3-D coordinates. The experimental results show that our method can recover the real position change information of targets well and can also accurately obtain the trajectory of the target in space.
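Recovering a world position from a pixel, as the method above does after Zhang-style calibration, can be illustrated with the simplest case: intersecting a pixel's viewing ray with the ground plane for a camera at known height and pitch. This is a deliberate simplification of the full 2-D to 3-D conversion, restricted to the vertical image axis:

```python
import math

def ground_distance(v, h, pitch, f=1.0):
    """Horizontal distance to the ground point imaged at vertical pixel
    offset `v` (positive downward, in units of the focal length `f`),
    for a camera at height `h` pitched down by `pitch` radians.

    The viewing ray leaves the camera at angle pitch + atan(v / f)
    below the horizontal; it hits the ground h / tan(angle) away.
    """
    angle = pitch + math.atan2(v, f)
    if angle <= 0:
        raise ValueError("ray does not intersect the ground plane")
    return h / math.tan(angle)
```

For a camera 2 m high pitched down 45 degrees, the principal point (`v = 0`) maps to a ground point 2 m away horizontally. Zhang's method supplies the intrinsics and extrinsics that this toy model hard-codes.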
Lagrangian 3D tracking of fluorescent microscopic objects in motion
NASA Astrophysics Data System (ADS)
Darnige, T.; Figueroa-Morales, N.; Bohec, P.; Lindner, A.; Clément, E.
2017-05-01
We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtain time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing, determining the displacement of an x-y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on refocusing of the fluorescent object, determining the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of Hertz. This device is particularly well adapted to obtain trajectories of motile micro-organisms in microfluidic devices with or without flow.
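The x-y stage feedback loop described above can be sketched as a proportional correction that re-centers the tracked object each frame; the gain and the micrometers-per-pixel scale below are placeholders, not the instrument's calibrated values:

```python
def stage_correction(obj_px, center_px, um_per_px, gain=0.8):
    """Proportional x-y stage move (in micrometers) that drives the
    tracked object back toward the center of the observation frame.

    obj_px and center_px are (x, y) pixel coordinates; um_per_px is
    the image scale; gain < 1 damps the correction to avoid overshoot.
    """
    dx = (center_px[0] - obj_px[0]) * um_per_px * gain
    dy = (center_px[1] - obj_px[1]) * um_per_px * gain
    return dx, dy
```

An object drifting 10 px right of center with a 0.5 um/px scale yields a -4 um x-correction per cycle; repeating this at tens of Hertz keeps a swimming micro-organism locked in the field of view while the commanded stage positions trace out its Lagrangian track.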
Li, Liyuan; Huang, Weimin; Gu, Irene Yu-Hua; Luo, Ruijiang; Tian, Qi
2008-10-01
Efficiency and robustness are the two most important issues for multiobject tracking algorithms in real-time intelligent video surveillance systems. We propose a novel 2.5-D approach to real-time multiobject tracking in crowds, which is formulated as a maximum a posteriori estimation problem and is approximated through an assignment step and a location step. Observing that the occluding object is usually less affected by the occluded objects, sequential solutions for the assignment and the location are derived. A novel dominant color histogram (DCH) is proposed as an efficient object model. The DCH can be regarded as a generalized color histogram, where dominant colors are selected based on a given distance measure. Compared with conventional color histograms, the DCH only requires a few color components (31 on average). Furthermore, our theoretical analysis and evaluation on real data have shown that DCHs are robust to illumination changes. Using the DCH, efficient implementations of sequential solutions for the assignment and location steps are proposed. The assignment step includes the estimation of the depth order for the objects in a dispersing group, one-by-one assignment, and feature exclusion from the group representation. The location step includes the depth-order estimation for the objects in a new group, the two-phase mean-shift location, and the exclusion of tracked objects from the new position in the group. Multiobject tracking results and evaluation from public data sets are presented. Experiments on image sequences captured from crowded public environments have shown good tracking results, where about 90% of the objects have been successfully tracked with the correct identification numbers by the proposed method. Our results and evaluation have indicated that the method is efficient and robust for tracking multiple objects (≥ 3) in complex occlusion for real-world surveillance scenarios.
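A dominant color histogram in the spirit described above can be sketched by keeping only the most frequent quantized colors until a target share of the pixels is covered. The coverage threshold and the assumption that colors are already quantized are illustrative simplifications, not the paper's distance-measure-based selection:

```python
from collections import Counter

def dominant_color_histogram(pixels, coverage=0.9):
    """Keep only the most frequent quantized colors that together cover
    `coverage` of the region's pixels; return {color: normalized weight}.

    This compresses a full color histogram down to a handful of
    dominant components, in the spirit of the DCH object model.
    """
    counts = Counter(pixels)
    total = len(pixels)
    kept, mass = {}, 0.0
    for color, n in counts.most_common():
        kept[color] = n / total
        mass += n / total
        if mass >= coverage:
            break
    return kept

# 100 quantized pixels: two dominant colors suffice to cover 90%.
pixels = ['red'] * 70 + ['green'] * 25 + ['blue'] * 5
print(dominant_color_histogram(pixels))  # 'blue' is dropped
```

Matching two regions then compares only these few components instead of a full histogram, which is where the efficiency gain comes from.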
Baigzadehnoe, Barmak; Rahmani, Zahra; Khosravi, Alireza; Rezaie, Behrooz
2017-09-01
In this paper, the position and force tracking control problem of a cooperative robot manipulator system handling a common rigid object with unknown dynamical models and unknown external disturbances is investigated. The universal approximation properties of fuzzy logic systems are employed to estimate the unknown system dynamics. On the other hand, by defining new state variables based on the integral and differential of position and orientation errors of the grasped object, the error system of the coordinated robot manipulators is constructed. Subsequently, by defining an appropriate change of coordinates and using the backstepping design strategy, an adaptive fuzzy backstepping position tracking control scheme is proposed for multi-robot manipulator systems. By utilizing the properties of internal forces, extra terms are also added to the control signals to address the force tracking problem. Moreover, it is shown that the proposed adaptive fuzzy backstepping position/force control approach ensures that all signals of the closed-loop system are uniformly ultimately bounded, and tracking errors of both positions and forces can converge to small desired values by proper selection of the design parameters. Finally, the theoretic achievements are tested on two three-link planar robot manipulators cooperatively handling a common object to illustrate the effectiveness of the proposed approach. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Memory-Based Multiagent Coevolution Modeling for Robust Moving Object Tracking
Wang, Yanjiang; Qi, Yujuan; Li, Yongping
2013-01-01
The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match of the appearance of an object, and a memory-based multiagent coevolution algorithm for robustly tracking moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system, based on its own experience. A number of such memory-based agents are randomly distributed near the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object through their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can locate the correct object after the appearance changes or the occlusion is resolved, and it outperforms traditional particle filter-based tracking methods. PMID:23843739
NASA Astrophysics Data System (ADS)
Warren, Ryan Duwain
Three primary objectives were defined for this work. The first objective was to determine, assess, and compare the performance, heat transfer characteristics, economics, and feasibility of real-world stationary and dual-axis tracking grid-connected photovoltaic (PV) systems in the Upper Midwest. This objective was achieved by installing two grid-connected PV systems with different mounting schemes in central Iowa, implementing extensive data acquisition systems, monitoring operation of the PV systems for one full year, and performing detailed experimental performance and economic studies. The two PV systems that were installed, monitored, and analyzed included a 4.59 kWp roof-mounted stationary system oriented for maximum annual energy production, and a 1.02 kWp pole-mounted actively controlled dual-axis tracking system. The second objective was to demonstrate the actual use and performance of real-world stationary and dual-axis tracking grid-connected PV systems used for building energy generation applications. This objective was achieved by offering the installed PV systems to the public for demonstration purposes and through the development of three computer-based tools: a software interface that has the ability to display real-time and historical performance and meteorological data of both systems side-by-side, a software interface that shows real-time and historical video and photographs of each system, and a calculator that can predict performance and economics of stationary and dual-axis tracking grid-connected PV systems at various locations in the United States. The final objective was to disseminate this work to social, professional, scientific, and academic communities in a way that is applicable, objective, accurate, accessible, and comprehensible. This final objective will be addressed by publishing the results of this work and making the computer-based tools available on a public website (www.energy.iastate.edu/Renewable/solar). 
Detailed experimental performance analyses were performed for both systems; results were quantified and compared between systems, focusing on measures of solar resource, energy generation, power production, and efficiency. This work also presents heat transfer characteristics of both arrays and quantifies the effects of operating temperature on PV system performance in terms of overall heat transfer coefficients and temperature coefficients for power. To assess potential performance of PV in the Upper Midwest, models were built to predict performance of the PV systems operating at lower temperatures. Economic analyses were performed for both systems focusing on measures of life-cycle cost, payback period, internal rate of return, and average incremental cost of solar energy. The potential economic feasibility of grid-connected stationary PV systems used for building energy generation in the Upper Midwest was assessed under assumptions of higher utility energy costs, lower initial installed costs, and different metering agreements. The annual average daily solar insolation seen by the stationary and dual-axis tracking systems was found to be 4.37 and 5.95 kWh/m2, respectively. In terms of energy generation, the tracking system outperformed the stationary system on annual, monthly, and often daily bases; normalized annual energy generation for the tracking and stationary systems were found to be 1,779 and 1,264 kWh/kWp, respectively. The annual average conversion efficiencies of the tracking and stationary systems were found to be approximately 11 and 10.7 percent, respectively. Annual performance ratio values of the tracking and stationary system were found to be 0.819 and 0.792, respectively. The net present values of both systems under all assumed discount rates were determined to be negative. Further, neither system was found to have a payback period less than the assumed system life of 25 years. 
The rates of return of the stationary and tracking systems were found to be -3.3 and -4.9 percent, respectively. Furthermore, the average incremental cost of energy provided by the stationary and dual-axis tracking systems over their assumed useful life is projected to be 0.31 and 0.37 dollars per kWh, respectively. Results of this study suggest that grid-connected PV systems used for building energy generation in the Upper Midwest are not yet economically feasible when compared to a range of alternative investments; however, PV systems could show feasibility under more favorable economic scenarios. Throughout the year of monitoring, array operating temperatures ranged from -24.7°C (-12.4°F) to 61.7°C (143.1°F) for the stationary system and -23.9°C (-11°F) to 52.7°C (126.9°F) for the dual-axis tracking system during periods of system operation. The hourly average overall heat transfer coefficients for solar irradiance levels greater than 200 W/m2 for the stationary and dual-axis tracking systems were found to be 20.8 and 29.4 W/m2°C, respectively. The experimental temperature coefficients for power for the stationary and dual-axis tracking systems at a solar irradiance level of 1,000 W/m2 were -0.30 and -0.38 %/°C, respectively. Simulations of the stationary and dual-axis tracking systems operating at lower temperatures suggest that annual conversion efficiencies could potentially be increased by up to 4.3 and 4.6 percent, respectively.
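The reported performance ratios can be checked against the standard definition, PR = final yield / reference yield, with the reference yield taken as annual in-plane insolation in kWh/m² (numerically equal to peak-sun hours at 1 kW/m²):

```python
def performance_ratio(annual_yield_kwh_per_kwp, daily_insolation_kwh_m2):
    """PR = final yield / reference yield.

    Final yield: annual energy per installed kWp.
    Reference yield: annual in-plane insolation in kWh/m^2, which
    numerically equals the energy an ideal, loss-free array would
    produce per kWp at 1 kW/m^2 reference irradiance.
    """
    reference_yield = daily_insolation_kwh_m2 * 365.0
    return annual_yield_kwh_per_kwp / reference_yield

# Figures reported above for the dual-axis tracking system.
print(round(performance_ratio(1779.0, 5.95), 3))  # → 0.819
# And for the stationary system.
print(round(performance_ratio(1264.0, 4.37), 3))  # → 0.792
```

Both computed values reproduce the performance ratios reported in the abstract (0.819 tracking, 0.792 stationary), confirming the figures are internally consistent.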
Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding
Li, Xin; Guo, Rui; Chen, Chao
2014-01-01
Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose to explore a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamical updating of the template/dictionary and combining multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach. PMID:24961216
Textual and shape-based feature extraction and neuro-fuzzy classifier for nuclear track recognition
NASA Astrophysics Data System (ADS)
Khayat, Omid; Afarideh, Hossein
2013-04-01
Track counting algorithms, as one of the fundamental principles of nuclear science, have been emphasized in recent years. Accurate measurement of nuclear tracks on solid-state nuclear track detectors is the aim of track counting systems. Commonly, track counting systems comprise a hardware system for the imaging task and software for analysing the track images. In this paper, a track recognition algorithm based on 12 defined textual and shape-based features and a neuro-fuzzy classifier is proposed. Features are defined so as to discern the tracks from the background and small objects. Then, according to the defined features, tracks are detected using a trained neuro-fuzzy system. The features and the classifier are validated on 100 alpha track images and 40 training samples. It is shown that the principal textual and shape-based features concomitantly yield a high rate of track detection compared with single-feature-based methods.
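Shape-based features of the kind described above are easy to illustrate; circularity, for instance, separates round etched tracks from irregular background objects. The two descriptors below are generic examples, not necessarily among the paper's 12 features:

```python
import math

def shape_features(area, perimeter):
    """Two simple shape descriptors for a segmented blob.

    Circularity 4*pi*A/P^2 equals 1 for an ideal circle and decreases
    for elongated or ragged shapes; compactness P^2/A is its rough
    inverse. Etched alpha tracks tend to be near-circular, so a
    circularity threshold helps reject background debris.
    """
    circularity = 4.0 * math.pi * area / (perimeter ** 2)
    compactness = perimeter ** 2 / area
    return circularity, compactness

# An ideal circle of radius 5 scores circularity 1; a square scores pi/4.
r = 5.0
circ, _ = shape_features(math.pi * r ** 2, 2.0 * math.pi * r)
print(round(circ, 6))  # → 1.0
```

In a full system such per-blob features, together with textual (intensity-distribution) features, would form the input vector to the trained neuro-fuzzy classifier.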
2003-09-03
KENNEDY SPACE CENTER, FLA. - Workers calibrate a tracking telescope, part of the Distant Object Attitude Measurement System (DOAMS), located in Cocoa Beach, Fla. The telescope provides optical support for launches from KSC and Cape Canaveral.
Jung, Jaehoon; Yoon, Inhye; Paik, Joonki
2016-01-01
This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978
Multiple Hypothesis Tracking (MHT) for Space Surveillance: Results and Simulation Studies
NASA Astrophysics Data System (ADS)
Singh, N.; Poore, A.; Sheaff, C.; Aristoff, J.; Jah, M.
2013-09-01
With the anticipated installation of more accurate sensors and the increased probability of future collisions between space objects, the potential number of observable space objects is likely to increase by an order of magnitude within the next decade, thereby placing an ever-increasing burden on current operational systems. Moreover, the need to track closely-spaced objects due, for example, to breakups as illustrated by the recent Chinese ASAT test or the Iridium-Kosmos collision, requires new, robust, and autonomous methods for space surveillance to enable the development and maintenance of the present and future space catalog and to support the overall space surveillance mission. The problem of correctly associating a stream of uncorrelated tracks (UCTs) and uncorrelated optical observations (UCOs) into common objects is critical to mitigating the number of UCTs and is a prerequisite to subsequent space catalog maintenance. Presently, such association operations are mainly performed using non-statistical simple fixed-gate association logic. In this paper, we report on the salient features and the performance of a newly-developed statistically-robust system-level multiple hypothesis tracking (MHT) system for advanced space surveillance. The multiple-frame assignment (MFA) formulation of MHT, together with supporting astrodynamics algorithms, provides a new joint capability for space catalog maintenance, UCT/UCO resolution, and initial orbit determination. The MFA-MHT framework incorporates multiple hypotheses for report to system track data association and uses a multi-arc construction to accommodate recently developed algorithms for multiple hypothesis filtering (e.g., AEGIS, CAR-MHF, UMAP, and MMAE). This MHT framework allows us to evaluate the benefits of many different algorithms ranging from single- and multiple-frame data association to filtering and uncertainty quantification. 
In this paper, it will be shown that the MHT system can provide superior tracking performance compared to existing methods at a lower computational cost, especially for closely-spaced objects, in realistic multi-sensor multi-object tracking scenarios over multiple regimes of space. Specifically, we demonstrate that the prototype MHT system can accurately and efficiently process tens of thousands of UCTs and angles-only UCOs emanating from thousands of objects in LEO, GEO, MEO and HELO, many of which are closely-spaced, in real-time on a single laptop computer, thereby making it well-suited for large-scale breakup and tracking scenarios. This is possible in part because complexity reduction techniques are used to control the runtime of MHT without sacrificing accuracy. We assess the performance of MHT in relation to other tracking methods in multi-target, multi-sensor scenarios ranging from easy to difficult (i.e., widely-spaced objects to closely-spaced objects), using realistic physics and probabilities of detection less than one. In LEO, it is shown that the MHT system is able to address the challenges of processing breakups by analyzing multiple frames of data simultaneously in order to improve association decisions, reduce cross-tagging, and reduce unassociated UCTs. As a result, the multi-frame MHT system can establish orbits up to ten times faster than single-frame methods. Finally, it is shown that in GEO, MEO and HELO, the MHT system is able to address the challenges of processing angles-only optical observations by providing a unified multi-frame framework.
Cross-Modal Attention Effects in the Vestibular Cortex during Attentive Tracking of Moving Objects.
Frank, Sebastian M; Sun, Liwei; Forster, Lisa; Tse, Peter U; Greenlee, Mark W
2016-12-14
The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking. In this study we investigate cross-modal attention effects in the human vestibular cortex. We applied the visual multiple-object tracking task because it is known to evoke attentional load effects on neural activity in visual motion-processing and attention-processing areas. 
Here we demonstrate a load-dependent effect of attention on the activation in the vestibular cortex, despite constant visual motion stimulation. We find that activity in the parietoinsular vestibular cortex is more strongly suppressed the greater the attentional load on the visual tracking task. These findings suggest cross-modal attentional modulation in the vestibular cortex.
2015-03-27
(i.e., temporarily focusing on one object instead of wide-area survey) or SOI collection on high-interest objects (e.g., unidentified objects) ... The Air Force Institute of Technology has spent the last seven years conducting research on orbit identification and object characterization of space ... objects through the use of commercial-off-the-shelf hardware systems controlled via custom software routines, referred to simply as TeleTrak.
Integrated track stability assessment and monitoring system (ITSAMS).
DOT National Transportation Integrated Search
2006-10-01
The overall objective of the project is to continue the development of remote sensing technologies that can be integrated and deployed in a mobile inspection vehicle, i.e., the Integrated Track Stability Assessment and Monitoring System (ITSAMS).
Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.
Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena
2014-11-01
A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system is endowed with hardware- and software-synchronous image acquisition with timestamp embedding in the captured images, brain-surface-oriented feature detection, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed by tracking a customized object in different positions and orientations and at different linear and angular speeds. The system is able to detect an immobile object's position and orientation with a maximum error of 0.5 mm and 1.6° over the whole depth of field, and can track an object moving at up to 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domains. The amplitude of motion of the center of mass of the cloud of features was below 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis, related to the sympathovagal balance, breathing, and blood pressure, at 0.03-0.05, 0.2, and 1 Hz, respectively. The stereo vision system presented is a precise and robust system to measure brain shift and pulsatility with an accuracy superior to other reported systems.
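The frequency-domain step described here — separating sympathovagal, breathing, and blood-pressure peaks in a motion trace — amounts to locating the strongest bins of a discrete Fourier transform. A minimal sketch (not the authors' code; the sampling rate and synthetic trace below are assumptions for illustration):

```python
import numpy as np

def dominant_freqs(trace, fs, n_peaks=3):
    """Return the n_peaks strongest frequency components (Hz) of a 1-D trace.
    trace: motion signal samples; fs: sampling rate in Hz."""
    trace = np.asarray(trace, dtype=float)
    trace = trace - trace.mean()              # remove the DC offset
    spec = np.abs(np.fft.rfft(trace))         # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    order = np.argsort(spec[1:])[::-1] + 1    # strongest bins first, skipping DC
    return sorted(float(f) for f in freqs[order[:n_peaks]])

# Illustrative trace: slow 0.2 Hz "breathing" plus a weaker 1 Hz "pulse",
# sampled at an assumed 30 Hz camera frame rate for 60 s.
t = np.arange(1800) / 30.0
trace = np.sin(2 * np.pi * 0.2 * t) + 0.5 * np.sin(2 * np.pi * 1.0 * t)
print(dominant_freqs(trace, fs=30.0, n_peaks=2))
```

With exact-bin frequencies as above, the two returned peaks land on the breathing and pulse components.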
2003-09-03
KENNEDY SPACE CENTER, FLA. - A worker calibrates a tracking telescope, part of the Distant Object Attitude Measurement System (DOAMS), located in Cocoa Beach, Fla. The telescope provides optical support for launches from KSC and Cape Canaveral.
Visual perception system and method for a humanoid robot
NASA Technical Reports Server (NTRS)
Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)
2012-01-01
A robotic system includes a humanoid robot with robotic joints, each moveable using one or more actuators, and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts the exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
NASA Astrophysics Data System (ADS)
Oku, H.; Ogawa, N.; Ishikawa, M.; Hashimoto, K.
2005-03-01
In this article, a micro-organism tracking system using a high-speed vision system is reported. The system tracks a freely swimming micro-organism in two dimensions within the field of an optical microscope by moving a chamber of target micro-organisms based on high-speed visual feedback. The system we developed could track a paramecium using various imaging techniques, including bright-field illumination, dark-field illumination, and differential interference contrast, at magnifications of 5× and 20×. A maximum tracking duration of 300 s was demonstrated. Also, the system could track an object with a velocity of up to 35,000 μm/s (175 diameters/s), which is significantly faster than swimming micro-organisms.
A Fast MEANSHIFT Algorithm-Based Target Tracking System
Sun, Jian
2012-01-01
Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme is studied in this paper. A fast mean-shift-based target tracking scheme is designed and realized, which is robust to partial occlusion and changes in object appearance. The physical simulation shows that the image signal processing speed is >50 frames/s. PMID:22969397
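The mean-shift iteration at the core of such a tracker repeatedly moves a search window to the centroid of a target-likelihood map until it stops shifting. A minimal NumPy sketch, with a synthetic likelihood map standing in for the color-histogram back-projection a real TTS would compute:

```python
import numpy as np

def mean_shift(weights, window, max_iter=20, eps=0.5):
    """Shift a box toward the centroid of `weights` under it (mean-shift).
    weights: 2-D target-likelihood map (a real tracker would use a
    colour-histogram back-projection); window: (row, col, height, width)."""
    r, c, h, w = window
    for _ in range(max_iter):
        patch = weights[r:r + h, c:c + w]
        total = patch.sum()
        if total == 0:
            break  # no target evidence under the window
        ys, xs = np.mgrid[0:h, 0:w]
        # offset of the weighted centroid from the window centre
        dy = (ys * patch).sum() / total - (h - 1) / 2.0
        dx = (xs * patch).sum() / total - (w - 1) / 2.0
        r = int(round(min(max(r + dy, 0), weights.shape[0] - h)))
        c = int(round(min(max(c + dx, 0), weights.shape[1] - w)))
        if abs(dy) < eps and abs(dx) < eps:
            break  # converged
    return r, c, h, w
```

Starting the window near a synthetic Gaussian "target" at (60, 70) converges onto it within a few iterations; OpenCV's `cv2.meanShift` implements the same idea on real back-projections.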
Doublet Pulse Coherent Laser Radar for Tracking of Resident Space Objects
NASA Technical Reports Server (NTRS)
Prasad, Narasimha S.; Rudd, Van; Shald, Scott; Sandford, Stephen; Dimarcantonio, Albert
2014-01-01
In this paper, the development of a long-range ladar system known as ExoSPEAR at NASA Langley Research Center for tracking rapidly moving resident space objects is discussed. Based on a 100 W, nanosecond-class, near-IR laser, this ladar system with a coherent detection technique is currently being investigated for short-dwell-time measurements of resident space objects (RSOs) in LEO and beyond for space surveillance applications. This unique ladar architecture is configured using a continuously agile doublet-pulse waveform scheme coupled with a closed-loop tracking and control approach to simultaneously achieve mm-class range precision and mm/s-class velocity precision and hence obtain unprecedented track accuracies. Salient features of the design architecture are presented, followed by performance modeling and engagement simulations illustrating the dependence of range and velocity precision in LEO orbits on ladar parameters. Estimated limits on detectable optical cross sections of RSOs in LEO orbits are discussed.
Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.
Souto, David; Kerzel, Dirk
2013-02-06
Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular combination of rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects whose rotational and translational motion was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. Relative to a condition without a rotational component, we obtained benefits of congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired by a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.
Near-real-time biplanar fluoroscopic tracking system for the video tumor fighter
NASA Astrophysics Data System (ADS)
Lawson, Michael A.; Wika, Kevin G.; Gilles, George T.; Ritter, Rogers C.
1991-06-01
We have developed software capable of the three-dimensional tracking of objects in the brain volume, and the subsequent overlaying of an image of the object onto previously obtained MR or CT scans. This software has been developed for use with the Magnetic Stereotaxis System (MSS), also called the 'Video Tumor Fighter' (VTF). The software was written for a Sun 4/110 SPARC workstation with an ANDROX ICS-400 image processing card installed to manage this task. At present, the system uses input from two orthogonally-oriented, visible- light cameras and a simulated scene to determine the three-dimensional position of the object of interest. The coordinates are then transformed into MR or CT coordinates and an image of the object is displayed in the appropriate intersecting MR slice on a computer screen. This paper describes the tracking algorithm and discusses how it was implemented in software. The system's hardware is also described. The limitations of the present system are discussed and plans for incorporating bi-planar, x-ray fluoroscopy are presented.
Simultaneous Tracking of Multiple Points Using a Wiimote
ERIC Educational Resources Information Center
Skeffington, Alex; Scully, Kyle
2012-01-01
This paper reviews the construction of an inexpensive motion tracking and data logging system, which can be used for a wide variety of teaching experiments ranging from entry-level physics courses to advanced courses. The system utilizes an affordable infrared camera found in a Nintendo Wiimote to track IR LEDs mounted to the objects to be tracked.
NASA Astrophysics Data System (ADS)
Griffiths, D.; Boehm, J.
2018-05-01
With deep learning approaches now out-performing traditional image processing techniques for image understanding, this paper assesses the potential of rapid generation of Convolutional Neural Networks (CNNs) for applied engineering purposes. Three CNNs are trained on 275 UAS-derived and freely available online images for object detection of 3 m² segments of railway track. These include two models based on the Faster RCNN object detection algorithm (Resnet and Inception-Resnet) as well as the novel one-stage Focal Loss network architecture (Retinanet). Model performance was assessed with respect to three accuracy metrics. The first two consisted of Intersection over Union (IoU) with thresholds of 0.5 and 0.1. The third assesses accuracy based on the proportion of track covered by object detection proposals against total track length. In under six hours of training (and two hours of manual labelling) the models detected 91.3 %, 83.1 % and 75.6 % of the track in the 500 test images acquired from the UAS survey for Retinanet, Resnet and Inception-Resnet, respectively. We then discuss the potential applications of such systems within the engineering field for a range of scenarios.
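The IoU accuracy metric used above has a compact definition for axis-aligned boxes. A minimal sketch, assuming corner-format (x1, y1, x2, y2) boxes:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection counts as a true positive at threshold 0.5 when `iou(pred, truth) >= 0.5`; the looser 0.1 threshold accepts far coarser localisation.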
Object Tracking Vision System for Mapping the UCN τ Apparatus Volume
NASA Astrophysics Data System (ADS)
Lumb, Rowan; UCNtau Collaboration
2016-09-01
The UCN τ collaboration has an immediate goal to measure the lifetime of the free neutron to within 0.1%, i.e. about 1 s. The UCN τ apparatus is a magneto-gravitational ``bottle'' system. This system holds low-energy, or ultracold, neutrons in the apparatus with the constraint of gravity, and keeps these low-energy neutrons from interacting with the bottle via a strong 1 T surface magnetic field created by a bowl-shaped array of permanent magnets. The apparatus is wrapped with energized coils to supply a magnetic field throughout the ``bottle'' volume to prevent depolarization of the neutrons. An object-tracking stereo-vision system will be presented that precisely tracks a Hall probe and allows a mapping of the magnetic field throughout the volume of the UCN τ bottle. The stereo-vision system utilizes two cameras and open-source OpenCV software to track an object's 3-D position in space in real time. The desired resolution is ±1 mm along each axis. The vision system is being used as part of an even larger system to map the magnetic field of the UCN τ apparatus and expose any possible systematic effects due to field cancellation or low-field points, which could allow neutrons to depolarize and possibly escape from the apparatus undetected. Tennessee Technological University.
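The core computation of such a two-camera system — recovering a 3-D position from a pair of image coordinates — is triangulation. OpenCV exposes this as `cv2.triangulatePoints`; a self-contained linear (DLT) sketch shows the idea, with made-up camera matrices standing in for a real calibration:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3-D point from two pinhole views.
    P1, P2: 3x4 camera projection matrices; uv1, uv2: (u, v) image coords."""
    # Each view contributes two homogeneous linear constraints on X.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null-space vector = homogeneous 3-D point
    return X[:3] / X[3]
```

With noiseless inputs and known projection matrices the recovered point is exact; with real cameras, calibration error sets the achievable ±1 mm budget.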
Visual tracking using objectness-bounding box regression and correlation filters
NASA Astrophysics Data System (ADS)
Mbelwa, Jimmy T.; Zhao, Qingjie; Lu, Yao; Wang, Fasheng; Mbise, Mercy
2018-03-01
Visual tracking is a fundamental problem in computer vision with extensive application domains in surveillance and intelligent systems. Recently, correlation filter-based tracking methods have shown great achievements in terms of robustness, accuracy, and speed. However, such methods have difficulty dealing with fast motion (FM), motion blur (MB), illumination variation (IV), and drifting caused by occlusion (OCC). To solve this problem, a tracking method is proposed that integrates an objectness-bounding box regression (O-BBR) model and a scheme based on the kernelized correlation filter (KCF). The KCF-based scheme is used to improve the tracking performance under FM and MB. To handle the drift problem caused by OCC and IV, we propose objectness proposals trained by bounding box regression as prior knowledge to provide candidates and background suppression. Finally, the KCF scheme as a base tracker and O-BBR are fused to obtain the state of the target object. Extensive experimental comparisons of the developed tracking method with other state-of-the-art trackers are performed on some challenging video sequences. The comparison results show that our proposed tracking method outperforms other state-of-the-art tracking methods in terms of effectiveness, accuracy, and robustness.
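The principle behind correlation-filter trackers such as KCF is a filter trained so that its response peaks on the target, evaluated cheaply in the Fourier domain. The sketch below is a single-channel linear variant (closer to MOSSE than to the full kernelized KCF, and purely illustrative):

```python
import numpy as np

def mosse_train(template, sigma=2.0, lam=1e-2):
    """Train a linear correlation filter on one patch (MOSSE-style).
    Returns the filter in the Fourier domain."""
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # desired response: a sharp Gaussian peak centred on the target
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    F = np.fft.fft2(template)
    G = np.fft.fft2(g)
    return G * np.conj(F) / (F * np.conj(F) + lam)  # lam regularises division

def mosse_respond(H, patch):
    """Correlation response of a new patch; its argmax gives the target shift."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
```

Shifting the training patch circularly moves the response peak by the same amount, which is how the tracker localises the target in the next frame; the real KCF adds a kernel trick and multi-channel features on top of this idea.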
Simultaneous Tracking of Multiple Points Using a Wiimote
NASA Astrophysics Data System (ADS)
Skeffington, Alex; Scully, Kyle
2012-11-01
This paper reviews the construction of an inexpensive motion tracking and data logging system, which can be used for a wide variety of teaching experiments ranging from entry-level physics courses to advanced courses. The system utilizes an affordable infrared camera found in a Nintendo Wiimote to track IR LEDs mounted to the objects to be tracked. Two quick experiments are presented using the motion tracking system to demonstrate the diversity of tasks this system can handle. The first experiment uses the Wiimote to record the harmonic motion of oscillating masses on a near-frictionless surface, while the second experiment uses the Wiimote as part of a feedback mechanism in a rotational system. The construction, capabilities, demonstrations, and suggested improvements of the system are reported here.
2010-01-01
Background Cell motility is a critical parameter in many physiological as well as pathophysiological processes. In time-lapse video microscopy, manual cell tracking remains the most common method of analyzing the migratory behavior of cell populations. In addition to being labor-intensive, this method is susceptible to user-dependent errors regarding the selection of "representative" subsets of cells and the manual determination of precise cell positions. Results We have quantitatively analyzed these error sources, demonstrating that manual cell tracking of pancreatic cancer cells led to miscalculation of migration rates by up to 410%. In order to provide objective measurements of cell migration rates, we have employed multi-target tracking technologies commonly used in radar applications to develop a fully automated cell identification and tracking system suitable for high-throughput screening of video sequences of unstained living cells. Conclusion We demonstrate that our automatic multi-target tracking system identifies cell objects, follows individual cells, and computes migration rates with high precision, clearly outperforming manual procedures. PMID:20377897
A Standard-Compliant Virtual Meeting System with Active Video Object Tracking
NASA Astrophysics Data System (ADS)
Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting
2002-12-01
This paper presents an H.323 standard-compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
System safety management lessons learned from the US Army acquisition process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piatt, J.A.
1989-05-01
The Assistant Secretary of the Army for Research, Development and Acquisition directed the Army Safety Center to provide an audit of the causes of accidents and safety-of-use restrictions on recently fielded systems by tracking residual hazards back through the acquisition process. The objective was to develop "lessons learned" that could be applied to the acquisition process to minimize mishaps in fielded systems. System safety management lessons learned are defined as Army practices or policies, derived from past successes and failures, that are expected to be effective in eliminating or reducing specific systemic causes of residual hazards. They are broadly applicable and supportive of the Army structure and acquisition objectives. Pacific Northwest Laboratory (PNL) was given the task of conducting an independent, objective appraisal of the Army's system safety program in the context of the Army materiel acquisition process by focusing on four fielded systems which are products of that process. These systems included the Apache helicopter, the Bradley Fighting Vehicle (BFV), the Tube Launched, Optically Tracked, Wire Guided (TOW) Missile and the High Mobility Multipurpose Wheeled Vehicle (HMMWV). The objective of this study was to develop system safety management lessons learned associated with the acquisition process. The first step was to identify residual hazards associated with the selected systems. Since it was impossible to track all residual hazards through the acquisition process, certain well-known, high-visibility hazards were selected for detailed tracking. These residual hazards illustrate a variety of systemic problems. Systemic or process causes were identified for each residual hazard and analyzed to determine why they exist. System safety management lessons learned were developed to address related systemic causal factors. 29 refs., 5 figs.
Specialization of Perceptual Processes.
1994-09-01
population rose and fell, furniture was rearranged, a small mountain range was built in part of the lab (really), carpets were shampooed, and office lighting ... common task is the tracking of moving objects. Coombs [22] implemented a system for fixating and tracking objects using a stereo eye/head system ... be a person (person?). Finally, a motion unit is used to detect foot gestures. A pair of nod-of-the-head detectors were implemented and tested, but
Remote Safety Monitoring for Elderly Persons Based on Omni-Vision Analysis
Xiang, Yun; Tang, Yi-ping; Ma, Bao-qing; Yan, Hang-chen; Jiang, Jun; Tian, Xu-yuan
2015-01-01
Remote monitoring service for elderly persons is important as the aged populations in most developed countries continue growing. To monitor the safety and health of the elderly population, we propose a novel omni-directional vision sensor based system, which can detect and track object motion, recognize human posture, and analyze human behavior automatically. In this work, we have made the following contributions: (1) we develop a remote safety monitoring system which can provide real-time and automatic health care for elderly persons and (2) we design a novel motion-history/energy-image-based algorithm for moving-object tracking. Our system can accurately and efficiently collect, analyze, and transfer elderly activity information and provide health care in real time. Experimental results show that our technique can improve the data analysis efficiency by 58.5% for object tracking. Moreover, for the human posture recognition application, the success rate can reach 98.6% on average. PMID:25978761
Automated multiple target detection and tracking in UAV videos
NASA Astrophysics Data System (ADS)
Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie
2010-04-01
In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
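The per-target Kalman filter described above can be sketched with a textbook constant-velocity model; the noise parameters q and r below are placeholders, not values from the paper:

```python
import numpy as np

class KalmanCV:
    """Constant-velocity Kalman filter for one 2-D target.
    State: [x, y, vx, vy]; measurements: [x, y] (e.g. a detected blob centroid)."""

    def __init__(self, dt=1.0, q=1e-2, r=1.0):
        self.F = np.eye(4)                      # state transition (constant velocity)
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))               # we observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)                  # assumed process noise
        self.R = r * np.eye(2)                  # assumed measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4) * 100.0              # large initial uncertainty

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                       # predicted position

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```

In a data-association loop, `predict()` supplies the gated search position for each track and `update()` folds in the blob observation assigned to it.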
Real-time tracking of objects for a KC-135 microgravity experiment
NASA Technical Reports Server (NTRS)
Littlefield, Mark L.
1994-01-01
The design of a visual tracking system for use on the Extra-Vehicular Activity Helper/Retriever (EVAHR) is discussed. EVAHR is an autonomous robot designed to perform numerous tasks in an orbital microgravity environment. Since the ability to grasp a freely translating and rotating object is vital to the robot's mission, the EVAHR must analyze range images generated by the primary sensor. This allows EVAHR to locate and focus its sensors so that an accurate set of object poses can be determined and a grasp strategy planned. To test the visual tracking system being developed, a mathematical simulation was used to model the space station environment and maintain dynamics for the EVAHR and any other free-floating objects. A second phase of the investigation consists of a series of experiments carried out aboard a KC-135 aircraft flying a parabolic trajectory to simulate microgravity.
Attentional Resources in Visual Tracking through Occlusion: The High-Beams Effect
ERIC Educational Resources Information Center
Flombaum, Jonathan I.; Scholl, Brian J.; Pylyshyn, Zenon W.
2008-01-01
A considerable amount of research has uncovered heuristics that the visual system employs to keep track of objects through periods of occlusion. Relatively little work, by comparison, has investigated the online resources that support this processing. We explored how attention is distributed when featurally identical objects become occluded during…
Hamahashi, Shugo; Onami, Shuichi; Kitano, Hiroaki
2005-01-01
Background The ability to detect nuclei in embryos is essential for studying the development of multicellular organisms. A system of automated nuclear detection has already been tested on a set of four-dimensional (4D) Nomarski differential interference contrast (DIC) microscope images of Caenorhabditis elegans embryos. However, the system needed laborious hand-tuning of its parameters every time a new image set was used. It could not detect nuclei in the process of cell division, and could detect nuclei only from the two- to eight-cell stages. Results We developed a system that automates the detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. Local image entropy is used to produce regions of the images that have the image texture of the nucleus. From these regions, those that actually detect nuclei are manually selected at the first and last time points of the image set, and an object-tracking algorithm then selects regions that detect nuclei in between the first and last time points. The use of local image entropy makes the system applicable to multiple image sets without the need to change its parameter values. The use of an object-tracking algorithm enables the system to detect nuclei in the process of cell division. The system detected nuclei with high sensitivity and specificity from the one- to 24-cell stages. Conclusion A combination of local image entropy and an object-tracking algorithm enabled highly objective and productive detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. The system will facilitate genomic and computational analyses of C. elegans embryos. PMID:15910690
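Local image entropy, as used here to find nucleus-textured regions, is the Shannon entropy of the grey-level histogram in a small neighbourhood. A slow but transparent NumPy sketch (the window size and bin count are assumptions; the paper's exact parameters are not given in the abstract):

```python
import numpy as np

def local_entropy(img, win=5, bins=16):
    """Shannon entropy (bits) of the grey-level histogram in a win x win
    neighbourhood of each pixel. img: 2-D array with values in [0, 1)."""
    h, w = img.shape
    pad = win // 2
    q = np.clip((img * bins).astype(int), 0, bins - 1)  # quantise grey levels
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = q[max(0, i - pad):i + pad + 1, max(0, j - pad):j + pad + 1]
            counts = np.bincount(patch.ravel(), minlength=bins)
            p = counts[counts > 0] / patch.size
            out[i, j] = -(p * np.log2(p)).sum()
    return out
```

Flat cytoplasm-like regions score near zero while textured regions score high, so thresholding the entropy map yields candidate nucleus regions; `skimage.filters.rank.entropy` provides an efficient equivalent.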
The role of visual attention in multiple object tracking: evidence from ERPs.
Doran, Matthew M; Hoffman, James E
2010-01-01
We examined the role of visual attention in the multiple object tracking (MOT) task by measuring the amplitude of the N1 component of the event-related potential (ERP) to probe flashes presented on targets, distractors, or empty background areas. We found evidence that visual attention enhances targets and suppresses distractors (Experiments 1 and 3). However, we also found that when tracking load was light (two targets and two distractors), accurate tracking could be carried out without any apparent contribution from the visual attention system (Experiment 2). Our results suggest that attentional selection during MOT is flexibly determined by task demands as well as tracking load, and that visual attention may not always be necessary for accurate tracking.
Fast object reconstruction in block-based compressive low-light-level imaging
NASA Astrophysics Data System (ADS)
Ke, Jun; Sui, Dong; Wei, Ping
2014-11-01
In this paper we propose a simple yet effective and efficient method for long-term object tracking. Different from traditional visual tracking methods, which mainly depend on frame-to-frame correspondence, we combine high-level semantic information with low-level correspondences. Our framework is formulated as a confidence selection framework, which allows our system to recover from drift and partly deal with the occlusion problem. To summarize, our algorithm can be roughly decomposed into an initialization stage and a tracking stage. In the initialization stage, an offline classifier is trained to capture the object appearance information at the category level. When the video stream arrives, the pre-trained offline classifier is used for detecting the potential target and initializing the tracking stage. The tracking stage consists of three parts: an online tracking part, an offline tracking part, and a confidence judgment part. The online tracking part captures the specific target appearance information while the detection part localizes the object based on the pre-trained offline classifier. Since there is no data dependence between online tracking and offline detection, these two parts run in parallel to significantly improve the processing speed. A confidence selection mechanism is proposed to optimize the object location. Besides, we also propose a simple mechanism to judge the absence of the object. If the target is lost, the pre-trained offline classifier is utilized to re-initialize the whole algorithm once the target is re-located. In experiments, we evaluate our method on several challenging video sequences and demonstrate competitive results.
Fox, Jessica L.; Aptekar, Jacob W.; Zolotova, Nadezhda M.; Shoemaker, Patrick A.; Frye, Mark A.
2014-01-01
The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input. PMID:24198267
Feasibility of real-time location systems in monitoring recovery after major abdominal surgery.
Dorrell, Robert D; Vermillion, Sarah A; Clark, Clancy J
2017-12-01
Early mobilization after major abdominal surgery decreases postoperative complications and length of stay, and has become a key component of enhanced recovery pathways. However, objective measures of patient movement after surgery are limited. Real-time location systems (RTLS), typically used for asset tracking, provide a novel approach to monitoring in-hospital patient activity. The current study investigates the feasibility of using RTLS to objectively track postoperative patient mobilization. The real-time location system employs a meshed network of infrared and RFID sensors and detectors that sample device locations every 3 s resulting in over 1 million data points per day. RTLS tracking was evaluated systematically in three phases: (1) sensitivity and specificity of the tracking device using simulated patient scenarios, (2) retrospective passive movement analysis of patient-linked equipment, and (3) prospective observational analysis of a patient-attached tracking device. RTLS tracking detected a simulated movement out of a room with sensitivity of 91% and specificity 100%. Specificity decreased to 75% if time out of room was less than 3 min. All RTLS-tagged patient-linked equipment was identified for 18 patients, but measurable patient movement associated with equipment was detected for only 2 patients (11%) with 1-8 out-of-room walks per day. Ten patients were prospectively monitored using RTLS badges following major abdominal surgery. Patient movement was recorded using patient diaries, direct observation, and an accelerometer. Sensitivity and specificity of RTLS patient tracking were both 100% in detecting out-of-room ambulation and correlated well with direct observation and patient-reported ambulation. Real-time location systems are a novel technology capable of objectively and accurately monitoring patient movement and provide an innovative approach to promoting early mobilization after surgery.
Long-term object tracking combined offline with online learning
NASA Astrophysics Data System (ADS)
Hu, Mengjie; Wei, Zhenzhong; Zhang, Guangjun
2016-04-01
We propose a simple yet effective method for long-term object tracking. Unlike traditional visual tracking methods, which mainly depend on frame-to-frame correspondence, we combine high-level semantic information with low-level correspondences. Our method is formulated in a confidence selection framework, which allows the system to recover from drift and partly deal with occlusion. Our algorithm can be roughly decomposed into an initialization stage and a tracking stage. In the initialization stage, an offline detector is trained to obtain object appearance information at the category level, which is used for detecting the potential target and initializing the tracking stage. The tracking stage consists of three modules: the online tracking module, the detection module, and the decision module. A pretrained detector is used for correcting drift of the online tracker, while the online tracker is used for filtering out false positive detections. A confidence selection mechanism is proposed to optimize the object location based on the online tracker and detection. If the target is lost, the pretrained detector is utilized to reinitialize the whole algorithm once the target is relocated. In experiments, we evaluate our method on several challenging video sequences, and it demonstrates substantial improvement compared with detection-only and online-tracking-only baselines.
Lee, Young-Sook; Chung, Wan-Young
2012-01-01
Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track, and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Visual sensor-based abnormal event detection using shape-feature variation and 3-D trajectory is presented to overcome low fall detection rates. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities, such as forward falls, backward falls, and sideways falls, from normal activities. PMID:22368486
Virtual target tracking (VTT) as applied to mobile satellite communication networks
NASA Astrophysics Data System (ADS)
Amoozegar, Farid
1999-08-01
Traditionally, target tracking has been used for aerospace applications, such as tracking highly maneuvering targets in a cluttered environment for missile-to-target intercept scenarios. Although the speed and maneuvering capability of current aerospace targets demand more efficient algorithms, many complex techniques have already been proposed in the literature, which primarily cover the defense applications of tracking methods. On the other hand, the rapid growth of global communication systems, Global Information Systems (GIS), and Global Positioning Systems (GPS) is creating new and more diverse challenges for multi-target tracking applications. Mobile communication and computing stand to gain from a large market for Cellular Communication and Tracking Devices (CCTD), which will track networked devices at the cellular level. The objective of this paper is to introduce a new concept, Virtual Target Tracking (VTT), for commercial applications of multi-target tracking algorithms and techniques as applied to mobile satellite communication networks. We discuss how Virtual Target Tracking brings more diversity to target tracking research.
Track-to-track association for object matching in an inter-vehicle communication system
NASA Astrophysics Data System (ADS)
Yuan, Ting; Roth, Tobias; Chen, Qi; Breu, Jakob; Bogdanovic, Miro; Weiss, Christian A.
2015-09-01
Autonomous driving poses unique challenges for vehicle environment perception due to the complex driving environment in which the autonomous vehicle must perceive and differentiate remote vehicles. Because of the inherent uncertainty of traffic environments and incomplete knowledge due to sensor limitations, an autonomous driving system using only local onboard sensor information is generally not sufficient for reliable intelligent driving with guaranteed safety. In order to overcome limitations of the local (host) vehicle sensing system and to increase the likelihood of correct detections and classifications, collaborative information from cooperative remote vehicles could substantially improve the effectiveness of the vehicle decision-making process. The Dedicated Short Range Communication (DSRC) system provides a powerful inter-vehicle wireless communication channel to enhance the host vehicle's environment-perceiving capability with the aid of information transmitted from remote vehicles. However, there is a major challenge before one can fuse the DSRC-transmitted remote information with the host vehicle's Radar-observed information (in the present case): the remote DSRC data must be correctly associated with the corresponding onboard Radar data; namely, an object matching problem. Direct raw data association (i.e., measurement-to-measurement association, M2MA) is straightforward but error-prone, due to the inherently uncertain nature of the observation data. The uncertainties can lead to serious difficulty in the matching decision, especially with non-stationary data. In this study, we present an object matching algorithm based on track-to-track association (T2TA) and evaluate the proposed approach with prototype vehicles in real traffic scenarios.
To fully exploit the potential of the DSRC system, only GPS position data from the remote vehicle are used in the fusion center (at the host vehicle); i.e., we try to get what we need from the least amount of information. Additional feature information could help the data association but is not currently considered. Compared with M2MA, the benefits of the T2TA object matching approach are: i) tracks that take important statistical information into account can provide more reliable inference results; ii) the smoothed, track-formed trajectories can be used for easier shape matching; iii) each local vehicle can design its own tracker and send only tracks to the fusion center, alleviating communication constraints. A real traffic study with different driving environments, based on a statistical hypothesis test, shows promising object matching results with significant practical implications.
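A common core of T2TA is a gated statistical test on the difference between two track estimates. The sketch below assumes a 2-D position state and a chi-square gate; the function names, noise values, and threshold are illustrative and may differ from the paper's exact test statistic.

```python
# Minimal sketch of a track-to-track association (T2TA) gating test,
# not the paper's implementation. Two tracks (e.g. a DSRC/GPS track and
# a Radar track) are declared the same object if the Mahalanobis
# distance of their state difference falls under a chi-square gate.

def mahalanobis2(dx, dy, cov):
    """Squared Mahalanobis distance for a 2-D difference vector.

    cov is the 2x2 combined covariance [[a, b], [b, c]] of the
    track difference (sum of both track covariances when the
    estimation errors are assumed independent).
    """
    a, b, c = cov[0][0], cov[0][1], cov[1][1]
    det = a * c - b * b
    # inverse of a symmetric 2x2 matrix
    inv = [[c / det, -b / det], [-b / det, a / det]]
    return dx * (inv[0][0] * dx + inv[0][1] * dy) + \
           dy * (inv[1][0] * dx + inv[1][1] * dy)

def same_object(track_a, track_b, cov, gate=5.991):
    """Accept the association if d^2 <= gate.

    5.991 is the 95% chi-square quantile with 2 degrees of freedom
    (2-D position difference); the paper's actual threshold may differ.
    """
    dx = track_a[0] - track_b[0]
    dy = track_a[1] - track_b[1]
    return mahalanobis2(dx, dy, cov) <= gate

# Two position estimates 1 m apart with 1 m^2 combined variance per axis
print(same_object((10.0, 5.0), (10.6, 5.8), [[1.0, 0.0], [0.0, 1.0]]))
```

Using track states and covariances here, rather than raw measurements, is what gives T2TA its advantage over M2MA: the covariance terms encode the accumulated statistical information of each track.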
2011-02-07
Sensor UGVs (SUGV) or Disruptor UGVs, depending on their payload. The SUGVs included vision, GPS/IMU, and LIDAR systems for identifying and tracking...employed by all the MAGICian research groups. Objects of interest were tracked using standard LIDAR and computer vision template-based feature...tracking approaches. Mapping was solved through multi-agent particle-filter-based Simultaneous Localization and Mapping (SLAM). Our system contains
Object Acquisition and Tracking for Space-Based Surveillance
1991-11-27
on multiple image frames, and, accordingly, requires a smaller signal-to-noise ratio. It is sometimes referred to as track-before-detect, and can...smaller sensor optics. Both the traditional and track-before-detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
Automatic multiple zebrafish larvae tracking in unconstrained microscopic video conditions.
Wang, Xiaoying; Cheng, Eva; Burnett, Ian S; Huang, Yushi; Wlodkowic, Donald
2017-12-14
The accurate tracking of zebrafish larvae movement is fundamental to research in many biomedical, pharmaceutical, and behavioral science applications. However, the locomotive characteristics of zebrafish larvae differ significantly from those of adult zebrafish, so existing adult zebrafish tracking systems cannot reliably track larvae. Further, the much smaller size of larvae relative to the container makes the detection of water impurities inevitable, which further degrades larvae tracking or demands very strict video imaging conditions that typically yield unreliable tracking results under realistic experimental conditions. This paper investigates the adaptation of advanced computer vision segmentation techniques and multiple object tracking algorithms to develop an accurate, efficient, and reliable multiple zebrafish larvae tracking system. The proposed system has been tested on a set of single and multiple adult and larvae zebrafish videos in a wide variety of (complex) video conditions, including shadowing, labels, water bubbles, and background artifacts. Compared with existing state-of-the-art and commercial multiple organism tracking systems, the proposed system improves tracking accuracy by up to 31.57% in unconstrained video imaging conditions. To facilitate evaluation of zebrafish segmentation and tracking research, a dataset with annotated ground truth is also presented. The software is also publicly accessible.
2003-08-25
KENNEDY SPACE CENTER, FLA. - The master assembler, crane crew, removes a five-meter telescope in Cocoa Beach, Fla., for repair. The tracking telescope is part of the Distant Object Attitude Measurement System (DOAMS) that provides optical support for launches from KSC and Cape Canaveral.
Real-time acquisition and tracking system with multiple Kalman filters
NASA Astrophysics Data System (ADS)
Beard, Gary C.; McCarter, Timothy G.; Spodeck, Walter; Fletcher, James E.
1994-07-01
The design of a real-time, ground-based, infrared tracking system with proven field success in tracking boost vehicles through burnout is presented with emphasis on the software design. The system was originally developed to deliver relative angular positions during boost, and thrust termination time to a sensor fusion station in real-time. Autonomous target acquisition and angle-only tracking features were developed to ensure success under stressing conditions. A unique feature of the system is the incorporation of multiple copies of a Kalman filter tracking algorithm running in parallel in order to minimize run-time. The system is capable of updating the state vector for an object at measurement rates approaching 90 Hz. This paper will address the top-level software design, details of the algorithms employed, system performance history in the field, and possible future upgrades.
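A minimal sketch of the kind of per-object Kalman filter such a system runs in parallel copies (one independent filter per tracked object). A constant-velocity model with a scalar angle measurement is assumed; all noise values and units below are illustrative, not from the paper.

```python
# One constant-velocity Kalman filter track, illustrating the per-object
# state update the system replicates in parallel. State: [pos, vel];
# measurement: position only. Noise levels q, r are assumptions.

def kalman_step(x, P, z, dt=1.0, q=0.01, r=0.1):
    """One predict + update cycle; returns the new (x, P)."""
    # Predict: x <- F x, P <- F P F' + qI, with F = [[1, dt], [0, 1]]
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1], P[1][1] + q]]
    # Update with scalar measurement z of the position (H = [1, 0])
    S = Pp[0][0] + r                       # innovation variance
    K = [Pp[0][0] / S, Pp[1][0] / S]       # Kalman gain
    y = z - xp[0]                          # innovation
    xn = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    Pn = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
          [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return xn, Pn

# Track a target whose angle grows one unit per frame (true velocity 1.0)
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 51):
    x, P = kalman_step(x, P, z=float(k))
```

Because each filter copy touches only its own state and covariance, the copies can run fully in parallel, which is how the paper reaches measurement rates approaching 90 Hz per object.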
A framework for activity detection in wide-area motion imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, Reid B; Ruggiero, Christy E; Morrison, Jack D
2009-01-01
Wide-area persistent imaging systems are becoming increasingly cost effective and now large areas of the earth can be imaged at relatively high frame rates (1-2 fps). The efficient exploitation of the large geo-spatial-temporal datasets produced by these systems poses significant technical challenges for image and video analysis and data mining. In recent years there has been significant progress made on stabilization, moving object detection and tracking, and automated systems now generate hundreds to thousands of vehicle tracks from raw data, with little human intervention. However, the tracking performance at this scale is unreliable and average track length is much smaller than the average vehicle route. This is a limiting factor for applications which depend heavily on track identity, i.e. tracking vehicles from their points of origin to their final destination. In this paper we propose and investigate a framework for wide-area motion imagery (WAMI) exploitation that minimizes the dependence on track identity. In its current form this framework takes noisy, incomplete moving object detection tracks as input, and produces a small set of activities (e.g. multi-vehicle meetings) as output. The framework can be used to focus and direct human users and additional computation, and suggests a path towards high-level content extraction by learning from the human-in-the-loop.
Using LabView for real-time monitoring and tracking of multiple biological objects
NASA Astrophysics Data System (ADS)
Nikolskyy, Aleksandr I.; Krasilenko, Vladimir G.; Bilynsky, Yosyp Y.; Starovier, Anzhelika
2017-04-01
Real-time study and tracking of the movement dynamics of various biological objects is an important and widely researched topic today. Features of the objects, conditions of their visualization, and model parameters strongly influence the choice of optimal methods and algorithms for a specific task. Therefore, to automate the adaptation of recognition and tracking algorithms, several LabVIEW tracker projects are considered in this article. The projects allow templates for training and retraining the system to be changed quickly. They adapt to the speed of objects and to the statistical characteristics of noise in images. New functions for comparing images or their features, descriptors, and pre-processing methods are discussed. Experiments carried out to test the trackers on real video files are presented and analyzed.
Interplanetary Dust Observations by the Juno MAG Investigation
NASA Astrophysics Data System (ADS)
Jørgensen, John; Benn, Mathias; Denver, Troelz; Connerney, Jack; Jørgensen, Peter; Bolton, Scott; Brauer, Peter; Levin, Steven; Oliversen, Ronald
2017-04-01
The spin-stabilized and solar powered Juno spacecraft recently concluded a 5-year voyage through the solar system en route to Jupiter, arriving on July 4th, 2016. During the cruise phase from Earth to the Jovian system, the Magnetometer investigation (MAG) operated two magnetic field sensors and four co-located imaging systems designed to provide accurate attitude knowledge for the MAG sensors. One of these four imaging sensors - camera "D" of the Advanced Stellar Compass (ASC) - was operated in a mode designed to detect all luminous objects in its field of view, recording and characterizing those not found in the on-board star catalog. The capability to detect and track such objects ("non-stellar objects", or NSOs) provides a unique opportunity to sense and characterize interplanetary dust particles. The camera's detection threshold was set to MV9 to minimize false detections and discourage tracking of known objects. On-board filtering algorithms selected only those objects tracked through more than 5 consecutive images and moving with an apparent angular rate between 15"/s and 10,000"/s. The coordinates (RA, DEC), intensity, and apparent velocity of such objects were stored for eventual downlink. Direct detection of proximate dust particles is precluded by their large (10-30 km/s) relative velocity and extreme angular rates, but their presence may be inferred using the collecting area of Juno's large (~55 m2) solar arrays. Dust particles impact the spacecraft at high velocity, creating an expanding plasma cloud and ejecta with modest (few m/s) velocities. These excavated particles are revealed in reflected sunlight and tracked moving away from the spacecraft from the point of impact. Application of this novel detection method during Juno's traversal of the solar system provides new information on the distribution of interplanetary (µm-sized) dust.
DOT National Transportation Integrated Search
1971-01-01
The objective of a power collection system is to deliver uninterrupted power from the wayside to a vehicle. In order to apply the third rail concept, used for subway power collection, to the tracked air cushion vehicle, considerable improvement must ...
Three-dimensional microscope tracking system using the astigmatic lens method and a profile sensor
NASA Astrophysics Data System (ADS)
Kibata, Hiroki; Ishii, Katsuhiro
2018-03-01
We developed a three-dimensional microscope tracking system using the astigmatic lens method and a profile sensor, which provides three-dimensional position detection over a wide range at the rate of 3.2 kHz. First, we confirmed the range of target detection of the developed system, where the range of target detection was shown to be ± 90 µm in the horizontal plane and ± 9 µm in the vertical plane for a 10× objective lens. Next, we attempted to track a motion-controlled target. The developed system kept the target at the center of the field of view and in focus up to a target speed of 50 µm/s for a 20× objective lens. Finally, we tracked a freely moving target. We successfully demonstrated the tracking of a 10-µm-diameter polystyrene bead suspended in water for 40 min. The target was kept in the range of approximately 4.9 µm around the center of the field of view. In addition, the vertical direction was maintained in the range of ± 0.84 µm, which was sufficiently within the depth of focus.
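The astigmatic lens method infers defocus from the elongation of the spot on the sensor. A minimal sketch of the classic focus-error signal computed from four quadrant intensities is shown below; this is the standard quadrant-detector formulation, offered as an analogy, since the paper's profile-sensor computation may differ in detail.

```python
# Sketch of the astigmatic focus-error signal (FES). With an astigmatic
# lens, the spot elongates along one diagonal when the target is above
# focus and along the other when below, so the difference of the
# diagonal pair sums encodes the defocus sign and magnitude.

def focus_error(a, b, c, d):
    """Normalized FES from four quadrant intensities.

    a, c are one diagonal pair; b, d the other. Returns a value in
    [-1, 1]: zero at best focus, sign giving the defocus direction.
    """
    total = a + b + c + d
    return ((a + c) - (b + d)) / total

print(focus_error(1.0, 1.0, 1.0, 1.0))   # in focus -> 0.0
print(focus_error(3.0, 1.0, 3.0, 1.0))   # elongated spot -> 0.5
```

In a tracking loop this signal would drive the vertical stage, while the lateral spot position drives the horizontal stages.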
Maintenance of the catalog of artificial objects in space.
NASA Astrophysics Data System (ADS)
Khutorovskij, Z. N.
1994-01-01
The catalog of artificial objects in space (AOS) is useful for estimating the safety of space flights, for constructing temporal and spatial models of the flux of AOS, for determining when and where dangerous AOS will break up, for tracking inoperative instruments and space stations, for eliminating false alarms that are triggered by observations of AOS in the Ballistic Missile Early Warning System and in the Anti-Missile system, etc. At present, the Space Surveillance System (located in the former USSR) automatically maintains a catalog consisting of more than 5000 AOS with dimensions of at least 10 cm. The orbital parameters are continuously updated from radar tracking data. The author describes the software which is used to process the information. He presents some of the features of the system itself, including the number of objects in various stages of the tracking process, the orbital parameters of AOS which break up and how the fragments are detected, the accuracy of tracking and predicting the orbits of the AOS, and the accuracy with which one can estimate when and where an AOS will break up. As an example, the author presents the results of determining when the orbiting complex Salyut-7 - Kosmos-1686 will break up, and where it will impact.
Robust multiperson detection and tracking for mobile service and social robots.
Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou
2012-10-01
This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
Image sequence analysis workstation for multipoint motion analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-08-01
This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing, and display techniques. In addition to automating and increasing the throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for locating and tracking more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze frame display, and digital image enhancement; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
Tracking multiple objects is limited only by object spacing, not by speed, time, or capacity.
Franconeri, S L; Jonathan, S V; Scimeca, J M
2010-07-01
In dealing with a dynamic world, people have the ability to maintain selective attention on a subset of moving objects in the environment. Performance in such multiple-object tracking is limited by three primary factors: the number of objects that one can track, the speed at which one can track them, and how close together they can be. We argue that this last limit, of object spacing, is the root cause of all performance constraints in multiple-object tracking. In two experiments, we found that as long as the distribution of object spacing is held constant, tracking performance is unaffected by large changes in object speed and tracking time. These results suggest that barring object-spacing constraints, people could reliably track an unlimited number of objects as fast as they could track a single object.
2006-11-01
Asset tracking systems are used in healthcare to find objects--medical devices and other hospital equipment--and to record the physical location of those objects over time. Interest in asset tracking is growing daily, but the technology is still evolving, and so far very few systems have been implemented in hospitals. This situation is likely to change over the next few years, at which point many hospitals will be faced with choosing a system. We evaluated four asset tracking systems from four suppliers: Agility Healthcare Solutions, Ekahau, Radianse, and Versus Technology. We judged the systems' performance for two "levels" of asset tracking. The first level is basic locating--simply determining where in the facility an item can be found. This may be done because the equipment needs routine inspection and preventive maintenance or because it is required for recall purposes; or the equipment may be needed, often urgently, for clinical use. The second level, which is much more involved, is inventory optimization and workflow improvement. This entails analyzing asset utilization based on historical location data to improve the use, distribution, and processing of equipment. None of the evaluated products is ideal for all uses--each has strengths and weaknesses. In many cases, hospitals will have to select a product based on their specific needs. For example, they may need to choose between a supplier whose system is easy to install and a supplier whose tags have a long battery operating life.
The robot's eyes - Stereo vision system for automated scene analysis
NASA Technical Reports Server (NTRS)
Williams, D. S.
1977-01-01
Attention is given to the robot stereo vision system which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed and it is noted that tracking speed is in the 50-75 pixels/s range.
Kalal, Zdenek; Mikolajczyk, Krystian; Matas, Jiri
2012-07-01
This paper investigates long-term tracking of unknown objects in a video stream. The object is defined by its location and extent in a single frame. In every frame that follows, the task is to determine the object's location and extent or indicate that the object is not present. We propose a novel tracking framework (TLD) that explicitly decomposes the long-term tracking task into tracking, learning, and detection. The tracker follows the object from frame to frame. The detector localizes all appearances that have been observed so far and corrects the tracker if necessary. The learning estimates the detector's errors and updates it to avoid these errors in the future. We study how to identify the detector's errors and learn from them. We develop a novel learning method (P-N learning) which estimates the errors by a pair of "experts": (1) P-expert estimates missed detections, and (2) N-expert estimates false alarms. The learning process is modeled as a discrete dynamical system and the conditions under which the learning guarantees improvement are found. We describe our real-time implementation of the TLD framework and the P-N learning. We carry out an extensive quantitative evaluation which shows a significant improvement over state-of-the-art approaches.
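The division of labor between the two P-N experts can be sketched as follows. This is a schematic reading of the abstract, not the authors' implementation; the `overlap` measure and the threshold are assumptions.

```python
# Sketch of the P-N learning update: the P-expert turns tracker-
# confirmed locations that the detector missed into positive training
# examples; the N-expert turns detections far from the validated track
# into negative examples (false alarms).

def pn_update(tracker_box, detections, overlap, thr=0.5):
    """Return (positives, negatives) training examples for the detector.

    overlap(a, b) is some box-overlap measure in [0, 1] (e.g. IoU);
    thr is an assumed acceptance threshold.
    """
    positives, negatives = [], []
    if not any(overlap(tracker_box, d) >= thr for d in detections):
        positives.append(tracker_box)   # P-expert: missed detection
    for d in detections:
        if overlap(tracker_box, d) < thr:
            negatives.append(d)         # N-expert: false alarm
    return positives, negatives
```

Applied every frame, the two experts push the detector's error rates in opposite directions, which is the dynamical system whose stability conditions the paper analyzes.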
ACT-Vision: active collaborative tracking for multiple PTZ cameras
NASA Astrophysics Data System (ADS)
Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet
2009-04-01
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
NASA Astrophysics Data System (ADS)
Liu, Yu-Che; Huang, Chung-Lin
2013-03-01
This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. Three main concerns of the algorithm are (1) capturing imagery of the human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on the expected capture conditions, such as the camera-subject distance, pan-tilt angles at capture, face visibility, and others. Such an objective function serves to effectively balance the number of captures per subject against the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.
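The bi-objective cost described above can be sketched as a linear quadratic functional of the two error norms. The weighting matrices $Q_1$, $Q_2$ and this exact form are assumptions for illustration, not the paper's notation:

```latex
J = \lim_{t_f \to \infty} \frac{1}{2} \int_{0}^{t_f}
    \left( e(t)^\top Q_1\, e(t) + \tilde{e}(t)^\top Q_2\, \tilde{e}(t) \right) \mathrm{d}t
```

where $e$ is the tracking error and $\tilde{e}$ the predictor error; in the optimal control modification framework, the adaptive laws are derived so as to minimize such a cost, which is what couples them to both error signals.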
Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System
2016-01-01
This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for a robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track object position and object contour and to avoid background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking human position and describing human contour. PMID:27379165
System and method for tracking a signal source. [employing feedback control
NASA Technical Reports Server (NTRS)
Mogavero, L. N.; Johnson, E. G.; Evans, J. M., Jr.; Albus, J. S. (Inventor)
1978-01-01
A system for tracking moving signal sources is disclosed which is particularly adaptable for use in tracking stage performers. A miniature transmitter is attached to the person or object to be tracked and emits a detectable signal of a predetermined frequency. A plurality of detectors positioned in a preset pattern sense the signal and supply output information to a phase detector which applies signals representing the angular orientation of the transmitter to a computer. The computer provides command signals to a servo network which drives a device such as a motor driven mirror reflecting the beam of a spotlight, to track the moving transmitter.
NASA Astrophysics Data System (ADS)
Cai, Lei; Wang, Lin; Li, Bo; Zhang, Libao; Lv, Wen
2017-06-01
Vehicle tracking technology is currently one of the most active research topics in machine vision and an important part of intelligent transportation systems. However, in both theory and technology, it still faces many challenges, including real-time performance and robustness. In video surveillance, targets must be detected in real time and their positions computed accurately in order to judge their motives. The contents of video sequence images and the target motion are complex, so the objects cannot be expressed by a unified mathematical model. Object tracking is defined as locating the moving target of interest in each frame of a piece of video. Current tracking technology can achieve reliable results in simple environments for targets with easily identified characteristics. However, in more complex environments, it is easy to lose the target because of the mismatch between the target appearance and its dynamic model. Moreover, the target usually has a complex shape, but traditional target tracking algorithms usually represent tracking results by simple geometric shapes such as rectangles or circles, so they cannot provide accurate information for subsequent higher-level applications. This paper combines a traditional object tracking technique, the Mean-Shift algorithm, with an image segmentation algorithm, the Active-Contour model, to obtain object outlines during tracking and automatically handle topology changes. Meanwhile, the outline information is used to aid the tracking algorithm and improve it.
Tenure Track System in Higher Education Institutions of Pakistan: Prospects and Challenges
ERIC Educational Resources Information Center
Khan, Tayyeb Ali; Jabeen, Nasira
2011-01-01
Tenure track system (TTS) was introduced in higher education institutions of Pakistan in 2002 as part of administrative reforms. The main objectives of the reform were to improve performance of higher education in the country through attracting qualified people and improving performance of academic faculty of higher education institutions…
Study of moving object detecting and tracking algorithm for video surveillance system
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhang, Rongfu
2010-10-01
This paper describes a specific process for detecting and tracking moving targets in video surveillance. Obtaining a high-quality background is the key to difference-based target detection. The method builds a clean background using block-based segmentation and detects moving targets by background difference; after a series of processing steps, a more complete object can be extracted from the original image and located with its smallest bounding rectangle. In a video surveillance system, camera delay and other factors cause tracking lag, so a Kalman-filter model based on template matching is proposed. Using the prediction and estimation capabilities of the Kalman filter, the center of the smallest bounding rectangle is taken as the predicted position where the target may appear at the next moment. Template matching is then performed in a region centered on this position: by computing the cross-correlation similarity between the current image and the reference image, the best matching center can be determined. Narrowing the search region in this way reduces the search time and achieves fast tracking.
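The predict-then-match idea can be sketched as a constant-velocity prediction of the bounding-box center that seeds the template search (a fixed blending gain stands in for the full Kalman gain computation; all names and values are illustrative assumptions):

```python
def predict_center(track, dt=1.0):
    """Constant-velocity prediction of the next bounding-box center.
    track = (x, y, vx, vy); returns the predicted (x, y) used to
    center the template-matching search region."""
    x, y, vx, vy = track
    return x + vx * dt, y + vy * dt

def update(track, zx, zy, gain=0.5, dt=1.0):
    """Blend the prediction with the matched position (zx, zy).
    A fixed gain stands in for the Kalman gain computation."""
    px, py = predict_center(track, dt)
    x = px + gain * (zx - px)
    y = py + gain * (zy - py)
    return (x, y, (x - track[0]) / dt, (y - track[1]) / dt)

track = (100.0, 50.0, 5.0, 0.0)      # target moving right at 5 px/frame
print(predict_center(track))          # search around here, not the whole frame
track = update(track, 104.0, 50.0)    # best template match found at (104, 50)
print(track)
```

Searching only around the predicted center is what gives the fast tracking the abstract describes.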
Design and implementation of a vision-based hovering and feature tracking algorithm for a quadrotor
NASA Astrophysics Data System (ADS)
Lee, Y. H.; Chahl, J. S.
2016-10-01
This paper demonstrates an approach to vision-based control of unmanned quadrotors for hovering and object tracking. The algorithm used the Speeded-Up Robust Features (SURF) algorithm to detect objects. The pose of the object in the image was then calculated and passed to the flight controller, which steered the quadrotor toward the object based on the calculated pose data. These processes ran on the standard onboard resources of the 3DR Solo quadrotor in an embedded computing environment. The results showed that the algorithm behaved well during its tracking and hovering missions, although there were significant latencies due to the low CPU performance of the onboard image processing system.
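The final step, steering toward the object from its image-plane position, can be sketched as a proportional controller on the pixel error (the gains, command convention, and function name are assumptions; the paper does not give its controller details):

```python
def approach_command(obj_px, img_size, k_yaw=0.002, k_climb=0.002, fwd=0.3):
    """Map the tracked object's pixel position to (yaw_rate, climb_rate,
    forward_speed) commands that center the object and approach it."""
    u, v = obj_px
    w, h = img_size
    err_x = u - w / 2.0   # positive: object is right of image center
    err_y = v - h / 2.0   # positive: object is below image center
    return (k_yaw * err_x, -k_climb * err_y, fwd)

# Object detected right of center and slightly low in a 640x480 frame:
print(approach_command((420, 260), (640, 480)))
```

A real controller would also damp these commands against IMU feedback to avoid oscillation at the latencies the paper reports.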
Method for targetless tracking subpixel in-plane movements.
Espinosa, Julian; Perez, Jorge; Ferrer, Belen; Mas, David
2015-09-01
We present a targetless motion tracking method for detecting planar movements with subpixel accuracy. The method is based on computing and tracking the intersection of two nonparallel straight-line segments in the image of a moving object in a scene. The method is simple and easy to implement because no complex structures have to be detected. It has been tested and validated in a lab experiment consisting of a vibrating object recorded with a high-speed camera working at 1000 fps. We managed to track displacements with an accuracy of hundredths of a pixel, or even thousandths of a pixel in the case of tracking harmonic vibrations. The method is widely applicable because it can be used for remote measurement of vibration amplitude and frequency with a vision system.
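The core geometric operation, intersecting two lines fitted to image segments, can be computed in closed form; a minimal sketch (endpoint values are illustrative):

```python
def intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through p3-p4.
    Points are (x, y) tuples; raises ValueError for parallel lines."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        raise ValueError("lines are parallel")
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

# Two segments crossing at (2.5, 2.5); this intersection is the subpixel
# feature tracked from frame to frame:
print(intersection((0, 0), (5, 5), (0, 5), (5, 0)))
```

Because the segment endpoints are fitted to many edge pixels, the intersection is localized far more precisely than any single pixel, which is where the subpixel accuracy comes from.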
A Scalable Distributed Approach to Mobile Robot Vision
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.
1997-01-01
This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).
Location detection and tracking of moving targets by a 2D IR-UWB radar system.
Nguyen, Van-Han; Pyun, Jae-Young
2015-03-19
In indoor environments, the Global Positioning System (GPS) and long-range tracking radar systems are not optimal because of signal propagation limitations. In recent years, ultra-wideband (UWB) technology has become a possible solution for object detection, localization and tracking in indoor environments because of its high range resolution, compact size and low cost. This paper presents improved target detection and tracking techniques for moving objects with impulse-radio UWB (IR-UWB) radar in a short-range indoor area. This is achieved through signal-processing steps: clutter reduction, target detection, target localization and tracking. We introduce a new combination of our proposed signal-processing procedures. In the clutter-reduction step, a filtering method that uses a Kalman filter (KF) is proposed. In the target detection step, a modification of the conventional CLEAN algorithm, which estimates the impulse response from the observation region, is applied for advanced elimination of false alarms. The output is then fed into the target localization and tracking step, in which the target location and trajectory are determined and tracked using an unscented KF in two-dimensional coordinates. In each step, the proposed methods are compared to conventional methods to demonstrate the differences in performance. The experiments are carried out using actual IR-UWB radar under different scenarios. The results verify that the proposed methods can improve the probability and efficiency of target detection and tracking.
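The clutter-reduction idea can be sketched with a simple exponential running average that absorbs static reflections, leaving moving targets in the residual (this plain average is a stand-in for the paper's Kalman-filter clutter estimate; all names and values are illustrative):

```python
def remove_clutter(frames, alpha=0.05):
    """Subtract a running-average clutter estimate from each radar frame.
    frames: list of fast-time sample lists. Static reflections (walls,
    furniture) accumulate in the average; moving targets do not."""
    clutter = [0.0] * len(frames[0])
    out = []
    for frame in frames:
        out.append([s - c for s, c in zip(frame, clutter)])
        clutter = [(1 - alpha) * c + alpha * s for s, c in zip(frame, clutter)]
    return out

def detect(frame, threshold):
    """Return the fast-time indices (range bins) exceeding the threshold."""
    return [i for i, s in enumerate(frame) if abs(s) > threshold]

# A static reflector in bin 1 and a target appearing in bin 3:
frames = [[0.0, 5.0, 0.0, 0.0]] * 50 + [[0.0, 5.0, 0.0, 4.0]]
cleaned = remove_clutter(frames)
print(detect(cleaned[-1], threshold=1.0))  # only the moving target remains
```

The detected range bins would then feed the localization and unscented-KF tracking stages.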
The Establishment of a Formal Midwest Renewable Energy Tracking System (M-RETS) Organization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maria Redmond; Chela Bordas O'Connor
2010-06-30
The objectives identified in requesting and utilizing this funding have been met. The goal was to establish a formal, multi-jurisdictional organization to: (1) ensure the policy objectives of the participating jurisdictions are addressed through increased tradability of the Renewable Energy Credits (RECs) from M-RETS and eliminate the possibility that a single jurisdiction will be the sole arbiter of the operation of the system; (2) facilitate the establishment of REC standards, including the attributes related to the creation, trading, and interaction with other trading and tracking systems; and (3) have a centralized and established organization that will be responsible for the contracting and governance responsibilities of a multi-jurisdictional tracking system. The M-RETS Inc. Board ensures that the system remains policy neutral; that the attributes of generation are tracked in a way that allows system users to easily identify and trade relevant RECs; that the system can add jurisdictions as needed or desired; and that the tracking system operates in such a way as to allow the greatest possible access for those participating in other tracking or trading systems, by allowing those systems to negotiate with a single M-RETS entity for the import and export of RECs. M-RETS as an organizational body participates in, and often leads, discussions related to the standardization of RECs and increasing the tradability of M-RETS RECs. M-RETS is a founding member of the Environmental Trading Network of North America (ETNNA) and continues to take a leadership role in the development of processes to facilitate trading among tracking systems and to standardize REC definitions. The Board of Directors of M-RETS, Inc., the non-profit corporation, continues to hold telephone/internet Board meetings. Legal counsel continues working with the board and APX management on a new agreement with APX. The board expects to have an agreement and corresponding fee structure in place by January 2011. The Board has recently approved exports to three other tracking systems and is in discussions about imports to the system. Below are the tasks outlined in the request; attached you will find the relevant documentation.
Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas
2008-01-01
PURPOSE.: This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning sensor (GPS), then the user establishes exact position at a specific landmark, like a door. This location initialises indoor navigation, based on an inertial sensor, a step recognition algorithm and map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on a touch screen of a small portable tablet PC. The name of rooms, building features and hazards, modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.
NASA Technical Reports Server (NTRS)
Tonkay, Gregory
1990-01-01
The following separate topics are addressed: (1) improving a robotic tracking system; and (2) providing insights into orbiter position calibration for radiator inspection. The objective of the tracking system project was to provide the capability to track moving targets more accurately by adjusting parameters in the control system and implementing a predictive algorithm. A computer model was developed to emulate the tracking system. Using this model as a test bed, a self-tuning algorithm was developed to tune the system gains. The model yielded important findings concerning factors that affect the gains. The self-tuning algorithms will provide the concepts to write a program to automatically tune the gains in the real system. The section concerning orbiter position calibration provides a comparison to previous work that had been performed for plant growth. It provided the conceptualized routines required to visually determine the orbiter position and orientation. Furthermore, it identified the types of information which are required to flow between the robot controller and the vision system.
Tracking and Data System Support for the Mariner Venus/Mercury 1973 Project
NASA Technical Reports Server (NTRS)
Davis, E. K.; Traxler, M. R.
1977-01-01
The support provided by the Tracking and Data System to the Mariner Venus/Mercury 1973 project from January 1970 through March 1975 is chronologically described. The Tracking and Data System organizations, plans, processes, and technical configurations developed and employed to achieve mission objectives are described. Within the Deep Space Network portion of the Tracking and Data System, a number of special actions were taken to greatly increase the scientific data return and to assist the project in coping with in-flight problems. The benefits of such actions were high; however, there was also a significant increase in risk as a function of the experimental equipment and procedures required.
Development of a real time multiple target, multi camera tracker for civil security applications
NASA Astrophysics Data System (ADS)
Åkerlund, Hans
2009-09-01
A surveillance system has been developed that can use multiple TV-cameras to detect and track personnel and objects in real time in public areas. The document describes the development and the system setup. The system is called NIVS Networked Intelligent Video Surveillance. Persons in the images are tracked and displayed on a 3D map of the surveyed area.
NASA Astrophysics Data System (ADS)
Wray, J. D.
2003-05-01
The robotic observatory telescope must point precisely on the target object, and then track autonomously to a fraction of the FWHM of the system PSF for durations of ten to twenty minutes or more. It must retain this precision while continuing to function at rates approaching thousands of observations per night for all its years of useful life. These stringent requirements raise new challenges unique to robotic telescope systems design. Critical design considerations are driven by the applicability of the above requirements to all systems of the robotic observatory, including telescope and instrument systems, telescope-dome enclosure systems, combined electrical and electronics systems, environmental (e.g. seeing) control systems and integrated computer control software systems. Traditional telescope design considerations include the effects of differential thermal strain, elastic flexure, plastic flexure and slack or backlash with respect to focal stability, optical alignment and angular pointing and tracking precision. Robotic observatory design must holistically encapsulate these traditional considerations within the overall objective of maximized long-term sustainable precision performance. This overall objective is accomplished through combining appropriate mechanical and dynamical system characteristics with a full-time real-time telescope mount model feedback computer control system. 
Important design considerations include: identifying and reducing quasi-zero-backlash; increasing size to increase precision; directly encoding axis shaft rotation; pointing and tracking operation via real-time feedback between precision mount model and axis mounted encoders; use of monolithic construction whenever appropriate for sustainable mechanical integrity; accelerating dome motion to eliminate repetitive shock; ducting internal telescope air to outside dome; and the principal design criteria: maximizing elastic repeatability while minimizing slack, plastic deformation and hysteresis to facilitate long-term repeatably precise pointing and tracking performance.
High resolution imaging of a subsonic projectile using automated mirrors with large aperture
NASA Astrophysics Data System (ADS)
Tateno, Y.; Ishii, M.; Oku, H.
2017-02-01
Visual tracking of high-speed projectiles is required for studying the aerodynamics around such objects. One solution to this problem is a tracking method based on the 1 ms Auto Pan-Tilt (1ms-APT) system that we proposed in previous work, which consists of rotational mirrors and a high-speed image processing system. However, the images obtained with that system did not have high enough resolution for detailed measurement of the projectiles because of the size of the mirrors. In this study, we propose a new system with enlarged mirrors for tracking high-speed projectiles to achieve higher-resolution imaging, and we confirmed the effectiveness of the system in an experiment in which a projectile flying at subsonic speed was tracked.
Modular Mount Control System for Telescopes
NASA Astrophysics Data System (ADS)
Mooney, J.; Cleis, R.; Kyono, T.; Edwards, M.
The Space Observatory Control Kit (SpOCK) is the hardware, computers and software used to run small and large telescopes in the RDS division of the Air Force Research Laboratories (AFRL). The system is used to track earth satellites, celestial objects, terrestrial objects and aerial objects. The system will track general targets when provided with state vectors in one of five coordinate systems. Client-to-server and server-to-gimbals communication occurs via human-readable s-expressions that may be evaluated by the computer language Racket. Software verification is achieved by scripts that exercise these expressions by sending them to the server and receiving the expressions that the server evaluates. This paper describes the adaptation of a modular mount control system, developed primarily for LEO satellite imaging, to large and small portable AFRL telescopes, with the goal of orbit determination and the generation of satellite metrics.
Active illuminated space object imaging and tracking simulation
NASA Astrophysics Data System (ADS)
Yue, Yufang; Xie, Xiaogang; Luo, Wen; Zhang, Feizhou; An, Jianzhu
2016-10-01
Earth-based optical imaging simulation of a space target in orbit, and the target's extraction under laser illumination, are discussed. Based on the orbit and corresponding attitude of a satellite, a 3D rendering of the satellite was built. A general simulation platform was developed that adapts to different 3D satellite models and to the relative positions of the satellite and the Earth-based detector system. A unified parallel projection technique is proposed in this paper. Furthermore, we note that the random optical distribution under laser illumination is a challenge for object discrimination; the strong randomness of the active-illumination laser speckle is the primary factor. The combined effects of a multi-frame accumulation process and several tracking methods, such as Mean-Shift tracking, contour poid, and filter deconvolution, were simulated. Comparison of the results shows that the combination of multi-frame accumulation and contour poid is recommended for laser-illuminated images, offering high tracking precision and stability across multiple object attitudes.
Model of ballistic targets' dynamics used for trajectory tracking algorithms
NASA Astrophysics Data System (ADS)
Okoń-Fąfara, Marta; Kawalec, Adam; Witczak, Andrzej
2017-04-01
Only a few ballistic object tracking algorithms are known. To develop such algorithms and to test them further, it is necessary to implement a simple yet reliable model of the objects' dynamics. The article presents the dynamics model of a tactical ballistic missile (TBM), covering three stages of flight: the boost stage and two passive stages, ascending and descending. Additionally, a procedure is presented for transforming from the local coordinate system to the polar radar-oriented and global systems. The resulting theoretical data may be used to determine tracking algorithm parameters and for further verification.
Serrano-Gotarredona, Rafael; Oster, Matthias; Lichtsteiner, Patrick; Linares-Barranco, Alejandro; Paz-Vicente, Rafael; Gomez-Rodriguez, Francisco; Camunas-Mesa, Luis; Berner, Raphael; Rivas-Perez, Manuel; Delbruck, Tobi; Liu, Shih-Chii; Douglas, Rodney; Hafliger, Philipp; Jimenez-Moreno, Gabriel; Civit Ballcels, Anton; Serrano-Gotarredona, Teresa; Acosta-Jimenez, Antonio J; Linares-Barranco, Bernabé
2009-09-01
This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. It has four custom mixed-signal AER chips, five custom digital AER interface components, 45k neurons (spiking cells), up to 5M synapses, performs 12G synaptic operations per second, and achieves millisecond object recognition and tracking latencies.
Accuracy analysis for triangulation and tracking based on time-multiplexed structured light.
Wagner, Benjamin; Stüber, Patrick; Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris
2014-08-01
The authors' research group is currently developing a new optical head tracking system for intracranial radiosurgery. This tracking system utilizes infrared laser light to measure features of the soft tissue on the patient's forehead. These features are intended to offer highly accurate registration with respect to the rigid skull structure by means of compensating for the soft tissue. In this context, the system also has to be able to quickly generate accurate reconstructions of the skin surface. For this purpose, the authors have developed a laser scanning device which uses time-multiplexed structured light to triangulate surface points. The accuracy of the authors' laser scanning device is analyzed and compared for different triangulation methods. These methods are given by the Linear-Eigen method and a nonlinear least squares method. Since Microsoft's Kinect camera represents an alternative for fast surface reconstruction, the authors' results are also compared to the triangulation accuracy of the Kinect device. Moreover, the authors' laser scanning device was used for tracking of a rigid object to determine how this process is influenced by the remaining triangulation errors. For this experiment, the scanning device was mounted to the end-effector of a robot to be able to calculate a ground truth for the tracking. The analysis of the triangulation accuracy of the authors' laser scanning device revealed a root mean square (RMS) error of 0.16 mm. In comparison, the analysis of the triangulation accuracy of the Kinect device revealed an RMS error of 0.89 mm. It turned out that the remaining triangulation errors cause only small inaccuracies in the tracking of a rigid object. Here, the tracking accuracy was given by an RMS translational error of 0.33 mm and an RMS rotational error of 0.12°. This paper shows that time-multiplexed structured light can be used to generate highly accurate reconstructions of surfaces. Furthermore, the reconstructed point sets can be used for high-accuracy tracking of objects, meeting the strict requirements of intracranial radiosurgery.
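For two rays (a camera ray and a laser plane normal section, say), triangulation reduces to finding the point nearest both rays; the midpoint of their shortest connecting segment is a simple stand-in for the Linear-Eigen and nonlinear least-squares methods the paper compares (function names and the example geometry are illustrative assumptions):

```python
def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays o + t*d.
    Vectors are 3-tuples; rays must not be parallel."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))
    def add(a, b):
        return tuple(x + y for x, y in zip(a, b))
    def scale(a, s):
        return tuple(x * s for x in a)

    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays
    t1 = (b * e - c * d) / denom   # closest-approach parameters
    t2 = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, t1))
    p2 = add(o2, scale(d2, t2))
    return scale(add(p1, p2), 0.5)

# Two viewpoints at x = -1 and x = +1 both sighting a point at (0, 0, 4):
print(triangulate((-1, 0, 0), (1, 0, 4), (1, 0, 0), (-1, 0, 4)))
```

With noisy rays the two closest points separate, and the length of that segment is one way to see the RMS triangulation error the paper quantifies.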
Design, implementation and accuracy of a prototype for medical augmented reality.
Pandya, Abhilash; Siadat, Mohammad-Reza; Auner, Greg
2005-01-01
This paper is focused on prototype development and accuracy evaluation of a medical Augmented Reality (AR) system. The accuracy of such a system is of critical importance for medical use, and is hence considered in detail. We analyze the individual error contributions and the system accuracy of the prototype. A passive articulated arm is used to track a calibrated end-effector-mounted video camera. The live video view is superimposed in real time with the synchronized graphical view of CT-derived segmented object(s) of interest within a phantom skull. The AR accuracy mostly depends on the accuracy of the tracking technology, the registration procedure, the camera calibration, and the image scanning device (e.g., a CT or MRI scanner). The accuracy of the Microscribe arm was measured to be 0.87 mm. After mounting the camera on the tracking device, the AR accuracy was measured to be 2.74 mm on average (standard deviation = 0.81 mm). After using data from a 2-mm-thick CT scan, the AR error remained essentially the same at an average of 2.75 mm (standard deviation = 1.19 mm). For neurosurgery, the acceptable error is approximately 2-3 mm, and our prototype approaches these accuracy requirements. The accuracy could be increased with a higher-fidelity tracking system and improved calibration and object registration. The design and methods of this prototype device can be extrapolated to current medical robotics (due to the kinematic similarity) and neuronavigation systems.
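If the individual error contributions are treated as independent, the overall AR error can be approximated as their root-sum-square. A sketch combining the paper's measured arm accuracy with assumed values for the other contributions (the 1.5 mm and 2.0 mm figures are illustrative assumptions, not from the paper):

```python
import math

def rss(*errors):
    """Root-sum-square combination of independent error contributions (mm)."""
    return math.sqrt(sum(e * e for e in errors))

# Measured arm accuracy (0.87 mm) combined with assumed camera-calibration
# and registration/scan contributions:
print(round(rss(0.87, 1.5, 2.0), 2))
```

This kind of budget makes it clear why a higher-fidelity tracker and better calibration, as the authors suggest, are the levers for reaching the 2-3 mm neurosurgical requirement.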
2014-01-01
Background Using the Android platform as a notification instrument for diseases and disorders forms a new alternative for computerization of epidemiological studies. Objective The objective of our study was to construct a tool for gathering epidemiological data on schistosomiasis using the Android platform. Methods The developed application (app), named the Schisto Track, is a tool for data capture and analysis that was designed to meet the needs of a traditional epidemiological survey. An initial version of the app was finished and tested in both real situations and simulations for epidemiological surveys. Results The app proved to be a tool capable of automation of activities, with data organization and standardization, easy data recovery (to enable interfacing with other systems), and totally modular architecture. Conclusions The proposed Schisto Track is in line with worldwide trends toward use of smartphones with the Android platform for modeling epidemiological scenarios. PMID:25099881
Sarlegna, Fabrice R; Baud-Bovy, Gabriel; Danion, Frédéric
2010-08-01
When we manipulate an object, grip force is adjusted in anticipation of the mechanical consequences of hand motion (i.e., load force) to prevent the object from slipping. This predictive behavior is assumed to rely on an internal representation of the object dynamic properties, which would be elaborated via visual information before the object is grasped and via somatosensory feedback once the object is grasped. Here we examined this view by investigating the effect of delayed visual feedback during dextrous object manipulation. Adult participants manually tracked a sinusoidal target by oscillating a handheld object whose current position was displayed as a cursor on a screen along with the visual target. A delay was introduced between actual object displacement and cursor motion. This delay was linearly increased (from 0 to 300 ms) and decreased within 2-min trials. As previously reported, delayed visual feedback altered performance in manual tracking. Importantly, although the physical properties of the object remained unchanged, delayed visual feedback altered the timing of grip force relative to load force by about 50 ms. Additional experiments showed that this effect was not due to task complexity nor to manual tracking. A model inspired by the behavior of mass-spring systems suggests that delayed visual feedback may have biased the representation of object dynamics. Overall, our findings support the idea that visual feedback of object motion can influence the predictive control of grip force even when the object is grasped.
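For the sinusoidal tracking task, the load force the grip must anticipate follows directly from the object's weight and its acceleration; a sketch for vertical oscillation (the mass, amplitude, and frequency values are assumptions, not the experiment's parameters):

```python
import math

def load_force(mass, amplitude, freq_hz, t, g=9.81):
    """Vertical load force (N) on the grip for sinusoidal object motion
    x(t) = A * sin(2*pi*f*t): weight plus inertial force m * a(t)."""
    w = 2 * math.pi * freq_hz
    accel = -amplitude * w * w * math.sin(w * t)  # second derivative of x(t)
    return mass * (g + accel)

# A 0.3 kg object oscillated at 1 Hz with 10 cm amplitude; load peaks
# where upward acceleration is largest (bottom of the cycle):
peak = max(load_force(0.3, 0.1, 1.0, t / 1000.0) for t in range(1000))
print(round(peak, 2))
```

Predictive grip control means grip force rises in phase with (or slightly ahead of) this load profile; the paper's finding is that delayed visual feedback shifts that phase relation by about 50 ms.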
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovesdi, C.; Spielman, Z.; LeBlanc, K.
An important element of human factors engineering (HFE) pertains to measurement and evaluation (M&E). The role of HFE M&E should be integrated throughout the entire control room modernization (CRM) process and be used for human-system performance evaluation and for diagnostic purposes in resolving potential human engineering deficiencies (HEDs) and other human-machine interface (HMI) design issues. NUREG-0711 describes how HFE in CRM should employ a hierarchical set of measures, particularly during integrated system validation (ISV), including plant performance, personnel task performance, situation awareness, cognitive workload, and anthropometric/physiological factors. Historically, subjective measures have been used primarily because they are easier to collect and do not require specialized equipment. However, there are pitfalls in relying solely on subjective measures in M&E that negatively impact reliability, sensitivity, and objectivity. As part of comprehensively capturing a diverse set of measures that strengthen findings and inferences about the benefits of emerging technologies like advanced displays, this paper discusses the value of using eye tracking as an objective method in M&E. A brief description of eye tracking technology and relevant eye tracking measures is provided. Additionally, technical considerations and the unique challenges of using eye tracking in full-scale simulations are addressed. Finally, this paper shares preliminary findings regarding the use of a wearable eye tracking system in a full-scale simulator study. These findings should help guide future full-scale simulator studies using eye tracking as a methodology to evaluate human-system performance.
Self-motion impairs multiple-object tracking.
Thomas, Laura E; Seiffert, Adriane E
2010-10-01
Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement impairs the ability to keep track of other moving objects. Participants attempted to track multiple targets while either moving around the tracking area or remaining in a fixed location. Participants' tracking performance was impaired when they moved to a new location during tracking, even when they were passively moved and when they did not see a shift in viewpoint. Self-motion impaired multiple-object tracking in both an immersive virtual environment and a real-world analog, but did not interfere with a difficult non-spatial tracking task. These results suggest that people use a common mechanism to track changes both to the location of moving objects around them and to keep track of their own location. Copyright 2010 Elsevier B.V. All rights reserved.
Schaarup, Clara; Hartvigsen, Gunnar; Larsen, Lars Bo; Tan, Zheng-Hua; Årsand, Eirik; Hejlesen, Ole Kristian
2015-01-01
The Online Diabetes Exercise System was developed to motivate people with Type 2 diabetes to do a 25-minute low-volume high-intensity interval training program. In a previous multi-method evaluation of the system, several usability issues were identified and corrected. Despite the thorough testing, it was unclear whether all usability problems had been identified using the multi-method evaluation. Our hypothesis was that adding eye-tracking triangulation to the multi-method evaluation would increase the accuracy and completeness of usability testing of the system. The study design was an eye-tracking triangulation: conventional eye tracking with predefined tasks, followed by the Post-Experience Eye-Tracked Protocol (PEEP). Six areas of interest were the basis for the PEEP session. The eye-tracking triangulation gave objective and subjective results, which are believed to be highly relevant for designing, implementing, evaluating and optimizing systems in the field of health informatics. Future work should include testing the method on a larger and more representative group of users and applying the method to different system types.
NASA Astrophysics Data System (ADS)
Petrochenko, Andrey; Konyakhin, Igor
2017-06-01
With the development of robotics, systems for three-dimensional reconstruction and mapping from sets of images received from optical sensors have become increasingly popular. The main objective of technical and robot vision is the detection, tracking and classification of objects in the space in which these systems and robots operate [15,16,18]. Two-dimensional images sometimes do not contain sufficient information to address certain problems: building a map of the surrounding area for route planning; identifying objects and tracking their relative position and movement; and selecting objects and their attributes to complement a knowledge base. Three-dimensional reconstruction of the surrounding space provides information on the relative positions of objects, their shape, and their surface texture. Systems trained on the results of three-dimensional reconstruction can compare two-dimensional images against a three-dimensional model, allowing volumetric objects to be recognized in flat images. The problem of the relative orientation of industrial robots, with the ability to build three-dimensional scenes of controlled surfaces, is therefore becoming increasingly relevant.
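The depth information that a single two-dimensional image lacks can be recovered from a calibrated stereo pair; the basic relation is a sketch worth stating (the camera parameters below are illustrative assumptions):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a scene point from a rectified stereo pair:
    Z = f * B / d, where f is the focal length in pixels, B the
    camera baseline in meters, and d the horizontal pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 10 cm baseline, 35 px disparity:
print(round(depth_from_disparity(700.0, 0.1, 35.0), 3))  # depth in meters
```

Repeating this over all matched pixels yields the point cloud from which the three-dimensional scene models described above are built.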
Optimal Configuration of Human Motion Tracking Systems: A Systems Engineering Approach
NASA Technical Reports Server (NTRS)
Henderson, Steve
2005-01-01
Human motion tracking systems represent a crucial technology in the area of modeling and simulation. These systems, which allow engineers to capture human motion for study or replication in virtual environments, have broad applications in several research disciplines including human engineering, robotics, and psychology. These systems are based on several sensing paradigms, including electromagnetic, infrared, and visual recognition. Each of these paradigms requires specialized environments and hardware configurations to optimize performance of the human motion tracking system. Ideally, these systems are used in a laboratory or other facility designed to accommodate the particular sensing technology. For example, electromagnetic systems are highly vulnerable to interference from metallic objects, and should be used in a specialized lab free of metal components.
NASA Technical Reports Server (NTRS)
Marr, Greg C.; Maher, Michael; Blizzard, Michael; Showell, Avanaugh; Asher, Mark; Devereux, Will
2004-01-01
Over an approximately 48-hour period from September 26 to 28, 2002, the Thermosphere, Ionosphere, Mesosphere, Energetics and Dynamics (TIMED) mission was intensively supported by the Tracking and Data Relay Satellite System (TDRSS). The TIMED satellite is in a nearly circular low-Earth orbit with a semimajor axis of approximately 7000 km and an inclination of approximately 74 degrees. The objective was to provide TDRSS tracking support for orbit determination (OD) to generate a definitive ephemeris of 24-hour duration or more with a 3-sigma position error no greater than 100 meters, and this tracking campaign was successful. An ephemeris was generated by Goddard Space Flight Center (GSFC) personnel using the TDRSS tracking data and was compared with an ephemeris generated by the Johns Hopkins University's Applied Physics Lab (APL) using TIMED Global Positioning System (GPS) data. Prior to the tracking campaign, OD error analysis was performed to justify scheduling the TDRSS support.
The Role of Visual Working Memory in Attentive Tracking of Unique Objects
ERIC Educational Resources Information Center
Makovski, Tal; Jiang, Yuhong V.
2009-01-01
When tracking moving objects in space humans usually attend to the objects' spatial locations and update this information over time. To what extent do surface features assist attentive tracking? In this study we asked participants to track identical or uniquely colored objects. Tracking was enhanced when objects were unique in color. The benefit…
ERIC Educational Resources Information Center
Markham, Paula T.; Porter, Bryan E.; Ball, J. D.
2013-01-01
Objective: In this article, the authors investigated the effectiveness of a behavior modification program using global positioning system (GPS) vehicle tracking devices with contingency incentives and disincentives to reduce the speeding behavior of drivers with ADHD. Method: Using an AB multiple-baseline design, six participants drove a 5-mile…
NASA Astrophysics Data System (ADS)
Huang, Xiaomeng; Hu, Chenqi; Huang, Xing; Chu, Yang; Tseng, Yu-heng; Zhang, Guang Jun; Lin, Yanluan
2018-01-01
Mesoscale convective systems (MCSs) are important components of tropical weather systems and the climate system. Long-term data of MCS are of great significance in weather and climate research. Using long-term (1985-2008) global satellite infrared (IR) data, we developed a novel objective automatic tracking algorithm, which combines a Kalman filter (KF) with the conventional area-overlapping method, to generate a comprehensive MCS dataset. The new algorithm can effectively track small and fast-moving MCSs and thus obtain more realistic and complete tracking results than previous studies. A few examples are provided to illustrate the potential application of the dataset with a focus on the diurnal variations of MCSs over land and ocean regions. We find that the MCSs occurring over land tend to initiate in the afternoon with greater intensity, but the oceanic MCSs are more likely to initiate in the early morning with weaker intensity. A double peak in the maximum spatial coverage is noted over the western Pacific, especially over the southwestern Pacific during the austral summer. Oceanic MCSs also persist for approximately 1 h longer than their continental counterparts.
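The pairing described above, per-track Kalman prediction plus the conventional area-overlap test, can be sketched compactly. This is an illustrative pure-Python reduction, not the authors' implementation: `Track1D` and `overlap_ratio` are hypothetical names, and a real tracker would run one filter per axis of each MCS centroid.

```python
def overlap_ratio(a, b):
    """Fraction of box a (x0, y0, x1, y1) covered by box b."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    if w <= 0 or h <= 0:
        return 0.0
    return (w * h) / ((a[2] - a[0]) * (a[3] - a[1]))

class Track1D:
    """Constant-velocity Kalman filter for one axis of a storm centroid."""
    def __init__(self, x0, q=1.0, r=4.0):
        self.x, self.v = x0, 0.0              # state: position, velocity
        self.p = [[10.0, 0.0], [0.0, 10.0]]   # state covariance
        self.q, self.r = q, r                 # process / measurement noise

    def predict(self, dt=1.0):
        # x' = x + v*dt;  P' = F P F^T + Q  for F = [[1, dt], [0, 1]]
        self.x += self.v * dt
        p = self.p
        self.p = [[p[0][0] + dt * (p[0][1] + p[1][0]) + dt * dt * p[1][1] + self.q,
                   p[0][1] + dt * p[1][1]],
                  [p[1][0] + dt * p[1][1],
                   p[1][1] + self.q]]
        return self.x

    def update(self, z):
        # Scalar measurement of position: H = [1, 0]
        s = self.p[0][0] + self.r             # innovation covariance
        kx, kv = self.p[0][0] / s, self.p[1][0] / s
        resid = z - self.x
        self.x += kx * resid
        self.v += kv * resid
        p = self.p
        self.p = [[(1 - kx) * p[0][0], (1 - kx) * p[0][1]],
                  [p[1][0] - kv * p[0][0], p[1][1] - kv * p[0][1]]]
```

A detection would then be associated with a track when its box overlaps the box shifted to the predicted centroid, or, for the small fast movers the paper targets, when the predicted centroid falls close enough even with zero overlap.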
360-Degree Visual Detection and Target Tracking on an Autonomous Surface Vehicle
NASA Technical Reports Server (NTRS)
Wolf, Michael T; Assad, Christopher; Kuwata, Yoshiaki; Howard, Andrew; Aghazarian, Hrand; Zhu, David; Lu, Thomas; Trebi-Ollennu, Ashitey; Huntsberger, Terry
2010-01-01
This paper describes perception and planning systems of an autonomous sea surface vehicle (ASV) whose goal is to detect and track other vessels at medium to long ranges and execute responses to determine whether the vessel is adversarial. The Jet Propulsion Laboratory (JPL) has developed a tightly integrated system called CARACaS (Control Architecture for Robotic Agent Command and Sensing) that blends the sensing, planning, and behavior autonomy necessary for such missions. Two patrol scenarios are addressed here: one in which the ASV patrols a large harbor region and checks for vessels near a fixed asset on each pass and one in which the ASV circles a fixed asset and intercepts approaching vessels. This paper focuses on the ASV's central perception and situation awareness system, dubbed Surface Autonomous Visual Analysis and Tracking (SAVAnT), which receives images from an omnidirectional camera head, identifies objects of interest in these images, and probabilistically tracks the objects' presence over time, even as they may exist outside of the vehicle's sensor range. The integrated CARACaS/SAVAnT system has been implemented on U.S. Navy experimental ASVs and tested in on-water field demonstrations.
Visual attention is required for multiple object tracking.
Tran, Annie; Hoffman, James E
2016-12-01
In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Peter J.; Edson, Patrick L.
2013-12-20
This project saw the completion of the design and development of a second generation, high frequency (90-120 kHz) Subsurface-Threat Detection Sonar Network (SDSN). The system was deployed, operated, and tested in Cobscook Bay, Maine near the site of the Ocean Renewable Power Company TidGen™ power unit. This effort resulted in a very successful demonstration of the SDSN detection, tracking, localization, and classification capabilities in a high current, MHK environment as measured by results from the detection and tracking trials in Cobscook Bay. The new high frequency node, designed to operate outside the hearing range of a subset of marine mammals, was shown to detect and track objects of marine mammal-like target strength to ranges of approximately 500 meters. This performance range results in the SDSN system tracking objects for a significant duration - on the order of minutes - even in a tidal flow of 5-7 knots, potentially allowing time for MHK system or operator decision-making if marine mammals are present. Having demonstrated detection and tracking of synthetic targets with target strengths similar to some marine mammals, the primary hurdle to eventual automated monitoring is a dataset of actual marine mammal kinematic behavior and modifying the tracking algorithms and parameters, which are currently tuned to human diver kinematics and classification.
Improvements in Space Surveillance Processing for Wide Field of View Optical Sensors
NASA Astrophysics Data System (ADS)
Sydney, P.; Wetterer, C.
2014-09-01
For more than a decade, an autonomous satellite tracking system at the Air Force Maui Optical and Supercomputing (AMOS) observatory has been generating routine astrometric measurements of Earth-orbiting Resident Space Objects (RSOs) using small commercial telescopes and sensors. Recent work has focused on developing an improved processing system, enhancing measurement performance and response while supporting other sensor systems and missions. This paper will outline improved techniques in scheduling, detection, astrometric and photometric measurements, and catalog maintenance. The processing system now integrates with Special Perturbation (SP) based astrodynamics algorithms, allowing covariance-based scheduling and more precise orbital estimates and object identification. A merit-based scheduling algorithm provides a global optimization framework to support diverse collection tasks and missions. The detection algorithms support a range of target tracking and camera acquisition rates. New comprehensive star catalogs allow for more precise astrometric and photometric calibrations including differential photometry for monitoring environmental changes. This paper will also examine measurement performance with varying tracking rates and acquisition parameters.
Kim, Young-Keun; Kim, Kyung-Soo
2014-10-01
Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.
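Recovering a rigid pose from a handful of beam points is, at heart, a point-set registration problem. As a hedged illustration, the planar (3-DOF) case has a closed-form least-squares solution; the 6-DOF estimation described above generalizes this idea to 3-D. The function name and interface are assumptions, not the paper's code:

```python
import math

def rigid_transform_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst.

    Centres both point sets, finds the optimal rotation from the summed
    dot/cross terms, then recovers the translation from the centroids.
    A reduced, planar analogue of estimating a load's pose from four
    laser beam points."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - csx, y - csy, u - cdx, v - cdy
        sxx += x * u + y * v        # accumulated dot products
        sxy += x * v - y * u        # accumulated cross products
    theta = math.atan2(sxy, sxx)
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty
```

For noiseless rigid motion the recovery is exact; with measurement noise it returns the least-squares best fit, which is why four beam points rather than the minimal two give robustness in practice.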
NASA Astrophysics Data System (ADS)
Torteeka, Peerapong; Gao, Peng-Qi; Shen, Ming; Guo, Xiao-Zhang; Yang, Da-Tao; Yu, Huan-Huan; Zhou, Wei-Ping; Zhao, You
2017-02-01
Although tracking with a passive optical telescope is a powerful technique for space debris observation, it is limited by its sensitivity to dynamic background noise. Traditionally, in the field of astronomy, static background subtraction based on a median image technique has been used to extract moving space objects prior to the tracking operation, as this is computationally efficient. The main disadvantage of this technique is that it is not robust to variable illumination conditions. In this article, we propose an approach for tracking small and dim space debris in the context of a dynamic background via one of the optical telescopes that is part of the space surveillance network project, named the Asia-Pacific ground-based Optical Space Observation System or APOSOS. The approach combines a fuzzy running Gaussian average for robust moving-object extraction with dim-target tracking using a particle-filter-based track-before-detect method. The performance of the proposed algorithm is experimentally evaluated, and the results show that the scheme achieves a satisfactory level of accuracy for space debris tracking.
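A crisp (non-fuzzy) running Gaussian average, the baseline that the fuzzy variant above builds on, keeps a per-pixel mean and variance and flags pixels that deviate by more than a few sigma. The function name and default parameters here are illustrative assumptions:

```python
def update_background(bg, var, frame, alpha=0.05, k=2.5):
    """Running Gaussian average over a flat list of pixel intensities.

    A pixel is foreground when it deviates more than k*sigma from the
    running mean; only background pixels are absorbed into the model,
    so a slowly drifting sky stays modelled while debris streaks pop out.
    bg and var are updated in place; returns the foreground mask."""
    fg = []
    for i, x in enumerate(frame):
        sigma = var[i] ** 0.5
        is_fg = abs(x - bg[i]) > k * sigma
        fg.append(is_fg)
        if not is_fg:                       # only absorb background pixels
            d = x - bg[i]
            bg[i] += alpha * d
            var[i] = (1 - alpha) * var[i] + alpha * d * d
    return fg
```

The fuzzy variant in the paper replaces the hard k-sigma decision with a graded membership, which is what buys robustness to variable illumination.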
Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J
2014-09-26
This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions.
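The first step of such a pipeline, segmenting bright retro-reflective blobs from the image background, might be sketched as a threshold plus connected-component pass. This is illustrative only; the system described above adds feature description and classification on top, and the threshold and minimum-area values are assumptions:

```python
def reflective_regions(img, thresh=200, min_area=3):
    """Segment bright (retro-reflective) pixels in a 2-D intensity grid
    and group them into 4-connected components; return bounding boxes
    (x0, y0, x1, y1) of regions large enough to be marker candidates."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if img[y][x] >= thresh and not seen[y][x]:
                stack, pix = [(y, x)], []
                seen[y][x] = True
                while stack:                      # iterative flood fill
                    cy, cx = stack.pop()
                    pix.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and img[ny][nx] >= thresh):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(pix) >= min_area:          # drop speckle noise
                    ys = [p[0] for p in pix]
                    xs = [p[1] for p in pix]
                    boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

Each returned box would then be described with local image features and classified to discriminate safety garments from other reflective objects.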
Versatile resonance-tracking circuit for acoustic levitation experiments.
Baxter, K; Apfel, R E; Marston, P L
1978-02-01
Objects can be levitated by radiation pressure forces in an acoustic standing wave. In many circumstances it is important that the standing wave frequency remain locked on an acoustic resonance despite small changes in the resonance frequency. A self-locking oscillator circuit is described which tracks the resonance frequency by sensing the magnitude of the transducer current. The tracking principle could be applied to other resonant systems.
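A software analogue of the self-locking principle, nudging the drive frequency in whichever direction increases the sensed transducer current, could look like this toy hill-climbing sketch. The analog circuit described above locks continuously rather than iteratively; the function and its parameters are assumptions for illustration:

```python
def track_resonance(current_at, f0, step=100.0, iters=60):
    """Toy resonance tracker: step the drive frequency toward higher
    measured transducer current, shrinking the step near the peak."""
    f = f0
    for _ in range(iters):
        if current_at(f + step) > current_at(f):
            f += step
        elif current_at(f - step) > current_at(f):
            f -= step
        else:
            step *= 0.5          # bracketed the peak: refine the search
    return f
```

Because the transducer current peaks at acoustic resonance, maximizing it keeps the standing wave locked on resonance even as the resonance frequency drifts slightly.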
Study of a Tracking and Data Acquisition System (TDAS) in the 1990's
NASA Technical Reports Server (NTRS)
1981-01-01
Progress in concept definition studies, operational assessments, and technology demonstrations for the Tracking and Data Acquisition System (TDAS) is reported. The proposed TDAS will be the follow-on to the Tracking and Data Relay Satellite System and will function as a key element of the NASA End-to-End Data System, providing the tracking and data acquisition interface between user accessible data ports on Earth and the user's spaceborne equipment. Technical activities of the "spacecraft data system architecture" task and the "communication mission model" task are emphasized. The objective of the first task is to provide technology forecasts for sensor data handling, navigation and communication systems, and estimate corresponding costs. The second task is concerned with developing a parametric description of the required communication channels. Other tasks with significant activity include the "frequency plan and radio interference model" and the "Viterbi decoder/simulator study".
RAPTOR-scan: Identifying and Tracking Objects Through Thousands of Sky Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidoff, Sherri; Wozniak, Przemyslaw
2004-09-28
The RAPTOR-scan system mines data for optical transients associated with gamma-ray bursts and is used to create a catalog for the RAPTOR telescope system. RAPTOR-scan can detect and track individual astronomical objects across data sets containing millions of observed points. Accurately identifying a real object over many optical images (clustering the individual appearances) is necessary in order to analyze object light curves. To achieve this, RAPTOR telescope observations are sent in real time to a database. Each morning, a program based on the DBSCAN algorithm clusters the observations and labels each one with an object identifier. Once clustering is complete, the analysis program may be used to query the database and produce light curves, maps of the sky field, or other informative displays. Although RAPTOR-scan was designed for the RAPTOR optical telescope system, it is a general tool designed to identify objects in a collection of astronomical data and facilitate quick data analysis. RAPTOR-scan will be released as free software under the GNU General Public License.
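The nightly clustering step rests on DBSCAN, whose core logic fits in a short sketch. This minimal pure-Python version, which is illustrative and not RAPTOR-scan's code, labels each 2-D observation with a cluster id or -1 for noise:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2-D points; returns one label per point.

    A point with at least min_pts neighbours (itself included) within
    eps is a core point and seeds/extends a cluster; reachable non-core
    points join as border points; everything else is noise (-1)."""
    def neighbors(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps * eps]

    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1        # noise (may later become a border point)
            continue
        labels[i] = cid
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:   # noise reached from a core point: border
                labels[j] = cid
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= min_pts:
                queue.extend(jn)  # j is itself a core point: keep growing
        cid += 1
    return labels
```

In RAPTOR-scan's setting the points are sky positions of detections across many images, so each resulting cluster id plays the role of the object identifier used to assemble light curves. A production run over millions of points would replace the brute-force neighbour scan with a spatial index.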
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
Multithreaded hybrid feature tracking for markerless augmented reality.
Lee, Taehee; Höllerer, Tobias
2009-01-01
We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
A system for learning statistical motion patterns.
Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve
2006-09-01
Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
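The foreground-pixel clustering step uses a fuzzy K-means variant; a bare-bones fuzzy c-means might be sketched as follows. The paper's fast, accurate variant adds optimizations not shown here, and the initialization scheme and parameter defaults are assumptions:

```python
def fuzzy_kmeans(pts, k, m=2.0, iters=50, init=None):
    """Bare-bones fuzzy c-means on 2-D points; returns cluster centres.

    Each point gets a soft membership in every cluster proportional to
    1/dist^(2/(m-1)); centres are then the u^m-weighted means. init
    defaults naively to the first k points, so pass well-spread seeds."""
    centres = list(init) if init is not None else list(pts[:k])
    for _ in range(iters):
        u = []
        for p in pts:
            d = [max(((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5, 1e-9)
                 for c in centres]
            inv = [dd ** (-2.0 / (m - 1.0)) for dd in d]
            s = sum(inv)
            u.append([v / s for v in inv])       # memberships sum to 1
        new = []
        for c in range(k):
            w = [u[i][c] ** m for i in range(len(pts))]
            sw = sum(w)
            new.append((sum(w[i] * pts[i][0] for i in range(len(pts))) / sw,
                        sum(w[i] * pts[i][1] for i in range(len(pts))) / sw))
        centres = new
    return centres
```

In the tracking context, the points are foreground pixel coordinates and each converged centre is a candidate moving-object centroid to be grown and predicted over frames.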
Adaptive object tracking via both positive and negative models matching
NASA Astrophysics Data System (ADS)
Li, Shaomei; Gao, Chao; Wang, Yawen
2015-03-01
To mitigate the tracking drift that often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, the object is tracked frame by frame via particle filtering. Thirdly, tracking reliability is validated by matching against both positive and negative models. Finally, when drift occurs, the object is relocated by SIFT feature matching and voting, and the object appearance model is updated at the same time. The algorithm can not only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences.
How Many Objects are You Worth? Quantification of the Self-Motion Load on Multiple Object Tracking
Thomas, Laura E.; Seiffert, Adriane E.
2011-01-01
Perhaps walking and chewing gum is effortless, but walking and tracking moving objects is not. Multiple object tracking is impaired by walking from one location to another, suggesting that updating location of the self puts demands on object tracking processes. Here, we quantified the cost of self-motion in terms of the tracking load. Participants in a virtual environment tracked a variable number of targets (1–5) among distractors while either staying in one place or moving along a path that was similar to the objects’ motion. At the end of each trial, participants decided whether a probed dot was a target or distractor. As in our previous work, self-motion significantly impaired performance in tracking multiple targets. Quantifying tracking capacity for each individual under move versus stay conditions further revealed that self-motion during tracking produced a cost to capacity of about 0.8 (±0.2) objects. Tracking your own motion is worth about one object, suggesting that updating the location of the self is similar, but perhaps slightly easier, than updating locations of objects. PMID:21991259
Real-time optical multiple object recognition and tracking system and method
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Liu, Hua-Kuang (Inventor)
1990-01-01
System for optically recognizing and tracking a plurality of objects within a field of vision. Laser (46) produces a coherent beam (48). Beam splitter (24) splits the beam into object (26) and reference (28) beams. Beam expanders (50) and collimators (52) transform the beams (26, 28) into coherent collimated light beams (26', 28'). A two-dimensional SLM (54), disposed in the object beam (26'), modulates the object beam with optical information as a function of signals from a first camera (16) which develops X and Y signals reflecting the contents of its field of vision. A hololens (38), positioned in the object beam (26') subsequent to the modulator (54), focuses the object beam at a plurality of focal points (42). A planar transparency-forming film (32), disposed with the focal points on an exposable surface, forms a multiple position interference filter (62) upon exposure of the surface and development processing of the film (32). A reflector (53) directing the reference beam (28') onto the film (32), exposes the surface, with images focused by the hololens (38), to form interference patterns on the surface. There is apparatus (16', 64) for sensing and indicating light passage through respective ones of the positions of the filter (62), whereby recognition of objects corresponding to respective ones of the positions of the filter (62) is affected. For tracking, apparatus (64) focuses light passing through the filter (62) onto a matrix of CCD's in a second camera (16') to form a two-dimensional display of the recognized objects.
Real-time moving objects detection and tracking from airborne infrared camera
NASA Astrophysics Data System (ADS)
Zingoni, Andrea; Diani, Marco; Corsini, Giovanni
2017-10-01
Detecting and tracking moving objects in real-time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene and pursuing objects of interest. To fully exploit such a potential, versatile solutions are needed, but, in the literature, the majority of them work only under specific conditions regarding the considered scenario, the characteristics of the moving objects or the aircraft movements. In order to overcome these limitations, we propose a novel approach to the problem, based on the use of a cheap inertial navigation system (INS), mounted on the aircraft. To exploit jointly the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame: a detection stage, in which a coarse detection map is computed using a local statistic that is both fast to calculate and robust to noise and self-deletion of the targeted objects; a registration stage, in which the positions of the detected objects are coherently reported in a common reference frame by exploiting the INS data; and a tracking stage, in which steady objects are rejected, moving objects are tracked, and an estimate of their future position is computed for use in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences, recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false alarm rate, and in terms of accuracy in the estimation of position and velocity of the objects.
In addition, for each frame, the detection and tracking map has been generated by the algorithm, before the acquisition of the subsequent frame, proving its capability to work in real-time.
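The registration stage can be illustrated in a flat, 2-D simplification: a camera-frame detection is rotated and translated by the INS pose into a common ground frame, where steady objects betray themselves by near-zero displacement. The function names and the threshold are assumptions for illustration, not the paper's implementation:

```python
import math

def register_detection(det_xy, ins_pose):
    """Report a camera-frame detection (dx, dy) in a common ground frame
    given the INS pose (x, y, heading) of the aircraft."""
    (dx, dy), (ax, ay, yaw) = det_xy, ins_pose
    c, s = math.cos(yaw), math.sin(yaw)
    return (ax + c * dx - s * dy, ay + s * dx + c * dy)

def is_moving(track, min_disp=2.0):
    """Reject steady objects: a track whose total ground-frame
    displacement stays below min_disp is treated as stationary."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    return math.hypot(x1 - x0, y1 - y0) >= min_disp
```

Because registration happens before the moving/steady decision, the aircraft's own motion does not masquerade as object motion, which is exactly what the INS data buys.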
The Mathematics of Go-To Telescopes
ERIC Educational Resources Information Center
Teets, Donald
2007-01-01
This article presents the mathematics involved in finding and tracking celestial objects with an electronically controlled telescope. The essential idea in solving this problem is to choose several different coordinate systems that simplify the various motions of the earth and other celestial objects. These coordinate systems are then related by…
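The central coordinate change such a treatment develops, from equatorial (hour angle, declination) to horizontal (altitude, azimuth) for an observer at a given latitude, is compact enough to sketch directly. These are standard spherical-astronomy formulas, not code taken from the article itself:

```python
import math

def altaz(ha_deg, dec_deg, lat_deg):
    """Equatorial (hour angle H, declination d) -> horizontal (alt, az)
    for latitude phi, with azimuth measured eastward from north:
        sin(alt) = sin(d) sin(phi) + cos(d) cos(phi) cos(H)
    and az from the atan2 of the east and north horizontal components."""
    ha, dec, lat = (math.radians(v) for v in (ha_deg, dec_deg, lat_deg))
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    alt = math.asin(sin_alt)
    az = math.atan2(-math.sin(ha) * math.cos(dec),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(ha))
    return math.degrees(alt), math.degrees(az) % 360.0
```

A go-to controller composes this with sidereal time (to get the hour angle from right ascension) and then drives the two mount axes to the resulting altitude and azimuth, updating continuously to track the Earth's rotation.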
Identifying Objects via Encased X-Ray-Fluorescent Materials - the Bar Code Inside
NASA Technical Reports Server (NTRS)
Schramm, Harry F.; Kaiser, Bruce
2005-01-01
Systems for identifying objects by means of x-ray fluorescence (XRF) of encased labeling elements have been developed. The XRF spectra of objects so labeled would be analogous to the external bar code labels now used to track objects in everyday commerce. In conjunction with computer-based tracking systems, databases, and labeling conventions, the XRF labels could be used in essentially the same manner as that of bar codes to track inventories and to record and process commercial transactions. In addition, as summarized briefly below, embedded XRF labels could be used to verify the authenticity of products, thereby helping to deter counterfeiting and fraud. A system, as described above, is called an encased core product identification and authentication system (ECPIAS). The ECPIAS concept is a modified version of that of a related recently initiated commercial development of handheld XRF spectral scanners that would identify alloys or detect labeling elements deposited on the surfaces of objects. In contrast, an ECPIAS would utilize labeling elements encased within the objects of interest. The basic ECPIAS concept is best illustrated by means of an example of one of several potential applications: labeling of cultured pearls by labeling the seed particles implanted in oysters to grow the pearls. Each pearl farmer would be assigned a unique mixture of labeling elements that could be distinguished from the corresponding mixtures of other farmers. The mixture would be either incorporated into or applied to the surfaces of the seed prior to implantation in the oyster. If necessary, the labeled seed would be further coated to make it nontoxic to the oyster. After implantation, the growth of layers of mother of pearl on the seed would encase the XRF labels, making these labels integral, permanent parts of the pearls that could not be removed without destroying the pearls themselves. 
The XRF labels would be read by use of XRF scanners, the spectral data outputs of which would be converted to alphanumeric data in a digital equivalent data system (DEDS), which is the subject of the previous article. These alphanumeric data would be used to track the pearls through all stages of commerce, from the farmer to the retail customer.
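The DEDS lookup step, converting detected spectral peaks to an owner identity, can be sketched as a signature match; the element lines and the registry below are hypothetical examples, not values from the article:

```python
def identify_label(spectrum_peaks, registry, tol_kev=0.05):
    """Match detected XRF peak energies (keV) against each registered
    labeling-element mixture; return the first owner whose full
    signature is present in the spectrum, else None."""
    for owner, signature in registry.items():
        if all(any(abs(peak - line) <= tol_kev for peak in spectrum_peaks)
               for line in signature):
            return owner
    return None
```

A registry might map each pearl farmer to the K-alpha lines of their assigned elements, with the tolerance absorbing detector resolution.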
Detection and tracking of drones using advanced acoustic cameras
NASA Astrophysics Data System (ADS)
Busset, Joël.; Perrodin, Florian; Wellig, Peter; Ott, Beat; Heutschi, Kurt; Rühl, Torben; Nussbaumer, Thomas
2015-10-01
Recent events of drones flying over city centers, official buildings and nuclear installations have stressed the growing threat of uncontrolled drone proliferation and the lack of real countermeasures. Indeed, detecting and tracking them can be difficult with traditional techniques. A system to acoustically detect and track small moving objects, such as drones or ground robots, using acoustic cameras is presented. The described sensor is completely passive and is composed of a 120-element microphone array and a video camera. The acoustic imaging algorithm determines in real time the sound power level coming from all directions, using the phase of the sound signals. A tracking algorithm is then able to follow the sound sources. Additionally, a beamforming algorithm selectively extracts the sound coming from each tracked sound source. This extracted sound signal can be used to identify sound signatures and determine the type of object. The described techniques can detect and track any object that produces noise (engines, propellers, tires, etc.). It is a good complementary approach to more traditional techniques such as (i) optical and infrared cameras, for which the object may occupy only a few pixels and may be hidden by the blooming of a bright background, and (ii) radar or other echo-localization techniques, which suffer from the weakness of the echo signal coming back to the sensor. The detection distance depends on the type (frequency range) and volume of the noise emitted by the object, and on the background noise of the environment. Detection range and resilience to background noise were tested in both laboratory environments and outdoor conditions. It was determined that drones can be tracked at up to 160 to 250 meters, depending on their type. Speech extraction was also experimentally investigated: the speech of a person 80 to 100 meters away can be captured with acceptable intelligibility.
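The acoustic imaging step can be illustrated with delay-and-sum beamforming: align each microphone channel for a candidate arrival direction, sum, and measure the power; scanning directions and taking peaks yields the sound-power map. A minimal sketch with an illustrative 4-microphone array (not the 120-element system described):

```python
import numpy as np

def delay_and_sum_power(signals, mic_positions, direction, fs, c=343.0):
    """Steer the array toward `direction` (unit vector) by delaying each
    channel to align a plane wave from that direction, then sum the
    channels and return the power of the result."""
    n_mics, n_samples = signals.shape
    delays = mic_positions @ direction / c   # plane-wave delay per mic (s)
    delays = delays - delays.min()           # make all delays non-negative
    out = np.zeros(n_samples)
    for m in range(n_mics):
        shift = int(round(delays[m] * fs))
        out[: n_samples - shift] += signals[m, shift:]
    out /= n_mics
    return float(np.mean(out ** 2))
```

Steering toward the true source direction produces a coherent sum (high power); steering elsewhere leaves the channels out of phase.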
Automated object detection and tracking with a flash LiDAR system
NASA Astrophysics Data System (ADS)
Hammer, Marcus; Hebel, Marcus; Arens, Michael
2016-10-01
The detection of objects or persons is a common task in the fields of environment surveillance, object observation or danger defense. There are several approaches to automated detection with conventional imaging sensors as well as with LiDAR sensors, but for the latter, real-time detection is hampered by the scanning character, and therefore the data distortion, of most LiDAR systems. The paper presents a solution for real-time data acquisition with a flash LiDAR sensor featuring synchronous raw data analysis, point cloud calculation, object detection, calculation of the next best view and steering of the sensor's pan-tilt head. As a result, the attention is always focused on the object, independent of the object's behavior. Even for highly volatile and rapid changes in the direction of motion, the object is kept in the field of view. The experimental setup used in this paper is realized with an elementary person detection algorithm at medium distances (20 m to 60 m) to show the efficiency of the system for objects with a high angular speed. It is easy to replace the detection part with any other object detection algorithm, and thus it is easy to track nearly any object, for example a car, a boat or a UAV at various distances.
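The pan-tilt steering that keeps the detected object centered can be sketched as a proportional controller on the pixel error, converted to angles via a pinhole camera model; the gain and geometry here are illustrative assumptions, not the paper's controller:

```python
import math

def pan_tilt_update(pan, tilt, px, py, cx, cy, f_px, gain=0.8):
    """One control step: convert the tracked object's pixel offset from
    the image center (cx, cy) into angular errors using the focal length
    f_px (pixels), then move the head a fraction `gain` toward them."""
    err_pan = math.atan2(px - cx, f_px)    # horizontal angular error (rad)
    err_tilt = math.atan2(py - cy, f_px)   # vertical angular error (rad)
    return pan + gain * err_pan, tilt + gain * err_tilt
```

An object right of center increases the pan command; an object already centered leaves the head still.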
2009-09-23
CAPE CANAVERAL, Fla. – The mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station rolls back to reveal the United Launch Alliance Delta II rocket that will launch the Space Tracking and Surveillance System - Demonstrator into orbit. It is being launched by NASA for the Missile Defense System. The hour-long launch window opens at 8 a.m. EDT today. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Dimitri Gerondidakis
A preliminary experiment definition for video landmark acquisition and tracking
NASA Technical Reports Server (NTRS)
Schappell, R. T.; Tietz, J. C.; Hulstrom, R. L.; Cunningham, R. A.; Reel, G. M.
1976-01-01
Six scientific objectives/experiments were derived, consisting of agriculture/forestry/range resources, land use, geology/mineral resources, water resources, marine resources and environmental surveys. Computer calculations were then made of the spectral radiance signature of each of 25 candidate targets as seen by a satellite sensor system. An imaging system capable of recognizing, acquiring and tracking specific generic types of surface features was defined. A preliminary experiment definition and design of a video Landmark Acquisition and Tracking system is given. This device will search a 10-mile swath while orbiting the earth, looking for land/water interfaces such as coastlines and rivers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imam, Neena; Barhen, Jacob; Glover, Charles Wayne
2012-01-01
Multi-sensor networks may face resource limitations in a dynamically evolving multiple target tracking scenario. It is necessary to task the sensors efficiently so that the overall system performance is maximized within the system constraints. The central sensor resource manager may control the sensors to meet objective functions that are formulated to meet system goals such as minimization of track loss, maximization of probability of target detection, and minimization of track error. This paper discusses the variety of techniques that may be utilized to optimize sensor performance for either near term gain or future reward over a longer time horizon.
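One simple instance of such sensor tasking is a greedy one-to-one assignment over an objective-gain matrix; this sketch illustrates the idea, not any specific method surveyed in the paper:

```python
def greedy_tasking(gain):
    """Greedily assign sensors to targets: repeatedly commit the
    (sensor, target) pair with the highest remaining objective gain
    (e.g. detection probability or expected track-error reduction),
    until sensors or targets run out. gain[s][t] scores sensor s
    observing target t."""
    sensors = set(range(len(gain)))
    targets = set(range(len(gain[0])))
    assignment = {}
    while sensors and targets:
        s, t = max(((s, t) for s in sensors for t in targets),
                   key=lambda st: gain[st[0]][st[1]])
        assignment[s] = t
        sensors.remove(s)
        targets.remove(t)
    return assignment
```

Greedy choices optimize only the near-term gain; the longer-horizon reward formulations the paper discusses would replace this loop with a lookahead optimization.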
Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm.
Tombu, Michael; Seiffert, Adriane E
2011-04-01
People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target-distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking--one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone.
Okoniewska, Barbara; Graham, Alecia; Gavrilova, Marina; Wah, Dannel; Gilgen, Jonathan; Coke, Jason; Burden, Jack; Nayyar, Shikha; Kaunda, Joseph; Yergens, Dean; Baylis, Barry; Ghali, William A
2012-01-01
Real-time locating systems (RTLS) have the potential to enhance healthcare systems through the live tracking of assets, patients and staff. This study evaluated a commercially available RTLS system deployed in a clinical setting, with three objectives: (1) assessment of the location accuracy of the technology in a clinical setting; (2) assessment of the value of asset tracking to staff; and (3) assessment of threshold monitoring applications developed for patient tracking and inventory control. Simulated daily activities were monitored by RTLS and compared with direct research team observations. Staff surveys and interviews concerning the system's effectiveness and accuracy were also conducted and analyzed. The study showed only modest location accuracy, and mixed reactions in staff interviews. These findings reveal that the technology needs to be refined further for better specific location accuracy before full-scale implementation can be recommended. PMID:22298566
Player-Tracking Technology: Half-Full or Half-Empty Glass?
Buchheit, Martin; Simpson, Ben Michael
2017-04-01
With the ongoing development of microtechnology, player tracking has become one of the most important components of load monitoring in team sports. The 3 main objectives of player tracking are better understanding of practice (provide an objective, a posteriori evaluation of external load and locomotor demands of any given session or match), optimization of training-load patterns at the team level, and decision making on individual players' training programs to improve performance and prevent injuries (eg, top-up training vs unloading sequences, return to play progression). This paper discusses the basics of a simple tracking approach and the need to integrate multiple systems. The limitations of some of the most used variables in the field (including metabolic-power measures) are debated, and innovative and potentially new powerful variables are presented. The foundations of a successful player-monitoring system are probably laid on the pitch first, in the way practitioners collect their own tracking data, given the limitations of each variable, and how they report and use all this information, rather than in the technology and the variables per se. Overall, the decision to use any tracking technology or new variable should always be considered with a cost/benefit approach (ie, cost, ease of use, portability, manpower/ability to affect the training program).
Object tracking on mobile devices using binary descriptors
NASA Astrophysics Data System (ADS)
Savakis, Andreas; Quraishi, Mohammad Faiz; Minnehan, Breton
2015-03-01
With the growing ubiquity of mobile devices, advanced applications are relying on computer vision techniques to provide novel experiences for users. Currently, few tracking approaches take into consideration the resource constraints on mobile devices. Designing efficient tracking algorithms and optimizing performance for mobile devices can result in better and more efficient tracking for applications, such as augmented reality. In this paper, we use binary descriptors, including Fast Retina Keypoint (FREAK), Oriented FAST and Rotated BRIEF (ORB), Binary Robust Independent Elementary Features (BRIEF), and Binary Robust Invariant Scalable Keypoints (BRISK), to obtain real-time tracking performance on mobile devices. We consider both Google's Android and Apple's iOS operating systems to implement our tracking approach. The Android implementation uses Android's Native Development Kit (NDK), which gives the performance benefits of native code as well as access to legacy libraries. The iOS implementation uses both the native Objective-C and C++ programming languages. We also introduce simplified versions of the BRIEF and BRISK descriptors that improve processing speed without compromising tracking accuracy.
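Matching such binary descriptors reduces to nearest-neighbor search under the Hamming distance over the descriptor bytes; a brute-force NumPy sketch (the 40-bit cutoff is an illustrative threshold, not from the paper):

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=40):
    """Brute-force matching of binary descriptors (e.g. 32-byte
    ORB/BRIEF/BRISK descriptors) by Hamming distance, keeping the best
    match for each query if it is within `max_dist` bits. Returns a
    list of (index_in_a, index_in_b) pairs."""
    # Popcount lookup table for one byte.
    popcount = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint16)
    matches = []
    for i, d in enumerate(desc_a):
        dists = popcount[np.bitwise_xor(desc_b, d)].sum(axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j))
    return matches
```

XOR-plus-popcount is exactly why binary descriptors are attractive on mobile hardware: the inner loop is a handful of integer instructions per byte.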
Upside-down: Perceived space affects object-based attention.
Papenmeier, Frank; Meyerhoff, Hauke S; Brockhoff, Alisa; Jahn, Georg; Huff, Markus
2017-07-01
Object-based attention influences the subjective metrics of surrounding space. However, does perceived space influence object-based attention, as well? We used an attentive tracking task that required sustained object-based attention while objects moved within a tracking space. We manipulated perceived space through the availability of depth cues and varied the orientation of the tracking space. When rich depth cues were available (appearance of a voluminous tracking space), the upside-down orientation of the tracking space (objects appeared to move high on a ceiling) caused a pronounced impairment of tracking performance compared with an upright orientation of the tracking space (objects appeared to move on a floor plane). In contrast, this was not the case when reduced depth cues were available (appearance of a flat tracking space). With a preregistered second experiment, we showed that those effects were driven by scene-based depth cues and not object-based depth cues. We conclude that perceived space affects object-based attention and that object-based attention and perceived space are closely interlinked. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei
2012-12-01
Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
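The mapping at the heart of this approach sends each symmetric positive definite covariance matrix to an ordinary vector via the matrix logarithm, after which Euclidean operations (means, subspaces) respect the log-Euclidean metric. A minimal sketch; the vectorization convention is one common choice, not necessarily the paper's:

```python
import numpy as np

def log_euclidean_vector(cov, eps=1e-6):
    """Map an SPD covariance matrix into the log-Euclidean vector
    space: matrix logarithm via eigendecomposition, then a flattening
    of the symmetric result that preserves the Frobenius norm."""
    w, v = np.linalg.eigh(cov)
    log_cov = (v * np.log(np.maximum(w, eps))) @ v.T  # v diag(log w) v^T
    # Off-diagonal entries appear twice in the symmetric matrix, so
    # weight them by sqrt(2) to preserve the norm.
    iu = np.triu_indices_from(log_cov, k=1)
    return np.concatenate([np.diag(log_cov), np.sqrt(2) * log_cov[iu]])
```

Incremental subspace learning can then run on these vectors with standard linear algebra, which is the point of the log-Euclidean formulation.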
New platform for evaluating ultrasound-guided interventional technologies
NASA Astrophysics Data System (ADS)
Kim, Younsu; Guo, Xiaoyu; Boctor, Emad M.
2016-04-01
Ultrasound-guided needle tracking systems are frequently used in surgical procedures. Various needle tracking technologies have been developed using ultrasound, electromagnetic sensors, and optical sensors. To evaluate these new needle tracking technologies, 3D volume information is often acquired to compute the actual distance from the needle tip to the target object. The image-guidance conditions for comparison are often inconsistent due to the ultrasound beam thickness. Since 3D volumes are necessary, there is often some time delay between the surgical procedure and the evaluation. These evaluation methods generally measure only the final needle location, because they interrupt the surgical procedure. The main contribution of this work is a new platform for evaluating needle tracking systems in real time, resolving the problems stated above. We developed new tools to evaluate the precise distance between the needle tip and the target object. A PZT element transmitting unit is designed in the shape of a needle introducer so that it can be inserted into the needle. We collect time-of-flight and amplitude information in real time. We propose two systems to collect ultrasound signals, and demonstrate this platform on an ultrasound DAQ system and a cost-effective FPGA board. The results of a chicken breast experiment show the feasibility of tracking a time series of needle-tip distances. We performed validation experiments with a plastisol phantom and have shown that the preliminary data fit a linear regression model with an RMSE of less than 0.6 mm. Our platform can be applied to more general needle tracking methods using other forms of guidance.
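The linear-regression validation can be reproduced in outline with a least-squares fit of distance against time-of-flight, reporting the residual RMSE; the numbers in the test are synthetic, not the phantom data:

```python
import numpy as np

def fit_tof_to_distance(tof_us, dist_mm):
    """Least-squares linear calibration from time-of-flight readings
    (microseconds) to needle-tip-to-target distance (mm).
    Returns (slope, intercept, rmse)."""
    A = np.vstack([tof_us, np.ones_like(tof_us)]).T
    (slope, intercept), *_ = np.linalg.lstsq(A, dist_mm, rcond=None)
    residuals = dist_mm - (slope * tof_us + intercept)
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    return float(slope), float(intercept), rmse
```

An RMSE below the clinical tolerance (here, 0.6 mm in the paper's phantom study) is the acceptance criterion for such a calibration.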
Autonomous Flight Safety System - Phase III
NASA Technical Reports Server (NTRS)
2008-01-01
The Autonomous Flight Safety System (AFSS) is a joint KSC and Wallops Flight Facility project that uses tracking and attitude data from onboard Global Positioning System (GPS) and inertial measurement unit (IMU) sensors and configurable rule-based algorithms to make flight termination decisions. AFSS objectives are to increase launch capabilities by permitting launches from locations without range safety infrastructure, reduce costs by eliminating some downrange tracking and communication assets, and reduce the reaction time for flight termination decisions.
NASA Astrophysics Data System (ADS)
Gambi, J. M.; García del Pino, M. L.; Gschwindl, J.; Weinmüller, E. B.
2017-12-01
This paper deals with the problem of throwing middle-sized low Earth orbit debris objects into the atmosphere via laser ablation. The post-Newtonian equations here provided allow (hypothetical) space-based acquisition, pointing and tracking systems endowed with very narrow laser beams to reach the pointing accuracy presently prescribed. In fact, whatever the orbital elements of these objects may be, these equations will allow the operators to account for the corrections needed to balance the deviations of the line of sight directions due to the curvature of the paths the laser beams are to travel along. To minimize the respective corrections, the systems will have to perform initial positioning manoeuvres, and the shooting point-ahead angles will have to be adapted in real time. The enclosed numerical experiments suggest that neglecting these measures will cause fatal errors, due to differences in the actual locations of the objects comparable to their size.
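As a rough first-order illustration only (the paper's post-Newtonian equations are far more detailed), the classical point-ahead angle for a target with transverse velocity v_t is about 2 v_t / c, with the path-curvature corrections the paper derives entering as small additional deflections:

```python
def point_ahead_angle(v_transverse, extra_deflection_rad=0.0):
    """First-order point-ahead angle (radians) for aiming a laser at a
    moving target: the beam must lead the instantaneous line of sight
    by roughly 2*v_t/c. `extra_deflection_rad` is a placeholder for the
    model-supplied curvature corrections."""
    c = 299_792_458.0  # speed of light, m/s
    return 2.0 * v_transverse / c + extra_deflection_rad
```

For a LEO debris object with a transverse speed near 7.6 km/s this gives roughly 5e-5 rad (about 10 arcseconds), which is why sub-arcsecond correction terms matter for very narrow beams.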
Multi-Object Tracking with Correlation Filter for Autonomous Vehicle.
Zhao, Dawei; Fu, Hao; Xiao, Liang; Wu, Tao; Dai, Bin
2018-06-22
Multi-object tracking is a crucial problem for autonomous vehicles. Most state-of-the-art approaches adopt the tracking-by-detection strategy, a two-step procedure consisting of a detection module and a tracking module. In this paper, we improve both steps. We improve the detection module by incorporating temporal information, which is beneficial for detecting small objects. For the tracking module, we propose a novel compressed deep Convolutional Neural Network (CNN) feature-based Correlation Filter tracker. By carefully integrating these two modules, the proposed multi-object tracking approach gains the ability of re-identification (ReID) once a tracked object is lost. Extensive experiments were performed on the KITTI and MOT2015 tracking benchmarks. Results indicate that our approach outperforms most state-of-the-art tracking approaches.
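The frequency-domain training and peak localization underlying any correlation filter tracker can be sketched as follows; this is a minimal single-channel MOSSE-style filter, a simplified stand-in for the paper's compressed-CNN-feature tracker:

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-3):
    """Train a correlation filter on one template patch: the filter is
    the Fourier-domain ratio of a desired Gaussian response (peaked at
    the patch center) to the patch spectrum, regularized by lam."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-(((ys - h // 2) ** 2) + ((xs - w // 2) ** 2)) / (2 * sigma ** 2))
    G, F = np.fft.fft2(g), np.fft.fft2(patch)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def locate(filter_hat, patch):
    """Correlate a search patch with the trained filter and return the
    (row, col) of the response peak, i.e. the estimated target shift."""
    response = np.real(np.fft.ifft2(filter_hat * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)
```

If the search patch is the template circularly shifted by (dy, dx), the response peak moves from the center by exactly that shift, which is how the tracker reads off target motion frame to frame.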
Atmospheric-Fade-Tolerant Tracking and Pointing in Wireless Optical Communication
NASA Technical Reports Server (NTRS)
Ortiz, Gerardo; Lee, Shinhak
2003-01-01
An acquisition, tracking, and pointing (ATP) system, under development at the time of reporting the information for this article, is intended to enable a terminal in a free-space optical communication system to continue to aim its transmitting laser beam toward a receiver at a remote terminal when the laser beacon signal from the remote terminal temporarily fades or drops out of sight altogether. Such fades and dropouts can be caused by adverse atmospheric conditions (e.g., rain or clouds). They can also occur when intervening objects block the line of sight between terminals as a result of motions of those objects or of either or both terminals.
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong; Hsiung, Pao-Ann; Wan, Chieh-Hao; Koong, Chorng-Shiuh; Liu, Tang-Kun; Yang, Yuanfan; Lin, Chu-Hsing; Chu, William Cheng-Chung
2009-02-01
A billiard ball tracking system is designed to combine with a visual guide interface to instruct users for a reliable strike. The integrated system runs on a PC platform. The system makes use of a vision system for cue ball, object ball and cue stick tracking. A least-squares error calibration process correlates the real-world and the virtual-world pool ball coordinates for a precise guidance line calculation. Users are able to adjust the cue stick on the pool table according to a visual guidance line instruction displayed on a PC monitor. The ideal visual guidance line extended from the cue ball is calculated based on a collision motion analysis. In addition to calculating the ideal visual guide, the factors influencing selection of the best shot among different object balls and pockets are explored. It is found that a tolerance angle around the ideal line for the object ball to roll into a pocket determines the difficulty of a strike. This angle depends in turn on the distance from the pocket to the object, the distance from the object to the cue ball, and the angle between these two vectors. Simulation results for tolerance angles as a function of these quantities are given. A selected object ball was tested extensively with respect to various geometrical parameters with and without using our integrated system. Players with different proficiency levels were selected for the experiment. The results indicate that all players benefit from our proposed visual guidance system in enhancing their skills, while low-skill players show the maximum enhancement in skill with the help of our system. All exhibit enhanced maximum and average hit-in rates. Experimental results on hit-in rates have shown a pattern consistent with that of the analysis. The hit-in rate is thus tightly connected with the analyzed tolerance angles for sinking object balls into a target pocket. 
These results demonstrate the efficiency of our system, and the analysis results can be used to attain an efficient game-playing strategy.
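The reported dependence of the tolerance angle on the two distances and the angle between them can be mimicked with a toy model; the formulas below are illustrative assumptions, not the paper's analysis:

```python
import math

def pocket_tolerance_angle(d_pocket, pocket_half_width, d_cue, cut_angle_rad):
    """Toy tolerance model: the object ball may deviate by roughly
    atan(half-width / distance) from the ideal line and still drop, and
    the allowable aiming error at the cue shrinks with the cue-to-object
    distance and with steeper cut angles. Returns (object-ball
    tolerance, cue aiming tolerance), both in radians."""
    obj_tol = math.atan2(pocket_half_width, d_pocket)
    aim_tol = obj_tol * (d_pocket / (d_pocket + d_cue)) * math.cos(cut_angle_rad)
    return obj_tol, aim_tol
```

The qualitative behavior matches the abstract's finding: longer pocket and cue distances and a larger angle between the two vectors all tighten the tolerance, making the shot harder.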
NASA Technical Reports Server (NTRS)
Lewis, Steven J.; Palacios, David M.
2013-01-01
This software can track multiple moving objects within a video stream simultaneously, use visual features to aid in the tracking, and initiate tracks based on object detection in a subregion. A simple programmatic interface allows plugging into larger image chain modeling suites. It extracts unique visual features for aid in tracking and later analysis, and includes sub-functionality for extracting visual features about an object identified within an image frame. Tracker Toolkit utilizes a feature extraction algorithm to tag each object with metadata features about its size, shape, color, and movement. Its functionality is independent of the scale of objects within a scene. The only assumption made on the tracked objects is that they move. There are no constraints on size within the scene, shape, or type of movement. The Tracker Toolkit is also capable of following an arbitrary number of objects in the same scene, identifying and propagating the track of each object from frame to frame. Target objects may be specified for tracking beforehand, or may be dynamically discovered within a tripwire region. Initialization of the Tracker Toolkit algorithm includes two steps: Initializing the data structures for tracked target objects, including targets preselected for tracking; and initializing the tripwire region. If no tripwire region is desired, this step is skipped. The tripwire region is an area within the frames that is always checked for new objects, and all new objects discovered within the region will be tracked until lost (by leaving the frame, stopping, or blending in to the background).
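The tripwire-based track initiation described above can be sketched as follows; the rectangle representation and the 5-pixel novelty threshold are illustrative assumptions, not Tracker Toolkit internals:

```python
def update_tracks(tracks, detections, tripwire, next_id):
    """Tripwire track initiation: any detection whose centroid (x, y)
    falls inside the tripwire rectangle (xmin, ymin, xmax, ymax) and is
    not already near an existing track spawns a new track id."""
    xmin, ymin, xmax, ymax = tripwire
    for (x, y) in detections:
        inside = xmin <= x <= xmax and ymin <= y <= ymax
        claimed = any(abs(x - tx) + abs(y - ty) < 5
                      for tx, ty in tracks.values())
        if inside and not claimed:
            tracks[next_id] = (x, y)
            next_id += 1
        # (frame-to-frame matching of existing tracks omitted here)
    return tracks, next_id
```

Detections outside the region are ignored for initiation, matching the toolkit's behavior of only discovering new objects inside the tripwire while continuing to follow objects everywhere.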
What Are We Tracking ... and Why?
NASA Astrophysics Data System (ADS)
Suarez-Sola, I.; Davey, A.; Hourcle, J. A.
2008-12-01
It is impossible to define what adequate provenance is without knowing who is asking the question. What determines sufficient provenance information is not a function of the data, but of the question being asked of it. Many of these questions are asked by people not affiliated with the mission and possibly from different disciplines. To plan for every conceivable question would require a significant burden on the data systems that are designed to answer the mission's science objectives. Provenance is further complicated as each system might have a different definition of 'data set'. Is it the raw instrument results? Is it the result of numerical processing? Does it include the associated metadata? Does it include packaging? Depending on how a system defines 'data set', it may not be able to track provenance with sufficient granularity to ask the desired question, or we may end up with a complex web of relationships that significantly increases the system complexity. System designers must also remember that data archives are not a closed system. We need mechanisms for tracking not only the provenance relationships between data objects and the systems that generate them, but also from journal articles back to the data that was used to support the research. Simply creating a mirror of the data used, as done in other scientific disciplines, is unrealistic for terabyte and petabyte scale data sets. We present work by the Virtual Solar Observatory on the assignment of identifiers that could be used for tracking provenance and compare it to other proposed standards in the scientific and library science communities. We use the Solar Dynamics Observatory, STEREO and Hinode missions as examples where the concept of 'data set' breaks many systems for citing data.
Utku, Semih; Özcanhan, Mehmet Hilal; Unluturk, Mehmet Suleyman
2016-04-01
Patient delivery time is no longer considered the only critical factor in ambulatory services. Presently, five clinical performance indicators are used to assess patient satisfaction. Unfortunately, the emergency ambulance services in rapidly growing metropolitan areas do not meet current satisfaction expectations because of human errors in the management of the objects onboard the ambulances. However, human involvement in the information management of emergency interventions can be reduced by electronic tracking of the personnel, assets, consumables and drugs (PACD) carried in the ambulances. Electronic tracking needs the support of automation software, which should be integrated into the overall hospital information system. Our work presents a complete solution based on a centralized database supported by radio frequency identification (RFID) and Bluetooth low energy (BLE) identification and tracking technologies. Each object in an ambulance is identified and tracked by the best-suited technology. The automated identification and tracking reduces manual paper documentation and frees the personnel to focus better on medical activities. The presence and amounts of the PACD are automatically monitored, with warnings about their depletion, absence or maintenance dates. The computerized two-way hospital-ambulance communication link provides information sharing and instantaneous feedback for better and faster diagnosis decisions. A fully implemented system is presented, with detailed hardware and software descriptions. The benefits and clinical outcomes of the proposed system are discussed, which lead to improved personnel efficiency and more effective interventions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Assessing the performance of a motion tracking system based on optical joint transform correlation
NASA Astrophysics Data System (ADS)
Elbouz, M.; Alfalou, A.; Brosseau, C.; Ben Haj Yahia, N.; Alam, M. S.
2015-08-01
We present an optimized system specially designed for the tracking and recognition of moving subjects in a confined environment (such as an elderly person remaining at home). In the first step of our study, we use a VanderLugt correlator (VLC) with an adapted pre-processing of the input plane and a post-processing of the correlation plane via a nonlinear function, allowing us to make a robust decision. The second step is based on an optical joint transform correlation (JTC) system (NZ-NL-correlation JTC) for achieving improved detection and tracking of moving persons in a confined space. The proposed system has significantly superior discrimination and robustness capabilities, allowing it to detect an unknown target in an input scene and to determine the target's trajectory when the target is in motion. The system offers robust tracking of a moving target in several scenarios, such as rotational variation of input faces. Test results obtained using various real-life video sequences show that the proposed system is particularly suitable for real-time detection and tracking of moving objects.
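Digitally, the decision step of either correlator amounts to locating the peak of a reference-scene cross-correlation; a linear FFT sketch (the system's nonlinear pre- and post-processing is omitted):

```python
import numpy as np

def correlate_locate(reference, scene):
    """Digital analogue of the correlation an optical (VLC/JTC)
    correlator performs: zero-pad the reference, cross-correlate with
    the scene via FFTs, and return the (row, col) of the correlation
    peak, i.e. the top-left corner of the detected target."""
    ref = np.zeros_like(scene)
    h, w = reference.shape
    ref[:h, :w] = reference
    corr = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(ref))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(int(i) for i in peak)
```

Tracking a moving target is then a matter of repeating this localization frame by frame; the optical implementation performs the same operation at the speed of light in the Fourier plane.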
A new user-assisted segmentation and tracking technique for an object-based video editing system
NASA Astrophysics Data System (ADS)
Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark
2004-03-01
This paper presents a semi-automatic segmentation method which can be used to generate video object planes (VOPs) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around the object boundaries, and the selected objects are then continuously separated from the unselected areas through time evolution in the image sequence. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the complete, meaningful visual object of interest to be segmented and determines the precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results that could be suitable for many digital video applications such as multimedia content authoring, content-based coding and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.
Real-time object tracking based on scale-invariant features employing bio-inspired hardware.
Yasukawa, Shinsuke; Okuno, Hirotsugu; Ishii, Kazuo; Yagi, Tetsuya
2016-09-01
We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focus on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering and a configurable filter size. The FPGA is used to pipeline process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video. Copyright © 2016 Elsevier Ltd. All rights reserved.
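The frequency-band parallel processing this sensor performs in hardware resembles the difference-of-Gaussians (DoG) band decomposition at the core of SIFT-style detectors. A rough software sketch follows (pure NumPy; the sigmas and the impulse image are illustrative, not the system's actual filter configuration):

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, normalized to unit sum."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: filter along rows, then along columns."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 0, rows, k, mode='same')

def dog_bands(img, sigmas=(1.0, 2.0, 4.0)):
    """Difference-of-Gaussians responses, one per adjacent sigma pair.
    Each band isolates structure in one spatial-frequency range -- the
    quantity a SIFT-style detector searches for extrema."""
    blurred = [blur(img, s) for s in sigmas]
    return [blurred[i] - blurred[i + 1] for i in range(len(blurred) - 1)]

img = np.zeros((32, 32))
img[16, 16] = 1.0  # impulse: the DoG response peaks at the impulse location
bands = dog_bands(img)
peak = np.unravel_index(np.argmax(bands[0]), bands[0].shape)
print(peak)  # (16, 16)
```

In the paper's system the spatial filtering is done instantaneously by the MOS resistive network and the bands are pipelined on the FPGA; the sketch only shows what quantity those stages compute.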
Robust object tracking techniques for vision-based 3D motion analysis applications
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.
2016-04-01
Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed and the potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes two to four machine vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, was developed and tested. Evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.
Design of tracking and detecting lens system by diffractive optical method
NASA Astrophysics Data System (ADS)
Yang, Jiang; Qi, Bo; Ren, Ge; Zhou, Jianwei
2016-10-01
Many target-tracking applications require an optical system to acquire the target for tracking and identification. This paper describes a new detecting optical system that can provide automatic flying-object detection, tracking and measurement in the visible band. The main feature of the detecting lens system is the combination of diffractive optics with traditional lens design by a technique invented by Schupmann. Diffractive lenses have great potential for developing larger-aperture, lightweight optics. First, the optical system scheme is described. Then the Schupmann achromatic principle with a diffractive lens and corrective optics is introduced. According to the technical features and requirements of the optical imaging system for detection and tracking, we designed a lens system with a flat-surface Fresnel lens whose chromatic aberration is cancelled by another flat-surface Fresnel lens; the system has an effective focal length of 1980 mm, an F-number of F/9.9, a field of view of 2ω = 14.2', a spatial resolution of 46 lp/mm and a working wavelength range of 0.6-0.85 µm. Finally, the system is compact and easy to fabricate and assemble; the diffuse spot size, MTF and other analyses indicate good performance.
Multiple Drosophila Tracking System with Heading Direction
Sirigrivatanawong, Pudith; Arai, Shogo; Thoma, Vladimiros; Hashimoto, Koichi
2017-01-01
Machine vision systems have been widely used for image analysis, especially analysis beyond human ability. In biology, studies of behavior help scientists to understand the relationship between sensory stimuli and animal responses. This typically requires the analysis and quantification of animal locomotion. In our work, we focus on the analysis of the locomotion of the fruit fly Drosophila melanogaster, a widely used model organism in biological research. Our system consists of two components: fly detection and tracking. Our system provides the ability to extract a group of flies as the objects of concern and furthermore determines the heading direction of each fly. As each fly moves, the system states are refined with a Kalman filter to obtain the optimal estimation. For the tracking step, combining information such as position and heading direction with assignment algorithms gives a successful tracking result. The use of heading direction increases the system efficiency when dealing with identity loss and fly-swapping situations. The system can also operate with a variety of videos with different light intensities. PMID:28067800
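The Kalman refinement step mentioned above can be illustrated with a standard linear filter under a constant-velocity model. This is a generic sketch, not the paper's exact state model; the matrices, noise levels, and the straight-line detections are all invented for illustration:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict the next state and its covariance.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z.
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Constant-velocity model for one fly: state = [px, py, vx, vy].
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.]])   # only position is observed
Q = 0.01 * np.eye(4)                           # process noise
R = 0.25 * np.eye(2)                           # measurement noise

x = np.array([0., 0., 1., 0.])   # initial guess: moving right 1 px/frame
P = np.eye(4)
for t in range(1, 11):           # detections along a straight path
    z = np.array([t * 1.0, 0.0])
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(np.round(x[:2], 2))        # estimated position converges to (10, 0)
```

In a multi-fly tracker each fly would carry its own (x, P) pair, and the predicted positions (plus heading) would feed the assignment algorithm that links detections to identities.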
Self-Motion Impairs Multiple-Object Tracking
ERIC Educational Resources Information Center
Thomas, Laura E.; Seiffert, Adriane E.
2010-01-01
Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement…
Structure preserving clustering-object tracking via subgroup motion pattern segmentation
NASA Astrophysics Data System (ADS)
Fan, Zheyi; Zhu, Yixuan; Jiang, Jiao; Weng, Shuqin; Liu, Zhiwen
2018-01-01
Tracking clustering objects with similar appearances simultaneously in collective scenes is a challenging task in the field of collective motion analysis. Recent work on clustering-object tracking often suffers from poor tracking accuracy and poor real-time performance due to the neglect or misjudgment of the motion differences among objects. To address this problem, we propose a subgroup motion pattern segmentation framework based on a multilayer clustering structure and establish spatial constraints only among objects in the same subgroup, which entails consistent motion direction and close spatial proximity. In addition, the subgroup segmentation results are updated dynamically because crowd motion patterns are changeable and affected by objects' destinations and scene structures. The spatial structure information combined with the appearance similarity information is used in the structure preserving object tracking framework to track objects. Extensive experiments conducted on several datasets containing multiple real-world crowd scenes validate the accuracy and the robustness of the presented algorithm for tracking objects in collective scenes.
2009-09-23
CAPE CANAVERAL, Fla. – Approaching rain clouds at dawn hover over Central Florida's east coast, forcing the scrub of the launch of the Space Tracking and Surveillance System - Demonstrator spacecraft from Launch Pad 17-B at Cape Canaveral Air Force Station. STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detection, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 24. Photo credit: NASA/Jack Pfaller
2009-08-27
CAPE CANAVERAL, Fla. – The enclosed Space Tracking and Surveillance System – Demonstrators, or STSS-Demo, spacecraft arrives on Cape Canaveral Air Force Station's Launch Pad 17-B. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jack Pfaller
Video Guidance Sensors Using Remotely Activated Targets
NASA Technical Reports Server (NTRS)
Bryan, Thomas C.; Howard, Richard T.; Book, Michael L.
2004-01-01
Four updated video guidance sensor (VGS) systems have been proposed. As described in a previous NASA Tech Briefs article, a VGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. The VGS provides relative position and attitude (6-DOF) information between the VGS and its target. In the original intended application, the two vehicles would be spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In the first two of the four VGS systems as now proposed, the tracked vehicle would include active targets that would light up on command from the tracking vehicle, and a video camera on the tracking vehicle would be synchronized with, and would acquire images of, the active targets. The video camera would also acquire background images during the periods between target illuminations. The images would be digitized and the background images would be subtracted from the illuminated-target images. Then the position and orientation of the tracked vehicle relative to the tracking vehicle would be computed from the known geometric relationships among the positions of the targets in the image, the positions of the targets relative to each other and to the rest of the tracked vehicle, and the position and orientation of the video camera relative to the rest of the tracking vehicle. The major difference between the first two proposed systems and prior active-target VGS systems lies in the techniques for synchronizing the flashing of the active targets with the digitization and processing of image data. In the prior active-target VGS systems, synchronization was effected, variously, by use of either a wire connection or the Global Positioning System (GPS). 
In three of the proposed VGS systems, the synchronizing signal would be generated on, and transmitted from, the tracking vehicle. In the first proposed VGS system, the tracking vehicle would transmit a pulse of light. Upon reception of the pulse, circuitry on the tracked vehicle would activate the target lights. During the pulse, the target image acquired by the camera would be digitized. When the pulse was turned off, the target lights would be turned off and the background video image would be digitized. The second proposed system would function similarly to the first proposed system, except that the transmitted synchronizing signal would be a radio pulse instead of a light pulse. In this system, the signal receptor would be a rectifying antenna. If the signal contained sufficient power, the output of the rectifying antenna could be used to activate the target lights, making it unnecessary to include a battery or other power supply for the targets on the tracked vehicle.
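The illuminated-minus-background differencing at the heart of these VGS proposals can be sketched digitally. This is a toy example with invented frame sizes and intensities, not the flight hardware's processing: the frame with targets lit has the background frame subtracted, and pixels that brightened past a threshold are treated as target pixels whose centroid feeds the pose computation:

```python
import numpy as np

def isolate_targets(lit_frame, background_frame, threshold=0.2):
    """Subtract the background frame (targets off) from the illuminated
    frame (targets on); pixels that brightened past the threshold are
    treated as target pixels."""
    diff = lit_frame.astype(float) - background_frame.astype(float)
    return diff > threshold

def centroid(mask):
    """Centroid (row, col) of all target pixels in the boolean mask."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

bg = 0.1 * np.ones((16, 16))   # background exposure, targets unlit
lit = bg.copy()
lit[4:6, 9:11] = 1.0           # one glowing target in the lit exposure
mask = isolate_targets(lit, bg)
print(centroid(mask))          # (4.5, 9.5)
```

The subtraction is what makes the synchronization schemes above matter: the background frame must be captured while the targets are genuinely off, which is exactly what the light-pulse or radio-pulse timing guarantees.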
Eastern Space and Missile Center (ESMC) Capability.
1983-09-16
Sites Fig. 4 ETR Tracking Telescopes. A unique feature at the ETR is the ability to compute a ... The Contraves Model 151 includes a TV camera, a wideband ... main objective lens. The Contraves wideband transmitter sends video signals from either the main objective TV or the DAGE wide-angle TV system to the ... Modified main objective plus the time of day to 0.1 second. To use the ESMC precise 2400 b/s acquisition data system, the Contraves computer system
Image-based tracking: a new emerging standard
NASA Astrophysics Data System (ADS)
Antonisse, Jim; Randall, Scott
2012-06-01
Automated moving object detection and tracking are increasingly viewed as solutions to the enormous data volumes resulting from emerging wide-area persistent surveillance systems. In a previous paper we described a Motion Imagery Standards Board (MISB) initiative to help address this problem: the specification of a micro-architecture for the automatic extraction of motion indicators and tracks. This paper reports on the development of an extended specification of the plug-and-play tracking micro-architecture, on its status as an emerging standard across DoD, the Intelligence Community, and NATO.
Effective real-time vehicle tracking using discriminative sparse coding on local patches
NASA Astrophysics Data System (ADS)
Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei
2016-01-01
A visual tracking framework that provides an object detector and tracker, which focuses on effective and efficient visual tracking in surveillance of real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which is different from appearance model-matching approaches. Through a feature representation of discriminative sparse coding on local patches called DSCLP, which trains a dictionary on local clustered patches sampled from both positive and negative datasets, the discriminative power and robustness have been improved remarkably, which makes our method more robust to a complex realistic setting with all kinds of degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables our framework to achieve real-time tracking performance even in a high-definition sequence with heavy traffic. Experiment results show that our work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness and exhibits increased robustness in a complex real-world scenario with degraded image quality caused by vehicle occlusion, image blur of rain or fog, and change in viewpoint or scale.
Tracking cotton fiber quality and foreign matter through a stripper harvester
USDA-ARS?s Scientific Manuscript database
The main objective of this project was to track cotton fiber quality and foreign matter content throughout the harvesting units and conveying/cleaning systems on a brush-roll stripper harvester. Seed cotton samples were collected at six locations in 2011 and five in 2012 including: 1) hand-picked fr...
Tracker: Image-Processing and Object-Tracking System Developed
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Wright, Theodore W.
1999-01-01
Tracker is an object-tracking and image-processing program designed and developed at the NASA Lewis Research Center to help with the analysis of images generated by microgravity combustion and fluid physics experiments. Experiments are often recorded on film or videotape for analysis later. Tracker automates the process of examining each frame of the recorded experiment, performing image-processing operations to bring out the desired detail, and recording the positions of the objects of interest. It can load sequences of images from disk files or acquire images (via a frame grabber) from film transports, videotape, laser disks, or a live camera. Tracker controls the image source to automatically advance to the next frame. It can employ a large array of image-processing operations to enhance the detail of the acquired images and can analyze an arbitrarily large number of objects simultaneously. Several different tracking algorithms are available, including conventional threshold and correlation-based techniques, and more esoteric procedures such as "snake" tracking and automated recognition of character data in the image. The Tracker software was written to be operated by researchers, thus every attempt was made to make the software as user friendly and self-explanatory as possible. Tracker is used by most of the microgravity combustion and fluid physics experiments performed by Lewis, and by visiting researchers. This includes experiments performed on the space shuttles, Mir, sounding rockets, zero-g research airplanes, drop towers, and ground-based laboratories. This software automates the analysis of the flame's or liquid's physical parameters such as position, velocity, acceleration, size, shape, intensity characteristics, color, and centroid, as well as a number of other measurements. It can perform these operations on multiple objects simultaneously. Another key feature of Tracker is that it performs optical character recognition (OCR).
This feature is useful in extracting numerical instrumentation data that are embedded in images. All the results are saved in files for further data reduction and graphing. There are currently three Tracking Systems (workstations) operating near the laboratories and offices of Lewis Microgravity Science Division researchers. These systems are used independently by students, scientists, and university-based principal investigators. The researchers bring their tapes or films to the workstation and perform the tracking analysis. The resultant data files generated by the tracking process can then be analyzed on the spot, although most of the time researchers prefer to transfer them via the network to their offices for further analysis or plotting. In addition, many researchers have installed Tracker on computers in their offices for desktop analysis of digital image sequences, which can be digitized by the Tracking System or some other means. Tracker has not only provided a capability to efficiently and automatically analyze large volumes of data, saving many hours of tedious work, but has also provided new capabilities to extract valuable information and phenomena that were heretofore undetected and unexploited.
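The conventional threshold-based tracking that Tracker offers can be sketched in a few lines. This is a generic illustration, not NASA's code; the synthetic frames, blob size, and threshold are all invented:

```python
import numpy as np

def track_centroid(frames, threshold=0.5):
    """Threshold each frame and record the centroid of the bright region --
    the core of conventional threshold-based object tracking."""
    positions = []
    for f in frames:
        ys, xs = np.nonzero(f > threshold)
        positions.append((ys.mean(), xs.mean()))
    return np.array(positions)

# Synthetic sequence: a 3x3 bright blob drifting 2 px right per frame,
# standing in for a flame front or droplet in a recorded experiment.
frames = []
for t in range(5):
    f = np.zeros((20, 40))
    f[8:11, 2 + 2 * t:5 + 2 * t] = 1.0
    frames.append(f)

pos = track_centroid(frames)
vel = np.diff(pos, axis=0)        # frame-to-frame velocity estimate
print(pos[0], vel[0])             # [9. 3.] and [0. 2.]
```

From the centroid sequence, velocity and acceleration follow by finite differences, which is the kind of kinematic output the abstract lists (position, velocity, acceleration, centroid).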
NASA Astrophysics Data System (ADS)
Scherr, Rachel E.; Harrer, Benedikt W.; Close, Hunter G.; Daane, Abigail R.; DeWater, Lezlie S.; Robertson, Amy D.; Seeley, Lane; Vokos, Stamatis
2016-02-01
Energy is a crosscutting concept in science and features prominently in national science education documents. In the Next Generation Science Standards, the primary conceptual learning goal is for learners to conserve energy as they track the transfers and transformations of energy within, into, or out of the system of interest in complex physical processes. As part of tracking energy transfers among objects, learners should (i) distinguish energy from matter, including recognizing that energy flow does not uniformly align with the movement of matter, and should (ii) identify specific mechanisms by which energy is transferred among objects, such as mechanical work and thermal conduction. As part of tracking energy transformations within objects, learners should (iii) associate specific forms with specific models and indicators (e.g., kinetic energy with speed and/or coordinated motion of molecules, thermal energy with random molecular motion and/or temperature) and (iv) identify specific mechanisms by which energy is converted from one form to another, such as incandescence and metabolism. Eventually, we may hope for learners to be able to optimize systems to maximize some energy transfers and transformations and minimize others, subject to constraints based in both imputed mechanism (e.g., objects must have motion energy in order for gravitational energy to change) and the second law of thermodynamics (e.g., heating is irreversible). We hypothesize that a subsequent goal of energy learning—innovating to meet socially relevant needs—depends crucially on the extent to which these goals have been met.
NASA Astrophysics Data System (ADS)
Hahn, Matthias; Pätzold, Martin; Andert, Tom; Bird, Michael K.; Tyler, Leonard G.; Linscott, Ivan; Hinson, Dave P.; Stern, Alan; Weaver, Hal; Olkin, Cathrin; Young, Leslie; Ennico, Kimberly
2015-11-01
One objective of the New Horizons Radio Science Experiment REX is the determination of the system mass and the individual masses of Pluto and Charon. About four weeks of two-way radio tracking centered around the closest approach of New Horizons to the Pluto system were processed. Major problems during the processing were caused by the small net forces of the spacecraft thruster activity, which produce extra Δv on the spacecraft motion superposed onto the continuously perturbed motion caused by the attracting forces of the Pluto system. The times of spacecraft thruster activity are known but the applied Δv needs to be specifically adjusted. No two-way tracking was available for the day of the flyby, but slots of REX one-way uplink tracking are used to cover the most important times near closest approach, e.g. during occultation entries and exits. This will help to separate the individual masses of Pluto and Charon from the system mass.
2009-09-23
CAPE CANAVERAL, Fla. – The mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station has been rolled back as the countdown proceeds to launch of the United Launch Alliance Delta II rocket with the Space Tracking and Surveillance System - Demonstrator spacecraft aboard. It is being launched by NASA for the Missile Defense System. The hour-long launch window opens at 8 a.m. EDT today. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Dimitri Gerondidakis
Lloréns, Roberto; Noé, Enrique; Naranjo, Valery; Borrego, Adrián; Latorre, Jorge; Alcañiz, Mariano
2015-01-01
Motion tracking systems are commonly used in virtual reality-based interventions to detect movements in the real world and transfer them to the virtual environment. There are different tracking solutions based on different physical principles, which mainly define their performance parameters. However, special requirements have to be considered for rehabilitation purposes. This paper studies and compares the accuracy and jitter of three tracking solutions (optical, electromagnetic, and skeleton tracking) in a practical scenario and analyzes the subjective perceptions of 19 healthy subjects, 22 stroke survivors, and 14 physical therapists. The optical tracking system provided the best accuracy (1.074 ± 0.417 cm) while the electromagnetic device provided the most inaccurate results (11.027 ± 2.364 cm). However, this tracking solution provided the best jitter values (0.324 ± 0.093 cm), in contrast to the skeleton tracking, which had the worst results (1.522 ± 0.858 cm). Healthy individuals and professionals preferred the skeleton tracking solution rather than the optical and electromagnetic solution (in that order). Individuals with stroke chose the optical solution over the other options. Our results show that subjective perceptions and preferences are far from being constant among different populations, thus suggesting that these considerations, together with the performance parameters, should be also taken into account when designing a rehabilitation system. PMID:25808765
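The accuracy and jitter figures quoted above can be computed from tracked samples roughly as follows. This is a sketch of plausible definitions, mean Euclidean error against ground truth for accuracy and the spread of sample-to-sample displacement for jitter; the paper's exact formulas may differ, and the data here are synthetic:

```python
import numpy as np

def accuracy_and_jitter(tracked, truth):
    """Accuracy: mean Euclidean error of tracked samples against ground
    truth. Jitter: standard deviation of the sample-to-sample displacement
    of a nominally still marker (stability of the reported position)."""
    err = np.linalg.norm(tracked - truth, axis=1)
    disp = np.linalg.norm(np.diff(tracked, axis=0), axis=1)
    return err.mean(), disp.std()

# Synthetic test: a marker held still at a known 3-D point, with the
# tracker reporting that point plus 1 mm (0.1 cm) Gaussian noise.
rng = np.random.default_rng(1)
truth = np.tile([10.0, 20.0, 30.0], (200, 1))
tracked = truth + rng.normal(0, 0.1, truth.shape)
acc, jit = accuracy_and_jitter(tracked, truth)
print(round(acc, 2), round(jit, 2))
```

Separating the two metrics matters for exactly the reason the study shows: the electromagnetic tracker was the least accurate yet the least jittery, so a single error number would hide the trade-off.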
NASA Technical Reports Server (NTRS)
Porter, D. W.; Lefler, R. M.
1979-01-01
A generalized hypothesis testing approach is applied to the problem of tracking several objects where several different associations of data with objects are possible. Such problems occur, for instance, when attempting to distinctly track several aircraft maneuvering near each other or when tracking ships at sea. Conceptually, the problem is solved by first, associating data with objects in a statistically reasonable fashion and then, tracking with a bank of Kalman filters. The objects are assumed to have motion characterized by a fixed but unknown deterministic portion plus a random process portion modeled by a shaping filter. For example, the object might be assumed to have a mean straight line path about which it maneuvers in a random manner. Several hypothesized associations of data with objects are possible because of ambiguity as to which object the data comes from, false alarm/detection errors, and possible uncertainty in the number of objects being tracked. The statistical likelihood function is computed for each possible hypothesized association of data with objects. Then the generalized likelihood is computed by maximizing the likelihood over parameters that define the deterministic motion of the object.
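The hypothesis-enumeration idea can be sketched for a single snapshot: score every assignment of measurements to predicted object positions under a Gaussian likelihood and keep the best. This is a toy version with invented positions and noise; the paper additionally maximizes over the deterministic-motion parameters and runs a bank of Kalman filters per hypothesis:

```python
import numpy as np
from itertools import permutations

def association_loglik(predictions, measurements, sigma=1.0):
    """Log-likelihood of one hypothesized assignment, pairing measurement i
    with predicted position predictions[i]; Gaussian noise is assumed."""
    d2 = np.sum((predictions - measurements) ** 2, axis=1)
    return -0.5 * np.sum(d2) / sigma**2

def best_association(predicted, measured):
    """Enumerate all assignments of measurements to objects and keep the
    maximum-likelihood one (brute force; feasible for small object counts)."""
    best, best_ll = None, -np.inf
    for perm in permutations(range(len(predicted))):
        ll = association_loglik(predicted[list(perm)], measured)
        if ll > best_ll:
            best, best_ll = perm, ll
    return best, best_ll

predicted = np.array([[0., 0.], [10., 0.], [0., 10.]])      # filter predictions
measured = np.array([[9.8, 0.1], [0.2, 9.9], [0.1, -0.2]])  # shuffled returns
assoc, ll = best_association(predicted, measured)
print(assoc)  # assoc[i] is the object index attributed to measurement i
```

A full tracker would extend the enumeration to include missed detections and false alarms, and would carry the surviving hypotheses forward in time rather than deciding per frame.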
Tracking of multiple targets using online learning for reference model adaptation.
Pernkopf, Franz
2008-12-01
Recently, much work has been done on multiple object tracking on the one hand and on reference model adaptation for a single-object tracker on the other. In this paper, we both track multiple objects (faces of people) in a meeting scenario and use online learning to incrementally update the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Many methods, unlike our approach, assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our implemented tracker based on particle filters and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference model learning during tracking of indoor and outdoor scenes, which supports more robust tracking. Therefore, we report the average of the standard deviation of the trajectories over numerous tracking runs depending on the learning rate.
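A bootstrap particle filter of the kind used here can be sketched as a propagate/reweight/resample cycle. This is a generic 2-D position tracker with invented noise levels, not the authors' face tracker (which weights particles by appearance, not by a Gaussian around a detection):

```python
import numpy as np

rng = np.random.default_rng(2)

def particle_filter_step(particles, weights, measurement,
                         motion_std=0.5, meas_std=1.0):
    """One bootstrap-particle-filter cycle: propagate the samples with a
    random-walk motion model, reweight by measurement likelihood, resample."""
    # Propagate each sample through the (random-walk) motion model.
    particles = particles + rng.normal(0, motion_std, particles.shape)
    # Reweight by the Gaussian likelihood of the measurement.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum()
    # Multinomial resampling back to uniform weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

n = 500
particles = rng.uniform(-5, 5, (n, 2))     # diffuse initial belief
weights = np.full(n, 1.0 / n)
for z in [np.array([2.0, 3.0])] * 10:      # repeated detections at (2, 3)
    particles, weights = particle_filter_step(particles, weights, z)
print(np.round(particles.mean(axis=0), 1))  # belief concentrates near (2, 3)
```

The resampling step is also where the analogy to genetic algorithms noted in the abstract lives: reweighting acts as selection, and the motion noise acts as mutation.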
Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm
Tombu, Michael
2014-01-01
People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target–distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking—one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone. PMID:21264704
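The planet-moon geometry that decouples speed from proximity can be reproduced in a few lines. The radii and angular rates below are illustrative, not the experiment's parameters: the pair's midpoint orbits the screen center at `global_w` while the target and distractor counter-rotate about that midpoint, so `local_w` sets speed and `local_r` sets proximity independently:

```python
import numpy as np

def planet_moon_path(t, global_r=10.0, global_w=0.2,
                     local_r=1.5, local_w=2.0):
    """Positions of a target ('planet') and its paired distractor ('moon').
    The pair's midpoint rotates about the screen center; the two items sit
    on opposite sides of the midpoint and rotate about it."""
    cx = global_r * np.cos(global_w * t)   # midpoint orbit
    cy = global_r * np.sin(global_w * t)
    dx = local_r * np.cos(local_w * t)     # local rotation
    dy = local_r * np.sin(local_w * t)
    target = np.array([cx + dx, cy + dy])
    distractor = np.array([cx - dx, cy - dy])
    return target, distractor

t = np.linspace(0, 10, 200)
tgt, dis = planet_moon_path(t)
sep = np.linalg.norm(tgt - dis, axis=0)
print(sep.min(), sep.max())   # separation is constant: 2 * local_r
```

Because the separation never changes while `local_w` can be varied freely, any performance cost of raising `local_w` is attributable to object speed rather than to crowding.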
A Computational Model of Spatial Development
NASA Astrophysics Data System (ADS)
Hiraki, Kazuo; Sashima, Akio; Phillips, Steven
Psychological experiments on children's development of spatial knowledge suggest experience at self-locomotion with visual tracking as important factors. Yet, the mechanism underlying development is unknown. We propose a robot that learns to mentally track a target object (i.e., maintaining a representation of an object's position when outside the field-of-view) as a model for spatial development. Mental tracking is considered as prediction of an object's position given the previous environmental state and motor commands, and the current environment state resulting from movement. Following Jordan & Rumelhart's (1992) forward modeling architecture, the system consists of two components: an inverse model of sensory input to desired motor commands; and a forward model of motor commands to desired sensory input (goals). The robot was tested on the 'three cups' paradigm (where children are required to select the cup containing the hidden object under various movement conditions). Consistent with child development, without the capacity for self-locomotion the robot's errors are egocentric (self-centered). When given the ability of self-locomotion the robot responds allocentrically.
Architectural Design for European SST System
NASA Astrophysics Data System (ADS)
Utzmann, Jens; Wagner, Axel; Blanchet, Guillaume; Assemat, Francois; Vial, Sophie; Dehecq, Bernard; Fernandez Sanchez, Jaime; Garcia Espinosa, Jose Ramon; Agueda Mate, Alberto; Bartsch, Guido; Schildknecht, Thomas; Lindman, Niklas; Fletcher, Emmet; Martin, Luis; Moulin, Serge
2013-08-01
The paper presents the results of a detailed design, evaluation and trade-off of a potential European Space Surveillance and Tracking (SST) system architecture. The results have been produced in study phase 1 of the on-going "CO-II SSA Architectural Design" project performed by the Astrium consortium as part of ESA's Space Situational Awareness Programme and are the baseline for further detailing and consolidation in study phase 2. The sensor network is comprised of both ground- and space-based assets and aims at being fully compliant with the ESA SST System Requirements. The proposed ground sensors include a surveillance radar, an optical surveillance system and a tracking network (radar and optical). A space-based telescope system provides significant performance and robustness for the surveillance and tracking of beyond-LEO target objects.
Nearly automatic motion capture system for tracking octopus arm movements in 3D space.
Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar
2009-08-30
Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects.
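The final reconstruction from two calibrated views is typically done by linear (DLT) triangulation; a minimal sketch with toy projection matrices follows (illustrative camera geometry, not the authors' calibration or their deformable-curve reconstruction, which triangulates many points along the 2D skeletons):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from its projections
    x1, x2 in two calibrated cameras with 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector of A (least squares)
    X = Vt[-1]
    return X[:3] / X[3]               # dehomogenize

# Two toy cameras: identity pose, and one translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]       # project into each camera
x2 = (P2 @ h)[:2] / (P2 @ h)[2]
print(np.round(triangulate(P1, P2, x1, x2), 3))  # recovers X_true
```

Applied point-by-point along the matched 2D skeletons, this yields the sequence of 3D curves the abstract describes.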
A review of vision-based motion analysis in sport.
Barris, Sian; Button, Chris
2008-01-01
Efforts at player motion tracking have traditionally involved a range of data collection techniques from live observation to post-event video analysis where player movement patterns are manually recorded and categorized to determine performance effectiveness. Due to the considerable time required to manually collect and analyse such data, research has tended to focus only on small numbers of players within predefined playing areas. Whilst notational analysis is a convenient, practical and typically inexpensive technique, the validity and reliability of the process can vary depending on a number of factors, including how many observers are used, their experience, and the quality of their viewing perspective. Undoubtedly the application of automated tracking technology to team sports has been hampered because of inadequate video and computational facilities available at sports venues. However, the complex nature of movement inherent to many physical activities also represents a significant hurdle to overcome. Athletes tend to exhibit quick and agile movements, with many unpredictable changes in direction and also frequent collisions with other players. Each of these characteristics of player behaviour violate the assumptions of smooth movement on which computer tracking algorithms are typically based. Systems such as TRAKUS, SoccerMan, TRAKPERFORMANCE, Pfinder and Prozone all provide extrinsic feedback information to coaches and athletes. However, commercial tracking systems still require a fair amount of operator intervention to process the data after capture and are often limited by the restricted capture environments that can be used and the necessity for individuals to wear tracking devices. Whilst some online tracking systems alleviate the requirements of manual tracking, to our knowledge a completely automated system suitable for sports performance is not yet commercially available. 
Automatic motion tracking has been used successfully in other domains outside of elite sport performance, notably for surveillance in the military and security industry where automatic recognition of moving objects is achievable because identification of the objects is not necessary. The current challenge is to obtain appropriate video sequences that can robustly identify and label people over time, in a cluttered environment containing multiple interacting people. This problem is often compounded by the quality of video capture, the relative size and occlusion frequency of people, and also changes in illumination. Potential applications of an automated motion detection system are offered, such as: planning tactics and strategies; measuring team organisation; providing meaningful kinematic feedback; and objective measures of intervention effectiveness in team sports, which could benefit coaches, players, and sports scientists.
Lapierre, Mark D; Cropper, Simon J; Howe, Piers D L
2017-01-01
To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources. PMID:28410383
Multiple-object tracking while driving: the multiple-vehicle tracking task.
Lochner, Martin J; Trick, Lana M
2014-11-01
Many contend that driving an automobile involves multiple-object tracking. At this point, no one has tested this idea, and it is unclear how multiple-object tracking would coordinate with the other activities involved in driving. To address some of the initial and most basic questions about multiple-object tracking while driving, we modified the tracking task for use in a driving simulator, creating the multiple-vehicle tracking task. In Experiment 1, we employed a dual-task methodology to determine whether there was interference between tracking and driving. Findings suggest that although it is possible to track multiple vehicles while driving, driving reduces tracking performance, and tracking compromises headway and lane position maintenance while driving. Modified change-detection paradigms were used to assess whether there were change localization advantages for tracked targets in multiple-vehicle tracking. When changes occurred during a blanking interval, drivers were more accurate (Experiment 2a) and ~250 ms faster (Experiment 2b) at locating the vehicle that changed when it was a target rather than a distractor in tracking. In a more realistic driving task where drivers had to brake in response to the sudden onset of brake lights in one of the lead vehicles, drivers were more accurate at localizing the vehicle that braked if it was a tracking target, although there was no advantage in terms of braking response time. Overall, results suggest that multiple-object tracking is possible while driving and perhaps even advantageous in some situations, but further research is required to determine whether multiple-object tracking is actually used in day-to-day driving.
A Programmer-Oriented Approach to Safe Concurrency
2003-05-01
Entering and leaving a synchronized block additionally has effects on the management of memory values in the Java Memory Model (JMM). The report describes an object-oriented effects system, an analysis to track the association of locks with regions, and policy descriptions for allowable method behavior.
Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.
Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong
2016-08-01
The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.
The Kinect as an interventional tracking system
NASA Astrophysics Data System (ADS)
Wang, Xiang L.; Stolka, Philipp J.; Boctor, Emad; Hager, Gregory; Choti, Michael
2012-02-01
This work explores the suitability of low-cost sensors for "serious" medical applications, such as tracking of interventional tools in the OR, for simulation, and for education. Although such tracking - i.e. the acquisition of pose data e.g. for ultrasound probes, tissue manipulation tools, needles, but also tissue, bone etc. - is well established, it relies mostly on external devices such as optical or electromagnetic trackers, both of which mandate the use of special markers or sensors attached to each single entity whose pose is to be recorded, and also require their calibration to the tracked entity, i.e. the determination of the geometric relationship between the marker's and the object's intrinsic coordinate frames. The Microsoft Kinect sensor is a recently introduced device for full-body tracking in the gaming market, but it was quickly hacked - due to its wide range of tightly integrated sensors (RGB camera, IR depth and greyscale camera, microphones, accelerometers, and basic actuation) - and used beyond this area. As its field of view and its accuracy are within reasonable usability limits, we describe a medical needle-tracking system for interventional applications based on the Kinect sensor, standard biopsy needles, and no necessary attachments, thus saving both cost and time. Its twin cameras are used as a stereo pair to detect needle-shaped objects, reconstruct their pose in four degrees of freedom, and provide information about the most likely candidate.
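The stereo reconstruction step above rests on standard triangulation: for a pair of rectified cameras, the depth of a matched point follows from its disparity. A minimal sketch, with made-up focal length, baseline, and pixel coordinates rather than the Kinect's actual calibration:

```python
# Minimal stereo triangulation sketch (rectified cameras): depth from disparity.
# Focal length f (pixels), baseline (m), and the pixel coordinates below are
# illustrative values, not the Kinect's actual calibration.

def triangulate(f, baseline, xl, xr, y):
    """Return (X, Y, Z) in the left-camera frame for a matched point."""
    d = xl - xr                  # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive")
    Z = f * baseline / d         # depth along the optical axis
    X = xl * Z / f               # back-project the left-image coordinates
    Y = y * Z / f
    return X, Y, Z

X, Y, Z = triangulate(f=580.0, baseline=0.075, xl=100.0, xr=80.0, y=40.0)
```

Triangulating two such points along a detected needle-shaped object would recover its position and direction, i.e. four of its degrees of freedom.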
Weighted feature selection criteria for visual servoing of a telerobot
NASA Technical Reports Server (NTRS)
Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.
1989-01-01
Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.
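A weighted criteria function of this kind can be sketched as a scored search over candidate feature combinations; the feature names, scores, and weights below are invented for illustration, not taken from the paper:

```python
# Hedged sketch of weighted feature selection: each candidate combination of
# image features is scored by a weighted sum of a recognition-quality term and
# a control-quality term, and the best-scoring combination is selected.
# Feature names and (recognition, control) scores are made-up values.
from itertools import combinations

features = {"hole": (0.9, 0.4), "edge": (0.6, 0.85), "corner": (0.7, 0.7)}

def score(combo, w_recog=0.5, w_ctrl=0.5):
    recog = sum(features[f][0] for f in combo) / len(combo)
    ctrl = sum(features[f][1] for f in combo) / len(combo)
    return w_recog * recog + w_ctrl * ctrl

# Exhaustive search over all non-empty combinations of the three features.
best = max((c for r in (1, 2, 3) for c in combinations(features, r)), key=score)
```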
Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.
Dzyubak, Oleksandr P; Ritman, Erik L
2011-01-01
The blood vessels and nerve trees consist of tubular objects interconnected into a complex tree- or web-like structure that spans structural scales from 5 μm diameter capillaries to the 3 cm aorta. This large scale range presents two major problems: one is just making the measurements, and the other is the exponential increase of component numbers with decreasing scale. With the remarkable increase in the volume imaged by, and resolution of, modern day 3D imagers, it is almost impossible to manually track the complex multiscale parameters from those large image data sets. In addition, manual tracking is quite subjective and unreliable. We propose a solution for automation of an adaptive nonsupervised system for tracking tubular objects based on a multiscale framework and the use of a Hessian-based object shape detector incorporating National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
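A generic 2-D sketch of a Hessian-based tubularity response, in the spirit of Frangi-style vesselness rather than the authors' exact ITK pipeline (the scale and sensitivity parameters are illustrative):

```python
# Generic 2-D Hessian-based tubularity (vesselness) sketch. This is NOT the
# authors' ITK implementation; sigma, beta and c are illustrative parameters.
import numpy as np
from scipy.ndimage import gaussian_filter

def tubularity(img, sigma=2.0, beta=0.5, c=0.1):
    # Second-order Gaussian derivatives give the Hessian at scale sigma.
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # Closed-form eigenvalues of the 2x2 symmetric Hessian.
    tmp = np.sqrt(((Hxx - Hyy) / 2) ** 2 + Hxy ** 2)
    mean = (Hxx + Hyy) / 2
    l1, l2 = mean - tmp, mean + tmp
    # Sort so that |l1| <= |l2| (l2 is the cross-tube curvature).
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    Rb = np.abs(l1) / (np.abs(l2) + 1e-12)     # blob-vs-line ratio
    S = np.sqrt(l1 ** 2 + l2 ** 2)             # second-order structure strength
    v = np.exp(-Rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-S ** 2 / (2 * c ** 2)))
    return np.where(l2 < 0, v, 0.0)            # bright tubes on dark background

img = np.zeros((64, 64))
img[30:34, :] = 1.0                            # a horizontal bright "vessel"
resp = tubularity(img)
```

A multiscale version would take the maximum of this response over a range of sigma values, matching a range of vessel diameters.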
NASA Astrophysics Data System (ADS)
Nafis, Christopher; Jensen, Vern; von Jako, Ron
2008-03-01
Electromagnetic (EM) tracking systems have been successfully used for Surgical Navigation in ENT, cranial, and spine applications for several years. Catheter sized micro EM sensors have also been used in tightly controlled cardiac mapping and pulmonary applications. EM systems have the benefit over optical navigation systems of not requiring a line-of-sight between devices. Ferrous metals or conductive materials that are transient within the EM working volume may impact tracking performance. Effective methods for detecting and reporting EM field distortions are generally well known. Distortion compensation can be achieved for objects that have a static spatial relationship to a tracking sensor. New commercially available micro EM tracking systems offer opportunities for expanded image-guided navigation procedures. It is important to know and understand how well these systems perform with different surgical tables and ancillary equipment. By their design and intended use, micro EM sensors will be located at the distal tip of tracked devices and therefore be in closer proximity to the tables. Our goal was to define a simple and portable process that could be used to estimate the EM tracker accuracy, and to vet a large number of popular general surgery and imaging tables that are used in the United States and abroad.
A digital video tracking system
NASA Astrophysics Data System (ADS)
Giles, M. K.
1980-01-01
The Real-Time Videotheodolite (RTV) was developed in connection with the requirement to replace film as a recording medium to obtain the real-time location of an object in the field-of-view (FOV) of a long focal length theodolite. Design philosophy called for a system capable of discriminatory judgment in identifying the object to be tracked with 60 independent observations per second, capable of locating the center of mass of the object projection on the image plane within about 2% of the FOV in rapidly changing background/foreground situations, and able to generate a predicted observation angle for the next observation. A description is given of a number of subsystems of the RTV, taking into account the processor configuration, the video processor, the projection processor, the tracker processor, the control processor, and the optics interface and imaging subsystem.
Chen, Pang-Chia
2013-01-01
This paper investigates multi-objective controller design approaches for nonlinear boiler-turbine dynamics subject to actuator magnitude and rate constraints. System nonlinearity is handled by a suitable linear parameter varying system representation with drum pressure as the system varying parameter. Variation of the drum pressure is represented by suitable norm-bounded uncertainty and affine dependence on system matrices. Based on linear matrix inequality algorithms, the magnitude and rate constraints on the actuator and the deviations of fluid density and water level are formulated while the tracking abilities on the drum pressure and power output are optimized. Variation ranges of drum pressure and magnitude tracking commands are used as controller design parameters, determined according to the boiler-turbine's operation range. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Learned filters for object detection in multi-object visual tracking
NASA Astrophysics Data System (ADS)
Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David
2016-05-01
We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, where tracking guides the detection process. The object detection provides a probabilistic input image calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.
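The "probabilistic input image" idea can be sketched by convolving a small filter bank with the image, selecting the strongest response per pixel, and normalising; the toy kernels below merely stand in for learned filters:

```python
# Hedged sketch of building a probabilistic input image for track-before-detect
# from a filter bank. The two fixed toy kernels stand in for learned
# convolutional filters; a real system would use many learned ones.
import numpy as np
from scipy.ndimage import convolve

bank = [
    np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]),  # spot detector
    np.array([[-1., 0., 1.]] * 3) / 3.0,                    # vertical-edge filter
]

def probability_map(img):
    # Per-pixel, keep the strongest absolute response across the bank.
    responses = np.stack([np.abs(convolve(img, k)) for k in bank])
    best = responses.max(axis=0)
    total = best.sum()
    return best / total if total > 0 else best  # normalise to a probability map

img = np.zeros((16, 16))
img[8, 8] = 1.0                                 # a single bright point target
p = probability_map(img)
```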
Tracking Objects with Networked Scattered Directional Sensors
NASA Astrophysics Data System (ADS)
Plarre, Kurt; Kumar, P. R.
2007-12-01
We study the problem of object tracking using highly directional sensors—sensors whose field of vision is a line or a line segment. A network of such sensors monitors a certain region of the plane. Sporadically, objects moving in straight lines and at a constant speed cross the region. A sensor detects an object when it crosses its line of sight, and records the time of the detection. No distance or angle measurements are available. The task of the sensors is to estimate the directions and speeds of the objects, and the sensor lines, which are unknown a priori. This estimation problem involves the minimization of a highly nonconvex cost function. To overcome this difficulty, we introduce an algorithm, which we call "adaptive basis algorithm." This algorithm is divided into three phases: in the first phase, the algorithm is initialized using data from six sensors and four objects; in the second phase, the estimates are updated as data from more sensors and objects are incorporated. The third phase is an optional coordinate transformation. The estimation is done in an "ad-hoc" coordinate system, which we call "adaptive coordinate system." When more information is available, for example, the location of six sensors, the estimates can be transformed to the "real-world" coordinate system. This constitutes the third phase.
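The sensing model reduces to simple line-crossing geometry: an object on a straight, constant-speed path crosses a sensor's line of sight n·x = c at one computable instant, and that instant is the only measurement. A minimal sketch with illustrative values:

```python
# Forward model of a directional sensor: an object at x(t) = x0 + t*v crosses
# the sensor line n.x = c at a single time, which is all the sensor records.
# All values below are illustrative.
import numpy as np

def crossing_time(x0, v, n, c):
    """Time at which x(t) = x0 + t*v satisfies n.x(t) = c (None if parallel)."""
    denom = np.dot(n, v)
    if abs(denom) < 1e-12:
        return None                      # path parallel to the sensor line
    return (c - np.dot(n, x0)) / denom

t = crossing_time(x0=np.array([0.0, 1.0]), v=np.array([2.0, 0.0]),
                  n=np.array([1.0, 0.0]), c=4.0)  # crosses the line x = 4
```

The paper's estimation problem is the inverse of this forward model: recover x0, v, and the sensor lines from many such crossing times.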
The Mesa Arizona Pupil Tracking System
NASA Technical Reports Server (NTRS)
Wright, D. L.
1973-01-01
A computer-based Pupil Tracking/Teacher Monitoring System was designed for Mesa Public Schools, Mesa, Arizona. The established objectives of the system were to: (1) facilitate the economical collection and storage of student performance data necessary to objectively evaluate the relative effectiveness of teachers, instructional methods, materials, and applied concepts; and (2) identify, on a daily basis, those students requiring special attention in specific subject areas. The system encompasses computer hardware/software and integrated curricula progression/administration devices. It provides daily evaluation and monitoring of performance as students progress at class or individualized rates. In the process, it notifies the student and collects information necessary to validate or invalidate subject presentation devices, methods, materials, and measurement devices in terms of direct benefit to the students. The system utilizes a small-scale computer (e.g., IBM 1130) to assure low-cost replicability, and may be used for many subjects of instruction.
NASA Astrophysics Data System (ADS)
Gao, Haibo; Chen, Chao; Ding, Liang; Li, Weihua; Yu, Haitao; Xia, Kerui; Liu, Zhen
2017-11-01
Wheeled mobile robots (WMRs) often suffer from the longitudinal slipping when moving on the loose soil of the surface of the moon during exploration. Longitudinal slip is the main cause of WMRs' delay in trajectory tracking. In this paper, a nonlinear extended state observer (NESO) is introduced to estimate the longitudinal velocity in order to estimate the slip ratio and the derivative of the loss of velocity which are used in modelled disturbance compensation. Owing to the uncertainty and disturbance caused by estimation errors, a multi-objective controller using the mixed H2/H∞ method is employed to ensure the robust stability and performance of the WMR system. The final inputs of the trajectory tracking consist of the feedforward compensation, compensation for the modelled disturbances and designed multi-objective control inputs. Finally, the simulation results demonstrate the effectiveness of the controller, which exhibits a satisfactory tracking performance.
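The slip ratio the observer feeds into the compensation is commonly defined, for a driving wheel, as the normalised gap between rim speed and actual longitudinal velocity; this standard definition is assumed here, since the abstract does not state the paper's exact convention:

```python
# Standard longitudinal slip ratio for a driving wheel: the normalised
# difference between rim speed r*omega and actual longitudinal velocity v.
# This common convention is an ASSUMPTION; the paper's exact definition is
# not given in the abstract. Values below are illustrative.

def slip_ratio(wheel_radius, omega, v_long):
    """s = (r*omega - v) / (r*omega), valid while the wheel drives (r*omega > 0)."""
    rim = wheel_radius * omega
    if rim <= 0:
        raise ValueError("rim speed must be positive for a driving wheel")
    return (rim - v_long) / rim

s = slip_ratio(wheel_radius=0.1, omega=5.0, v_long=0.4)  # rim 0.5 m/s, body 0.4 m/s
```

With the NESO supplying an estimate of v_long, this ratio (and the velocity loss rim - v_long) is what the modelled-disturbance compensation consumes.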
The what-where trade-off in multiple-identity tracking.
Cohen, Michael A; Pinto, Yair; Howe, Piers D L; Horowitz, Todd S
2011-07-01
Observers are poor at reporting the identities of objects that they have successfully tracked (Pylyshyn, Visual Cognition, 11, 801-822, 2004; Scholl & Pylyshyn, Cognitive Psychology, 38, 259-290, 1999). Consequently, it has been claimed that objects are tracked in a manner that does not encode their identities (Pylyshyn, 2004). Here, we present evidence that disputes this claim. In a series of experiments, we show that attempting to track the identities of objects can decrease an observer's ability to track the objects' locations. This indicates that the mechanisms that track, respectively, the locations and identities of objects draw upon a common resource. Furthermore, we show that this common resource can be voluntarily distributed between the two mechanisms. This is clear evidence that the location- and identity-tracking mechanisms are not entirely dissociable.
Visual tracking using neuromorphic asynchronous event-based cameras.
Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad
2015-04-01
This letter presents a novel computationally efficient and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometry, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly.
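The per-event, iterative model-to-event registration can be caricatured with translation only: each incoming event nudges the estimated shift of a point model via its nearest model point. The data, gain, and translation-only restriction are all illustrative simplifications of the full isometry/affine method:

```python
# Toy event-driven registration: each event updates the estimated translation
# of a point model toward the event via its nearest model point. This is a
# translation-only caricature of the event-based method, with made-up data.
import numpy as np

model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_shift = np.array([3.0, -2.0])
rng = np.random.default_rng(0)
events = model[rng.integers(0, 4, 400)] + true_shift  # events from shifted model

t_hat = np.zeros(2)                                   # translation estimate
gain = 0.05
for e in events:                                      # process one event at a time
    transformed = model + t_hat
    nearest = transformed[np.argmin(np.sum((transformed - e) ** 2, axis=1))]
    t_hat += gain * (e - nearest)                     # nudge model toward event
```

After the event stream is consumed, t_hat has converged to the true shift; the full method updates an affine transform in the same incremental, per-event fashion.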
NASA Astrophysics Data System (ADS)
Zou, Yanbiao; Chen, Tao
2018-06-01
To address the problem of low welding precision caused by the poor real-time tracking performance of common welding robots, a novel seam tracking system with excellent real-time tracking performance and high accuracy is designed based on the morphological image processing method and the continuous convolution operator tracker (CCOT) object tracking algorithm. The system consists of a six-axis welding robot, a line laser sensor, and an industrial computer. This work also studies the measurement principle involved in the designed system. Through the CCOT algorithm, the weld feature points are determined in real time from the noisy image during the welding process, and the 3D coordinate values of these points are obtained according to the measurement principle to control the movement of the robot and the torch in real time. Experimental results show that the sensor has a frequency of 50 Hz. The welding torch runs smoothly even under strong arc light and splash interference. Tracking error can reach ±0.2 mm, and the minimal distance between the laser stripe and the welding molten pool can reach 15 mm, which satisfies actual welding requirements.
Efficient integration of spectral features for vehicle tracking utilizing an adaptive sensor
NASA Astrophysics Data System (ADS)
Uzkent, Burak; Hoffman, Matthew J.; Vodacek, Anthony
2015-03-01
Object tracking in urban environments is an important and challenging problem that is traditionally tackled using visible and near infrared wavelengths. By incorporating extended data such as spectral features of the objects, one can improve the reliability of the identification process. However, the huge increase in data created by hyperspectral imaging is usually prohibitive. To overcome the complexity problem, we propose a persistent air-to-ground target tracking system inspired by a state-of-the-art, adaptive, multi-modal sensor. The adaptive sensor is capable of providing panchromatic images as well as the spectra of desired pixels. This addresses the data challenge of hyperspectral tracking by only recording spectral data as needed. Spectral likelihoods are integrated into a data association algorithm in a Bayesian fashion to minimize the likelihood of misidentification. A framework for controlling spectral data collection is developed by incorporating motion segmentation information and prior information from Gaussian sum filter (GSF) movement predictions drawn from a multi-model forecasting set. An intersection mask of the surveillance area is extracted from an OpenStreetMap source and incorporated into the tracking algorithm to perform online refinement of the multiple-model set. The proposed system is tested using challenging and realistic scenarios generated in an adverse environment.
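Folding spectral likelihoods into data association "in a Bayesian fashion" can be sketched as multiplying kinematic association weights by per-candidate spectral likelihoods and renormalising; all numbers below are illustrative:

```python
# Hedged sketch of Bayesian integration of spectral likelihoods into data
# association: kinematic association weights from a motion/gating stage are
# multiplied by each candidate's spectral likelihood and renormalised.
# All numbers are illustrative.
import numpy as np

kinematic_prior = np.array([0.5, 0.3, 0.2])      # weights from the motion stage
spectral_likelihood = np.array([0.1, 0.8, 0.4])  # p(spectrum | candidate is target)

posterior = kinematic_prior * spectral_likelihood
posterior /= posterior.sum()                     # renormalise over candidates
```

Here the second candidate, kinematically less likely than the first, becomes the best association once its spectrum is taken into account, which is exactly how spectral data reduces misidentification.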
Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka
2014-02-21
We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot currently be used as a professional tracking system.
Collaborative real-time motion video analysis by human observer and image exploitation algorithms
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2015-05-01
Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into the current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Beside the system design, a first pilot study is presented, where we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy to use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.
A stochastic optimal feedforward and feedback control methodology for superagility
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Direskeneli, Haldun; Taylor, Deborah B.
1992-01-01
A new control design methodology is developed: Stochastic Optimal Feedforward and Feedback Technology (SOFFT). Traditional design techniques optimize a single cost function (which expresses the design objectives) to obtain both the feedforward and feedback control laws. This approach places conflicting demands on the control law such as fast tracking versus noise attenuation/disturbance rejection. In the SOFFT approach, two cost functions are defined. The feedforward control law is designed to optimize one cost function, the feedback optimizes the other. By separating the design objectives and decoupling the feedforward and feedback design processes, both objectives can be achieved fully. A new measure of command tracking performance, Z-plots, is also developed. By analyzing these plots at off-nominal conditions, the sensitivity or robustness of the system in tracking commands can be predicted. Z-plots provide an important tool for designing robust control systems. The Variable-Gain SOFFT methodology was used to design a flight control system for the F/A-18 aircraft. It is shown that SOFFT can be used to expand the operating regime and provide greater performance (flying/handling qualities) throughout the extended flight regime. This work was performed under the NASA SBIR program. ICS plans to market the software developed as a new module in its commercial CACSD software package: ACET.
Performance Analysis of Sensor Systems for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Choi, Eun-Jung; Cho, Sungki; Jo, Jung Hyun; Park, Jang-Hyun; Chung, Taejin; Park, Jaewoo; Jeon, Hocheol; Yun, Ami; Lee, Yonghui
2017-12-01
With increased human activity in space, the risk of re-entry and collision between space objects is constantly increasing. Hence, the need for space situational awareness (SSA) programs has been acknowledged by many experienced space agencies. Optical and radar sensors, which enable the surveillance and tracking of space objects, are the most important technical components of SSA systems. In particular, combinations of radar systems and optical sensor networks play an outstanding role in SSA programs. At present, Korea operates the optical wide field patrol network (OWL-Net), the only optical system for tracking space objects. However, due to their dependence on weather conditions and observation time, it is not reasonable to use optical systems alone for SSA initiatives, as they have limited operational availability. Therefore, the strategies for developing radar systems should be considered for an efficient SSA system using currently available technology. The purpose of this paper is to analyze the performance of a radar system in detecting and tracking space objects. With the radar system investigated, the minimum sensitivity is defined as detection of a 1 m² radar cross section (RCS) target at an altitude of 2,000 km, with operating frequencies in the L, S, C, X or Ku-band. The results of power budget analysis showed that the maximum detection range of 2,000 km, which includes the low earth orbit (LEO) environment, can be achieved with a transmission power of 900 kW, transmit and receive antenna gains of 40 dB and 43 dB, respectively, a pulse width of 2 ms, and a signal processing gain of 13.3 dB, at a frequency of 1.3 GHz. We defined the key parameters of the radar following a performance analysis of the system. This research can thus provide guidelines for the conceptual design of radar systems for national SSA initiatives.
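The quoted power budget can be sanity-checked with the standard monostatic radar range equation; the system noise temperature, bandwidth, and loss terms are not given in the abstract and are assumed below, so the resulting SNR is only indicative:

```python
# Back-of-the-envelope check of the quoted power budget using the standard
# monostatic radar range equation. Ts, B and L are NOT given in the abstract
# and are ASSUMED here, so the resulting SNR is only indicative.
import math

k = 1.380649e-23          # Boltzmann constant, J/K
c = 2.998e8               # speed of light, m/s

Pt = 900e3                # transmit power, W (quoted)
Gt_dB, Gr_dB = 40.0, 43.0 # transmit/receive antenna gains, dB (quoted)
f = 1.3e9                 # operating frequency, Hz (quoted)
sigma = 1.0               # radar cross section, m^2 (quoted)
R = 2000e3                # detection range, m (quoted)
Gp_dB = 13.3              # signal-processing gain, dB (quoted)
Ts = 500.0                # system noise temperature, K (ASSUMED)
B = 1.0 / 2e-3            # bandwidth ~ 1 / pulse width, Hz (ASSUMED matched)
L_dB = 3.0                # system losses, dB (ASSUMED)

lam = c / f               # wavelength, m
num = Pt * 10**(Gt_dB / 10) * 10**(Gr_dB / 10) * lam**2 * sigma
den = (4 * math.pi)**3 * R**4 * k * Ts * B * 10**(L_dB / 10)
snr_db = 10 * math.log10(num / den) + Gp_dB   # positive margin -> detectable
```

Under these assumptions the single-pulse SNR at 2,000 km comes out comfortably positive, consistent with the paper's claim that the quoted parameters suffice for 1 m² detection in LEO.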
Target Selection by the Frontal Cortex during Coordinated Saccadic and Smooth Pursuit Eye Movements
ERIC Educational Resources Information Center
Srihasam, Krishna; Bullock, Daniel; Grossberg, Stephen
2009-01-01
Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth-pursuit eye movements. In particular, the saccadic and smooth-pursuit systems often interact to choose the same target and to maximize its visibility through time. How do…
User Identification and Tracking in an Educational Web Environment.
ERIC Educational Resources Information Center
Marzo-Lazaro, J. L.; Verdu-Carbo, T.; Fabregat-Gesa, R.
This paper describes a solution to the user identification and tracking problem within an educational World Wide Web environment. The paper begins with an overview of the Teaching Support System project at the University of Girona (Spain); the main objective of the project is to create an integrated set of tools for teachers to use to create and…
Romero, Veronica; Amaral, Joseph; Fitzpatrick, Paula; Schmidt, R C; Duncan, Amie W; Richardson, Michael J
2017-04-01
Functionally stable and robust interpersonal motor coordination has been found to play an integral role in the effectiveness of social interactions. However, the motion-tracking equipment required to record and objectively measure dynamic limb and body movements during social interaction has been very costly, cumbersome, and impractical in non-clinical or non-laboratory settings. Here we examined whether three low-cost motion-tracking options (Microsoft Kinect skeletal tracking of either one limb or the whole body, and a video-based pixel-change method) can be employed to investigate social motor coordination. Of particular interest was the degree to which these low-cost methods could capture and index the coordination dynamics that occurred between a child and an experimenter in three simple social motor coordination tasks, in comparison to a more expensive, laboratory-grade motion-tracking system (i.e., a Polhemus Latus system). Overall, the results demonstrated that these low-cost systems cannot substitute for the Polhemus system in some tasks. However, the lower-cost Kinect skeletal tracking and video pixel-change methods were able to index differences in social motor coordination in tasks that involved larger-scale, naturalistic whole-body movements, which can be cumbersome and expensive to record with a Polhemus. That said, we found the Kinect to be particularly vulnerable to occlusion, and the pixel-change method to movements that cross the video frame's midline. Therefore, particular care needs to be taken in choosing the motion-tracking system best suited for the particular research.
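Interpersonal coordination between two movement time series is commonly indexed by windowed cross-correlation. Below is a minimal sketch of such an index, with synthetic leader/follower signals standing in for real Kinect or Polhemus data; the 0.5 Hz sine and 200 ms lag are purely illustrative:

```python
import numpy as np

def max_windowed_xcorr(x, y, max_lag):
    """Peak normalized cross-correlation between two movement series
    over lags in [-max_lag, max_lag]; values near 1.0 = tightly coordinated."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:lag], y[-lag:]
        best = max(best, float(np.mean(a * b)))
    return best

t = np.linspace(0, 10, 500)
leader = np.sin(2 * np.pi * 0.5 * t)            # experimenter's hand position
follower = np.sin(2 * np.pi * 0.5 * (t - 0.2))  # child lags by ~200 ms
print(round(max_windowed_xcorr(leader, follower, max_lag=50), 2))
```

Scanning over lags recovers near-perfect coordination despite the follower's delay, which is the kind of index such studies compare across tracking systems.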
Long-term scale adaptive tracking with kernel correlation filters
NASA Astrophysics Data System (ADS)
Wang, Yueren; Zhang, Hong; Zhang, Lei; Yang, Yifan; Sun, Mingui
2018-04-01
Object tracking in video sequences has broad applications in both military and civilian domains. However, as the length of the input video sequence increases, a number of problems arise, such as severe object occlusion, object appearance variation, and the object moving out of view (part or all of the object leaves the image space). To deal with these problems and identify the tracked object against a cluttered background, we present a robust appearance model using Speeded Up Robust Features (SURF) and advanced integrated features consisting of Felzenszwalb's Histogram of Oriented Gradients (FHOG) and color attributes. Since re-detection is essential in long-term tracking, we develop an effective object re-detection strategy based on moving-area detection. We employ the popular kernel correlation filters in our algorithm design, which facilitates high-speed object tracking. Our evaluation using the CVPR2013 Object Tracking Benchmark (OTB2013) dataset illustrates that the proposed algorithm outperforms reference state-of-the-art trackers in various challenging scenarios.
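Correlation filters train a template in the Fourier domain so that correlating it with the target yields a sharp response peak at the target's displacement. The sketch below is a simplified linear (MOSSE-style) 1-D variant, not the paper's kernelized tracker with FHOG and color features:

```python
import numpy as np

def train_filter(x, sigma=2.0, lam=1e-2):
    """Learn a 1-D linear correlation filter whose response to the
    training signal x is a Gaussian peaked at index 0."""
    n = len(x)
    d = np.minimum(np.arange(n), n - np.arange(n))  # circular distance to 0
    g = np.exp(-d**2 / (2 * sigma**2))              # desired response
    X, G = np.fft.fft(x), np.fft.fft(g)
    return np.conj(X) * G / (X * np.conj(X) + lam)  # H* in the Fourier domain

def detect(h_conj, z):
    """Correlate the filter with a new frame; the response peak gives the shift."""
    return np.real(np.fft.ifft(np.fft.fft(z) * h_conj))

rng = np.random.default_rng(0)
x = rng.standard_normal(128)     # training "frame" (1-D for illustration)
h = train_filter(x)
z = np.roll(x, 17)               # the object moved 17 samples
resp = detect(h, z)
print(int(np.argmax(resp)))      # → 17
```

Because training and detection are elementwise operations in the Fourier domain, the per-frame cost is dominated by FFTs, which is what makes correlation-filter trackers fast.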
Shuttle communication and tracking systems signal design and interface compatibility analysis
NASA Technical Reports Server (NTRS)
1986-01-01
Various options for the Dedicated Payload Communication Link (DPCL) were evaluated. Specific subjects addressed include: payload to DPCL power transfer in the proximity of the payload, DPCL antenna pointing considerations, and DPCL transceiver implementations which can be mounted on the deployed antenna boom. Additional analysis of the Space Telescope performance was conducted. The feasibility of using the Global Positioning System (GPS) for attitude determination and control for large spacecraft was examined. The objective of the Shuttle Orbiter Radar Test and Evaluation (SORTE) program was to quantify the Ku-band radar tracking accuracy using White Sands Missile Range (WSMR) radar and optical tracking equipment, with helicopter and balloon targets.
2009-09-25
CAPE CANAVERAL, Fla. – The United Launch Alliance Delta II rocket carrying the Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft leaps into the sky from Launch Pad 17-B at Cape Canaveral Air Force Station. STSS-Demo was launched at 8:20:22 a.m. EDT by NASA for the U.S. Missile Defense Agency. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Sandra Joseph- Kevin O'Connell
2009-09-25
CAPE CANAVERAL, Fla. – The United Launch Alliance Delta II rocket with Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft leaps from Launch Pad 17-B at Cape Canaveral Air Force Station amid clouds of smoke. STSS-Demo was launched at 8:20:22 a.m. EDT by NASA for the U.S. Missile Defense Agency. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Sandra Joseph- Kevin O'Connell
2009-09-25
CAPE CANAVERAL, Fla. – The United Launch Alliance Delta II rocket with Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft aboard races into the sky leaving a trail of fire and smoke after liftoff from Launch Pad 17-B at Cape Canaveral Air Force Station. It was launched by NASA for the U.S. Missile Defense Agency at 8:20:22 a.m. EDT. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Alan Ault
2009-08-27
CAPE CANAVERAL, Fla. – The enclosed Space Tracking and Surveillance System – Demonstrator, or STSS-Demo, spacecraft leaves the Astrotech payload processing facility on its way to Cape Canaveral Air Force Station's Launch Pad 17-B. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jack Pfaller
2009-09-25
CAPE CANAVERAL, Fla. –The United Launch Alliance Delta II rocket with Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft leaps from Launch Pad 17-B at Cape Canaveral Air Force Station amid clouds of smoke. STSS-Demo was launched at 8:20:22 a.m. EDT by NASA for the U.S. Missile Defense Agency. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Tony Gray-Tim Powers
2009-09-25
CAPE CANAVERAL, Fla. – The United Launch Alliance Delta II rocket carrying the Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft rises from a mantle of smoke as it lifts off from Launch Pad 17-B at Cape Canaveral Air Force Station. STSS-Demo was launched at 8:20:22 a.m. EDT by NASA for the U.S. Missile Defense Agency. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Sandra Joseph- Kevin O'Connell
2009-09-25
CAPE CANAVERAL, Fla. – The Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft lifts off through a cloud of smoke from Launch Pad 17-B at Cape Canaveral Air Force Station aboard a United Launch Alliance Delta II rocket. It was launched by NASA for the U.S. Missile Defense Agency. Launch was at 8:20:22 a.m. EDT. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Alan Ault
2009-09-25
CAPE CANAVERAL, Fla. – Under a cloud-streaked sky, the Space Tracking and Surveillance System – Demonstrator, or STSS-Demo, waits through the countdown to liftoff on Launch Pad 17-B at Cape Canaveral Air Force Station aboard a United Launch Alliance Delta II rocket. STSS-Demo is being launched by NASA for the U.S. Missile Defense Agency. Liftoff is at 8:20 a.m. EDT. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Jack Pfaller
2009-09-25
CAPE CANAVERAL, Fla. – Under a cloud-streaked sky, the Space Tracking and Surveillance System – Demonstrator, or STSS-Demo, waits through the countdown to liftoff on Launch Pad 17-B at Cape Canaveral Air Force Station aboard a United Launch Alliance Delta II rocket. STSS-Demo is being launched by NASA for the U.S. Missile Defense Agency. Liftoff was at 8:20:22 a.m. EDT. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Jack Pfaller
2009-09-23
CAPE CANAVERAL, Fla. – On Launch Pad 17-B at Cape Canaveral Air Force Station in Florida, the Space Tracking and Surveillance System - Demonstrator spacecraft is bathed in light under a dark, cloudy sky. Rain over Central Florida's east coast caused the scrub of the launch. STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 24. Photo credit: NASA/Jack Pfaller
2009-08-27
CAPE CANAVERAL, Fla. – The enclosed Space Tracking and Surveillance System – Demonstrator, or STSS-Demo, spacecraft is lifted into the mobile service tower on Cape Canaveral Air Force Station's Launch Pad 17-B. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jack Pfaller
2009-09-25
CAPE CANAVERAL, Fla. – The Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft lifts off through a cloud of smoke from Launch Pad 17-B at Cape Canaveral Air Force Station aboard a United Launch Alliance Delta II rocket. It was launched by NASA for the U.S. Missile Defense Agency. Launch was at 8:20:22 a.m. EDT. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Jack Pfaller
2009-09-23
CAPE CANAVERAL, Fla. – On Launch Pad 17-B at Cape Canaveral Air Force Station in Florida, the Space Tracking and Surveillance System Demonstrator spacecraft waits for launch under a dark, cloudy sky. Rain over Central Florida's east coast caused the scrub of the launch. STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 24. Photo credit: NASA/Jack Pfaller
Cameli, Matteo; Ciccone, Marco M; Maiello, Maria; Modesti, Pietro A; Muiesan, Maria L; Scicchitano, Pietro; Novo, Salvatore; Palmiero, Pasquale; Saba, Pier S; Pedrinelli, Roberto
2016-05-01
Speckle tracking echocardiography (STE) is an imaging technique applied to the analysis of left atrial function. STE provides a non-Doppler, angle-independent and objective quantification of left atrial myocardial deformation. Data regarding the feasibility, accuracy and clinical applications of left atrial strain are rapidly accumulating. This review describes the fundamental concepts of left atrial STE, illustrates its pathophysiological background and discusses its emerging role in systemic arterial hypertension.
Ubiquitous Indoor Geolocation: a Case Study of Jewellery Management System
NASA Astrophysics Data System (ADS)
Nikparvar, B.; Sadeghi-Niaraki, A.; Azari, P.
2014-10-01
Addressing and geolocation in indoor environments have been important fields of research in recent years. Two approaches have been proposed to the problem of locating objects in indoor spaces: the first assigns coordinates to objects; the second divides the space into cells and detects the presence or absence of objects in each cell in order to track them. In this paper the second approach is pursued, using Radio Frequency Identification (RFID) technology to identify and track high-value objects in the jewellery retail industry. In Ubiquitous Sensor Networks, the reactivity or proactivity of the environment is an important issue. Reactive environments wait for a request and respond to it. In contrast, in proactive spaces the environment acts in advance to deal with an expected action. In this research, a geo-sensor network containing RFID readers, tags, and antennas that continuously exchange radio frequency signal streams is proposed to manage and monitor jewellery galleries ubiquitously. The system is also equipped with a GIS representation, which provides a more user-friendly way to manage a jewellery gallery.
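The cell-based approach can be sketched as a simple mapping from antenna reads to cells. The class below is an illustrative toy, not the paper's system, and the tag and cell names are hypothetical:

```python
from collections import defaultdict

class CellTracker:
    """Cell-based presence tracking: each RFID antenna covers one cell;
    a tag read places the tagged object in that antenna's cell."""
    def __init__(self):
        self.location = {}             # tag id -> current cell
        self.log = defaultdict(list)   # tag id -> movement history

    def on_read(self, tag_id, cell):
        # record a transition only when the object changes cell
        if self.location.get(tag_id) != cell:
            self.log[tag_id].append(cell)
        self.location[tag_id] = cell

    def missing(self, expected_tags):
        """Tags expected in the gallery but never seen by any antenna."""
        return [t for t in expected_tags if t not in self.location]

tracker = CellTracker()
tracker.on_read("ring-042", "showcase-A")
tracker.on_read("ring-042", "showcase-B")   # object moved between cells
print(tracker.location["ring-042"], tracker.log["ring-042"])
```

A proactive variant would call `missing()` on a timer and raise an alarm before any user asks, which is the reactive/proactive distinction the abstract draws.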
Integrated Eye Tracking and Neural Monitoring for Enhanced Assessment of Mild TBI
2016-04-01
and we anticipate the initiation of the neuroimaging portion of the study early in Year 3. The fMRI task has been completed and is in beta testing...neurocognitive test battery, and self-report measures of cognitive efficacy. We will also include functional magnetic resonance imaging (fMRI) and...fMRI and DTI will provide an objective basis for cross-validating the EEG and eye tracking system. Both the EEG and eye tracking data will be
NASA Technical Reports Server (NTRS)
Rickman, Doug; Shire, J.; Qualters, J.; Mitchell, K.; Pollard, S.; Rao, R.; Kajumba, N.; Quattrochi, D.; Estes, M., Jr.; Meyer, P.;
2009-01-01
Objectives. To provide an overview of four environmental public health surveillance projects developed by CDC and its partners for the Health and Environment Linked for Information Exchange, Atlanta (HELIX-Atlanta) and to illustrate common issues and challenges encountered in developing an environmental public health tracking system. Methods. HELIX-Atlanta, initiated in October 2003 to develop data linkage and analysis methods that can be used by the National Environmental Public Health Tracking Network (Tracking Network), conducted four projects. We highlight the projects' work, assess attainment of the HELIX-Atlanta goals and discuss three surveillance attributes. Results. Among the major challenges was the complexity of analytic issues, which required multidisciplinary teams with technical expertise; this expertise and the data resided across multiple organizations. Conclusions. Establishing formal procedures for sharing data, defining data analysis standards and automating analyses, and committing staff with appropriate expertise are needed to support wide implementation of environmental public health tracking.
Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach
Tian, Yuan; Guan, Tao; Wang, Cheng
2010-01-01
To produce a realistic augmentation in Augmented Reality, the correct relative positions of real and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking, and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual objects is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278
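The occlusion handling step, redrawing the tracked object's pixels over the augmented image, amounts to mask-based compositing. A minimal sketch with toy frames, where the tracked contour is assumed to have already produced a filled mask:

```python
import numpy as np

def composite(augmented, real, occluder_mask):
    """Redraw the tracked occluding object's pixels over the augmented
    frame so the real object correctly hides the virtual content."""
    return np.where(occluder_mask[..., None], real, augmented)

# toy 2x2 RGB frames: virtual content everywhere, real occluder in one pixel
aug  = np.full((2, 2, 3), 200, dtype=np.uint8)    # augmented frame
real = np.full((2, 2, 3),  50, dtype=np.uint8)    # raw camera frame
mask = np.array([[True, False], [False, False]])  # filled tracked contour
out = composite(aug, real, mask)
print(out[0, 0, 0], out[0, 1, 0])   # → 50 200
```

In practice a feathered (non-binary) mask at the contour gives the seamless merging the abstract mentions; the hard boolean mask here is the simplest case.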
Registration of 3D and Multispectral Data for the Study of Cultural Heritage Surfaces
Chane, Camille Simon; Schütze, Rainer; Boochs, Frank; Marzani, Franck S.
2013-01-01
We present a technique for the multi-sensor registration of featureless datasets based on the photogrammetric tracking of the acquisition systems in use. This method is developed for the in situ study of cultural heritage objects and is tested by digitizing a small canvas successively with a 3D digitization system and a multispectral camera while simultaneously tracking the acquisition systems with four cameras and using a cubic target frame with a side length of 500 mm. The achieved tracking accuracy is better than 0.03 mm spatially and 0.150 mrad angularly. This allows us to seamlessly register the 3D acquisitions and to project the multispectral acquisitions on the 3D model. PMID:23322103
Partial camera automation in an unmanned air vehicle.
Korteling, J E; van der Borg, W
1997-03-01
The present study focused on an intelligent, semiautonomous interface for a camera operator of a simulated unmanned air vehicle (UAV). This interface used system "knowledge" concerning UAV motion in order to assist a camera operator in tracking an object moving through the landscape below. The semiautomated system compensated for the translations of the UAV relative to the earth. This compensation was accompanied by the appropriate joystick movements, ensuring tactile (haptic) feedback of these system interventions. The operator had to superimpose self-initiated joystick manipulations over these system-initiated joystick motions in order to track the motion of a target (a driving truck) relative to the terrain. Tracking data showed that subjects performed substantially better with the active system. Apparently, the subjects had no difficulty in maintaining control, i.e., "following" the active stick while superimposing self-initiated control movements over the system interventions. Furthermore, tracking performance with the active interface was clearly superior to that with the passive system. The magnitude of this effect was equal to the effect of the update frequency (2-5 Hz) of the monitor image. The benefits of update-frequency enhancement and semiautomated tracking were greatest under difficult steering conditions. Mental workload scores indicated that, for the difficult tracking-dynamics condition, both semiautomation and an update-frequency increase resulted in less experienced mental effort. For the easier dynamics this effect was only seen for update frequency.
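The compensation principle can be illustrated with a toy one-dimensional simulation: the operator acts as a proportional controller on tracking error, and the semiautonomous mode adds a feed-forward term canceling UAV drift. All speeds and gains below are hypothetical, not values from the study:

```python
def simulate(compensated, steps=400, dt=0.05, kp=0.8):
    """Operator tracks a moving truck while the UAV drifts over the terrain;
    in the compensated mode the system cancels the UAV's own translation.
    Returns the mean absolute tracking error over the run."""
    uav_v, truck_v = 3.0, 1.0      # hypothetical ground speeds (m/s)
    cam = truck = 0.0              # camera aim point and truck position
    abs_err = 0.0
    for _ in range(steps):
        truck += truck_v * dt
        err = truck - cam
        drift = 0.0 if compensated else uav_v   # uncanceled UAV motion
        cam += (kp * err + drift) * dt          # operator P-control + drift
        abs_err += abs(err)
    return abs_err / steps

print(simulate(True) < simulate(False))  # → True
```

With the feed-forward term the operator only has to track the truck's own motion, which mirrors the study's finding that the active system improved tracking.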
Verification hybrid control of a wheeled mobile robot and manipulator
NASA Astrophysics Data System (ADS)
Muszynska, Magdalena; Burghardt, Andrzej; Kurc, Krzysztof; Szybicki, Dariusz
2016-04-01
In this article, innovative approaches to the realization of tracking for wheeled mobile robots and a manipulator are presented. The concepts include the application of neural-fuzzy systems to compensate for the controlled systems' nonlinearities in the tracking control task. The proposed control algorithms work on-line, contain structures that adapt to the changing working conditions of the controlled systems, and do not require preliminary learning. The algorithms were verified on real objects: a Scorbot-ER 4pc robotic manipulator and a Pioneer 2DX mobile robot.
Target tracking and surveillance by fusing stereo and RFID information
NASA Astrophysics Data System (ADS)
Raza, Rana H.; Stockman, George C.
2012-06-01
Ensuring security in high risk areas such as an airport is an important but complex problem. Effectively tracking personnel, containers, and machines is a crucial task. Moreover, security and safety require understanding the interaction of persons and objects. Computer vision (CV) has been a classic tool; however, variable lighting, imaging, and random occlusions present difficulties for real-time surveillance, resulting in erroneous object detection and trajectories. Determining object ID via CV at any instance of time in a crowded area is computationally prohibitive, yet the trajectories of personnel and objects should be known in real time. Radio Frequency Identification (RFID) can be used to reliably identify target objects and can even locate targets at coarse spatial resolution, while CV provides fuzzy features for target ID at finer resolution. Our research demonstrates benefits obtained when most objects are "cooperative" by being RFID tagged. Fusion provides a method to simplify the correspondence problem in 3D space. A surveillance system can query for unique object ID as well as tag ID information, such as target height, texture, shape and color, which can greatly enhance scene analysis. We extend geometry-based tracking so that intermittent information on ID and location can be used in determining a set of trajectories of N targets over T time steps. We show that partial target information obtained through RFID can reduce computation time (by 99.9% in some cases) and also increase the likelihood of producing correct trajectories. We conclude that real-time decision-making should be possible if the surveillance system can integrate information effectively between the sensor level and activity understanding level.
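The way RFID identity constraints shrink the correspondence problem can be sketched with a brute-force detection-to-track assignment: tag reads restrict which tracks each detection may match, collapsing the permutation space. The costs and constraints below are illustrative, not the paper's data:

```python
from itertools import permutations

def best_assignment(cost, allowed):
    """Brute-force assignment of detections to tracks; `allowed[i]` is the
    set of track indices detection i may match (e.g., from RFID tag reads).
    Returns the best permutation and how many candidates were evaluated."""
    n = len(cost)
    best, best_perm, tried = float("inf"), None, 0
    for perm in permutations(range(n)):
        if any(perm[i] not in allowed[i] for i in range(n)):
            continue                        # pruned by identity constraints
        tried += 1
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best:
            best, best_perm = c, perm
    return best_perm, tried

# visual-similarity costs for 3 detections vs 3 tracks (hypothetical)
cost = [[1.0, 5.0, 4.0],
        [6.0, 2.0, 5.0],
        [4.0, 6.0, 1.5]]
no_rfid = [{0, 1, 2}] * 3            # CV only: any detection may be any track
with_rfid = [{0}, {1}, {2}]          # each tag read pins down one track
print(best_assignment(cost, no_rfid)[0], best_assignment(cost, with_rfid)[1])
```

With N targets the unconstrained search grows as N!, so even partial ID information prunes the vast majority of candidates, which is the source of the large speedups the abstract reports.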
Studying visual attention using the multiple object tracking paradigm: A tutorial review.
Meyerhoff, Hauke S; Papenmeier, Frank; Huff, Markus
2017-07-01
Human observers are capable of tracking multiple objects among identical distractors based only on their spatiotemporal information. Since the first report of this ability in the seminal work of Pylyshyn and Storm (1988, Spatial Vision, 3, 179-197), multiple object tracking has attracted many researchers. A reason for this is that the attentional processes studied with the multiple object tracking paradigm are commonly argued to match the attentional processing during real-world tasks such as driving or team sports. We argue that multiple object tracking provides a good means of studying the broader topic of continuous and dynamic visual attention. Indeed, several (partially contradicting) theories of attentive tracking have been proposed in the almost 30 years since its first report, and a large body of research has been conducted to test these theories. Given the richness and diversity of this literature, the aim of this tutorial review is to provide researchers who are new to the field with an overview of the multiple object tracking paradigm, its basic manipulations, and links to other paradigms investigating visual attention and working memory. Further, we aim to review current theories of tracking as well as their empirical evidence. Finally, we review the state of the art in the most prominent research fields of multiple object tracking and how this research has helped to understand visual attention in dynamic settings.
Data Fusion for a Vision-Radiological System for Source Tracking and Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas; Koppal, Sanjeev
2015-07-01
A multidisciplinary approach to allow the tracking of the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that widely available visual and depth sensors can impact radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor for the purpose of detection and tracking of radioactive material? Similarly, what are the best radiation detection methods and unfolding algorithms suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on large numbers of distributed similar or identical radiation sensors coupled with position data, forming networks capable of detecting and locating radiation sources. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays. Similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much their capability but rather their complexity and cost, which are prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system.
The deviation from a simple inverse-square fall-off of radiation intensity is explored and accounted for. In particular, the computer vision system enables a map of the distance dependence of the sources being tracked. Infrared, laser or stereoscopic vision sensors are all options for computer-vision implementation, depending on interior vs. exterior deployment, desired resolution and other factors. Similarly, the radiation sensors will be focused on gamma-ray or neutron detection due to the long travel length and ability to penetrate even moderate shielding. There is a significant difference between vision sensors and radiation sensors in the way the 'source' or signals are generated. A vision sensor needs an external light source to illuminate the object and then detects the re-emitted illumination (or lack thereof). For a radiation detector, however, the radioactive material is the source itself. The only exception is the field of active interrogation, where radiation is beamed into a material to induce new or additional radiation emission beyond what the material would emit spontaneously. Because the nuclear material is the source itself, all other objects in the environment are 'illuminated' or irradiated by the source. Most radiation will readily penetrate regular material, scatter in new directions or be absorbed. Thus, if a radiation source is located near a larger object, that object will in turn scatter some radiation that was initially emitted in a direction other than that of the radiation detector, which can add to the observed count rate. The effect of these scatters is a deviation from the traditional distance dependence of the radiation signal and is a key challenge that needs a combined system calibration solution and algorithms.
Thus both an algebraic approach and a statistical approach have been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system algorithms are used and demonstrated in various laboratory scenarios, and later in realistic tracking scenarios. The selection and testing of radiological and computer-vision sensors for the additional specific scenarios will be the subject of ongoing and future work.
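The deviation from pure inverse-square fall-off can be handled by fitting a scatter/background term alongside the source term. Below is a sketch of such a calibration fit on synthetic count data; the model C(r) = A/r² + B is an illustrative simplification, not the authors' algebraic or statistical algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)
r = np.linspace(1.0, 10.0, 40)       # vision-derived source-detector distance (m)
A_true, B_true = 5000.0, 12.0        # source term and scatter/background term
counts = A_true / r**2 + B_true + rng.normal(0, 1.0, r.size)  # noisy count rate

# linear least-squares fit of C(r) = A/r^2 + B
design = np.column_stack([1.0 / r**2, np.ones_like(r)])
(A_fit, B_fit), *_ = np.linalg.lstsq(design, counts, rcond=None)
print(f"A ≈ {A_fit:.0f}, B ≈ {B_fit:.1f}")
```

Because the model is linear in A and B, an ordinary least-squares solve recovers both terms; the vision system's distance map is what makes r observable in the first place.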
An interactive VR system based on full-body tracking and gesture recognition
NASA Astrophysics Data System (ADS)
Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru
2016-10-01
Most current virtual reality (VR) interactions are realized with hand-held input devices, which leads to a low degree of presence. Other solutions use sensors such as Leap Motion to recognize users' gestures in order to interact in a more natural way, but navigation in these systems remains a problem, because they fail to map actual walking to virtual walking when only part of the user's body is represented in the synthetic environment. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects using natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body that follows the movements of the tracked user. Movements of the feet are detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the user's hands compared to traditional navigation with a hand-held device. We use the point cloud data obtained from the Kinect depth camera to recognize user gestures such as swiping, pressing, and manipulating virtual objects. Combining full-body tracking and gesture recognition using the Kinect, we achieve an interactive VR system in the Unity engine with a high degree of presence.
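Walking-state detection from tracked foot joints can be as simple as thresholding the vertical excursion of both feet over a short window. This is an illustrative heuristic, not the paper's Unity implementation; the 3 cm threshold and synthetic joint traces are assumptions:

```python
import math

def is_walking(left_foot_y, right_foot_y, thresh=0.03):
    """Heuristic walking detector: walking if both feet's vertical
    positions move more than `thresh` (meters) within the window."""
    def swing(ys):
        return max(ys) - min(ys)
    return swing(left_foot_y) > thresh and swing(right_foot_y) > thresh

t = [i * 0.033 for i in range(60)]                        # ~2 s of Kinect frames
stepping_l = [0.05 * max(0.0, math.sin(6 * x)) for x in t]   # alternating lifts
stepping_r = [0.05 * max(0.0, -math.sin(6 * x)) for x in t]
standing = [0.001 * math.sin(6 * x) for x in t]              # sensor jitter only
print(is_walking(stepping_l, stepping_r), is_walking(standing, standing))  # → True False
```

The boolean output would gate the walk animation of the avatar: start it when the detector flips to walking, stop it when the feet settle.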
Object tracking using plenoptic image sequences
NASA Astrophysics Data System (ADS)
Kim, Jae Woo; Bae, Seong-Joon; Park, Seongjin; Kim, Do Hyung
2017-05-01
Object tracking is an important problem in computer vision research. Among its difficulties, partial occlusion is one of the most serious and challenging. To address it, we propose novel approaches to object tracking on plenoptic image sequences that take advantage of the refocusing capability plenoptic images provide. Our approaches take as input sequences of focal stacks constructed from plenoptic image sequences. The proposed image selection algorithms select, from each focal stack, the image that maximizes tracking accuracy. A focus measure approach and a confidence measure approach were proposed for image selection, and both were validated in experiments using thirteen plenoptic image sequences that include heavily occluded target objects. The experimental results showed that the proposed approaches compare favorably with conventional 2D object tracking algorithms.
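The focus-measure idea above can be sketched simply: score each image in the focal stack with a sharpness measure and track on the best-scoring one. The particular measure here (gradient energy) and the 2-D-list image format are assumptions for illustration; the paper's actual measures may differ.

```python
# Illustrative focus-measure-based image selection from a focal stack.
# Gradient energy is one common sharpness proxy, assumed here for the sketch.

def focus_measure(img):
    """Gradient energy: sum of squared horizontal and vertical pixel differences."""
    h, w = len(img), len(img[0])
    score = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                score += (img[y][x + 1] - img[y][x]) ** 2
            if y + 1 < h:
                score += (img[y + 1][x] - img[y][x]) ** 2
    return score

def select_best_focused(focal_stack):
    """Return the index of the image with the highest focus measure."""
    return max(range(len(focal_stack)), key=lambda i: focus_measure(focal_stack[i]))
```

In practice the measure would be evaluated only inside the current target region, so the tracker refocuses on the target rather than the occluder.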
Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror
Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji
2017-01-01
This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385
NASA Astrophysics Data System (ADS)
Schoonmaker, Jon; Reed, Scott; Podobna, Yuliya; Vazquez, Jose; Boucher, Cynthia
2010-04-01
Due to increased security concerns, the commitment to monitor and maintain security in the maritime environment is increasingly a priority. A country's coast is the most vulnerable area for the incursion of illegal immigrants, terrorists and contraband. This work illustrates the ability of a low-cost, light-weight, multi-spectral, multi-channel imaging system to handle the environment and see under difficult marine conditions. The system and its implemented detecting and tracking technologies should be organic to the maritime homeland security community for search and rescue, fisheries, defense, and law enforcement. It is tailored for airborne and ship based platforms to detect, track and monitor suspected objects (such as semi-submerged targets like marine mammals, vessels in distress, and drug smugglers). In this system, automated detection and tracking technology is used to detect, classify and localize potential threats or objects of interest within the imagery provided by the multi-spectral system. These algorithms process the sensor data in real time, thereby providing immediate feedback when features of interest have been detected. A supervised detection system based on Haar features and Cascade Classifiers is presented and results are provided on real data. The system is shown to be extendable and reusable for a variety of different applications.
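The Haar-feature detector mentioned above rests on the integral image: once computed, any rectangle sum costs four lookups, so a two-rectangle Haar feature is just a handful of subtractions. The sketch below shows only that core trick, with sizes and the feature layout as assumptions; a full cascade classifier additionally chains many such features with learned thresholds.

```python
# Minimal sketch of the integral-image / Haar-feature machinery behind
# cascade detectors. Feature geometry here is an illustrative assumption.

def integral_image(img):
    """Summed-area table with a zero top row and left column for easy lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), width w, height h."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle feature: left half minus right half (a vertical-edge response)."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```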
Detection and laser ranging of orbital objects using optical methods
NASA Astrophysics Data System (ADS)
Wagner, P.; Hampf, D.; Sproll, F.; Hasenohr, T.; Humbert, L.; Rodmann, J.; Riede, W.
2016-09-01
Laser ranging to satellites (SLR) in Earth orbit is an established technology used for geodesy, fundamental science and precise orbit determination. A combined active and passive optical measurement system using a single telescope mount is presented which performs precise ranging measurements of retro-reflector-equipped objects in low Earth orbit (LEO). The German Aerospace Center (DLR) runs an observatory in Stuttgart where a system has been assembled entirely from commercial off-the-shelf (COTS) components. The visible light directed to the tracking camera is used to perform angular measurements of objects under investigation. This is done astrometrically by comparing the apparent target position with cataloged star positions. First successful satellite laser ranging was demonstrated recently using an optical fiber directing laser pulses onto the astronomical mount. The transmitter operates at a wavelength of 1064 nm with a repetition rate of 3 kHz and a pulse energy of 25 μJ. A motorized tip/tilt mount allows beam steering of the collimated beam with μrad accuracy. The returning photons reflected from the object in space are captured with the tracking telescope. A special low-aberration beam splitter unit was designed to separate the infrared from visible light. This allows passive optical closed-loop tracking and operation of a single-photon detector for time-of-flight measurements at a single telescope simultaneously. The presented innovative design yields a compact, cost-effective, yet very precise ranging system that enables orbit determination.
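The time-of-flight principle underlying the ranging measurement is simple enough to state in a couple of lines: the measured round-trip time of a pulse maps to range via half the light path. The numbers in the comment are illustrative only.

```python
# Back-of-envelope sketch of time-of-flight laser ranging:
# range = c * round_trip_time / 2 (the pulse travels out and back).

C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s):
    """Range in metres for a measured round-trip time in seconds."""
    return C * round_trip_s / 2.0

# A LEO target near 800 km altitude returns a pulse after roughly 5.3 ms.
```

At a 3 kHz repetition rate, ambiguity between outgoing pulses must also be resolved, which a real system handles with predicted orbits and range gating.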
ERIC Educational Resources Information Center
Rattanarungrot, Sasithorn; White, Martin; Newbury, Paul
2014-01-01
This paper describes the design of our service-oriented architecture to support mobile multiple object tracking augmented reality applications applied to education and learning scenarios. The architecture is composed of a mobile multiple object tracking augmented reality client, a web service framework, and dynamic content providers. Tracking of…
Bae, Seung-Hwan; Yoon, Kuk-Jin
2018-03-01
Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to the moment. It remains a difficult problem in complex scenes, because of the large ambiguity in associating multiple objects in consecutive frames and the low discriminability between object appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since conventional appearance learning methods do not provide rich representations that can distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning to improve appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-art batch and online tracking methods, and demonstrate the effectiveness and usefulness of the proposed methods for online multi-object tracking.
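The tracklet-confidence strategy above can be sketched as follows: confidence grows with how many frames of a tracklet were matched to detections and decays with missed frames, and association is then staged by confidence. The functional form and threshold here are assumptions for illustration, not the paper's exact formula.

```python
# Hedged sketch of confidence-based tracklet handling. The combination of
# detectability and a continuity penalty is assumed, not taken from the paper.
import math

def tracklet_confidence(n_detections, length, n_missed, beta=0.4):
    """Matched fraction (detectability) damped by an exponential miss penalty."""
    if length == 0:
        return 0.0
    detectability = n_detections / length
    continuity = math.exp(-beta * n_missed)
    return detectability * continuity

def associate_by_confidence(tracklets, threshold=0.5):
    """Split tracklets so high-confidence ones are grown locally with detections
    and low-confidence (fragmented) ones are linked globally to others."""
    high = [t for t in tracklets if t["conf"] >= threshold]
    low = [t for t in tracklets if t["conf"] < threshold]
    return high, low
```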
Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2016-05-01
Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is increasingly established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving-target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch in the case of track loss. The second addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.
Three-dimensional tracking and imaging laser scanner for space operations
NASA Astrophysics Data System (ADS)
Laurin, Denis G.; Beraldin, J. A.; Blais, Francois; Rioux, Marc; Cournoyer, Luc
1999-05-01
This paper presents the development of a laser range scanner (LARS) as a three-dimensional sensor for space applications. The scanner is a versatile system capable of surface imaging, target ranging and tracking. It is capable of short-range (0.5 m to 20 m) and long-range (20 m to 10 km) sensing using triangulation and time-of-flight (TOF) methods, respectively. At short range (1 m), the resolution is sub-millimeter and drops gradually with distance (2 cm at 10 m). At long range, the TOF method provides a constant resolution of ±3 cm, independent of range. The LARS could complement the existing Canadian Space Vision System (CSVS) for robotic manipulation. As an active vision system, the LARS is immune to sunlight and adverse lighting; this is a major advantage over the CSVS, as outlined in this paper. The LARS could also replace existing radar systems used for rendezvous and docking. There are clear advantages of an optical system over a microwave radar in terms of size, mass, power and precision. Equipped with two high-speed galvanometers, the laser can be steered to address any point in a 30° × 30° field of view. The scanning can be continuous (raster scan, Lissajous) or direct (random). This gives the scanner the ability to register high-resolution 3D images of range and intensity (up to 4000 × 4000 pixels) and to perform point-target tracking as well as object recognition and geometrical tracking. The imaging capability of the scanner using an eye-safe laser is demonstrated. An efficient fiber laser delivers 60 mW CW or 3 μJ pulses at 20 kHz for TOF operation. Implementation of search and track of multiple targets is also demonstrated. For a single target, refresh rates up to 137 Hz are possible. Considerations for space qualification of the scanner are discussed. Typical space operations, such as docking, object attitude tracking, and inspections, are described.
NASA Astrophysics Data System (ADS)
Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan
2017-01-01
This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation (DIC), with two examples from medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computations of the integral pixel search. Experiments were carried out, and the results indicated that the new method improved computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system is aimed at orthognathic surgery navigation, to track the maxilla segment after LeFort I osteotomy. Experiments showed that noise for a static point was at the level of 10^-3 mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand during finger stretching exercises, which indicated great potential for tracking muscle and skin movements.
The Retarding Force on a Fan-Cart Reversing Direction
ERIC Educational Resources Information Center
Aurora, Tarlok S.; Brunner, Bernard J.
2011-01-01
In introductory physics, students learn that an object tossed upward has a constant downward acceleration while going up, at the highest point and while falling down. To demonstrate this concept, a self-propelled fan cart system is used on a frictionless track. A quick push is given to the fan cart and it is allowed to move away on a track under…
2009-07-23
CAPE CANAVERAL, Fla. – In the Astrotech payload processing facility in Titusville, Fla. , technicians monitor the STSS Demonstrator SV-1 spacecraft as it is lowered to the orbital insertion system. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Tim Jacobs (Approved for Public Release 09-MDA-4800 [30 July 09] )
2007-08-01
...velopment of the first US missile-defense system, the Nike-Zeus, that was successfully tested in 1962. The Nike-Zeus system achieved several... discriminating the warhead from other objects, tracking the warhead, and then guiding the Nike-Zeus missile to the intercept point. Beyond... an effective kill vehicle. The quality of radar tracking was not adequate for a conventional warhead; therefore, the Nike-Zeus and all other ABM...
Simultaneous localization and calibration for electromagnetic tracking systems.
Sadjadi, Hossein; Hashtrudi-Zaad, Keyvan; Fichtinger, Gabor
2016-06-01
In clinical environments, field distortion can cause significant electromagnetic tracking errors. Therefore, dynamic calibration of electromagnetic tracking systems is essential to compensate for measurement errors. It is proposed to integrate the motion model of the tracked instrument with redundant EM sensor observations and to apply a simultaneous localization and mapping algorithm in order to accurately estimate the pose of the instrument and create a map of the field distortion in real time. Experiments were conducted in the presence of ferromagnetic and electrically conductive field-distorting objects, and the results were compared with those of a conventional sensor fusion approach. The proposed method reduced the tracking error from 3.94±1.61 mm to 1.82±0.62 mm in the presence of steel, and from 0.31±0.22 mm to 0.11±0.14 mm in the presence of aluminum. With reduced tracking error and independence from external tracking devices or pre-operative calibrations, the approach is promising for reliable EM navigation in various clinical procedures. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Shao, Rongjun; Qiu, Lirong; Yang, Jiamiao; Zhao, Weiqian; Zhang, Xin
2013-12-01
We have proposed a component-parameter measurement method based on differential confocal focusing theory. To improve the positioning precision of the laser differential confocal component parameters measurement system (LDDCPMS), this paper presents a data processing method based on tracking the light spot. To reduce the error caused by movement of the light point while collecting the axial intensity signal, an image centroiding algorithm is used to find and track the center of the Airy disk in the images collected by the laser differential confocal system. To weaken the influence of higher-harmonic noise during measurement, a Gaussian filter is used to process the axial intensity signal. Finally, the zero point corresponding to the focus of the objective in the differential confocal system is obtained by linear fitting of the differential confocal axial intensity data. Preliminary experiments indicate that the light-spot tracking method can accurately collect the axial intensity response signal of the virtual pinhole and improve the anti-interference ability of the system, thereby improving its positioning accuracy.
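The image-centroiding step above amounts to computing the intensity-weighted centroid of the spot image, which then serves as the Airy-disk centre estimate from frame to frame. The pure-Python 2-D-list format below is an assumption for illustration.

```python
# Minimal sketch of intensity-weighted centroiding for spot tracking.

def centroid(img):
    """Intensity-weighted centroid (x, y) of a grayscale image, or None if empty."""
    total = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    if total == 0:
        return None
    return sx / total, sy / total
```

In the described pipeline, the axial intensity would then be sampled at this tracked centre (the virtual pinhole) before Gaussian filtering and linear fitting of the differential signal.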
Catalogue Creation for Space Situational Awareness with Optical Sensors
NASA Astrophysics Data System (ADS)
Hobson, T.; Clarkson, I.; Bessell, T.; Rutten, M.; Gordon, N.; Moretti, N.; Morreale, B.
2016-09-01
In order to safeguard the continued use of space-based technologies, effective monitoring and tracking of man-made resident space objects (RSOs) is paramount. The diverse characteristics, behaviours and trajectories of RSOs make space surveillance a challenging application of the tracking and surveillance discipline. When surveillance systems are faced with non-canonical scenarios, it is common for human operators to intervene while researchers adapt and extend traditional tracking techniques in search of a solution. A complementary strategy for improving the robustness of space surveillance systems is to place greater emphasis on the anticipation of uncertainty: namely, give the system the intelligence necessary to autonomously react to unforeseen events and to act intelligently and appropriately on tenuous information rather than discard it. In this paper we build on our 2015 campaign and describe the progression of a low-cost intelligent space surveillance system capable of autonomously cataloguing and maintaining track of RSOs. It currently exploits robotic electro-optical sensors, high-fidelity state estimation and propagation, as well as constrained initial orbit determination (IOD), to intelligently and adaptively manage its sensors in order to maintain an accurate catalogue of RSOs. In a step towards fully autonomous cataloguing, the system has been tasked with maintaining surveillance of a portion of the geosynchronous (GEO) belt. Using a combination of survey and track-refinement modes, the system is capable of maintaining tracks of known RSOs and initiating tracks on previously unknown objects. Uniquely, due to the use of high-fidelity representations of a target's state uncertainty, as few as two images of previously unknown RSOs may be used to subsequently initiate autonomous search and reacquisition.
To achieve this capability, particularly within the congested environment of the GEO-belt, we use a constrained admissible region (CAR) to generate a plausible estimate of the unknown RSO's state probability density function and disambiguate measurements using a particle-based joint probability data association (JPDA) method. Additionally, the use of alternative CAR generation methods, incorporating catalogue-based priors, is explored and tested. We also present the findings of two field trials of an experimental system that incorporates these techniques. The results demonstrate that such a system is capable of autonomously searching for an RSO that was briefly observed days prior in a GEO-survey and discriminating it from the measurements of other previously catalogued RSOs.
Relative Navigation of Formation-Flying Satellites
NASA Technical Reports Server (NTRS)
Long, Anne; Kelbel, David; Lee, Taesul; Leung, Dominic; Carpenter, J. Russell; Grambling, Cheryl
2002-01-01
This paper compares autonomous relative navigation performance for formations in eccentric, medium and high-altitude Earth orbits using Global Positioning System (GPS) Standard Positioning Service (SPS), crosslink, and celestial object measurements. For close formations, the relative navigation accuracy is highly dependent on the magnitude of the uncorrelated measurement errors. A relative navigation position accuracy of better than 10 centimeters root-mean-square (RMS) can be achieved for medium-altitude formations that can continuously track at least one GPS signal. A relative navigation position accuracy of better than 15 meters RMS can be achieved for high-altitude formations that have sparse tracking of the GPS signals. The addition of crosslink measurements can significantly improve relative navigation accuracy for formations that use sparse GPS tracking or celestial object measurements for absolute navigation.
NASA Astrophysics Data System (ADS)
DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.
2012-06-01
As the number and diversity of sensing assets available for intelligence, surveillance and reconnaissance (ISR) operations continues to expand, the limited ability of human operators to effectively manage, control and exploit the ISR ensemble is exceeded, leading to reduced operational effectiveness. Automated support both in the processing of voluminous sensor data and in sensor asset control can relieve the burden on human operators and support operation of larger ISR ensembles. In dynamic environments it is essential to react quickly to current information to avoid stale, sub-optimal plans. Our approach is to apply the principles of feedback control to ISR operations, "closing the loop" from the sensor collections through automated processing to ISR asset control. Previous work by the authors demonstrated non-myopic multiple-platform trajectory control using a receding horizon controller in a closed feedback loop with a multiple hypothesis tracker, applied to multi-target search and track simulation scenarios in the ground and space domains. This paper presents extensions in both size and scope of the previous work, demonstrating closed-loop control, involving both platform routing and sensor pointing, of a multisensor, multi-platform ISR ensemble tasked with providing situational awareness and performing search, track and classification of multiple moving ground targets in irregular warfare scenarios. The closed-loop ISR system is fully realized using distributed, asynchronous components that communicate over a network. The closed-loop ISR system has been exercised via a networked simulation test bed against a scenario in the Afghanistan theater implemented using high-fidelity terrain and imagery data. In addition, the system has been applied to space surveillance scenarios requiring tracking of space objects, where current deliberative, manually intensive processes for managing sensor assets are insufficiently responsive. Simulation experiment results are presented.
The algorithm to jointly optimize sensor schedules against search, track, and classify is based on recent work by Papageorgiou and Raykin on risk-based sensor management. It uses a risk-based objective function and attempts to minimize and balance the risks of misclassifying and losing track on an object. It supports the requirement to generate tasking for metric and feature data concurrently and synergistically, and account for both tracking accuracy and object characterization, jointly, in computing reward and cost for optimizing tasking decisions.
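The risk-based objective described above can be sketched as a weighted sum of the competing risks, minimized over candidate tasking options. The weights, probability fields, and selection rule below are assumptions for illustration, not the Papageorgiou-Raykin formulation.

```python
# Hedged sketch of risk-based sensor tasking: score each candidate tasking by
# a weighted combination of misclassification risk and track-loss risk, then
# pick the minimum. Field names and weights are illustrative assumptions.

def tasking_risk(p_misclassify, p_track_loss, w_class=1.0, w_track=1.0):
    """Combined risk of a candidate tasking decision."""
    return w_class * p_misclassify + w_track * p_track_loss

def best_tasking(options):
    """Select the tasking option with minimum combined risk."""
    return min(options, key=lambda o: tasking_risk(o["p_mis"], o["p_loss"]))
```

Varying the weights trades off characterization (feature collection) against tracking accuracy (metric collection), which is the balance the abstract describes.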
Measuring Positions of Objects using Two or More Cameras
NASA Technical Reports Server (NTRS)
Klinko, Steve; Lane, John; Nelson, Christopher
2008-01-01
An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer-aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras.
This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.
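The affine camera model described above can be written as a linear map plus a translation, with no per-point depth division; that is exactly why projected positions are insensitive to small depth changes. The matrix and offset values below are illustrative assumptions.

```python
# Sketch of the affine camera model: p = M * X + t, with M a 2x3 matrix and
# t a 2-vector. Unlike the perspective model there is no division by depth.

def affine_project(point3d, M, t):
    """Project a 3-D point to the image plane under an affine camera."""
    x, y, z = point3d
    return (M[0][0] * x + M[0][1] * y + M[0][2] * z + t[0],
            M[1][0] * x + M[1][1] * y + M[1][2] * z + t[1])
```

With a third column of zeros in `M`, two points differing only in depth project to the same pixel, illustrating the far-field approximation the method relies on.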
Nevzgodina, L V; Kaminskaia, E V; Maksimova, E N; Fatsius, R; Sherrer, K; Shtraukh, V
2000-01-01
Experimental data are presented on the effects of spaceflight factors, space radiation in particular, on the higher plant Wolffia arrhiza, exposed for the first time in the "Bioblock" assembly, with measurements made by physical track detectors of heavy ions (HI). Death of individual Wolffia plants and morphologic anomalies were the basic evaluation criteria. A peculiar feature of this biological object is the possibility of revealing delayed effects 1-2 months after space flight, as Wolffia has a high rate of vegetative reproduction. German investigators identified individual plants affected by HI through microscopic examination of the track detectors. With specially developed software and a coordinate system for superposition of biolayers and track detectors with an accuracy of 1 micron, tracks and even separate sections of individual HI tracks were located in the biological objects. Thereafter, each Wolffia plant hit by HI was examined and the data were compared with other variants. As a result, correlations between Wolffia death rate and morphologic anomalies were determined at different times post flight, and the topography of HI tracks was found. It is hypothesized that the morphological anomalies in Wolffia were caused by direct hits of plant germs by heavy ions or the close passage of particles.
Al-Nawashi, Malek; Al-Hazaimeh, Obaida M; Saraee, Mohamad
2017-01-01
Abnormal activity detection plays a crucial role in surveillance applications, and a surveillance system that can perform robustly in an academic environment has become an urgent need. In this paper, we propose a novel framework for an automatic real-time video-based surveillance system which can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment. To develop our system, we divided the work into three phases: a preprocessing phase, an abnormal human activity detection phase, and a content-based image retrieval phase. For moving-object detection, we used a temporal-differencing algorithm and then located the motion regions using a Gaussian function. Furthermore, a shape model based on the OMEGA equation was used as a filter for the detected objects (i.e., human and non-human). For object activity analysis, we evaluated and analyzed the human activities of the detected objects, classifying them into two groups, normal and abnormal activities, using a support vector machine. The machine then provides an automatic warning in case of abnormal human activities. The system also embeds a method to retrieve the detected object from the database for object recognition and identification using content-based image retrieval. Finally, a software-based simulation using MATLAB was performed, and the results of the conducted experiments showed an excellent surveillance system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment with no human intervention.
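The temporal-differencing step above is the simplest motion detector: mark pixels whose inter-frame change exceeds a threshold. The image format and threshold below are assumptions for illustration; the paper's pipeline additionally smooths the result with a Gaussian function to localize motion regions.

```python
# Illustrative temporal-differencing motion mask. Threshold is an assumption.

def temporal_difference(prev, curr, threshold=25):
    """Binary motion mask: 1 where |curr - prev| > threshold, else 0."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```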
Li, Mengfei; Hansen, Christian; Rose, Georg
2017-09-01
Electromagnetic tracking systems (EMTS) have achieved a high level of acceptance in clinical settings, e.g., to support tracking of medical instruments in image-guided interventions. However, tracking errors caused by movable metallic medical instruments and electronic devices are a critical problem which prevents the wider application of EMTS for clinical applications. We plan to introduce a method to dynamically reduce tracking errors caused by metallic objects in proximity to the magnetic sensor coil of the EMTS. We propose a method using ramp waveform excitation based on modeling the conductive distorter as a resistance-inductance circuit. Additionally, a fast data acquisition method is presented to speed up the refresh rate. With the current approach, the sensor's positioning mean error is estimated to be 3.4, 1.3 and 0.7 mm, corresponding to a distance between the sensor and center of the transmitter coils' array of up to 200, 150 and 100 mm, respectively. The sensor pose error caused by different medical instruments placed in proximity was reduced by the proposed method to a level lower than 0.5 mm in position and [Formula: see text] in orientation. By applying the newly developed fast data acquisition method, we achieved a system refresh rate up to approximately 12.7 frames per second. Our software-based approach can be integrated into existing medical EMTS seamlessly with no change in hardware. It improves the tracking accuracy of clinical EMTS when there is a metallic object placed near the sensor coil and has the potential to improve the safety and outcome of image-guided interventions.
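The resistance-inductance modeling behind the ramp-excitation idea can be illustrated with the textbook response of a series R-L circuit to a voltage ramp v(t) = a*t: the current settles to a linear rise lagging by one time constant, so the distorter's transient can be separated from the steady ramp. The closed-form expression is standard circuit theory; the parameter values are illustrative only and not taken from the paper.

```python
# Series R-L circuit driven by a voltage ramp v(t) = a*t:
#   i(t) = (a/R) * (t - tau * (1 - exp(-t/tau))),  tau = L/R.
# After a few time constants the response is linear with a constant lag tau,
# which is the settled regime a ramp-excitation scheme can exploit.
import math

def ramp_response(t, a, R, L):
    """Current at time t for ramp slope a (V/s), resistance R, inductance L."""
    tau = L / R
    return (a / R) * (t - tau * (1.0 - math.exp(-t / tau)))
```

For small t the response grows like a*t^2/(2L), while for t >> tau it approaches (a/R)*(t - tau), matching the two limiting behaviors of the circuit.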
Communications and tracking expert systems study
NASA Technical Reports Server (NTRS)
Leibfried, T. F.; Feagin, Terry; Overland, David
1987-01-01
The original objectives of the study consisted of five broad areas of investigation: criteria and issues for explanation of communication and tracking system anomaly detection, isolation, and recovery; data storage simplification issues for fault detection expert systems; data selection procedures for decision tree pruning and optimization to enhance the abstraction of pertinent information for clear explanation; criteria for establishing levels of explanation suited to needs; and analysis of expert system interaction and modularization. Progress was made in all areas, but to a lesser extent in the criteria for establishing levels of explanation suited to needs. Among the types of expert systems studied were those related to anomaly or fault detection, isolation, and recovery.
NASA Astrophysics Data System (ADS)
Yoshimoto, Masahiro; Nakano, Toshiyuki; Komatani, Ryosuke; Kawahara, Hiroaki
2017-10-01
Automatic nuclear emulsion readout systems have seen remarkable progress since the original idea was developed almost 40 years ago. After the success of its full application to a large-scale neutrino experiment, OPERA, a much faster readout system, the hyper-track selector (HTS), has been developed. HTS, which has an extremely wide-field objective lens, has reached a scanning speed of 4,700 cm²/h, nearly 100 times faster than the previous system, and therefore strongly promotes many new experimental projects. We describe the concept, specifications, system structure, and achieved performance in this paper.
Arnon, S; Rotman, S; Kopeika, N S
1997-08-20
The basic free-space optical communication system includes at least two satellites. To communicate, the transmitter satellite must track the beacon of the receiver satellite and point the information-carrying optical beam in its direction. Optical tracking and pointing systems for free space suffer from high-amplitude vibration during tracking, caused by background radiation from celestial objects such as the Sun, Moon, Earth, and stars in the tracking field of view, or by mechanical disturbances from satellite internal and external sources. These pointing vibrations increase the bit error rate and jam communication between the two satellites. One way to overcome this problem is to increase the receiver satellite's beacon power; however, this requires increased power consumption and weight, both of which are disadvantageous in satellite development. Considering these facts, we derive a mathematical model of a communication system that optimally adapts the transmitter beam width and transmitted power to the tracking system performance. Based on this model, we investigate the performance of a communication system with a discrete-element optical phased-array transmitter telescope gain. An example of a practical communication system between a Low Earth Orbit satellite and a Geostationary Earth Orbit satellite is presented. The results show that a four-element adaptive transmitter telescope is sufficient to compensate for a doubling of the vibration amplitude. The benefits of the proposed model are lower required transmitter power and improved communication system performance.
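The underlying trade-off can be made concrete with a simple Gaussian-beam model (an illustration, not the paper's phased-array analysis): on-axis gain scales as 1/θ², while the pointing loss for an RMS error a scales as exp(−2a²/θ²), so the received power θ⁻²·exp(−2a²/θ²) is maximized at θ* = √2·a, i.e. the optimal beam width grows in proportion to the vibration amplitude.

```python
import math

def received_power(theta, jitter, p0=1.0):
    """Relative received power for a Gaussian beam of divergence half-angle
    `theta` (rad) with RMS pointing error `jitter` (rad):
    on-axis gain ~ 1/theta^2, pointing loss ~ exp(-2*jitter^2/theta^2).
    A toy model; constants and geometry factors are dropped."""
    return p0 / theta**2 * math.exp(-2.0 * jitter**2 / theta**2)

def optimal_beam_width(jitter):
    """Analytic maximiser of received_power: theta* = sqrt(2) * jitter."""
    return math.sqrt(2.0) * jitter
```

This is why an adaptive transmitter that widens the beam as vibration grows can hold the link with less power than a fixed narrow beam.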
Adaptive learning compressive tracking based on Markov location prediction
NASA Astrophysics Data System (ADS)
Zhou, Xingyu; Fu, Dongmei; Yang, Tao; Shi, Yanan
2017-03-01
Object tracking is an interdisciplinary research topic in image processing, pattern recognition, and computer vision, with theoretical and practical value in video surveillance, virtual reality, and automatic navigation. Compressive tracking (CT) has many advantages, such as efficiency and accuracy. However, CT suffers from tracking drift under object occlusion, abrupt motion and blur, similar nearby objects, and scale changes. We propose Markov object location prediction to obtain the initial position of the object; CT is then used to locate the object accurately, and an adaptive classifier-parameter updating strategy is applied based on the confidence map. At the same time, scale features are extracted at the predicted object location, which handles object scale variations effectively. Experimental results show that the proposed algorithm has better tracking accuracy and robustness than current advanced algorithms and achieves real-time performance.
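A minimal sketch of Markov location prediction, assuming a first-order chain over quantised frame-to-frame displacements; the paper's exact state definition and transition model may differ.

```python
from collections import defaultdict

class MarkovLocationPredictor:
    """First-order Markov predictor over quantised frame-to-frame
    displacements: learns P(next displacement | current displacement)
    from past tracking results and predicts the most likely next one.
    Illustrative reconstruction, not the paper's implementation."""

    def __init__(self, step=4):
        self.step = step                       # quantisation grid (pixels)
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev_disp = None

    def _quantise(self, disp):
        return (round(disp[0] / self.step), round(disp[1] / self.step))

    def update(self, disp):
        """Feed the observed displacement for the current frame."""
        q = self._quantise(disp)
        if self.prev_disp is not None:
            self.counts[self.prev_disp][q] += 1
        self.prev_disp = q

    def predict(self, position):
        """Predict the object's position in the next frame; CT would then
        refine this initial position with its appearance classifier."""
        nxt = self.prev_disp
        if self.prev_disp in self.counts and self.counts[self.prev_disp]:
            nxt = max(self.counts[self.prev_disp],
                      key=self.counts[self.prev_disp].get)
        if nxt is None:
            return position
        return (position[0] + nxt[0] * self.step,
                position[1] + nxt[1] * self.step)
```

The prediction gives the compressive tracker a good starting window, which is how location prediction reduces drift under abrupt motion.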
Design and Efficiency Analysis of Operational Scenarios for Space Situational Awareness Radar System
NASA Astrophysics Data System (ADS)
Choi, E. J.; Cho, S.; Jo, J. H.; Park, J.; Chung, T.; Park, J.; Jeon, H.; Yun, A.; Lee, Y.
In order to perform surveillance and tracking of space objects, optical and radar sensors are the technical components of a space situational awareness system. In particular, a space situational awareness radar system combined with an optical sensor network plays an outstanding role. At present, OWL-Net (Optical Wide Field patrol Network), the only infrastructure for tracking space objects in Korea, is strongly limited by weather and observation time. The development of a radar system capable of continuous operation is therefore becoming an essential element of space situational awareness, and a strategy for its development should be considered. The purpose of this paper is to analyze the efficiency of a radar system for detection and tracking of space objects. The detection capabilities are limited to an altitude of 2,000 km for debris of 1 m² radar cross section (RCS) at radar operating frequencies in the L, S, C, X, and Ku bands. The power budget analysis showed that the maximum detection range of 2,000 km can be achieved with a transmitted power of 900 kW, transmit and receive antenna gains of 40 dB and 43 dB, respectively, a pulse width of 2 ms, and a signal processing gain of 13.3 dB at a frequency of 1.3 GHz. The required signal-to-noise ratio (SNR) was assumed to be 12.6 dB for a probability of detection of 80% at a false-alarm rate of 10⁻⁶. Through this efficiency analysis and trade-off study, the key parameters of the radar system are designed. As a result, this research will provide a guideline for the conceptual design of a space situational awareness system.
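The power-budget reasoning follows the standard monostatic radar range equation, SNR = Pt·Gt·Gr·λ²·σ / ((4π)³·R⁴·k·Ts·B), plus processing gain and losses in dB. The sketch below assumes a matched-filter noise bandwidth B ≈ 1/τ and illustrative defaults for system temperature and losses, so it does not claim to reproduce the abstract's 12.6 dB figure; it mainly demonstrates the R⁴ scaling that drives the trade-off study.

```python
import math

K_BOLTZ = 1.380649e-23  # Boltzmann constant, J/K

def radar_snr_db(pt_w, gt_db, gr_db, freq_hz, rcs_m2, r_m,
                 pulse_s, proc_gain_db=0.0, sys_temp_k=290.0, loss_db=0.0):
    """Single-pulse SNR (dB) from the monostatic radar range equation,
    assuming matched-filter noise bandwidth B ~ 1/pulse width.
    sys_temp_k and loss_db defaults are illustrative assumptions."""
    lam = 299792458.0 / freq_hz                     # wavelength (m)
    gt = 10 ** (gt_db / 10)
    gr = 10 ** (gr_db / 10)
    bandwidth = 1.0 / pulse_s
    snr = (pt_w * gt * gr * lam**2 * rcs_m2) / (
        (4 * math.pi) ** 3 * r_m**4 * K_BOLTZ * sys_temp_k * bandwidth)
    return 10 * math.log10(snr) + proc_gain_db - loss_db
```

Because SNR falls as R⁴, halving the detection range buys about 12 dB of margin, which is why the quoted 2,000/150/100 mm... er, 2,000 km requirement dominates the transmit-power and antenna-gain choices.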
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the upper segment of the transportation canister is moved toward the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft, at left. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-20
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers observe as the SV1-SV2 spacecraft is lifted for weighing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-20
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the SV1-SV2 spacecraft is ready to be weighed. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the upper segment of the transportation canister is lifted to be placed on the top of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-27
CAPE CANAVERAL, Fla. – The enclosed Space Tracking and Surveillance System – Demonstrators, or STSS-Demo, spacecraft moves out of the Astrotech payload processing facility. It is being moved to Cape Canaveral Air Force Station's Launch Pad 17-B. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jack Pfaller
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers maneuver one of the second-row segments of the transportation canister that will be placed around the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the SV1 spacecraft is lowered onto the SV2 for mating. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-20
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the SV1-SV2 spacecraft sits on the rotation stand after weighing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-20
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers begin center of gravity testing, weighing and balancing on the SV1-SV2 spacecraft. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the upper segment of the transportation canister is moved toward the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft, at bottom left. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers place the second row of segments of the transportation canister around the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the SV1 spacecraft is lowered toward the SV2 for mating. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers attach the upper segment of the transportation canister to the lower segments around the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers place the first segments of the transportation canister around the base of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers check the mating of the SV1 spacecraft onto the SV2. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-20
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the SV1-SV2 spacecraft is lifted for weighing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the SV1 and SV2 spacecraft are ready for mating for launch. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. The spacecraft is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft is under a protective cover before being encased in the transportation canister. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-09-25
CAPE CANAVERAL, Fla. – The United Launch Alliance Delta II rocket with Space Tracking and Surveillance System - Demonstrator, or STSS-Demo, spacecraft aboard races into the sky leaving a trail of fire and smoke after liftoff from Launch Pad 17-B at Cape Canaveral Air Force Station. It was launched by NASA for the U.S. Missile Defense Agency. Launch was at 8:20:22 a.m. EDT. The STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. Photo credit: NASA/Jack Pfaller
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers prepare to lift the SV1 and mate it to the SV2 spacecraft for the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. The spacecraft is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
Vision-based overlay of a virtual object into real scene for designing room interior
NASA Astrophysics Data System (ADS)
Harasaki, Shunsuke; Saito, Hideo
2001-10-01
In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real-world scene. The interior simulator is developed as an example AR application of the proposed method. Using it, users can visually simulate the placement of virtual furniture and articles in a living room, so they can easily design the room interior without placing real furniture and articles, viewing the result from many different locations and orientations in real time. In our system, two base images of the real-world space are captured from two different views to define a projective coordinate frame of the 3D object space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real-world space is captured by a hand-held camera while tracking feature points, without metric measurement, for overlaying a virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room scene nearly at video rate (20 frames per second).
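One standard building block for transferring points between views without metric calibration is a planar homography estimated by the direct linear transform (DLT) from point correspondences. The sketch below shows that generic operation, not the authors' exact projective registration.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 planar homography mapping src -> dst
    (>= 4 point correspondences) with the direct linear transform:
    stack two linear constraints per correspondence and take the
    null vector via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply homography H to a 2-D point (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Given tracked feature points in each new frame, such an image-to-image mapping lets a virtual object registered in the base images be re-projected without any 3D position tracking hardware.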
Automated Planar Tracking the Waving Bodies of Multiple Zebrafish Swimming in Shallow Water.
Wang, Shuo Hong; Cheng, Xi En; Qian, Zhi-Ming; Liu, Ye; Chen, Yan Qiu
2016-01-01
Zebrafish (Danio rerio) is one of the most widely used model organisms in collective behavior research. Multi-object tracking with a high-speed camera is currently the most feasible way to accurately measure their motion states for quantitative study of their collective behavior. However, due to difficulties such as their similar appearance, complex body deformation, and frequent occlusions, reliably tracking the body geometry of each individual fish is a big challenge for an automated system. To accomplish this task, we propose a novel fish body model that represents the fish body as a chain of rectangles. In the detection stage, the point of maximum curvature along the fish boundary is detected and set as the fish nose point. In the tracking stage, we first apply a Kalman filter to track the fish head, then use rectangle-chain fitting to fit the fish body, which also serves to check the head tracking results and remove incorrect ones. Finally, a tracklet relinking stage resolves trajectory fragmentation due to occlusion. Experimental results show that the proposed tracking system can accurately track the body geometry of a group of zebrafish even when occlusions occur from time to time.
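The head-tracking step can be sketched with a standard constant-velocity Kalman filter over the head point (x, y); the noise parameters below are illustrative defaults, not the paper's tuning.

```python
import numpy as np

class HeadKalman:
    """Constant-velocity Kalman filter for tracking a fish-head point
    (x, y) across frames. State: [x, y, vx, vy]."""

    def __init__(self, x0, y0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                    # initial uncertainty
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q                       # process noise
        self.R = np.eye(2) * r                       # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

The filter's prediction also gives a search window for the nose-point detector in the next frame, and large innovations flag candidate tracking failures for the rectangle-chain check.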
Automated Planar Tracking the Waving Bodies of Multiple Zebrafish Swimming in Shallow Water
Wang, Shuo Hong; Cheng, Xi En; Qian, Zhi-Ming; Liu, Ye; Chen, Yan Qiu
2016-01-01
Zebrafish (Danio rerio) is one of the most widely used model organisms in collective behavior research. Multi-object tracking with a high-speed camera is currently the most feasible way to accurately measure their motion states for quantitative study of their collective behavior. However, due to difficulties such as their similar appearance, complex body deformation, and frequent occlusions, reliably tracking the body geometry of each individual fish is a big challenge for an automated system. To accomplish this task, we propose a novel fish body model that represents the fish body as a chain of rectangles. In the detection stage, the point of maximum curvature along the fish boundary is detected and set as the fish nose point. In the tracking stage, we first apply a Kalman filter to track the fish head, then use rectangle-chain fitting to fit the fish body, which also serves to check the head tracking results and remove incorrect ones. Finally, a tracklet relinking stage resolves trajectory fragmentation due to occlusion. Experimental results show that the proposed tracking system can accurately track the body geometry of a group of zebrafish even when occlusions occur from time to time. PMID:27128096
Khan, Zulfiqar Hasan; Gu, Irene Yu-Hua
2013-12-01
This paper proposes a novel Bayesian online learning and tracking scheme for video objects on Grassmann manifolds. Although manifold visual object tracking is promising, large and fast nonplanar (out-of-plane) pose changes and long-term partial occlusions of deformable objects in video remain a challenge that limits tracking performance. The proposed method tackles these problems with the following main novelties: 1) online estimation of object appearances on Grassmann manifolds; 2) optimal criterion-based occlusion handling for online updating of object appearances; 3) a nonlinear dynamic model for both the appearance basis matrix and its velocity; and 4) Bayesian formulations, separate for the tracking process and the online learning process, realized by employing two particle filters: one on the manifold for generating appearance particles and the other on the linear space for generating affine box particles. Tracking and online updating are performed in an alternating fashion to mitigate tracking drift. Experiments with the proposed tracker on videos captured by a single dynamic/static camera have shown robust tracking performance, particularly for scenarios in which target objects contain significant nonplanar pose changes and long-term partial occlusions. Comparisons and evaluations against eight existing state-of-the-art or closely related manifold/nonmanifold trackers provide further support for the proposed scheme.
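Comparing appearance subspaces on a Grassmann manifold is typically done through principal angles between subspaces; the geodesic distance is the 2-norm of the principal-angle vector, recovered from an SVD. A minimal sketch of that standard operation (a building block, not the authors' full tracker):

```python
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance between the subspaces spanned by the columns
    of A and B on the Grassmann manifold. Orthonormalise both bases,
    take the SVD of Qa^T Qb: the singular values are the cosines of
    the principal angles."""
    Qa, _ = np.linalg.qr(np.asarray(A, dtype=float))
    Qb, _ = np.linalg.qr(np.asarray(B, dtype=float))
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    angles = np.arccos(np.clip(s, -1.0, 1.0))
    return float(np.linalg.norm(angles))
```

Such a distance lets a particle filter weight candidate appearance subspaces by how far they have drifted from the learned appearance model.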
Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features
NASA Astrophysics Data System (ADS)
Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique
2011-12-01
We propose a new multi-target tracking approach that can reliably track multiple objects even with poor segmentation results caused by noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. To obtain these 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model, and visual reliability measures of its attributes. These reliability measures make it possible to properly weight the contribution of noisy, erroneous, or false data, better maintaining the integrity of the object dynamics model. A new multi-target tracking algorithm then uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using publicly accessible video surveillance benchmarks. It runs in real time, and the results are competitive with other tracking algorithms, with minimal (or no) reconfiguration effort between different videos.
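One simple reading of reliability-weighted fusion is a weighted average of redundant attribute estimates (say, an object's width from the 2D and 3D models), where unreliable measurements contribute proportionally less. The weighting scheme below is an illustrative sketch, not the paper's exact formulation.

```python
def fuse_attribute(estimates):
    """Reliability-weighted fusion of redundant attribute estimates.
    `estimates` is a list of (value, reliability) pairs with
    reliability in [0, 1]. Returns the fused value and the mean
    reliability of the inputs; (None, 0.0) if nothing is reliable."""
    total = sum(r for _, r in estimates)
    if total == 0:
        return None, 0.0
    fused = sum(v * r for v, r in estimates) / total
    return fused, total / len(estimates)
```

With such weighting, a noisy 2D width from a badly segmented blob is dominated by a confident 3D model estimate instead of corrupting the dynamics model.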
Tricarico, Christopher; Peters, Robert; Som, Avik; Javaherian, Kavon
2017-01-01
Background Medication adherence remains a difficult problem to both assess and improve in patients. It is a multifactorial problem that goes beyond the commonly cited reason of forgetfulness. To date, eHealth (also known as mHealth and telehealth) interventions to improve medication adherence have largely been successful in improving adherence. However, interventions to date have used time- and cost-intensive strategies or focused solely on medication reminding, leaving much room for improvement in using a modality as flexible as eHealth. Objective Our objective was to develop and implement a fully automated short message service (SMS)-based medication adherence system, EpxMedTracking, that reminds patients to take their medications, explores reasons for missed doses, and alerts providers to help address problems of medication adherence in real time. Methods EpxMedTracking is a fully automated bidirectional SMS-based messaging system with provider involvement that was developed and implemented through Epharmix, Inc. Researchers analyzed 11 weeks of de-identified data from patients cared for by multiple provider groups in routine community practice for feasibility and functionality. Patients included were those in the care of a provider purchasing the EpxMedTracking tool from Epharmix and were enrolled from a clinic by their providers. The primary outcomes assessed were the rate of engagement with the system, reasons for missing doses, and self-reported medication adherence. Results Of the 25 patients studied over the 11 weeks, 3 never responded and subsequently opted out or were deleted by their provider. No other patients opted out or were deleted during the study period. Across the 11 weeks of the study period, the overall weekly engagement rate was 85.9%. 
There were 109 total reported missed doses including “I forgot” at 33 events (30.3%), “I felt better” at 29 events (26.6%), “out of meds” at 20 events (18.4%), “I felt sick” at 19 events (17.4%), and “other” at 3 events (2.8%). We also noted an increase in self-reported medication adherence in patients using the EpxMedTracking system. Conclusions EpxMedTracking is an effective tool for tracking self-reported medication adherence over time. It uniquely identifies actionable reasons for missing doses for subsequent provider intervention in real time based on patient feedback. Patients enrolled on EpxMedTracking also self-report higher rates of medication adherence over time while on the system. PMID:28506954
Anser EMT: the first open-source electromagnetic tracking platform for image-guided interventions.
Jaeger, Herman Alexander; Franz, Alfred Michael; O'Donoghue, Kilian; Seitel, Alexander; Trauzettel, Fabian; Maier-Hein, Lena; Cantillon-Murphy, Pádraig
2017-06-01
Electromagnetic tracking is the gold standard for instrument tracking and navigation in the clinical setting without line of sight. Whilst clinical platforms exist for interventional bronchoscopy and neurosurgical navigation, the limited flexibility and high costs of electromagnetic tracking (EMT) systems for research investigations militate against a better understanding of the technology's characterisation and limitations. The Anser project provides an open-source implementation for EMT with particular application to image-guided interventions. This work provides implementation schematics for our previously reported EMT system which relies on low-cost acquisition and demodulation techniques using both National Instruments and Arduino hardware alongside MATLAB support code. The system performance is objectively compared to other commercial tracking platforms using the Hummel assessment protocol. Positional accuracy of 1.14 mm and angular rotation accuracy of [Formula: see text] are reported. Like other EMT platforms, Anser is susceptible to tracking errors due to eddy current and ferromagnetic distortion. The system is compatible with commercially available EMT sensors as well as the Open Network Interface for image-guided therapy (OpenIGTLink) for easy communication with visualisation and medical imaging toolkits such as MITK and 3D Slicer. By providing an open-source platform for research investigations, we believe that novel and collaborative approaches can overcome the limitations of current EMT technology.
Asset tracking: what it is and whether it's right for you.
2006-10-01
This Guidance Article examines asset tracking technology, reviewing what it is, how it works, and how it can be applied in the healthcare setting to help hospitals better manage medical equipment. The article also offers guidance to help healthcare facilities determine whether (or when) they should consider investing in asset tracking technology. Asset tracking refers to the ability to detect, identify, and locate assets (infusion pumps, wheelchairs, or just about any other object or device) at any time, as well as to record the physical locations of those assets over time. Though already commonplace in some industries, tracking technology is still relatively new to healthcare. As a result, the systems, the companies that supply them, and even the applications for which they can be used are still evolving. While some healthcare facilities could see almost immediate benefits from implementing an asset tracking system now, others would benefit from waiting a little while for the marketplace to develop further. This article provides information to help hospitals determine which option will be best for them. For facilities that choose to start the system selection process now, we outline factors that should be considered.
Multiple objects tracking in fluorescence microscopy.
Kalaidzidis, Yannis
2009-01-01
Many processes in cell biology are connected to the movement of compact entities: intracellular vesicles and even single molecules. The tracking of individual objects is important for understanding cellular dynamics. Here we describe tracking algorithms which were developed in non-biological fields and successfully applied to object detection and tracking in biological applications. The characteristic features of the different algorithms are compared.
ERIC Educational Resources Information Center
Ferrara, Katrina; Hoffman, James E.; O'Hearn, Kirsten; Landau, Barbara
2016-01-01
The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical…
Thermal bioaerosol cloud tracking with Bayesian classification
NASA Astrophysics Data System (ADS)
Smith, Christian W.; Dupuis, Julia R.; Schundler, Elizabeth C.; Marinelli, William J.
2017-05-01
The development of a wide area, bioaerosol early warning capability employing existing uncooled thermal imaging systems used for persistent perimeter surveillance is discussed. The capability exploits thermal imagers with other available data streams including meteorological data and employs a recursive Bayesian classifier to detect, track, and classify observed thermal objects with attributes consistent with a bioaerosol plume. Target detection is achieved based on similarity to a phenomenological model which predicts the scene-dependent thermal signature of bioaerosol plumes. Change detection in thermal sensor data is combined with local meteorological data to locate targets with the appropriate thermal characteristics. Target motion is tracked utilizing a Kalman filter and a nearly-constant-velocity motion model for cloud state estimation. Track management is performed using a logic-based upkeep system, and data association is accomplished using a combinatorial optimization technique. Bioaerosol threat classification is determined using a recursive Bayesian classifier to quantify the threat probability of each tracked object. The classifier can accept additional inputs from visible imagers, acoustic sensors, and point biological sensors to improve classification confidence. This capability was successfully demonstrated for bioaerosol simulant releases during field testing at Dugway Proving Ground. Standoff detection at a range of 700 m was achieved for as little as 500 g of anthrax simulant. Developmental test results will be reviewed for a range of simulant releases, and future development and transition plans for the bioaerosol early warning platform will be discussed.
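The nearly-constant-velocity Kalman tracker described above can be sketched compactly. The following is an illustrative, simplified sketch, not the authors' implementation: a 2-D position/velocity state, position-only measurements, and hand-picked noise levels `q` and `r` are all assumptions made for the example.

```python
import numpy as np

def make_cv_model(dt, q=1.0, r=1.0):
    # State: [x, y, vx, vy]; nearly-constant-velocity transition.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)  # observe position only
    Q = q * np.eye(4)   # process noise (simplified; assumed diagonal)
    R = r * np.eye(2)   # measurement noise
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    # Predict the cloud state forward one frame.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the associated detection z.
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In a full tracker, each tracked plume would carry its own `(x, P)` pair, predicted every frame and updated only when data association assigns it a detection.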
Next Generation Waste Tracking: Linking Legacy Systems with Modern Networking Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Randy M.; Resseguie, David R.; Shankar, Mallikarjun
2010-01-01
This report describes results from a preliminary analysis to satisfy the Department of Energy (DOE) objective to ensure the safe, secure, efficient packaging and transportation of materials both hazardous and non-hazardous [1, 2]. The DOE Office of Environmental Management (OEM) through Oak Ridge National Laboratory (ORNL) has embarked on a project to further this objective. OEM and ORNL have agreed to develop, demonstrate and make available modern-day cost-effective technologies for characterization, identification, tracking, monitoring and disposal of radioactive waste when transported by, or between, motor, air, rail, and water modes. During the past 8 years ORNL has investigated and deployed Web 2.0 compliant sensors into the transportation segment of the supply chain. ORNL has recently demonstrated operational experience with DOE Oak Ridge Operations Office (ORO) and others in national test beds and applications within this domain of the supply chain. In addition to DOE, these hazardous materials supply chain partners included Federal and State enforcement agencies, international ports, and commercial sector shipping operations in a hazardous/radioactive materials tracking and monitoring program called IntelligentFreight. IntelligentFreight is an ORNL initiative encompassing 5 years of research effort associated with the supply chain. The ongoing ORNL SmartFreight programs include RadSTraM [3], GRadSTraM, Trusted Corridors, SensorPedia [4], SensorNet, Southeastern Transportation Corridor Pilot (SETCP) and Trade Data Exchange [5]. The integration of multiple technologies aimed at safer, more secure conveyance has been investigated, with the core research question focused on testing distinctly different distributed supply chain information sharing systems. ORNL, with support from ORO, has demonstrated capabilities when transporting Environmental Management (EM) waste materials for disposal over an onsite haul road.
ORNL has unified the operations of existing legacy hazardous, radioactive and related informational databases and systems using emerging Web 2.0 technologies. These capabilities were used to interoperate ORNL's waste generating, packaging, transportation and disposal with other DOE ORO waste management contractors. Importantly, the DOE EM objectives were accomplished in a cost-effective manner without altering existing information systems. A path forward is to demonstrate and share these technologies with DOE EM, contractors and stakeholders. This approach will not alter existing DOE assets, i.e. Automated Traffic Management Systems (ATMS), Transportation Tracking and Communications System (TRANSCOM), the Argonne National Laboratory (ANL) demonstrated package tracking system, etc.
Assessment of input-output properties and control of neuroprosthetic hand grasp.
Hines, A E; Owens, N E; Crago, P E
1992-06-01
Three tests have been developed to evaluate rapidly and quantitatively the input-output properties and patient control of neuroprosthetic hand grasp. Each test utilizes a visual pursuit tracking task during which the subject controls the grasp force and grasp opening (position) of the hand. The first test characterizes the static input-output properties of the hand grasp, where the input is a slowly changing patient-generated command signal and the outputs are grasp force and grasp opening. Nonlinearities and inappropriate slopes have been documented in these relationships, and in some instances the need for system retuning has been indicated. For each subject, larger grasp forces were produced when grasping larger objects, and for some subjects the shapes of the relationships also varied with object size. The second test quantifies the ability of the subject to control the hand grasp outputs while tracking steps and ramps. Neuroprosthesis users had rms errors two to three times larger when tracking steps versus ramps, and had rms errors four to five times larger than normals when tracking ramps. The third test provides an estimate of the frequency response of the hand grasp system dynamics, from input and output data collected during a random tracking task. Transfer functions were estimated by spectral analysis after removal of the static input-output nonlinearities measured in the first test. The dynamics had low-pass filter characteristics with 3 dB cutoff frequencies from 1.0 to 1.4 Hz. The tests developed in this study provide a rapid evaluation of both the system and the user. They provide information to 1) help interpret subject performance of functional tasks, 2) evaluate the efficacy of system features such as closed-loop control, and 3) screen the neuroprosthesis to indicate the need for retuning.
NASA Astrophysics Data System (ADS)
Zhang, Menghua; Ma, Xin; Rong, Xuewen; Tian, Xincheng; Li, Yibin
2017-02-01
This paper exploits an error tracking control method for overhead crane systems for which the error trajectories for the trolley and the payload swing can be pre-specified. The proposed method does not require that the initial payload swing angle remain zero, whereas this requirement is usually assumed in conventional methods. The significant feature of the proposed method is its superior control performance as well as its strong robustness over different or uncertain rope lengths, payload masses, desired positions, initial payload swing angles, and external disturbances. Owing to the same attenuation behavior, the desired error trajectory for the trolley does not need to be reset for each traveling distance, which is easy to implement in practical applications. By converting the error tracking overhead crane dynamics to the objective system, we obtain the error tracking control law for arbitrary initial payload swing angles. Lyapunov techniques and LaSalle's invariance theorem are utilized to prove the convergence and stability of the closed-loop system. Simulation and experimental results are illustrated to validate the superior performance of the proposed error tracking control method.
COBE navigation with one-way return-link Doppler in the post-helium-venting phase
NASA Technical Reports Server (NTRS)
Dunham, Joan; Nemesure, M.; Samii, M. V.; Maher, M.; Teles, Jerome; Jackson, J.
1991-01-01
The results of a navigation experiment with one way return link Doppler tracking measurements for operational orbit determination of the Cosmic Background Explorer (COBE) spacecraft are presented. The frequency of the tracking signal for the one way measurements was stabilized with an Ultrastable Oscillator (USO), and the signal was relayed by the Tracking and Data Relay Satellite System (TDRSS). The study achieved three objectives: space qualification of TDRSS noncoherent one way return link Doppler tracking; determination of flight performance of the USO coupled to the second generation TDRSS compatible user transponder; and verification of algorithms for navigation using actual one way tracking data. Orbit determination and the inflight USO performance evaluation results are presented.
Yang, Ehwa; Gwak, Jeonghwan; Jeon, Moongu
2017-01-01
Due to the reasonably acceptable performance of state-of-the-art object detectors, tracking-by-detection is a standard strategy for visual multi-object tracking (MOT). In particular, online MOT is more demanding due to its diverse applications in time-critical situations. A main issue in realizing online MOT is how to associate noisy object detection results on a new frame with previously tracked objects. In this work, we propose a multi-object tracking method called CRF-boosting, which utilizes a hybrid data association method based on online hybrid boosting facilitated by a conditional random field (CRF) for establishing online MOT. For data association, the learned CRF is used to generate reliable low-level tracklets, which are then used as the input of the hybrid boosting. While existing data association methods based on boosting algorithms require training data with ground truth information to improve robustness, CRF-boosting ensures sufficient robustness without such information due to its synergetic cascaded learning procedure. Further, a hierarchical feature association framework is adopted to further improve MOT accuracy. Experimental results on public datasets show that the benefit of the proposed hybrid approach over other competitive MOT systems is noticeable. PMID:28304366
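The core problem named in the abstract, associating noisy detections on a new frame with existing tracks, can be illustrated with a much simpler baseline than CRF-boosting: greedy intersection-over-union (IoU) matching. The box format, threshold, and greedy strategy below are assumptions for the sketch, not details from the paper.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def associate(tracks, detections, min_iou=0.3):
    # Greedy matching: take highest-IoU pairs first; leftover
    # detections are candidates for starting new tracks.
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)),
                   reverse=True)
    matched_t, matched_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score < min_iou:
            break
        if ti not in matched_t and di not in matched_d:
            matches.append((ti, di))
            matched_t.add(ti)
            matched_d.add(di)
    unmatched = [di for di in range(len(detections)) if di not in matched_d]
    return matches, unmatched
```

Methods like the paper's replace this single geometric cue with learned affinities (CRF scores, boosted feature ensembles), but the track/detection bookkeeping has the same shape.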
Improved segmentation of occluded and adjoining vehicles in traffic surveillance videos
NASA Astrophysics Data System (ADS)
Juneja, Medha; Grover, Priyanka
2013-12-01
Occlusion in image processing refers to concealment of any part of an object, or the whole object, from the view of an observer. Real-time videos captured by static cameras on roads often encounter overlapping and, hence, occlusion of vehicles. Occlusion in traffic surveillance videos usually occurs when an object being tracked is hidden by another object. This makes it difficult for object detection algorithms to distinguish all the vehicles efficiently. Morphological operations also tend to join vehicles in close proximity, resulting in the formation of a single bounding box around more than one vehicle. Such problems lead to errors in further video processing, such as counting of vehicles in a video. The proposed system presents an efficient moving object detection and tracking approach to reduce such errors. The paper uses a successive frame subtraction technique for detection of moving objects. Further, it implements the watershed algorithm to segment overlapped and adjoining vehicles. The segmentation results have been improved by the use of noise removal and morphological operations.
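The successive frame subtraction step described above can be sketched in plain Python/NumPy. This is an illustrative baseline rather than the paper's implementation; the threshold value and 4-connectivity choice are assumptions, and the watershed refinement stage is omitted.

```python
import numpy as np
from collections import deque

def moving_regions(prev, curr, thresh=25):
    """Successive frame subtraction: threshold |curr - prev| and return
    one bounding box (x1, y1, x2, y2) per 4-connected changed region."""
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32)) > thresh
    seen = np.zeros_like(diff, dtype=bool)
    boxes = []
    h, w = diff.shape
    for y in range(h):
        for x in range(w):
            if diff[y, x] and not seen[y, x]:
                # BFS over the connected region of changed pixels.
                q = deque([(y, x)])
                seen[y, x] = True
                ys, xs = [y], [x]
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and diff[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                            ys.append(ny)
                            xs.append(nx)
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

When two vehicles overlap, this naive labeling merges them into one box, which is exactly the failure mode the paper's watershed stage is meant to split apart.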
Modeling the lateral load distribution for multiple concrete crossties and fastening systems.
DOT National Transportation Integrated Search
2017-01-31
The objective of this project was to further investigate the performance of concrete crosstie and : fastening system under vertical and lateral wheel load using finite element analysis, and explore : possible improvement for current track design stan...
NASA Astrophysics Data System (ADS)
Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.
2016-09-01
Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC), which enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of resident space objects (RSOs). The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.
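The abstract does not give the SSMCMC equations, but the Markov chain Monte Carlo machinery underneath it can be illustrated with a generic random-walk Metropolis-Hastings sampler. The one-dimensional target density, step size, and seed below are assumptions for the sketch, not the paper's method.

```python
import math
import random

def metropolis_hastings(log_prob, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: draw samples from the
    unnormalized density exp(log_prob(x)) via accept/reject moves."""
    rng = random.Random(seed)
    x, lp = x0, log_prob(x0)
    samples = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)        # propose a local move
        lp_cand = log_prob(cand)
        # Accept with probability min(1, exp(lp_cand - lp)).
        if math.log(rng.random() or 1e-300) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples
```

SSMCMC applies the same accept/reject idea to far richer states (sets of object tracks and association hypotheses) with proposals "smartly" biased toward plausible associations, which is what keeps the full FISST posterior tractable without moment approximations.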
Evidence against a speed limit in multiple-object tracking.
Franconeri, S L; Lin, J Y; Pylyshyn, Z W; Fisher, B; Enns, J T
2008-08-01
Everyday tasks often require us to keep track of multiple objects in dynamic scenes. Past studies show that tracking becomes more difficult as objects move faster. In the present study, we show that this trade-off may not be due to increased speed itself but may, instead, be due to the increased crowding that usually accompanies increases in speed. Here, we isolate changes in speed from variations in crowding, by projecting a tracking display either onto a small area at the center of a hemispheric projection dome or onto the entire dome. Use of the larger display increased retinal image size and object speed by a factor of 4 but did not increase interobject crowding. Results showed that tracking accuracy was equally good in the large-display condition, even when the objects traveled far into the visual periphery. Accuracy was also not reduced when we tested object speeds that limited performance in the small-display condition. These results, along with a reinterpretation of past studies, suggest that we might be able to track multiple moving objects as fast as we can a single moving object, once the effect of object crowding is eliminated.
Monte-Carlo Simulation for Accuracy Assessment of a Single Camera Navigation System
NASA Astrophysics Data System (ADS)
Bethmann, F.; Luhmann, T.
2012-07-01
The paper describes a simulation-based optimization of an optical tracking system that is used as a 6DOF navigation system for neurosurgery. Compared to classical systems used in clinical navigation, the presented system has two unique properties: firstly, the system will be miniaturized and integrated into an operating microscope for neurosurgery; secondly, due to miniaturization a single camera approach has been designed. Single camera techniques for 6DOF measurements show a special sensitivity to weak geometric configurations between camera and object. In addition, the achievable accuracy potential depends significantly on the geometric properties of the tracked objects (locators). Besides the quality and stability of the targets used on the locator, their geometric configuration is of major importance. In the following, the development and investigation of a simulation program is presented which allows for the assessment and optimization of the system with respect to accuracy. Different system parameters can be altered, as well as different scenarios representing the operational use of the system. Measurement deviations are estimated based on the Monte Carlo method. Practical measurements validate the correctness of the numerical simulation results.
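Monte Carlo estimation of measurement deviations, as used above, works by repeatedly perturbing the inputs and observing the spread of the derived output. A minimal sketch, with an assumed derived quantity (distance between two tracked points) and assumed noise levels standing in for the paper's full camera model:

```python
import numpy as np

def monte_carlo_deviation(f, nominal_inputs, sigmas, n_trials=10000, seed=0):
    """Estimate the mean and spread of a derived measurement f(inputs)
    when each input is perturbed by zero-mean Gaussian noise."""
    rng = np.random.default_rng(seed)
    nominal = np.asarray(nominal_inputs, dtype=float)
    sig = np.asarray(sigmas, dtype=float)
    outs = np.empty(n_trials)
    for i in range(n_trials):
        outs[i] = f(nominal + rng.normal(0.0, sig))
    return outs.mean(), outs.std()

# Hypothetical derived quantity: distance between two tracked 2-D points
# packed as (x1, y1, x2, y2).
def distance(v):
    return np.hypot(v[2] - v[0], v[3] - v[1])
```

In the paper's setting, `f` would be the full single-camera 6DOF resection for a given locator geometry, so the simulated spread directly exposes weak geometric configurations.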
NASA Astrophysics Data System (ADS)
Ladd, D.; Reeves, R.; Rumi, E.; Trethewey, M.; Fortescue, M.; Appleby, G.; Wilkinson, M.; Sherwood, R.; Ash, A.; Cooper, C.; Rayfield, P.
The Science and Technology Facilities Council (STFC), Control Loop Concepts Limited (CL2), Natural Environment Research Council (NERC) and Defence Science and Technology Laboratory (DSTL) have recently participated in a campaign of satellite observations, with both radar and optical sensors, in order to demonstrate an initial network concept that enhances the value of coordinated observations. STFC and CL2 have developed a Space Surveillance and Tracking (SST) server/client architecture to slave one sensor to another. The concept was originated to enable the Chilbolton radar (an S-band radar on a 25 m diameter fully-steerable dish antenna called CASTR, the Chilbolton Advanced Satellite Tracking Radar), which does not have an auto-track function, to follow an object based on position data streamed from another cueing sensor. The original motivation for this was to enable tracking during re-entry of ATV-5, a highly manoeuvrable ISS re-supply vessel. The architecture has been designed to be extensible and allows the interfacing of both optical and radar sensors which may be geographically separated. Connectivity between the sensors is TCP/IP over the internet. The data transferred between the sensors is translated into an Earth-centred frame of reference to accommodate the difference in location, and time-stamping and filtering are applied to cope with latency. The server can accept connections from multiple clients, and the operator can switch between the different clients. This architecture is inherently robust and will enable graceful degradation should parts of the system be unavailable. A demonstration was conducted in 2016 whereby a small telescope connected to an agile mount (an EO tracker known as COATS, the Chilbolton Optical Advanced Tracking System), located 50 m away from the radar at Chilbolton, autonomously tracked several objects and fed the look angle data into a client. CASTR, slaved to COATS through the server, followed and successfully detected the objects.
In 2017, the baseline was extended to 135 km by developing a client for the SLR (satellite laser ranger) telescope at the Space Geodesy Facility, Herstmonceux. Trials have already demonstrated that CASTR can accurately track the object using the position data being fed from the SLR.
Object classification for obstacle avoidance
NASA Astrophysics Data System (ADS)
Regensburger, Uwe; Graefe, Volker
1991-03-01
Object recognition is necessary for any mobile robot operating autonomously in the real world. This paper discusses an object classifier based on a 2-D object model. Obstacle candidates are tracked and analyzed; false alarms generated by the object detector are recognized and rejected. The methods have been implemented on a multi-processor system and tested in real-world experiments. They work reliably under favorable conditions, but problems sometimes occur, e.g., when objects contain many features (edges) or move in front of a structured background.
Brain Activation during Spatial Updating and Attentive Tracking of Moving Targets
ERIC Educational Resources Information Center
Jahn, Georg; Wendt, Julia; Lotze, Martin; Papenmeier, Frank; Huff, Markus
2012-01-01
Keeping aware of the locations of objects while one is moving requires the updating of spatial representations. As long as the objects are visible, attentional tracking is sufficient, but knowing where objects out of view went in relation to one's own body involves an updating of spatial working memory. Here, multiple object tracking was employed…
Algorithms for detection of objects in image sequences captured from an airborne imaging system
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak
1995-01-01
This research was initiated as a part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding pilots in detecting various obstacles on the runway during critical sections of flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model-based tracking algorithm. The position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified, and possible solutions to build a practical working system are investigated.
NASA Technical Reports Server (NTRS)
Wilson, D. J.; Krause, M. C.; Craven, C. E.; Edwards, B. B.; Coffey, E. W.; Huang, C. C.; Jetton, J. L.; Morrison, L. K.
1974-01-01
A program plan for system evaluation of the two-dimensional Scanning Laser Doppler System (SLDS) is presented. In order to meet system evaluation and optimization objectives the following tests were conducted: (1) noise tests; (2) wind tests; (3) blower flowfield tests; (4) single unit (1-D) flyby tests; and (5) dual unit (2-D) flyby tests. Test results are reported. The final phase of the program included logistics preparation, equipment interface checkouts, and data processing. It is concluded that the SLDS is capable of accurately tracking aircraft wake vortices from small or large aircraft, and in any type of weather.
Measurement of electromagnetic tracking error in a navigated breast surgery setup
NASA Astrophysics Data System (ADS)
Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor
2016-03-01
PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization also only takes a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
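Computing positional and rotational tracking error against an optical ground truth, as above, reduces to comparing two poses. A minimal sketch, assuming positions are given as 3-vectors and orientations as 3x3 rotation matrices (not the authors' code):

```python
import numpy as np

def position_error(p_em, p_opt):
    # Euclidean distance between EM-tracked and optically tracked positions.
    return float(np.linalg.norm(np.asarray(p_em, float) - np.asarray(p_opt, float)))

def rotation_error_deg(R_em, R_opt):
    """Angle (degrees) of the relative rotation between two 3x3
    rotation matrices: the rotational tracking error."""
    R_rel = np.asarray(R_em, float) @ np.asarray(R_opt, float).T
    # trace(R) = 1 + 2*cos(theta); clip guards against rounding error.
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```

Sampling these two error values across the workspace, with and without nearby surgical equipment, is what maps out the safe working area described in the abstract.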
A coarse-to-fine kernel matching approach for mean-shift based visual tracking
NASA Astrophysics Data System (ADS)
Liangfu, L.; Zuren, F.; Weidong, C.; Ming, J.
2009-03-01
Mean shift is an efficient pattern matching algorithm. It is widely used in visual tracking since it does not need to search the whole image space. It employs a gradient optimization method to reduce the time of feature matching and achieve rapid object localization, and uses the Bhattacharyya coefficient as the similarity measure between the object template and candidate templates. This paper presents a mean shift algorithm based on a coarse-to-fine search for the best kernel matching, addressing object tracking with large inter-frame motion. If the object's regions in two consecutive frames are far apart and do not overlap in image space, the traditional mean shift method only reaches a local optimum by iterating within the old object window, so the true position cannot be obtained and tracking fails. Our proposed algorithm first uses a similarity measure function to roughly locate the moving object, then applies mean shift iterations to reach the accurate local optimum, which successfully realizes tracking of objects with large motion. Experimental results show good performance in accuracy and speed when compared with the background-weighted histogram algorithm in the literature.
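The Bhattacharyya coefficient used as the similarity measure between object and candidate templates is straightforward to compute from normalized histograms. A minimal grayscale sketch (the bin count and the use of intensity rather than color histograms are assumptions for the example):

```python
import numpy as np

def intensity_histogram(patch, bins=16):
    # Normalized grayscale histogram of an image patch (the object model).
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def bhattacharyya(p, q):
    # Similarity between target model p and candidate model q;
    # 1.0 means identical distributions, 0.0 means disjoint.
    return float(np.sum(np.sqrt(p * q)))
```

In a coarse-to-fine scheme like the paper's, a coarse stage would scan widely spaced candidate windows for the highest coefficient, and the fine stage would then run mean shift iterations from that rough location.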
Object-oriented model-driven control
NASA Technical Reports Server (NTRS)
Drysdale, A.; Mcroberts, M.; Sager, J.; Wheeler, R.
1994-01-01
A monitoring and control subsystem architecture has been developed that capitalizes on the use of model-driven monitoring and predictive control, knowledge-based data representation, and artificial reasoning in an operator support mode. We have developed an object-oriented model of a Controlled Ecological Life Support System (CELSS). The model, based on the NASA Kennedy Space Center CELSS breadboard data, tracks carbon, hydrogen, oxygen, carbon dioxide, and water. It estimates and tracks resource-related parameters such as mass, energy, and manpower, and measurements such as growing area required for balance. We are developing an interface with the breadboard systems that is compatible with artificial reasoning. Initial work is being done on the use of expert systems and user interface development. This paper presents an approach to defining universally applicable CELSS monitor and control issues, and implementing appropriate monitor and control capability for a particular instance: the KSC CELSS Breadboard Facility.
MRI-based dynamic tracking of an untethered ferromagnetic microcapsule navigating in liquid
NASA Astrophysics Data System (ADS)
Dahmen, Christian; Belharet, Karim; Folio, David; Ferreira, Antoine; Fatikow, Sergej
2016-04-01
The propulsion of ferromagnetic objects by means of MRI gradients is a promising approach to enable new forms of therapy. In this work, necessary techniques are presented to make this approach work. This includes path planning algorithms working on MRI data, ferromagnetic artifact imaging and a tracking algorithm which delivers position feedback for the ferromagnetic objects, and a propulsion sequence to enable interleaved magnetic propulsion and imaging. Using a dedicated software environment, integrating path-planning methods and real-time tracking, a clinical MRI system is adapted to provide this new functionality for controlled interventional targeted therapeutic applications. Through MRI-based sensing analysis, this article aims to propose a framework to plan a robust pathway to enhance the navigation ability to reach deep locations in the human body. The proposed approaches are validated with different experiments.
Multi-Complementary Model for Long-Term Tracking
Zhang, Deng; Zhang, Junchang; Xia, Chenyang
2018-01-01
In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutter, motion blur, low-illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines a correlation filter model and a color model to greatly improve tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared to traditional target detection algorithms. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmark 2013 (OTB-13) and Object Tracking Benchmark 2015 (OTB-15) datasets. On the OTB-13 dataset, our algorithm improves by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE and SRE, respectively, in contrast to the classic LCT (Long-term Correlation Tracking) algorithm. On the OTB-15 dataset, when compared with the LCT algorithm, our algorithm achieves 10.4%, 12.5%, and 16.1% improvement on the success plots of OPE, TRE, and SRE, respectively. At the same time, it should be emphasized that, thanks to the high computational efficiency of the color model and of the object detection model with its efficient data structures, and the speed advantage of the correlation filters, our tracking algorithm still achieves good tracking speed. PMID:29425170
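The abstract does not give the exact rule by which the correlation-filter, colour, and contour models are merged, but a weighted sum of their per-location response maps is a common baseline for combining complementary trackers. The sketch below is illustrative only (function name and weights are assumptions, not taken from the paper):

```python
def fuse_responses(maps, weights):
    """Fuse per-location response maps from complementary models by a
    weighted sum and return the best-scoring location (illustrative
    scheme; the paper's actual merge rule is not given in the abstract)."""
    h, w = len(maps[0]), len(maps[0][0])
    best, best_pos = float("-inf"), None
    for i in range(h):
        for j in range(w):
            s = sum(wt * m[i][j] for wt, m in zip(weights, maps))
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos

# Correlation-filter, colour, and contour responses over a 3x3 search window
cf      = [[0.1, 0.2, 0.1], [0.2, 0.9, 0.3], [0.1, 0.2, 0.1]]
colour  = [[0.2, 0.1, 0.1], [0.1, 0.7, 0.8], [0.1, 0.1, 0.2]]
contour = [[0.0, 0.1, 0.0], [0.1, 0.6, 0.2], [0.0, 0.1, 0.0]]
print(fuse_responses([cf, colour, contour], [0.5, 0.3, 0.2]))  # (1, 1)
```

Because each model peaks on different failure modes (the colour map here is distracted toward (1, 2)), the fused peak is more stable than any single model's.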
NASA Technical Reports Server (NTRS)
Barnum, P. W.; Renzetti, N. A.; Textor, G. P.; Kelly, L. B.
1973-01-01
The Tracking and Data System (TDS) Support for the Mariner Mars 1971 Mission final report contains the deep space tracking and data acquisition activities in support of orbital operations. During this period a major NASA objective was accomplished: completion of the 180th revolution and 90th day of data gathering with the spacecraft about the planet Mars. Included are presentations of the TDS flight support pass chronology data for each of the Deep Space Stations used, and performance evaluation for the Deep Space Network Telemetry, Tracking, Command, and Monitor Systems. With the loss of Mariner 8 at launch, Mariner 9 assumed the mission plan of Mariner 8, which included the TV mapping cycles and a 12-hr orbital period. The mission plan was modified as a result of a severe dust storm on the surface of Mars, which delayed the start of the TV mapping cycles. Thus, the end of primary mission date was extended to complete the TV mapping cycles.
3D Visual Tracking of an Articulated Robot in Precision Automated Tasks
Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.
2017-01-01
The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to achieve both requirements together by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) combined with the CHT algorithm to find the proper colour of the tracked target. The target was attached to the end-effector of a six-degree-of-freedom (DOF) robot performing a pick-and-place task. Two eye-to-hand cameras, each with an image-averaging filter, are used to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix.
The results obtained show the applicability of the proposed approach to track the moving robot with an overall tracking error of 0.25 mm, as well as the effectiveness of the CRCHT technique, which saves up to 60% of the overall time required for image processing. PMID:28067860
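As a rough illustration of the Circular Hough Transform that the tracker builds on, the following pure-Python sketch votes for circle centres given edge points and a known radius. It is a simplification: the paper's enhanced CHT, Delta E colour selection, and depth-update formula are not reproduced here.

```python
import math
from collections import defaultdict

def hough_circle_centers(edge_points, radius, n_angles=72):
    """Classic Circular Hough Transform with a known radius: every edge
    point votes for all candidate centres lying `radius` away from it;
    the centre collecting the most votes wins."""
    acc = defaultdict(int)
    for x, y in edge_points:
        for k in range(n_angles):
            t = 2.0 * math.pi * k / n_angles
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            acc[(cx, cy)] += 1
    return max(acc, key=acc.get)

# Synthetic edge points on a circle of radius 10 centred at (30, 40)
pts = [(round(30 + 10 * math.cos(a)), round(40 + 10 * math.sin(a)))
       for a in (2.0 * math.pi * i / 36 for i in range(36))]
print(hough_circle_centers(pts, 10))  # a centre at or next to (30, 40)
```

Restricting the edge points to a small search window around the predicted position, as the CRCHT technique does, shrinks both loops and is where the reported processing-time savings come from.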
Multiple objects tracking with HOGs matching in circular windows
NASA Astrophysics Data System (ADS)
Miramontes-Jaramillo, Daniel; Kober, Vitaly; Díaz-Ramírez, Víctor H.
2014-09-01
In recent years, with the development of new technologies such as smart TVs, Kinect, Google Glass and Oculus Rift, tracking applications have become very important. When tracking uses a matching algorithm, a good prediction algorithm is required to reduce the search area for each tracked object as well as the processing time. In this work, we analyze the performance of different tracking algorithms based on prediction and matching for real-time tracking of multiple objects. The matching algorithm utilizes histograms of oriented gradients (HOGs). It carries out matching in circular windows, and possesses rotation invariance and tolerance to viewpoint and scale changes. The proposed algorithm is implemented on a personal computer with a GPU, and its performance is analyzed in terms of processing time in real scenarios. Such an implementation takes advantage of current technologies and helps to process video sequences in real time while tracking several objects at the same time.
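A bare-bones version of the HOG descriptor underlying this matching can be sketched as follows; the bin count and normalisation are illustrative choices, not taken from the paper (which additionally matches the descriptors inside circular windows for rotation invariance):

```python
import math

def hog(patch, n_bins=8):
    """Bare-bones histogram of oriented gradients for a grayscale patch
    (a list of rows): central-difference gradients, orientations binned
    over [0, pi), magnitude-weighted votes, L1 normalisation."""
    hist = [0.0] * n_bins
    for i in range(1, len(patch) - 1):
        for j in range(1, len(patch[0]) - 1):
            gx = patch[i][j + 1] - patch[i][j - 1]
            gy = patch[i + 1][j] - patch[i - 1][j]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi
            hist[int(ang / math.pi * n_bins) % n_bins] += mag
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# A vertical edge yields purely horizontal gradients (orientation bin 0)
patch = [[0, 0, 10, 10]] * 4
h = hog(patch)
print(h.index(max(h)))  # 0
```

Matching then reduces to comparing two such normalised histograms (e.g. by histogram intersection or correlation) between the template and each candidate window.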
Gaia-GBOT asteroid finding programme (gbot.obspm.fr)
NASA Astrophysics Data System (ADS)
Bouquillon, Sébastien; Altmann, Martin; Taris, Francois; Barache, Christophe; Carlucci, Teddy; Tanga, Paolo; Thuillot, William; Marchant, Jon; Steele, Iain; Lister, Tim; Berthier, Jerome; Carry, Benoit; David, Pedro; Cellino, Alberto; Hestroffer, Daniel J.; Andrei, Alexandre Humberto; Smart, Ricky
2016-10-01
The Ground Based Optical Tracking group (GBOT) consists of about ten scientists involved in ESA's Gaia mission. Its main task is the optical tracking of the Gaia satellite itself [1]. This novel tracking method, in addition to the standard radiometric ones, is necessary to ensure that the Gaia mission goal in terms of astrometric precision is reached for all objects. The optical tracking is based on daily observations performed throughout the mission using the optical CCDs of ESO's VST in Chile, of the Liverpool Telescope in La Palma and of the two LCOGT Faulkes Telescopes in Hawaii and Australia. Each night, GBOT attempts to obtain a sequence of frames covering a 20 min total period close to Gaia's meridian transit time. In each sequence, Gaia is seen as a faint moving object (Rmag ~ 21, speed > 1"/min) and its daily astrometric accuracy has to be better than 0.02" to meet the Gaia mission requirements. The GBOT Astrometric Reduction Pipeline (GARP) [2] has been specifically developed to reach this precision. More recently, a secondary task has been assigned to GBOT, which consists of detecting and analysing Solar System Objects (SSOs) serendipitously recorded in the GBOT data. Indeed, since Gaia oscillates around the Sun-Earth L2 point, the fields of GBOT observations are near the Ecliptic and roughly located opposite the Sun, which is advantageous for SSO observations and studies. In particular, these SSO data can potentially be very useful in determining absolute magnitudes, with important applications to the scientific exploitation of the WISE and Gaia missions. For these reasons, an automatic SSO detection system has been created to identify moving objects in GBOT observation sequences. Since the beginning of 2015, this SSO detection system, added to GARP to perform high-precision astrometry for SSOs, has been fully operational. To date, around 9000 asteroids have been detected.
The mean delay between the time of observation and the submission of the SSO reduction results to the MPC is less than 12 hours, allowing rapid follow-up of new objects. [1] Altmann et al. 2014, SPIE, 9149. [2] Bouquillon et al. 2014, SPIE, 9152.
Fast track lunar NTR systems assessment for the First Lunar Outpost and its evolvability to Mars
NASA Technical Reports Server (NTRS)
Borowski, Stanley K.; Alexander, Stephen W.
1992-01-01
The objectives of the 'fast track' lunar Nuclear Thermal Rocket (NTR) analysis are to quantify necessary engine/stage characteristics to perform NASA's 'First Lunar Outpost' scenario and to assess the potential for evolution to Mars mission applications. By developing NTR/stage technologies for use in NASA's 'First Lunar Outpost' scenario, NASA will make a major down payment on the key components needed for the follow-on Mars Space Transportation System. A faster, cheaper approach to overall lunar/Mars exploration is expected.
2009-07-23
CAPE CANAVERAL, Fla. – In the Astrotech payload processing facility in Titusville, Fla., technicians check equipment on the STSS Demonstrator SV-1 spacecraft after it was lowered onto the orbital insertion system. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Tim Jacobs (Approved for Public Release 09-MDA-4800 [30 July 09])
Multiple Object Tracking Reveals Object-Based Grouping Interference in Children with ASD
ERIC Educational Resources Information Center
Van der Hallen, Ruth; Evers, Kris; de-Wit, Lee; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2018-01-01
The multiple object tracking (MOT) paradigm has proven its value in targeting a number of aspects of visual cognition. This study used MOT to investigate the effect of object-based grouping, both in children with and without autism spectrum disorder (ASD). A modified MOT task was administered to both groups, who had to track and distinguish four…
Tracking Object Existence From an Autonomous Patrol Vehicle
NASA Technical Reports Server (NTRS)
Wolf, Michael; Scharenbroich, Lucas
2011-01-01
An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. 
A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the uncertainty arising from errors in sensors and upstream processes. However, traditional target tracking methods typically assume a stationary detection volume of interest, whereas in this case, one must make adjustments for being able to see only a small portion of the region of interest and understand when an alert situation has occurred. To track object existence inside and outside the vehicle's sensor range, a probability of existence was defined for each hypothesized object, and this value was updated at every time step in a Bayesian manner based on expected characteristics of the sensor and object and whether that object has been detected in the most recent time step. Then, this value feeds into a sequential probability ratio test (SPRT) to determine the status of the object (suspected, confirmed, or deleted). Alerts are sent upon selected status transitions. Additionally, in order to track objects that move in and out of sensor range and update the probability of existence appropriately, a variable probability of detection has been defined and the hypothesis probability equations have been re-derived to accommodate this change. Unsupervised object tracking is a pervasive issue in automated perception systems. This work could apply to any mobile platform (ground vehicle, sea vessel, air vehicle, or orbiter) that intermittently revisits regions of interest and needs to determine whether anything interesting has changed.
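The abstract does not spell out the Bayesian update for the probability of existence, but a minimal sketch under a standard detection model conveys the idea. The parameters p_d (probability of detection given existence) and p_fa (false-alarm probability) are hypothetical names introduced here for illustration, not the paper's notation:

```python
def update_existence(p_exist, detected, p_d, p_fa):
    """One Bayesian update of the probability that a hypothesised object
    exists, given whether it was detected on the current pass.
    p_d  -- probability of detecting the object if it exists and is in range
    p_fa -- probability of a false-alarm detection if it does not exist
    (a standard detection model; the paper's exact equations are not
    given in the abstract)."""
    if detected:
        num = p_d * p_exist
        den = p_d * p_exist + p_fa * (1.0 - p_exist)
    else:
        num = (1.0 - p_d) * p_exist
        den = (1.0 - p_d) * p_exist + (1.0 - p_fa) * (1.0 - p_exist)
    return num / den

# Detection history over four revisits to the same location
p = 0.5
for seen in (True, True, False, True):
    p = update_existence(p, seen, p_d=0.8, p_fa=0.1)
print(round(p, 3))  # 0.991
```

In the article, this running value then feeds a sequential probability ratio test that moves the object between suspected, confirmed, and deleted states.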
NASA Astrophysics Data System (ADS)
Mundhenk, T. Nathan; Ni, Kang-Yu; Chen, Yang; Kim, Kyungnam; Owechko, Yuri
2012-01-01
An aerial multiple-camera tracking paradigm needs not only to spot unknown targets and track them, but also to handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system, which is designed to spot unknown targets, track them, segment the useful features and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion, allowing it to find targets in motion even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound from which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features. These can be placed into a myriad of different fingerprints. To keep data transmission and storage size low for camera handoff of targets, we try several different simple techniques. These include the Histogram, Spatiogram and Single Gaussian Model. These are tested by simulating a very large number of target losses in six videos, over an interval of 1000 frames each, from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints: how long a fingerprint remains good when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us whether a fingerprint method has better accuracy over longer periods.
In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model compared with the null hypothesis of <20%. Additionally, the performance for fingerprints stays well above the null hypothesis for as much as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to view point and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
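A minimal sketch of a single Gaussian appearance fingerprint might look like the following. The per-channel (diagonal-covariance) statistics and the matching score used here are illustrative simplifications, not the paper's exact formulation:

```python
def gaussian_fingerprint(pixels):
    """Compact appearance signature: per-channel mean and variance of an
    object's pixels (a diagonal-covariance single Gaussian)."""
    n = len(pixels)
    dims = len(pixels[0])
    mean = [sum(p[d] for p in pixels) / n for d in range(dims)]
    var = [sum((p[d] - mean[d]) ** 2 for p in pixels) / n for d in range(dims)]
    return mean, var

def match_score(fp_a, fp_b):
    """Dissimilarity of two fingerprints (lower = more alike): squared mean
    difference normalised by the pooled variance per channel."""
    (ma, va), (mb, vb) = fp_a, fp_b
    return sum((ma[d] - mb[d]) ** 2 / (va[d] + vb[d] + 1e-9)
               for d in range(len(ma)))

# Reacquisition: a candidate object is matched against stored fingerprints
red_car = [(200, 30, 40), (210, 25, 35), (190, 35, 45)]
blue_car = [(30, 40, 200), (25, 45, 190), (35, 35, 210)]
fp_red = gaussian_fingerprint(red_car)
assert match_score(fp_red, gaussian_fingerprint(red_car)) < \
       match_score(fp_red, gaussian_fingerprint(blue_car))
```

The appeal noted in the abstract is visible here: the fingerprint is a handful of numbers regardless of object size or viewpoint, so it is cheap to store between appearances and to transmit for camera handoff.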
NASA Astrophysics Data System (ADS)
Blasch, Erik; Pham, Khanh D.; Shen, Dan; Chen, Genshe
2018-05-01
The dynamic data-driven applications systems (DDDAS) paradigm is meant to inject measurements into the execution model for enhanced system performance. One area of interest in DDDAS is space situation awareness (SSA). For SSA, data are collected about the space environment to determine object motions, environments, and model updates. Dynamic coupling between the data and models enhances the capabilities of each system by complementing models with data for system control, execution, and sensor management. The paper overviews some recent developments in SSA made possible by DDDAS techniques, including object detection, resident space object tracking, atmospheric models for enhanced sensing, cyber protection, and information management.
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.
1998-07-01
An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.
Stereo vision tracking of multiple objects in complex indoor environments.
Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro
2010-01-01
This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information related to each object in the robot's environment; it then classifies the building elements (ceiling, walls, columns and so on) against the rest of the items in the robot's surroundings. All objects in the robot's surroundings, both dynamic and static, are considered obstacles, except the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of the speed and position of detected obstacles. Performance of the final system has been tested against state-of-the-art proposals; the test results validate the authors' proposal. The designed algorithms and procedures provide a solution for those applications where similar multimodal data structures are found.
Object tracking via background subtraction for monitoring illegal activity in crossroad
NASA Astrophysics Data System (ADS)
Ghimire, Deepak; Jeong, Sunghwan; Park, Sang Hyun; Lee, Joonwhoan
2016-07-01
In the field of intelligent transportation systems, a great number of vision-based techniques have been proposed to prevent pedestrians from being hit by vehicles. This paper presents a system that can perform pedestrian and vehicle detection and monitor illegal activity at zebra crossings. At a zebra crossing, according to the traffic light status, to fully avoid a collision a driver or pedestrian should be warned early if they make any illegal moves. In this research, we first detect the status of the pedestrian traffic light and monitor the crossroad for vehicle and pedestrian movements. Background-subtraction-based object detection and tracking is performed to detect pedestrians and vehicles at the crossroad. Shadow removal, blob segmentation, trajectory analysis, etc. are used to improve the object detection and classification performance. We demonstrate the experiment on several video sequences recorded at different times and in different environments, such as daytime and nighttime, and sunny and rainy conditions. Our experimental results show that such a simple and efficient technique can be used successfully as a traffic surveillance system to prevent accidents at zebra crossings.
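The core of background-subtraction detection can be sketched in a few lines: maintain a running-average background model and threshold the per-pixel difference against the current frame. The parameters alpha and thresh below are illustrative; the paper's shadow removal and blob segmentation stages are omitted:

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model: bg <- (1 - alpha) * bg + alpha * frame."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(br, fr)]
            for br, fr in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Mark pixels that differ from the background by more than `thresh`."""
    return [[abs(f - b) > thresh for b, f in zip(br, fr)]
            for br, fr in zip(bg, frame)]

# A static 4x4 scene, then a bright object enters at two pixels
bg = [[10.0] * 4 for _ in range(4)]
frame = [row[:] for row in bg]
frame[1][1] = frame[1][2] = 200.0
mask = foreground_mask(bg, frame)
print(sum(v for row in mask for v in row))  # 2 foreground pixels
bg = update_background(bg, frame)  # slowly absorb lasting changes into the background
```

The slow update rate is what lets the model tolerate gradual lighting changes (day to night) while still flagging pedestrians and vehicles as foreground.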
Time-Resolved CubeSat Photometry with a Low Cost Electro-Optics System
NASA Astrophysics Data System (ADS)
Gasdia, F.; Barjatya, A.; Bilardi, S.
2016-09-01
Once the orbits of small debris or CubeSats are determined, optical rate-track follow-up observations can provide information for characterization or identification of these objects. Using the Celestron 11" RASA telescope and an inexpensive CMOS machine vision camera, we have obtained time-series photometry from dozens of passes of small satellites and CubeSats over sites in Florida and Massachusetts. The fast readout time of the CMOS detector allows temporally resolved sampling of glints from small wire antennae and structural facets of rapidly tumbling objects. Because the shape of most CubeSats is known, these light curves can be used in a mission support function for small satellite operators to diagnose or verify the proper functioning of an attitude control system or deployed antenna or instrument. We call this telescope system and the accompanying analysis tools OSCOM for Optical tracking and Spectral characterization of CubeSats for Operational Missions. We introduce the capability of OSCOM for space object characterization, and present photometric observations demonstrating the potential of high frame rate small satellite photometry.
Wang, Baofeng; Qi, Zhiquan; Chen, Sizhong; Liu, Zhaodu; Ma, Guocheng
2017-01-01
Vision-based vehicle detection is an important issue for advanced driver assistance systems. In this paper, we present an improved multi-vehicle detection and tracking method using a cascade Adaboost classifier and an Adaptive Kalman Filter (AKF) with target identity awareness. A cascade Adaboost classifier using Haar-like features was built for vehicle detection, followed by a more comprehensive verification process which refines the vehicle hypothesis in terms of both location and dimension. In vehicle tracking, each vehicle was tracked with an independent identity by an Adaptive Kalman Filter in collaboration with a data association approach. The AKF adaptively adjusts the measurement and process noise covariances through on-line stochastic modelling to compensate for dynamics changes. The data association correctly assigns detections to tracks using the global nearest neighbour (GNN) algorithm while considering local validation. During tracking, temporal-context-based track management was proposed to decide whether to initiate, maintain or terminate the tracks of different objects, thus suppressing sparse false alarms and compensating for temporary detection failures. Finally, the proposed method was tested on various challenging real roads, and the experimental results showed that vehicle detection performance was greatly improved, with higher accuracy and robustness.
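For illustration, one predict/update cycle of a plain (non-adaptive, 1-D) constant-velocity Kalman filter is sketched below. The paper's AKF additionally adapts the noise covariances on-line and tracks 2-D image positions; this minimal version keeps q and r fixed to show only the predict/update structure:

```python
def kalman_step(state, P, z, dt=1.0, q=1e-3, r=1.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.
    state = (position, velocity); P = (p00, p01, p10, p11) covariance;
    z = measured position. q, r are process/measurement noise levels
    (fixed here, whereas the paper's AKF adapts them on-line)."""
    x, v = state
    p00, p01, p10, p11 = P
    # Predict with the constant-velocity motion model: P <- F P F^T + Q
    x, v = x + dt * v, v
    n00 = p00 + dt * (p01 + p10) + dt * dt * p11 + q
    n01 = p01 + dt * p11
    n10 = p10 + dt * p11
    n11 = p11 + q
    # Update with the position measurement
    s = n00 + r                  # innovation covariance
    k0, k1 = n00 / s, n10 / s    # Kalman gain
    y = z - x                    # innovation
    x, v = x + k0 * y, v + k1 * y
    P = ((1 - k0) * n00, (1 - k0) * n01,
         n10 - k1 * n00, n11 - k1 * n01)
    return (x, v), P

# Track a vehicle moving at a constant 2 px/frame from position measurements
state, P = (0.0, 0.0), (1.0, 0.0, 0.0, 1.0)
for t in range(1, 21):
    state, P = kalman_step(state, P, z=2.0 * t)
print(round(state[1], 2))  # estimated velocity, close to 2
```

Running one such filter per track, with GNN association deciding which detection updates which filter, gives the identity-aware multi-vehicle tracking described above.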
Sahar, Liora; Faler, Guy; Hristov, Emil; Hughes, Susan; Lee, Leslie; Westnedge, Caroline; Erickson, Benjamin; Nichols, Barbara
2015-01-01
Objective To bridge gaps identified during the 2009 H1N1 influenza pandemic by developing a system that provides public health departments improved capability to manage and track medical countermeasures at the state and local levels and to report their inventory levels to the Centers for Disease Control and Prevention (CDC). Materials and Methods The CDC Countermeasure Tracking Systems (CTS) program designed and implemented the Inventory Management and Tracking System (IMATS) to manage, track, and report medical countermeasure inventories at the state and local levels. IMATS was designed by CDC in collaboration with state and local public health departments to ensure a “user-centered design approach.” A survey was completed to assess functionality and user satisfaction. Results IMATS was deployed in September 2011 and is provided at no cost to public health departments. Many state and local public health departments nationwide have adopted IMATS and use it to track countermeasure inventories during public health emergencies and daily operations. Discussion A successful response to public health emergencies requires efficient, accurate reporting of countermeasure inventory levels. IMATS is designed to support both emergency operations and everyday activities. Future improvements to the system include integrating barcoding technology and streamlining user access. To maintain system readiness, we continue to collect user feedback, improve technology, and enhance its functionality. Conclusion IMATS satisfies the need for a system for monitoring and reporting health departments’ countermeasure quantities so that decision makers are better informed. The “user-centered design approach” was successful, as evident by the many public health departments that adopted IMATS. PMID:26392843
2009-08-19
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers remove a cover from around the mated SV1 and SV2 spacecraft before center of gravity testing, weighing and balancing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-19
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the mated SV1 and SV2 spacecraft are largely uncovered before center of gravity testing, weighing and balancing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the upper segment of the transportation canister is lowered toward the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft. It will be installed onto the lower segments already in place. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-19
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers remove covers around the mated SV1 and SV2 spacecraft before center of gravity testing, weighing and balancing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-19
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the mated SV1 and SV2 spacecraft are on a rotation stand for center of gravity testing, weighing and balancing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-20
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., an overhead crane with a scale is being attached to the SV1-SV2 spacecraft, which will be weighed. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-19
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers check the SV1-SV2 spacecraft that will undergo center of gravity testing, weighing and balancing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-19
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the mated SV1 and SV2 spacecraft are being prepared for center of gravity testing, weighing and balancing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-22
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the upper segment of the transportation canister is lowered over the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft. It will be installed onto the lower segments already in place. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett
2009-08-20
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers observe as the SV1-SV2 spacecraft is lowered again onto the rotation stand after weighing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., a crane moves the SV1 spacecraft, which will be mated with the SV2 at right. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. The spacecraft is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., a crane moves the SV1 spacecraft toward the SV2 at right. The two spacecraft, which will be mated, are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-09-12
CAPE CANAVERAL, Fla. – The two halves of the fairing are moved into the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station in Florida. The two-part fairing will be placed around the Space Tracking and Surveillance System – Demonstrator spacecraft for protection during launch. STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-4934 (09-22-09) Photo credit: NASA/Cory Huston
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers help guide the movement of the SV1 spacecraft as it is moved toward the SV2 at right. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers help guide the movement of the SV1 spacecraft as it is moved toward the SV2 behind it. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-19
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., a canister and protective cover are being prepared for placement around the SV1-SV2 spacecraft. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers observe as the SV1 spacecraft is lowered onto the SV2 for mating. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., a worker checks the mating of the SV1 spacecraft onto the SV2. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-20
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., an overhead crane with a scale is being moved to attach to the SV1-SV2 spacecraft, which will be weighed. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-19
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the mated SV1 and SV2 spacecraft are placed on a rotation stand for center of gravity testing, weighing and balancing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
Introduction and Progress of APOSOS Project
NASA Astrophysics Data System (ADS)
Zhao, You; Gao, P. Q.; Shen, Ming; Chaudhry, Maqbool A.; Guo, Xiaozhong; Teng, D. P.; Yang, Datao; Yu, Huanhuan; Zhao, Zhe
The Asia-Pacific Ground-Based Optical Satellite Observation System (APOSOS) project is a joint undertaking of the member states of the Asia-Pacific Space Cooperation Organization (APSCO). Its aim is to develop a regional, and eventually global, satellite tracking network composed primarily of optical trackers. The system will be used to track objects of interest and space debris, both to safeguard spacecraft launch missions and to protect operational satellites. The system will benefit from the geographical distribution of APSCO members and from multi-national funding and technical cooperation; APOSOS therefore has the potential to observe all satellites orbiting Earth with high precision at relatively low cost. This paper presents an introduction to the APOSOS project and its progress and current status, including: System Requirements Definition, System Main Mission, System Goal, System Design, Services and Clients, Organization Framework of the Observation Center, Major Functions of the Observation Center, Establishment of the Observation Plan, Format Standard for Exchanging Data, Data Policy, and Implementation Schedule. APOSOS will build a unified surveillance network from the observational facilities of the member states involved, exploiting the wide geographical distribution of multiple countries. It will be operated under the coordination of the APSCO observation mission management department. (1) APOSOS should conduct observation missions for specific satellites, space debris, or other space objects of interest, based on the requirements of member states. It should fulfill the basic requirements of satellite observation and tracking missions, and should also have the potential to detect small debris in support of collision-avoidance planning, protecting members' high-value space assets.
(2) For particular applications, APOSOS should support long-term tracking of specific space objects of interest, with data processing and analysis capabilities, so as to provide conjunction assessment, collision-probability calculation, and avoidance planning for space assets. (3) APOSOS should be capable of publishing information and sharing data among member states, handling users' requests for data and managing the data at different access levels. (4) APOSOS should be capable of providing services such as technical consultation, training, and science popularization.
Brockhoff, Alisa; Huff, Markus
2016-10-01
Multiple object tracking (MOT) plays a fundamental role in processing and interpreting dynamic environments. Regarding the type of information utilized by the observer, recent studies reported evidence for the use of object features in an automatic, low-level manner. By introducing a novel paradigm that allowed us to combine tracking with a noninterfering top-down task, we tested whether a voluntary component can regulate the deployment of attention to task-relevant features in a selective manner. In four experiments we found conclusive evidence for a task-driven selection mechanism that guides attention during tracking: The observers were able to ignore or prioritize distinct objects. They marked the distinct (cued) object (target/distractor) more or less often than other objects of the same type (targets/distractors)-but only when they had received an identification task that required them to actively process object features (cues) during tracking. These effects are discussed with regard to existing theoretical approaches to attentive tracking, gaze-cue usability as well as attentional readiness, a term that originally stems from research on attention capture and visual search. Our findings indicate that existing theories of MOT need to be adjusted to allow for flexible top-down, voluntary processing during tracking.
NASA Technical Reports Server (NTRS)
Elrod, B.; Kapoor, A.; Folta, David C.; Liu, K.
1991-01-01
Use of the Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) was proposed as an alternative to the Global Positioning System (GPS) for supporting the Earth Observing System (EOS) mission. The results are presented of EOS navigation performance evaluation with respect to TONS-based orbit, time, and frequency determination (OD/TD/FD). Two TONS modes are considered: one uses scheduled TDRSS forward link service to derive one-way Doppler tracking data for OD/FD support (TONS-I); the other uses an unscheduled navigation beacon service (proposed for Advanced TDRSS) to obtain pseudorange and Doppler data for OD/TD/FD support (TONS-II). Key objectives of the analysis were to evaluate nominal performance and potential sensitivities, such as suboptimal tracking geometry, tracking contact scheduling, and modeling parameter selection. OD/TD/FD performance predictions are presented based on covariance and simulation analyses. EOS navigation scenarios and the contributions of principal error sources impacting performance are also described. The results indicate that a TONS mode can be configured to meet current and proposed EOS position accuracy requirements of 100 and 50 m, respectively.
Ouyang, Yuzhe; Shan, Kai; Bui, Francis Minhthang
2016-08-01
To understand the utilization of clinical resources and improve the efficiency of healthcare, it is often necessary to accurately locate patients and doctors in a healthcare facility. However, existing tracking methods, such as GPS, Wi-Fi and RFID, have technological drawbacks or impose significant costs, thus limiting their applications in many clinical environments, especially those with indoor enclosures. This paper proposes a low-cost and flexible tracking system that is well suited for operating in an indoor environment. Based on readily available RF transceivers and microcontrollers, our wearable sensor system can facilitate locating users (e.g., patients or doctors) or objects (e.g., medical devices) in a building. The strategic construction of the sensor system, along with a suitably designed tracking algorithm, together provide for reliability and dispatch in localization performance. For demonstration purposes, several simplified experiments, with different configurations of the system, are implemented in two testing rooms to assess the baseline performance. From the obtained results, our system exhibits immense promise in acquiring a user location and corresponding time-stamp, with high accuracy and rapid response. This capability is conducive to both short- and long-term data analytics, which are crucial for improving healthcare management.
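As a hedged sketch of the kind of processing such a system might use (the path-loss parameters and anchor layout below are hypothetical, not taken from the paper), received signal strength can be mapped to distance with a log-distance path-loss model and fused into a position fix by linearized least squares:

```python
import numpy as np


def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimated distance in metres.
    rssi_at_1m and path_loss_exp are assumed calibration values."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))


def trilaterate(anchors, distances):
    """Linearized least-squares 2-D position fix from >= 3 anchors.
    Subtracting the first anchor's circle equation cancels the
    quadratic terms, leaving a linear system A p = b."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = anchors[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With anchors at `[[0, 0], [10, 0], [0, 10]]` and distances measured from a point at `(3, 4)`, `trilaterate` recovers that point; in practice the RSSI-derived distances are noisy, which is why an overdetermined least-squares fit (more than three anchors) is preferred.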
The Deep Space Network [tracking and communication functions and facilities]
NASA Technical Reports Server (NTRS)
1974-01-01
The objectives, functions, and organization of the Deep Space Network are summarized. The Deep Space Instrumentation Facility, the Ground Communications Facility, and the Network Control System are described.
Nonstationary EO/IR Clutter Suppression and Dim Object Tracking
NASA Astrophysics Data System (ADS)
Tartakovsky, A.; Brown, A.; Brown, J.
2010-09-01
We develop and evaluate the performance of advanced algorithms which provide significantly improved capabilities for automated detection and tracking of ballistic and flying dim objects in the presence of highly structured intense clutter. Applications include ballistic missile early warning, midcourse tracking, trajectory prediction, and resident space object detection and tracking. The set of algorithms include, in particular, adaptive spatiotemporal clutter estimation-suppression and nonlinear filtering-based multiple-object track-before-detect. These algorithms are suitable for integration into geostationary, highly elliptical, or low earth orbit scanning or staring sensor suites, and are based on data-driven processing that adapts to real-world clutter backgrounds, including celestial, earth limb, or terrestrial clutter. In many scenarios of interest, e.g., for highly elliptic and, especially, low earth orbits, the resulting clutter is highly nonstationary, providing a significant challenge for clutter suppression to or below sensor noise levels, which is essential for dim object detection and tracking. We demonstrate the success of the developed algorithms using semi-synthetic and real data. In particular, our algorithms are shown to be capable of detecting and tracking point objects with signal-to-clutter levels down to 1/1000 and signal-to-noise levels down to 1/4.
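A minimal sketch of the adaptive temporal clutter-suppression idea (illustrative only; the paper's algorithms are far more sophisticated and handle nonstationary clutter): an exponential moving average tracks the slowly varying structured background at each pixel, and the residual frames are what a track-before-detect stage would then integrate over time.

```python
import numpy as np


def suppress_clutter(frames, alpha=0.1):
    """Temporal clutter suppression by background subtraction (sketch).
    An exponential moving average estimates the structured background
    per pixel; subtracting it leaves sensor noise plus any moving dim
    object. alpha is an assumed adaptation rate, not a tuned value."""
    frames = np.asarray(frames, dtype=float)
    background = frames[0].copy()        # initialize from the first frame
    residuals = []
    for frame in frames:
        residuals.append(frame - background)
        background = (1.0 - alpha) * background + alpha * frame
    return np.stack(residuals)
```

On a static scene the residuals collapse toward zero, so a transient brightening of a single pixel stands out in the residual even when it is far below the clutter level in the raw frame, which is the premise of the signal-to-clutter figures quoted in the abstract.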
NASA Astrophysics Data System (ADS)
Li, Chengcheng; Li, Yuefeng; Wang, Guanglin
2017-07-01
The work presented in this paper addresses the tracking problem for uncertain continuous nonlinear systems with external disturbances. The objective is to obtain a model that uses a reference-based output feedback tracking control law. The control scheme is based on neural networks and a linear difference inclusion (LDI) model, and a parallel distributed compensation (PDC) structure and H∞ performance criterion are used to attenuate external disturbances. The stability of the whole closed-loop model is investigated using the well-known quadratic Lyapunov function. The key principles of the proposed approach are as follows: neural networks are first used to approximate nonlinearities, so that a nonlinear system can be represented as a linearised LDI model. An LMI (linear matrix inequality) formulation is then obtained for uncertain and disturbed linear systems. This formulation enables a solution to be obtained through an interior point optimisation method for some nonlinear output tracking control problems. Finally, simulations and comparisons are provided on two practical examples to illustrate the validity and effectiveness of the proposed method.
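The stability argument rests on a quadratic Lyapunov function. As an illustrative sketch (plain NumPy, not the paper's LMI formulation), quadratic stability of a linear subsystem ẋ = Ax can be checked numerically by solving the Lyapunov equation via Kronecker vectorization and testing positive definiteness:

```python
import numpy as np


def solve_lyapunov(A, Q):
    """Solve the continuous Lyapunov equation A.T @ P + P @ A = -Q
    by column-major vectorization:
    (I kron A.T + A.T kron I) vec(P) = -vec(Q)."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(I, A.T) + np.kron(A.T, I)
    vecP = np.linalg.solve(K, -Q.flatten(order="F"))
    return vecP.reshape((n, n), order="F")


def is_quadratically_stable(A):
    """A admits V(x) = x.T P x with dV/dt < 0 along trajectories iff
    the Lyapunov solution for Q = I is symmetric positive definite."""
    A = np.asarray(A, dtype=float)
    P = solve_lyapunov(A, np.eye(A.shape[0]))
    return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2.0) > 0))
```

The LDI/PDC machinery in the paper generalizes this single-matrix test: a common P must work for every vertex matrix of the inclusion, which is what turns the condition into a set of coupled LMIs solved by interior-point methods.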
An open source framework for tracking and state estimation ('Stone Soup')
NASA Astrophysics Data System (ADS)
Thomas, Paul A.; Barr, Jordi; Balaji, Bhashyam; White, Kruger
2017-05-01
The ability to detect and unambiguously follow all moving entities in a state-space is important in multiple domains both in defence (e.g. air surveillance, maritime situational awareness, ground moving target indication) and the civil sphere (e.g. astronomy, biology, epidemiology, dispersion modelling). However, tracking and state estimation researchers and practitioners have difficulties recreating state-of-the-art algorithms in order to benchmark their own work. Furthermore, system developers need to assess which algorithms meet operational requirements objectively and exhaustively rather than intuitively or driven by personal favourites. We have therefore commenced the development of a collaborative initiative to create an open source framework for production, demonstration and evaluation of Tracking and State Estimation algorithms. The initiative will develop a (MIT-licensed) software platform for researchers and practitioners to test, verify and benchmark a variety of multi-sensor and multi-object state estimation algorithms. The initiative is supported by four defence laboratories, who will contribute to the development effort for the framework. The tracking and state estimation community will derive significant benefits from this work, including: access to repositories of verified and validated tracking and state estimation algorithms, a framework for the evaluation of multiple algorithms, standardisation of interfaces and access to challenging data sets.
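By way of illustration only (this is plain NumPy, not the Stone Soup API), the simplest member of the state-estimation family such a framework would host and benchmark is a constant-velocity Kalman filter:

```python
import numpy as np


def kalman_step(x, P, z, dt=1.0, q=0.01, r=1.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.
    x: state [position, velocity]; P: 2x2 covariance; z: position
    measurement. q and r are assumed process/measurement noise levels."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # position-only measurement model
    Q = q * np.eye(2)
    R = np.array([[r]])
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([z]) - H @ x               # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

A benchmarking framework's value lies in standardising exactly the interfaces shown here informally: motion model, measurement model, predict step, and update step, so that particle filters, PHD filters, and multi-hypothesis trackers can be swapped in behind the same contract.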
Borrego, Adrián; Latorre, Jorge; Llorens, Roberto; Alcañiz, Mariano; Noé, Enrique
2016-08-09
Even though virtual reality (VR) is increasingly used in rehabilitation, the implementation of walking navigation in VR still poses a technological challenge for current motion tracking systems. Different metaphors simulate locomotion without involving real gait kinematics, which can affect presence, orientation, spatial memory and cognition, and even performance. All these factors can dissuade their use in rehabilitation. We hypothesize that a marker-based head tracking solution would allow walking in VR with a high sense of presence and without causing sickness. The objectives of this study were to determine the accuracy, the jitter, and the lag of the tracking system, and to compare its elicited sickness and presence with those of a CAVE system. The accuracy and the jitter around the working area at three different heights and the lag of the head tracking system were analyzed. In addition, 47 healthy subjects completed a search task that involved navigation in the walking VR system and in the CAVE system. Navigation was enabled by natural locomotion in the walking VR system and through a specific device in the CAVE system. An HMD was used as the display in the walking VR system. After interacting with each system, subjects rated their sickness on a seven-point scale and their presence in the Slater-Usoh-Steed Questionnaire and a modified version of the Presence Questionnaire. Better performance was registered at higher heights, where accuracy was less than 0.6 cm and the jitter was about 6 mm. The lag of the system was 120 ms. Participants reported that both systems caused similar low levels of sickness (about 2.4 over 7). However, ratings showed that the walking VR system elicited a higher sense of presence than the CAVE system in both the Slater-Usoh-Steed Questionnaire (17.6 ± 0.3 vs 14.6 ± 0.6 over 21, respectively) and the modified Presence Questionnaire (107.4 ± 2.0 vs 93.5 ± 3.2 over 147, respectively).
The marker-based solution provided accurate, robust, and fast head tracking, allowing navigation in the VR system by walking without causing significant sickness and while promoting a higher sense of presence than the CAVE system, thus enabling natural walking in full-scale environments, which can enhance the ecological validity of VR-based rehabilitation applications.
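The three bench metrics reported above can be sketched numerically: accuracy as the RMSE of measured versus true positions, and jitter as the spread of repeated measurements of a stationary marker. This is an illustrative computation, not the authors' evaluation code; the function names and sample values are assumptions.

```python
import numpy as np

def tracking_accuracy(measured, truth):
    """RMSE between measured 3D positions and ground-truth positions (meters)."""
    measured, truth = np.asarray(measured, float), np.asarray(truth, float)
    return float(np.sqrt(np.mean(np.sum((measured - truth) ** 2, axis=1))))

def tracking_jitter(static_samples):
    """Jitter: RMS deviation from the mean position over repeated
    measurements of a stationary marker."""
    s = np.asarray(static_samples, float)
    return float(np.sqrt(np.mean(np.sum((s - s.mean(axis=0)) ** 2, axis=1))))

# A stationary marker measured four times, alternating ~5 mm apart in x:
samples = [[0.000, 0.0, 1.5], [0.005, 0.0, 1.5],
           [0.000, 0.0, 1.5], [0.005, 0.0, 1.5]]
print(tracking_jitter(samples))  # 0.0025 (i.e. 2.5 mm)
```

Lag would be measured separately, as the latency between a physical motion and the corresponding update of the rendered viewpoint.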
Physician tracking in sub-Saharan Africa: current initiatives and opportunities
2014-01-01
Background Physician tracking systems are critical for health workforce planning as well as for activities to ensure quality health care - such as physician regulation, education, and emergency response. However, information on current systems for physician tracking in sub-Saharan Africa is limited. The objective of this study is to provide information on the current state of physician tracking systems in the region, highlighting emerging themes and innovative practices. Methods This study included a review of the literature, an online search for physician licensing systems, and a document review of publicly available physician registration forms for sub-Saharan African countries. Primary data on physician tracking activities was collected as part of the Medical Education Partnership Initiative (MEPI) - through two rounds over two years of annual surveys to 13 medical schools in 12 sub-Saharan countries. Two innovations were identified during two MEPI school site visits in Uganda and Ghana. Results Out of twelve countries, nine had existing frameworks for physician tracking through licensing requirements. Most countries collected basic demographic information: name, address, date of birth, nationality/citizenship, and training institution. Practice information was less frequently collected. The most frequently collected practice fields were specialty/degree and current title/position. Location of employment and name and sector of current employer were less frequently collected. Many medical schools are taking steps to implement graduate tracking systems. We also highlight two innovative practices: mobile technology access to physician registries in Uganda and MDNet, a public-private partnership providing free mobile-to-mobile voice and text messages to all doctors registered with the Ghana Medical Association. 
Conclusion While physician tracking systems vary widely between countries and a number of challenges remain, there appears to be increasing interest in developing these systems and many innovative developments in the area. Opportunities exist to expand these systems in a more coordinated manner that will ultimately lead to better workforce planning and implementation, and better health. PMID:24754965
Physician tracking in sub-Saharan Africa: current initiatives and opportunities.
Chen, Candice; Baird, Sarah; Ssentongo, Katumba; Mehtsun, Sinit; Olapade-Olaopa, Emiola Oluwabunmi; Scott, Jim; Sewankambo, Nelson; Talib, Zohray; Ward-Peterson, Melissa; Mariam, Damen Haile; Rugarabamu, Paschalis
2014-04-23
Physician tracking systems are critical for health workforce planning as well as for activities to ensure quality health care - such as physician regulation, education, and emergency response. However, information on current systems for physician tracking in sub-Saharan Africa is limited. The objective of this study is to provide information on the current state of physician tracking systems in the region, highlighting emerging themes and innovative practices. This study included a review of the literature, an online search for physician licensing systems, and a document review of publicly available physician registration forms for sub-Saharan African countries. Primary data on physician tracking activities was collected as part of the Medical Education Partnership Initiative (MEPI) - through two rounds over two years of annual surveys to 13 medical schools in 12 sub-Saharan countries. Two innovations were identified during two MEPI school site visits in Uganda and Ghana. Out of twelve countries, nine had existing frameworks for physician tracking through licensing requirements. Most countries collected basic demographic information: name, address, date of birth, nationality/citizenship, and training institution. Practice information was less frequently collected. The most frequently collected practice fields were specialty/degree and current title/position. Location of employment and name and sector of current employer were less frequently collected. Many medical schools are taking steps to implement graduate tracking systems. We also highlight two innovative practices: mobile technology access to physician registries in Uganda and MDNet, a public-private partnership providing free mobile-to-mobile voice and text messages to all doctors registered with the Ghana Medical Association. 
While physician tracking systems vary widely between countries and a number of challenges remain, there appears to be increasing interest in developing these systems and many innovative developments in the area. Opportunities exist to expand these systems in a more coordinated manner that will ultimately lead to better workforce planning and implementation, and better health.
DOT National Transportation Integrated Search
2003-11-14
Transit Tracker uses global positioning system (GPS) technology to track how far a bus is along its scheduled route. This document presents the evaluation strategies and objectives, the data collection methodologies, and the results of the evaluation...
A visual tracking method based on deep learning without online model updating
NASA Astrophysics Data System (ADS)
Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei
2018-02-01
The paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, the deep model SSD (Single Shot MultiBox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and the HOG (Histogram of Oriented Gradients) feature are combined to select the tracking object. In the process of tracking, a multi-scale object searching map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging tracking factors such as deformation, scale variation, rotation, illumination variation, and background clutter. Moreover, its overall performance is better than that of the other six tracking methods.
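The feature-combination step, where a color histogram and a HOG feature jointly select the tracked object among detector outputs, might look roughly like the sketch below. This is a simplified stand-in (a grayscale intensity histogram instead of a color histogram, a crude gradient-orientation histogram instead of full HOG, and no SSD network); all names and the blending weight are illustrative assumptions.

```python
import numpy as np

def color_hist(patch, bins=8):
    """Normalized intensity histogram (stand-in for a color histogram)."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def grad_hist(patch, bins=9):
    """Magnitude-weighted orientation histogram of gradients (crude HOG-like)."""
    gy, gx = np.gradient(patch.astype(float))
    ang = np.arctan2(gy, gx) % np.pi
    h, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=np.hypot(gx, gy))
    return h / max(h.sum(), 1e-9)

def similarity(patch, template, alpha=0.5):
    """Blend color and gradient similarity via the Bhattacharyya coefficient."""
    s_c = np.sum(np.sqrt(color_hist(patch) * color_hist(template)))
    s_g = np.sum(np.sqrt(grad_hist(patch) * grad_hist(template)))
    return alpha * s_c + (1 - alpha) * s_g

# Pick, among candidate detections, the one most similar to the template:
rng = np.random.default_rng(0)
template = rng.integers(0, 256, (32, 32))
candidates = [rng.integers(0, 256, (32, 32)), template.copy()]
best = max(range(len(candidates)), key=lambda i: similarity(candidates[i], template))
print(best)  # 1: the exact copy of the template wins
```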
RL-34 ring laser gyro laboratory evaluation for the Deep Space Network antenna application
NASA Technical Reports Server (NTRS)
1991-01-01
The overall results of this laboratory evaluation are quite encouraging. The gyro data is in good agreement with the system's overall pointing performance, which is quite close to the technical objectives for the Deep Space Network (DSN) application. The system can be calibrated to the levels required for millidegree levels of pointing performance, and initialization performance is within the required 0.001 degree objective. The blind target acquisition performance is within a factor of two of the 0.0001 degree objective, limited only by a combination of the slew rate (0.5 deg/sec) and the existing production quantization logic (0.38 arc-sec/pulse). Logic circuitry exists to improve this performance such that it will exceed the objective by 50 percent. Representative data with this circuitry has been provided for illustration. Target tracking performance is about twice the one millidegree objective, with several factors contributing. The first factor is the bias stability of the gyros, which is exceptional, but will limit performance to the 0.001 and 0.002 degree range for long tracking periods. The second contributing factor is the accelerometer contributions when the system is elevated. These degrade performance into the 0.003 to 0.004 degree range, which could be improved upon with some additional changes. Finally, we have provided a set of recommendations to improve performance closer to the technical objectives. These recommendations include gyro, electronics, and system configurational changes that form the basis for additional work to achieve the desired performance. In conclusion, we believe that the RL-34 ring laser gyro-based advanced navigation system demonstrated performance consistent with expectations and technical objectives, and it has the potential for even further enhancement for the DSN application.
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2011-02-01
The ability to attend to multiple objects that move in the visual field is important for many aspects of daily functioning. The attentional capacity for such dynamic tracking, however, is highly limited and undergoes age-related decline. Several aspects of the tracking process can influence performance. Here, we investigated effects of feature-based interference from distractor objects that appear in unattended regions of the visual field with a hemifield-tracking task. Younger and older participants performed an attentional tracking task in one hemifield while distractor objects were concurrently presented in the unattended hemifield. Feature similarity between objects in the attended and unattended hemifields as well as motion speed and the number of to-be-tracked objects were parametrically manipulated. The results show that increasing feature overlap leads to greater interference from the unattended visual field. This effect of feature-based interference was only present in the slow speed condition, indicating that the interference is mainly modulated by perceptual demands. High-performing older adults showed a similar interference effect as younger adults, whereas low-performing older adults showed poor tracking performance overall.
SAHRIS: using the South African Heritage Register to report, track and monitor heritage crime
NASA Astrophysics Data System (ADS)
Smuts, K.
2015-08-01
South Africa has experienced a recent increase in thefts of heritage objects from museums and galleries around the country. While the exact number of incidences is not known, the increase in thefts is nonetheless apparent, and has revealed the weaknesses of the systems currently in place to respond to these crimes. The South African Heritage Resources Information System (SAHRIS) is an integrated, online heritage resources management tool developed by the South African Heritage Resources Agency (SAHRA) in 2011 in terms of Section 39 of the National Heritage Resources Act (NHRA), No. 25 of 1999. The system's combined heritage resources and site and object management functionality has been expanded to provide an integrated, responsive tool for reporting heritage crimes and tracking the progress of the resultant cases. This paper reviews existing legislative frameworks and crime reporting and monitoring systems relevant to fighting heritage crime, and identifies current gaps in those responses. SAHRIS is presented as an innovative tool to combat heritage crime effectively in the South African context by offering a centralised, consolidated platform that provides the various stakeholders involved in reporting heritage crimes and locating and retrieving stolen objects with a means to coordinate their responses to such instances.
Català, Andreu; Rodríguez Martín, Daniel; van der Aa, Nico; Chen, Wei; Rauterberg, Matthias
2013-01-01
Background Freezing of gait (FoG) is one of the most disturbing and least understood symptoms in Parkinson disease (PD). Although the majority of existing assistive systems assume accurate detections of FoG episodes, the detection itself is still an open problem. The specificity of FoG is its dependency on the context of a patient, such as the current location or activity. Knowing the patient's context might improve FoG detection. One of the main technical challenges that needs to be solved in order to start using contextual information for FoG detection is accurate estimation of the patient's position and orientation toward key elements of his or her indoor environment. Objective The objectives of this paper are to (1) present the concept of the monitoring system, based on wearable and ambient sensors, which is designed to detect FoG using the spatial context of the user, (2) establish a set of requirements for the application of position and orientation tracking in FoG detection, (3) evaluate the accuracy of the position estimation for the tracking system, and (4) evaluate two different methods for human orientation estimation. Methods We developed a prototype system to localize humans and track their orientation, as an important prerequisite for a context-based FoG monitoring system. To set up the system for experiments with real PD patients, the accuracy of the position and orientation tracking was assessed under laboratory conditions in 12 participants. To collect the data, the participants were asked to wear a smartphone, with and without known orientation, around the waist, while walking over a predefined path in the marked area captured by two Kinect cameras with non-overlapping fields of view. Results We used the root mean square error (RMSE) as the main performance measure. The vision based position tracking algorithm achieved RMSE = 0.16 m in position estimation for upright standing people. 
The experimental results for the proposed human orientation estimation methods demonstrated the adaptivity and robustness to changes in the smartphone attachment position, when the fusion of both vision and inertial information was used. Conclusions The system achieves satisfactory accuracy on indoor position tracking for the use in the FoG detection application with spatial context. The combination of inertial and vision information has the potential for correct patient heading estimation even when the inertial wearable sensor device is put into an a priori unknown position. PMID:25098265
Approach for counting vehicles in congested traffic flow
NASA Astrophysics Data System (ADS)
Tan, Xiaojun; Li, Jun; Liu, Wei
2005-02-01
More and more image sensors are used in intelligent transportation systems. In practice, occlusion is always a problem when counting vehicles in congested traffic. This paper tries to present an approach to solve the problem. The proposed approach consists of three main procedures. Firstly, a new algorithm of background subtraction is performed. The aim is to segment moving objects from an illumination-variant background. Secondly, object tracking is performed, where the CONDENSATION algorithm is used. This can avoid the problem of matching vehicles in successive frames. Thirdly, an inspecting procedure is executed to count the vehicles. When a bus firstly occludes a car and then the bus moves away a few frames later, the car will appear in the scene. The inspecting procedure should find the "new" car and add it as a tracking object.
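The first of the three procedures, background subtraction against a slowly varying background followed by counting the segmented foreground blobs, could look roughly like the minimal numpy sketch below. The CONDENSATION tracking and the occlusion-inspection stages are not shown, and all names and thresholds are assumptions for illustration.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model, tolerant of slow illumination change."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30):
    """Pixels differing from the background by more than `thresh` are foreground."""
    return np.abs(frame.astype(float) - bg) > thresh

def count_blobs(mask):
    """Count 4-connected foreground blobs (each blob ~ one vehicle candidate)."""
    mask = mask.copy()
    blobs = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j]:
                blobs += 1
                stack = [(i, j)]
                while stack:  # flood-fill this blob so it is counted once
                    y, x = stack.pop()
                    if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] and mask[y, x]:
                        mask[y, x] = False
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return blobs

bg = np.zeros((6, 8))
frame = bg.copy()
frame[1:3, 1:3] = 200   # one "vehicle"
frame[4:6, 5:7] = 200   # another, spatially separated -> two blobs
print(count_blobs(foreground_mask(bg, frame)))  # 2
```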
Image-based systems for space surveillance: from images to collision avoidance
NASA Astrophysics Data System (ADS)
Pyanet, Marine; Martin, Bernard; Fau, Nicolas; Vial, Sophie; Chalte, Chantal; Beraud, Pascal; Fuss, Philippe; Le Goff, Roland
2011-11-01
In many spatial systems, imaging is a core technology to fulfil the mission requirements. Depending on the application, the needs and the constraints are different and imaging systems can offer a large variety of configurations in terms of wavelength, resolution, field-of-view, focal length or sensitivity. Adequate image processing algorithms allow the extraction of the needed information and the interpretation of images. As a prime contractor for many major civil or military projects, Astrium ST is very involved in the proposition, development and realization of new image-based techniques and systems for space-related purposes. Among the different applications, space surveillance is a major stake for the future of space transportation. Indeed, studies show that the number of debris in orbit is growing exponentially and the already existing population of small and medium debris is a concrete threat to operational satellites. This paper presents Astrium ST activities regarding space surveillance for space situational awareness (SSA) and space traffic management (STM). Among other possible SSA architectures, the relevance of a ground-based optical station network is investigated. The objective is to detect and track space debris and maintain an exhaustive and accurate catalogue up-to-date in order to assess collision risk for satellites and space vehicles. The system is composed of different types of optical stations dedicated to specific functions (survey, passive tracking, active tracking), distributed around the globe. To support these investigations, two in-house operational breadboards were implemented and are operated for survey and tracking purposes. This paper focuses on Astrium ST's end-to-end optical-based survey concept. 
For the detection of new debris, a network of wide-field-of-view survey stations is considered: these stations are able to detect small objects, and the associated image processing (detection and tracking) allows a preliminary restitution of their orbits.
Alnæs, Dag; Sneve, Markus Handal; Espeseth, Thomas; Endestad, Tor; van de Pavert, Steven Harry Pieter; Laeng, Bruno
2014-04-01
Attentional effort relates to the allocation of limited-capacity attentional resources to meet current task demands and involves the activation of top-down attentional systems in the brain. Pupillometry is a sensitive measure of this intensity aspect of top-down attentional control. Studies relate pupillary changes in response to cognitive processing to activity in the locus coeruleus (LC), which is the main hub of the brain's noradrenergic system and is thought to modulate the operations of the brain's attentional systems. In the present study, participants performed a visual divided attention task known as multiple object tracking (MOT) while their pupil sizes were recorded by use of an infrared eye tracker and then were tested again with the same paradigm while brain activity was recorded using fMRI. We hypothesized that the individual pupil dilations, as an index of individual differences in mental effort, as originally proposed by Kahneman (1973), would be a better predictor of LC activity than the number of tracked objects during MOT. The current results support our hypothesis, since we observed pupil-related activity in the LC. Moreover, the changes in the pupil correlated with activity in the superior colliculus and the right thalamus, as well as cortical activity in the dorsal attention network, which previous studies have shown to be strongly activated during visual tracking of multiple targets. Follow-up pupillometric analyses of the MOT task in the same individuals also revealed that individual differences in responses to cognitive load can be remarkably stable over a lag of several years. To our knowledge this is the first study using pupil dilations as an index of attentional effort in the MOT task and also relating these to functional changes in the brain that directly implicate the LC-NE system in the allocation of processing resources.
The research on the mean shift algorithm for target tracking
NASA Astrophysics Data System (ADS)
CAO, Honghong
2017-06-01
The traditional mean shift algorithm for target tracking is effective and highly real-time, but it still has some shortcomings: it easily falls into local optima during tracking, it is less effective when the object moves fast, and because the size of the tracking window never changes, it fails when the size of the moving object changes. We therefore propose a new method. We use a particle swarm optimization algorithm to optimize the mean shift algorithm for target tracking, while SIFT (scale-invariant feature transform) and an affine transformation make the size of the tracking window adaptive. Finally, we evaluate the method through comparison experiments. Experimental results indicate that the proposed method can effectively track the object while adapting the size of the tracking window.
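The core mean shift iteration that the method builds on, moving a fixed-size window to the centroid of a back-projection weight image until convergence, can be sketched as follows. The PSO and SIFT/affine window-adaptation extensions are not shown; function and variable names are illustrative.

```python
import numpy as np

def mean_shift(weight, start, win=5, iters=20):
    """Classic mean shift: repeatedly move a fixed-size window to the
    centroid of the back-projection weights under it, until it stops moving."""
    cy, cx = start
    for _ in range(iters):
        y0, y1 = max(cy - win, 0), min(cy + win + 1, weight.shape[0])
        x0, x1 = max(cx - win, 0), min(cx + win + 1, weight.shape[1])
        w = weight[y0:y1, x0:x1]
        if w.sum() == 0:          # no target evidence under the window
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((ys * w).sum() / w.sum()))
        nx = int(round((xs * w).sum() / w.sum()))
        if (ny, nx) == (cy, cx):  # converged
            break
        cy, cx = ny, nx
    return cy, cx

# A blob of high weight centered at (20, 30); start the window nearby:
weight = np.zeros((50, 50))
weight[18:23, 28:33] = 1.0
print(mean_shift(weight, (15, 25)))  # (20, 30)
```

The fixed `win` here is exactly the limitation the paper addresses: if the object grows or shrinks, this window no longer covers it well.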
Ego-Motion and Tracking for Continuous Object Learning: A Brief Survey
2017-09-01
ARL-TR-8167 • SEP 2017. US Army Research Laboratory. Ego-Motion and Tracking for Continuous Object Learning: A Brief Survey, by Jason Owens and Philip Osteen.
Locator-Checker-Scaler Object Tracking Using Spatially Ordered and Weighted Patch Descriptor.
Kim, Han-Ul; Kim, Chang-Su
2017-08-01
In this paper, we propose a simple yet effective object descriptor and a novel tracking algorithm to track a target object accurately. For the object description, we divide the bounding box of a target object into multiple patches and describe them with color and gradient histograms. Then, we determine the foreground weight of each patch to alleviate the impacts of background information in the bounding box. To this end, we perform random walk with restart (RWR) simulation. We then concatenate the weighted patch descriptors to yield the spatially ordered and weighted patch (SOWP) descriptor. For the object tracking, we incorporate the proposed SOWP descriptor into a novel tracking algorithm, which has three components: locator, checker, and scaler (LCS). The locator and the scaler estimate the center location and the size of a target, respectively. The checker determines whether it is safe to adjust the target scale in a current frame. These three components cooperate with one another to achieve robust tracking. Experimental results demonstrate that the proposed LCS tracker achieves excellent performance on recent benchmarks.
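The RWR step that produces the patch foreground weights can be illustrated on a toy patch-similarity graph. This is a generic random-walk-with-restart computation, not the authors' exact formulation; the graph, restart distribution, and damping value are assumptions for illustration.

```python
import numpy as np

def rwr_weights(P, restart, alpha=0.85):
    """Random walk with restart: solve w = alpha * P^T w + (1 - alpha) * r,
    i.e. w = (1 - alpha) * (I - alpha * P^T)^-1 r."""
    n = P.shape[0]
    return (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * P.T, restart)

# Toy graph of 4 patches: patches 0-1 are strongly similar (foreground-like),
# patches 2-3 connect to them only weakly. Rows of P are transition probabilities.
A = np.array([[0.0, 1.0, 0.1, 0.1],
              [1.0, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.0, 1.0],
              [0.1, 0.1, 1.0, 0.0]])
P = A / A.sum(axis=1, keepdims=True)
r = np.array([0.5, 0.5, 0.0, 0.0])   # restart at the seed (center) patches
w = rwr_weights(P, r)
print(w[0] + w[1] > w[2] + w[3])     # True: seeded patches keep most weight
```

The resulting `w` sums to 1 and concentrates on patches reachable from the seeds, which is the intuition behind down-weighting background patches inside the bounding box.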
Object acquisition and tracking for space-based surveillance
NASA Astrophysics Data System (ADS)
1991-11-01
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase 1) and N00014-89-C-0015 (Phase 2). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time dependent, object dependent, and data dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
Object acquisition and tracking for space-based surveillance. Final report, Dec 88-May 90
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-11-27
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase I) and N00014-89-C-0015 (Phase II). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time dependent, object-dependent, and data-dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging
NASA Astrophysics Data System (ADS)
Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak
2017-10-01
Three dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security uses, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video as conventionally performed (such as background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. With regard to previous works that used integral imaging data in such a scenario, the proposed method performs the 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.
Laser Calibration Experiment for Small Objects in Space
NASA Technical Reports Server (NTRS)
Campbell, Jonathan; Ayers, K.; Carreras, R.; Carruth, R.; Freestone, T.; Sharp, J.; Rawleigh, A.; Brewer, J.; Schrock, K.; Bell, L.;
2001-01-01
The Air Force Research Laboratory/Directed Energy Directorate (AFRL/DE) and NASA/Marshall Space Flight Center (MSFC) are looking at a series of joint laser space calibration experiments using the 12 J, 15 Hz High Performance CO2 Ladar Surveillance Sensor (HI-CLASS) system on the 3.67 meter aperture Advanced Electro-Optics System (AEOS). The objectives of these experiments are to provide accurate range and signature measurements of calibration spheres, demonstrate high resolution tracking capability of small objects, and support NASA in technology development and tracking projects. Ancillary benefits include calibrating radar and optical sites, completing satellite conjunction analyses, supporting orbital perturbations analyses, and comparing radar and optical signatures. In the first experiment, a Global Positioning System (GPS)/laser beacon instrumented microsatellite about 25 cm in diameter will be deployed from a Space Shuttle Hitchhiker canister or other suitable launch means. Orbiting in low earth orbit, the microsatellite will pass over AEOS an average of two times per 24-hour period. An onboard orbit propagator will activate the GPS unit and a visible laser beacon at the appropriate times. The HI-CLASS/AEOS system will detect the microsatellite as it rises above the horizon, using GPS-generated acquisition vectors. The visible laser beacon will be used to fine-tune the tracking parameters for continuous ladar data measurements throughout the pass. This operational approach should maximize visibility to the ground-based laser while allowing battery life to be conserved, thus extending the lifetime of the satellite. GPS data will be transmitted to the ground providing independent location information for the microsatellite down to sub-meter accuracies.
Schultz, Elise V; Schultz, Christopher J; Carey, Lawrence D; Cecil, Daniel J; Bateman, Monte
2016-01-01
This study develops a fully automated lightning jump system encompassing objective storm tracking, Geostationary Lightning Mapper proxy data, and the lightning jump algorithm (LJA), which are important elements in the transition of the LJA concept from a research to an operational based algorithm. Storm cluster tracking is based on a product created from the combination of a radar parameter (vertically integrated liquid, VIL), and lightning information (flash rate density). Evaluations showed that the spatial scale of tracked features or storm clusters had a large impact on the lightning jump system performance, where increasing spatial scale size resulted in decreased dynamic range of the system's performance. This framework will also serve as a means to refine the LJA itself to enhance its operational applicability. Parameters within the system are isolated and the system's performance is evaluated with adjustments to parameter sensitivity. The system's performance is evaluated using the probability of detection (POD) and false alarm ratio (FAR) statistics. Of the algorithm parameters tested, sigma-level (metric of lightning jump strength) and flash rate threshold influenced the system's performance the most. Finally, verification methodologies are investigated. It is discovered that minor changes in verification methodology can dramatically impact the evaluation of the lightning jump system.
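The two evaluation statistics used here are standard contingency-table scores. A minimal sketch, with the counts purely illustrative:

```python
def pod_far(hits, misses, false_alarms):
    """Probability of detection and false alarm ratio, the two scores used
    to evaluate the lightning jump system against verification events."""
    pod = hits / (hits + misses) if hits + misses else 0.0
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else 0.0
    return pod, far

print(pod_far(hits=30, misses=10, false_alarms=20))  # (0.75, 0.4)
```

As the abstract notes, where the event counts come from (the verification methodology) can change these numbers substantially even when the algorithm itself is unchanged.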
NASA Technical Reports Server (NTRS)
Schultz, Elise; Schultz, Christopher Joseph; Carey, Lawrence D.; Cecil, Daniel J.; Bateman, Monte
2016-01-01
This study develops a fully automated lightning jump system encompassing objective storm tracking, Geostationary Lightning Mapper proxy data, and the lightning jump algorithm (LJA), which are important elements in the transition of the LJA concept from a research to an operational based algorithm. Storm cluster tracking is based on a product created from the combination of a radar parameter (vertically integrated liquid, VIL), and lightning information (flash rate density). Evaluations showed that the spatial scale of tracked features or storm clusters had a large impact on the lightning jump system performance, where increasing spatial scale size resulted in decreased dynamic range of the system's performance. This framework will also serve as a means to refine the LJA itself to enhance its operational applicability. Parameters within the system are isolated and the system's performance is evaluated with adjustments to parameter sensitivity. The system's performance is evaluated using the probability of detection (POD) and false alarm ratio (FAR) statistics. Of the algorithm parameters tested, sigma-level (metric of lightning jump strength) and flash rate threshold influenced the system's performance the most. Finally, verification methodologies are investigated. It is discovered that minor changes in verification methodology can dramatically impact the evaluation of the lightning jump system.
Attention Modulates Spatial Precision in Multiple-Object Tracking.
Srivastava, Nisheeth; Vul, Ed
2016-01-01
We present a computational model of multiple-object tracking that makes trial-level predictions about the allocation of visual attention and the effect of this allocation on observers' ability to track multiple objects simultaneously. This model follows the intuition that increased attention to a location increases the spatial resolution of its internal representation. Using a combination of empirical and computational experiments, we demonstrate the existence of a tight coupling between cognitive and perceptual resources in this task: Low-level tracking of objects generates bottom-up predictions of error likelihood, and high-level attention allocation selectively reduces error probabilities in attended locations while increasing it at non-attended locations. Whereas earlier models of multiple-object tracking have predicted the big picture relationship between stimulus complexity and response accuracy, our approach makes accurate predictions of both the macro-scale effect of target number and velocity on tracking difficulty and micro-scale variations in difficulty across individual trials and targets arising from the idiosyncratic within-trial interactions of targets and distractors. Copyright © 2016 Cognitive Science Society, Inc.
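The model's core intuition, that attention to a location sharpens the spatial resolution of its internal representation, can be illustrated with a toy simulation in which localization noise scales inversely with allocated attention. The 1/sqrt(a) scaling is an assumption for illustration, not the paper's fitted model.

```python
import numpy as np

def tracking_error(attention, base_sigma=2.0, trials=10000, rng=None):
    """Mean 2D localization error for a target whose representational noise
    shrinks with the attention it receives (sigma / sqrt(attention))."""
    rng = rng or np.random.default_rng(0)
    sigma = base_sigma / np.sqrt(attention)
    err = rng.normal(0, sigma, size=(trials, 2))
    return float(np.mean(np.hypot(err[:, 0], err[:, 1])))

# Attention split unevenly over two tracked targets (weights sum to 1):
e_attended = tracking_error(attention=0.8)
e_neglected = tracking_error(attention=0.2)
print(e_attended < e_neglected)  # True: attention sharpens spatial precision
```

Allocating more attention to the target with the highest predicted error likelihood, as the model does, then trades precision at attended locations against precision everywhere else.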
NASA Technical Reports Server (NTRS)
Krauzlis, R. J.; Stone, L. S.
1999-01-01
The two components of voluntary tracking eye-movements in primates, pursuit and saccades, are generally viewed as relatively independent oculomotor subsystems that move the eyes in different ways using independent visual information. Although saccades have long been known to be guided by visual processes related to perception and cognition, only recently have psychophysical and physiological studies provided compelling evidence that pursuit is also guided by such higher-order visual processes, rather than by the raw retinal stimulus. Pursuit and saccades also do not appear to be entirely independent anatomical systems, but involve overlapping neural mechanisms that might be important for coordinating these two types of eye movement during the tracking of a selected visual object. Given that the recovery of objects from real-world images is inherently ambiguous, guiding both pursuit and saccades with perception could represent an explicit strategy for ensuring that these two motor actions are driven by a single visual interpretation.
Simultaneous Detection and Tracking of Pedestrian from Panoramic Laser Scanning Data
NASA Astrophysics Data System (ADS)
Xiao, Wen; Vallet, Bruno; Schindler, Konrad; Paparoditis, Nicolas
2016-06-01
Pedestrian traffic flow estimation is essential for public place design and construction planning. Traditional data collection by human investigation is tedious, inefficient and expensive. Panoramic laser scanners, e.g. the Velodyne HDL-64E, which scan their surroundings repetitively at high frequency, have been increasingly used for 3D object tracking. In this paper, a simultaneous detection and tracking (SDAT) method is proposed for precise and automatic pedestrian trajectory recovery. First, the dynamic environment is detected using two different methods, Nearest-point and Max-distance. Then, all the points on moving objects are transferred into a space-time (x, y, t) coordinate system. Pedestrian detection and tracking then amounts to assigning the points belonging to pedestrians to continuous trajectories in space-time. We formulate the point assignment task as an energy function which incorporates the point evidence, trajectory number, pedestrian shape and motion. A low-energy trajectory explains the point observations well and has a plausible trend and length. The method inherently filters out points from other moving objects and false detections. The energy function is solved by a two-step optimization process: tracklet detection in a short temporal window, and global tracklet association through the whole time span. Results demonstrate that the proposed method can automatically recover pedestrian trajectories with accurate positions and few false detections and mismatches.
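The association step described in this abstract can be illustrated with a toy sketch: a greedy nearest-neighbour linker that chains per-frame detections into space-time trajectories under a step-length gate. This is an assumed, far simpler stand-in for the paper's energy minimisation; the gate value and data are illustrative.

```python
import numpy as np

def link_tracklets(detections, max_step=1.5):
    """Greedily link per-frame detections (a list of (n, 2) arrays of
    (x, y) points, one per frame) into trajectories. A detection joins
    the nearest existing track if the step length is plausible;
    otherwise it starts a new track."""
    tracks = [[tuple(p)] for p in detections[0]]
    for frame in detections[1:]:
        frame = list(map(tuple, frame))
        for tr in tracks:
            if not frame:
                break
            last = np.array(tr[-1])
            d = [np.hypot(*(np.array(p) - last)) for p in frame]
            j = int(np.argmin(d))
            if d[j] <= max_step:             # gating: plausible step length
                tr.append(frame.pop(j))
        tracks.extend([[p] for p in frame])  # unmatched points seed new tracks
    return tracks

# two pedestrians walking on parallel lines, one detection each per frame
dets = [np.array([[t, 0.0], [t, 5.0]]) for t in range(5)]
tracks = link_tracklets(dets)
```

The gate plays the role of the "plausible trajectory trend and length" term in the paper's energy function; a global optimiser would additionally trade off track count against point evidence.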
Enhanced online convolutional neural networks for object tracking
NASA Astrophysics Data System (ADS)
Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen
2018-04-01
In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and update of the convolution filters directly affect the precision of object tracking. In this paper, a novel object tracking method via an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters by a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker performs better than the other trackers in terms of AUC and precision.
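The k-means++ seeding used here for filter initialization draws each new centre with probability proportional to its squared distance from the nearest centre chosen so far, which spreads the initial filters across the patch distribution. A minimal sketch on flattened image patches; the patch extraction, sizes, and seeds are illustrative, not from the paper.

```python
import numpy as np

def kmeans_pp_init(patches, k, seed=None):
    """k-means++ seeding: pick k centres from `patches` (an (n, d) array),
    each new centre drawn with probability proportional to its squared
    distance from the nearest centre already chosen."""
    rng = np.random.default_rng(seed)
    n = len(patches)
    centers = [patches[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((patches - c) ** 2, axis=1) for c in centers],
                    axis=0)
        centers.append(patches[rng.choice(n, p=d2 / d2.sum())])
    return np.stack(centers)

# seed 4 "filters" from flattened 5x5 patches of a random frame
rng = np.random.default_rng(0)
frame = rng.random((32, 32))
patches = np.stack([frame[i:i + 5, j:j + 5].ravel()
                    for i in range(0, 28, 4) for j in range(0, 28, 4)])
filters = kmeans_pp_init(patches, k=4, seed=1).reshape(4, 5, 5)
```

After seeding, the filters would be refined online, per the abstract, by error back-propagation rather than by running k-means to convergence.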
NASA Astrophysics Data System (ADS)
Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.
2013-09-01
We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. 
We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.
NASA Technical Reports Server (NTRS)
1986-01-01
The objective of the Workshop was to focus on the key technology area for 21st century spacecraft and the programs needed to facilitate technology development and validation. Topics addressed include: spacecraft systems; system development; structures and materials; thermal control; electrical power; telemetry, tracking, and control; data management; propulsion; and attitude control.
Adaptive particle filter for robust visual tracking
NASA Astrophysics Data System (ADS)
Dai, Jianghua; Yu, Shengsheng; Sun, Weiping; Chen, Xiaoping; Xiang, Jinhai
2009-10-01
Object tracking plays a key role in the field of computer vision. The particle filter has been widely used for visual tracking under nonlinear and/or non-Gaussian circumstances. In a standard particle filter, the state transition model for predicting the next location of the tracked object assumes the object motion is invariant, which cannot well approximate the varying dynamics of motion changes. In addition, the state estimate calculated as the mean of all the weighted particles is coarse or inaccurate due to various noise disturbances. Both of these factors may degrade tracking performance greatly. In this work, an adaptive particle filter (APF) with a velocity-updating based transition model (VTM) and an adaptive state estimate approach (ASEA) is proposed to improve object tracking. In the APF, the motion velocity embedded in the state transition model is updated continuously by a recursive equation, and the state estimate is obtained adaptively according to the state posterior distribution. The experimental results show that the APF can increase tracking accuracy and efficiency in complex environments.
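The velocity-updating idea can be shown with a 1-D toy particle filter: the transition model carries a velocity term that is re-estimated each step from consecutive state estimates. This is a sketch under assumed noise parameters and a simple exponential-smoothing velocity update, not the authors' exact equations.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500          # number of particles
ALPHA = 0.5      # velocity smoothing factor (illustrative choice)

def track(observations, sigma_q=0.5, sigma_r=1.0):
    """1-D particle filter whose transition model drifts particles by a
    velocity estimate updated recursively from consecutive state estimates."""
    particles = rng.normal(observations[0], sigma_r, N)
    velocity, prev_est, estimates = 0.0, observations[0], []
    for z in observations:
        # transition: drift by the current velocity estimate plus process noise
        particles = particles + velocity + rng.normal(0.0, sigma_q, N)
        # weight particles by the Gaussian likelihood of the observation
        weights = np.exp(-0.5 * ((z - particles) / sigma_r) ** 2)
        weights /= weights.sum()
        est = float(np.sum(weights * particles))   # posterior-mean estimate
        # recursive velocity update from consecutive estimates
        velocity = ALPHA * velocity + (1.0 - ALPHA) * (est - prev_est)
        prev_est = est
        estimates.append(est)
        # systematic resampling
        idx = np.searchsorted(np.cumsum(weights),
                              (rng.random() + np.arange(N)) / N)
        particles = particles[np.minimum(idx, N - 1)]
    return np.array(estimates)

truth = np.linspace(0.0, 20.0, 40)            # target moving at constant speed
obs = truth + rng.normal(0.0, 1.0, truth.size)
est = track(obs)
```

A fixed transition model would keep `velocity = 0` and rely on process noise alone; the recursive update lets the filter follow sustained motion with fewer particles.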
Store-and-feedforward adaptive gaming system for hand-finger motion tracking in telerehabilitation.
Lockery, Daniel; Peters, James F; Ramanna, Sheela; Shay, Barbara L; Szturm, Tony
2011-05-01
This paper presents a telerehabilitation system that encompasses a webcam and a store-and-feedforward adaptive gaming system for tracking finger-hand movement of patients during local and remote therapy sessions. Gaming-event signals and webcam images are recorded as part of a gaming session and then forwarded to an online healthcare content management system (CMS) that separates incoming information into individual patient records. The CMS makes it possible for clinicians to log in remotely and review gathered data using online reports that are provided to help with signal and image analysis using various numerical measures and plotting functions. Signals from a 6 degree-of-freedom magnetic motion tracking (MMT) system provide a basis for video-game sprite control. The MMT provides a path for motion signals between common objects manipulated by a patient and a computer game. During a therapy session, a webcam that captures images of the hand, together with a number of performance metrics, provides insight into the quality, efficiency, and skill of a patient.
2007-09-30
LONG-TERM GOALS: The goal of our research is to develop systems that use a widely spaced hydrophone array to contribute to the behavioral ecology of marine mammals by simultaneously tracking multiple vocalizing individuals in space and time. (OA Graduate Traineeship for E-M Nosal)
Infants Use Different Mechanisms to Make Small and Large Number Ordinal Judgments
ERIC Educational Resources Information Center
vanMarle, Kristy
2013-01-01
Previous research has shown indirectly that infants may use two different mechanisms-an object tracking system and an analog magnitude mechanism--to represent small (less than 4) and large (greater than or equal to 4) numbers of objects, respectively. The current study directly tested this hypothesis in an ordinal choice task by presenting 10- to…
An experimental comparison of online object-tracking algorithms
NASA Astrophysics Data System (ADS)
Wang, Qing; Chen, Feng; Xu, Wenli; Yang, Ming-Hsuan
2011-09-01
This paper reviews and evaluates several state-of-the-art online object tracking algorithms. Notwithstanding decades of effort, object tracking remains a challenging problem due to factors such as illumination, pose, scale, deformation, motion blur, noise, and occlusion. To account for appearance change, most recent tracking algorithms focus on robust object representations and effective state prediction. In this paper, we analyze the components of each tracking method and identify their key roles in dealing with specific challenges, thereby shedding light on how to choose and design algorithms for different situations. We compare state-of-the-art online tracking methods including the IVT [1], VRT [2], FragT [3], BoostT [4], SemiT [5], BeSemiT [6], L1T [7], MILT [8], VTD [9] and TLD [10] algorithms on numerous challenging sequences, and evaluate them with different performance metrics. The qualitative and quantitative comparative results demonstrate the strengths and weaknesses of these algorithms.
Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip
2015-07-01
Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. There is an apparent need to correct for scene deviations from the basic inverse distance-squared law governing detection rates when evaluating system calibration algorithms. In particular, the computer vision system enables a map of the distance-dependence of the sources being tracked, into which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)
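The distance correction the abstract refers to follows directly from the inverse-square law once the vision system supplies a source distance. A toy sketch; the function name and numbers are illustrative, not from the paper.

```python
def corrected_rate(count_rate, distance_m, ref_distance_m=1.0):
    """Scale an observed count rate to a reference distance using the
    inverse-square law: detected rate is proportional to 1 / d^2, so a
    rate seen at distance d equals rate * (d / d_ref)^2 at d_ref."""
    return count_rate * (distance_m / ref_distance_m) ** 2

# a source seen at 4 m giving 25 counts/s is equivalent to 400 counts/s at 1 m
rate_at_1m = corrected_rate(25.0, 4.0)
```

In the fused system, `distance_m` would come from the vision tracker at each timestamp, letting the time-varying count rate of a moving source be normalized before identification.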
NASA Astrophysics Data System (ADS)
Choi, J.; Jo, J.
2016-09-01
The optical satellite tracking data obtained by the first Korean optical satellite tracking system, Optical Wide-field patrol - Network (OWL-Net), was examined for precision orbit determination. During test observations at the Israel site, we successfully observed a satellite equipped with a Laser Retro Reflector (LRR) to calibrate the angle-only metric data. The OWL observation system uses a chopper to obtain dense observation data, over 100 points per shot, for low Earth orbit objects. After several corrections, orbit determination was performed with the validated metric data. The TLE with the same epoch as the end of the first arc was used for the initial orbital parameters. Orbit Determination Tool Kit (ODTK) was used to analyze the performance of orbit estimation using the angle-only measurements. We are also developing a batch-style orbit estimator.
User-assisted video segmentation system for visual communication
NASA Astrophysics Data System (ADS)
Wu, Zhengping; Chen, Chun
2002-01-01
Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility in the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, plus a point insertion process to provide the feature points for the next frame's tracking.
Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing
NASA Astrophysics Data System (ADS)
Ou, Meiying; Li, Shihua; Wang, Chaoli
2013-12-01
This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and using a finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
NASA Astrophysics Data System (ADS)
Bennett, J.; Gehly, S.
2016-09-01
This paper presents results from a preliminary method for extracting more orbital information from low-rate passive optical tracking data. An improvement in the accuracy of the observation data yields more accurate and reliable orbital elements. The orbit propagation from the orbital element generated using the new data-processing method is compared with the one generated from the raw observation data for several objects. Optical tracking data collected by EOS Space Systems, located on Mount Stromlo, Australia, is fitted to provide a new orbital element. The element accuracy is determined from a comparison between the predicted orbit and subsequent tracking data, or a reference orbit if available. The new method is shown to result in better orbit prediction, which has important implications for conjunction assessments and the Space Environment Research Centre space object catalogue. The focus is on obtaining reliable orbital solutions from sparse data. This work forms part of the collaborative effort of the Space Environment Management Cooperative Research Centre, which is developing new technologies and strategies to preserve the space environment (www.serc.org.au).
Multiple Objects Fusion Tracker Using a Matching Network for Adaptively Represented Instance Pairs
Oh, Sang-Il; Kang, Hang-Bong
2017-01-01
Multiple-object tracking is affected by various sources of distortion, such as occlusion, illumination variations and motion changes. Overcoming these distortions by tracking on RGB frames alone, e.g. by shifting, has limitations because of material distortions in the RGB frames. To overcome these distortions, we propose a multiple-object fusion tracker (MOFT), which uses a combination of 3D point clouds and corresponding RGB frames. The MOFT uses a matching function, initialized on large-scale external sequences, to determine which candidates in the current frame match the target object in the previous frame. After tracking over a few frames, the initialized matching function is fine-tuned according to the appearance models of the target objects. The fine-tuning process of the matching function is constructed in a structured form with diverse matching function branches. In general multiple-object tracking situations, scale variations across a scene occur depending on the distance between the target objects and the sensors. If target objects at various scales are represented equally with the same strategy, information losses occur for some representations of the target objects. In this paper, the output map of a convolutional layer from a pre-trained convolutional neural network is used to adaptively represent instances without information loss. In addition, MOFT fuses the tracking results obtained from each modality at the decision level, compensating for the tracking failures of each modality using basic belief assignment, rather than fusing modalities by selectively using the features of each modality. Experimental results indicate that the proposed tracker provides state-of-the-art performance on the multiple object tracking (MOT) and KITTI benchmarks. PMID:28420194
Pose tracking for augmented reality applications in outdoor archaeological sites
NASA Astrophysics Data System (ADS)
Younes, Georges; Asmar, Daniel; Elhajj, Imad; Al-Harithy, Howayda
2017-01-01
In recent years, agencies around the world have invested huge amounts of effort toward digitizing many aspects of the world's cultural heritage. Of particular importance is the digitization of outdoor archaeological sites. In the spirit of valorization of this digital information, many groups have developed virtual or augmented reality (AR) computer applications themed around a particular archaeological object. The problem of pose tracking in outdoor AR applications is addressed. Different positional systems are analyzed, resulting in the selection of a monocular camera-based user tracker. The limitations that challenge this technique from map generation, scale, anchoring, to lighting conditions are analyzed and systematically addressed. Finally, as a case study, our pose tracking system is implemented within an AR experience in the Byblos Roman theater in Lebanon.
2009-08-19
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the top of the mated SV1 and SV2 remains covered. The spacecraft are being prepared for center of gravity testing, weighing and balancing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-03
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., a crane is attached to the SV1 spacecraft, part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The SV1 will be lifted and moved to mate with the SV2 on another stand nearby. STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. The spacecraft is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-19
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the mated SV1 and SV2 spacecraft retain the covers on the top which are being removed before center of gravity testing, weighing and balancing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
2009-08-19
CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., this closeup shows part of the mated SV1 and SV2 spacecraft, which is being prepared for center of gravity testing, weighing and balancing. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann
Research and design of portable photoelectric rotary table data-acquisition and analysis system
NASA Astrophysics Data System (ADS)
Yang, Dawei; Yang, Xiufang; Han, Junfeng; Yan, Xiaoxu
2015-02-01
The photoelectric rotary table is a principal tracking-and-measurement platform, widely used at shooting ranges and in aerospace applications. To meet the demands of laboratory and field use of photoelectric test instruments within range photoelectric tracking measurement systems, we researched and designed a portable data-acquisition and analysis system for photoelectric rotary tables. We describe the system hardware, based on a Xilinx Virtex-4 series FPGA and its peripheral modules, and the host-computer software, developed on the VC++ 6.0 programming platform with MFC class libraries. The system integrates data acquisition, display and storage, commissioning control, analysis, laboratory waveform playback, transmission, and fault diagnosis into an organic whole, and offers small size, embeddability, high speed, portability, and simple operation. Alignment experiments on the system hardware and software, with a photoelectric tracking turntable as the test object, show that the system acquires, analyzes, and processes data from photoelectric tracking equipment, controls turntable debugging well, and yields accurate, reliable measurements with good maintainability and extensibility. This design is of great significance for advancing the debugging, diagnosis, condition monitoring, and fault analysis of photoelectric tracking measurement equipment, as well as for interface standardization and improved equipment maintainability, and has innovative and practical value.
Kernelized correlation tracking with long-term motion cues
NASA Astrophysics Data System (ADS)
Lv, Yunqiu; Liu, Kai; Cheng, Fei
2018-04-01
Robust object tracking is a challenging task in computer vision due to disruptions such as deformation, fast motion and, especially, occlusion of the tracked object. When occlusion occurs, the image data become unreliable and are insufficient for the tracker to depict the object of interest, so most trackers are prone to fail under occlusion. In this paper, an occlusion judgement and handling method based on segmentation of the target is proposed. If the target is occluded, its speed and direction must differ from those of the objects occluding it; hence, motion features carry particular weight. Considering the efficiency and robustness of Kernelized Correlation Filter (KCF) tracking, it is adopted as a pre-tracker to obtain a predicted position of the target. By analyzing long-term motion cues of objects around this position, the tracked object is labelled, so occlusion can be detected easily. Experimental results suggest that our tracker achieves favorable performance and effectively handles occlusion and drifting problems.
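KCF proper uses kernelized ridge regression over circulant shifts with multi-channel features; its single-channel linear ancestor (a MOSSE-style filter) shows the core pre-tracking idea in a few lines: train a filter in the Fourier domain so that correlation with the target patch yields a sharp centred peak, then read the target's displacement off the peak location in the next frame. The patch sizes and random template below are illustrative.

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    """Desired correlation response: a Gaussian peaked at the patch centre."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

def train_filter(patch, label, lam=1e-2):
    """Closed-form ridge regression in the Fourier domain (MOSSE-style)."""
    P, Y = np.fft.fft2(patch), np.fft.fft2(label)
    return np.conj(P) * Y / (np.conj(P) * P + lam)

def respond(H, patch):
    """Correlation response of filter H applied to a new patch."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

h = w = 32
rng = np.random.default_rng(0)
template = rng.random((h, w))                    # stand-in for a target patch
H = train_filter(template, gaussian_label(h, w))
# shifting the patch shifts the response peak by the same amount
shifted = np.roll(template, (3, 5), axis=(0, 1))
resp = respond(H, shifted)
peak = np.unravel_index(np.argmax(resp), resp.shape)   # reveals the (3, 5) shift
```

In the paper's setting, this predicted position is only a starting point; the occlusion check then compares motion cues of segments around the peak before trusting it.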
Grouping and trajectory storage in multiple object tracking: impairments due to common item motions.
Suganuma, Mutsumi; Yokosawa, Kazuhiko
2006-01-01
In our natural viewing, we notice that objects change their locations across space and time. However, there has been relatively little consideration of the role of motion information in the construction and maintenance of object representations. We investigated this question in the context of the multiple object tracking (MOT) paradigm, wherein observers must keep track of target objects as they move randomly amid featurally identical distractors. In three experiments, we observed impairments in tracking ability when the motions of the target and distractor items shared particular properties. Specifically, we observed impairments when the target and distractor items were in a chasing relationship or moved in a uniform direction. Surprisingly, tracking ability was impaired by these manipulations even when observers failed to notice them. Our results suggest that differentiable trajectory information is an important factor in successful performance of MOT tasks. More generally, these results suggest that various types of common motion can serve as cues to form more global object representations even in the absence of other grouping cues.
Optical track width measurements below 100 nm using artificial neural networks
NASA Astrophysics Data System (ADS)
Smith, R. J.; See, C. W.; Somekh, M. G.; Yacoot, A.; Choi, E.
2005-12-01
This paper discusses the feasibility of using artificial neural networks (ANNs), together with a high-precision scanning optical profiler, to measure very fine track widths that are considerably below the diffraction limit of a conventional optical microscope. The ANN is trained using optical profiles obtained from tracks of known widths; the network is then assessed by applying it to test profiles. The optical profiler is an ultra-stable common-path scanning interferometer, which provides extremely precise surface measurements. Preliminary results, obtained with a 0.3 NA objective lens and a laser wavelength of 633 nm, show that the system is capable of measuring a 50 nm track width, with a standard deviation less than 4 nm.
McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.
2014-01-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914
McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E
2014-07-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.
A Video Game Platform for Exploring Satellite and In-Situ Data Streams
NASA Astrophysics Data System (ADS)
Cai, Y.
2014-12-01
Exploring spatiotemporal patterns of moving objects is essential to Earth Observation missions, such as tracking, modeling and predicting the movement of clouds, dust, plumes and harmful algal blooms. Those missions involve high-volume, multi-source, and multi-modal imagery data analysis. Analytical models aim to reveal the inner structure, dynamics, and relationships of things, but they are not necessarily intuitive to humans. Conventional scientific visualization methods are intuitive but limited by manual operations, such as area marking, measurement and alignment of multi-source data, which are expensive and time-consuming. A new video analytics platform has been in development, which integrates a video game engine with satellite and in-situ data streams. The system converts Earth Observation data into articulated objects that are mapped from a high-dimensional space to a 3D space. The object tracking and augmented reality algorithms highlight the objects' features in colors, shapes and trajectories, creating visual cues for observing dynamic patterns. The head and gesture tracker enables users to navigate the data space interactively. To validate our design, we have used NASA SeaWiFS satellite images of oceanographic remote sensing data and NOAA's in-situ cell count data. Our study demonstrates that the video game system can reduce the size and cost of traditional CAVE systems by two to three orders of magnitude. The system can also be used for satellite mission planning and public outreach.
NASA Technical Reports Server (NTRS)
Goodwin, P. S.; Traxler, M. R.; Meeks, W. G.; Flanagan, F. M.
1976-01-01
The overall evolution of the Helios Project is summarized from its conception through the completion of the Helios-1 mission phase 2. Beginning with the project objectives and concluding with the Helios-1 spacecraft entering its first superior conjunction (the end of mission phase 2), the report describes the project, the mission and its phases, international management and interfaces, and Deep Space Network-spacecraft engineering development in telemetry, tracking, and command systems to ensure compatibility between the U.S. Deep Space Network and the German-built spacecraft.
Gaze Estimation Method Using Analysis of Electrooculogram Signals and Kinect Sensor
Tanno, Koichi
2017-01-01
A gaze estimation system is one of the communication methods for severely disabled people who cannot perform gestures or speech. We previously developed an eye tracking method using compact, lightweight electrooculogram (EOG) sensors, but its accuracy is not very high. In the present study, we conducted experiments to investigate the EOG components strongly correlated with changes in eye movement. The experiments are of two types: viewing objects using eye movements alone, and viewing objects using combined face and eye movements. The experimental results show the feasibility of an eye tracking method that uses EOG signals and a Kinect sensor. PMID:28912800
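A world-frame gaze direction can be approximated by adding an EOG-derived eye-in-head angle to the head pose measured by the Kinect. The linear EOG calibration below (gain, offset, and function name) is a simplification assumed here for illustration, not the paper's calibration procedure:

```python
def gaze_angle(head_yaw_deg, eog_uv, gain_deg_per_uv=0.05, offset_uv=0.0):
    """Combine a head yaw angle (e.g. from a Kinect skeleton) with an
    eye-in-head angle estimated from an EOG voltage via a linear,
    per-subject calibration (gain/offset are stand-in values)."""
    eye_yaw_deg = gain_deg_per_uv * (eog_uv - offset_uv)
    return head_yaw_deg + eye_yaw_deg

# head turned 10 deg, EOG of 200 uV -> eye adds another 10 deg
print(gaze_angle(10.0, 200.0))
```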
Nonlinear Motion Tracking by Deep Learning Architecture
NASA Astrophysics Data System (ADS)
Verma, Arnav; Samaiya, Devesh; Gupta, Karunesh K.
2018-03-01
In the world of Artificial Intelligence, object motion tracking is one of the major problems, and extensive research is being carried out on tracking people in crowds. This paper presents a technique for nonlinear motion tracking in the absence of prior knowledge about the nature of the nonlinear path that the tracked object may follow. We achieve this by first obtaining the centroid of the object and then using the centroid as the current example for a recurrent neural network trained with real-time recurrent learning. We tweak the standard algorithm slightly, accumulating the gradient over a few previous iterations instead of using only the current iteration, as is the norm. We show that for a single object, such a recurrent neural network is highly capable of approximating the nonlinearity of its path.
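The two ingredients the abstract names, the per-frame object centroid and gradient accumulation over a few iterations, can be sketched as follows. The window size `k`, learning rate, and toy gradient values are illustrative; the full real-time recurrent learning update is omitted:

```python
from collections import deque
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a binary object mask -- the quantity fed
    to the recurrent network at each frame."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 2] = True                      # a 3-pixel vertical object
print(centroid(mask))                    # [2. 2.]

# the paper's tweak: update with the gradient averaged over the last k steps
k, lr = 3, 0.1
grads = deque(maxlen=k)                  # keeps only the k most recent
w = np.zeros(2)
for g in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):
    grads.append(np.array(g))
w -= lr * np.mean(grads, axis=0)         # averaged-gradient step
```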
ERIC Educational Resources Information Center
O'Hearn, Kirsten; Hoffman, James E.; Landau, Barbara
2010-01-01
The ability to track moving objects, a crucial skill for mature performance on everyday spatial tasks, has been hypothesized to require a specialized mechanism that may be available in infancy (i.e. indexes). Consistent with the idea of specialization, our previous work showed that object tracking was more impaired than a matched spatial memory…
Assessing Multiple Object Tracking in Young Children Using a Game
ERIC Educational Resources Information Center
Ryokai, Kimiko; Farzin, Faraz; Kaltman, Eric; Niemeyer, Greg
2013-01-01
Visual tracking of multiple objects in a complex scene is a critical survival skill. When we attempt to safely cross a busy street, follow a ball's position during a sporting event, or monitor children in a busy playground, we rely on our brain's capacity to selectively attend to and track the position of specific objects in a dynamic scene. This…
Zhao, Ximei; Ren, Chengyi; Liu, Hao; Li, Haogyi
2014-12-01
Robotic catheter minimally invasive surgery requires a driver control system with quick response, strong anti-jamming capability, and real-time tracking of the target trajectory. The catheter's own parameters, the movement environment, and other factors change continuously, yet when the driver is controlled with a traditional proportional-integral-derivative (PID) scheme, the controller gains are fixed once the PID parameters are set. The controller cannot adapt to changes in the plant parameters or to environmental disturbances, which degrades position tracking accuracy and may produce a large overshoot that endangers the patient's vessel. Therefore, this paper adopts a fuzzy PID control method that adjusts the PID gains during tracking in order to improve the system's anti-interference ability, dynamic performance, and tracking accuracy. Simulation results showed that the fuzzy PID control method achieved fast tracking and strong robustness. Compared with traditional PID control, the feasibility and practicability of fuzzy PID control are verified for robotic catheter minimally invasive operation.
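As one toy illustration of fuzzy gain scheduling (not the paper's rule base), a single-input controller can raise the proportional gain when the tracking error is large and lower it near the setpoint to limit overshoot:

```python
def fuzzy_kp(error, kp_base=1.0, e_max=1.0):
    """One-input fuzzy rule base with two rules -- 'small error -> 0.5*Kp'
    and 'large error -> 1.5*Kp' -- blended by triangular memberships.
    All constants here are illustrative, not tuned values."""
    e = min(abs(error), e_max) / e_max   # normalized error in [0, 1]
    small, large = 1.0 - e, e            # membership degrees
    return kp_base * (0.5 * small + 1.5 * large)

print(fuzzy_kp(0.0), fuzzy_kp(0.5), fuzzy_kp(1.0))  # 0.5 1.0 1.5
```

The same blending pattern extends to Ki and Kd, each with its own rule consequents.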
A high-speed tracking algorithm for dense granular media
NASA Astrophysics Data System (ADS)
Cerda, Mauricio; Navarro, Cristóbal A.; Silva, Juan; Waitukaitis, Scott R.; Mujica, Nicolás; Hitschfeld, Nancy
2018-06-01
Many fields of study, including medical imaging, granular physics, colloidal physics, and active matter, require the precise identification and tracking of particle-like objects in images. While many algorithms exist to track particles in diffuse conditions, these often perform poorly when particles are densely packed together, as in solid-like systems of granular materials. Incorrect particle identification can have significant effects on the calculation of physical quantities, which makes the development of more precise and faster tracking algorithms a worthwhile endeavor. In this work, we present a new tracking algorithm to identify particles in dense systems that is both highly accurate and fast. We demonstrate the efficacy of our approach by analyzing images of dense, solid-state granular media, where we achieve an identification error of 5% in the worst evaluated cases. Going further, we propose a parallelization strategy for our algorithm using a GPU, which results in a speedup of up to 10 × when compared to a sequential CPU implementation in C and up to 40 × when compared to the reference MATLAB library widely used for particle tracking. Our results extend the capabilities of state-of-the-art particle tracking methods by allowing fast, high-fidelity detection in dense media at high resolutions.
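The abstract does not give the identification criterion; a crude baseline for locating particle centers is thresholded strict local maxima of the intensity field, sketched below. Real dense-packing trackers (including the one described here) are considerably more elaborate:

```python
import numpy as np

def find_particles(img, threshold):
    """Return (row, col) of pixels that are strict 3x3 local maxima
    above a threshold -- a naive stand-in for particle identification."""
    h, w = img.shape
    centers = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i-1:i+2, j-1:j+2]
            if img[i, j] >= threshold and img[i, j] == patch.max() \
               and (patch == img[i, j]).sum() == 1:  # unique maximum
                centers.append((i, j))
    return centers

img = np.zeros((5, 5))
img[2, 2] = 1.0                       # one bright particle
print(find_particles(img, 0.5))       # [(2, 2)]
```

In dense packings, overlapping intensity profiles defeat this naive rule, which is exactly the failure mode the paper addresses.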
Wang, Baofeng; Qi, Zhiquan; Chen, Sizhong; Liu, Zhaodu; Ma, Guocheng
2017-01-01
Vision-based vehicle detection is an important issue for advanced driver assistance systems. In this paper, we present an improved multi-vehicle detection and tracking method using cascade Adaboost and an Adaptive Kalman Filter (AKF) with target identity awareness. A cascade Adaboost classifier using Haar-like features was built for vehicle detection, followed by a more comprehensive verification process that refines the vehicle hypothesis in terms of both location and dimension. In vehicle tracking, each vehicle was tracked with an independent identity by an Adaptive Kalman Filter in collaboration with a data association approach. The AKF adaptively adjusted the measurement and process noise covariances through online stochastic modelling to compensate for changes in dynamics. The data association step assigned detections to tracks using the global nearest neighbour (GNN) algorithm while considering local validation. During tracking, a temporal-context-based track management was proposed to decide whether to initiate, maintain, or terminate the tracks of different objects, thus suppressing sparse false alarms and compensating for temporary detection failures. Finally, the proposed method was tested on various challenging real roads, and the experimental results showed that vehicle detection performance was greatly improved, with higher accuracy and robustness. PMID:28296902
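Innovation-based adaptation of the measurement noise covariance can be sketched in one dimension as follows. The forgetting factor and adaptation law here are a common textbook variant, not necessarily this paper's exact online stochastic modelling scheme:

```python
def akf_1d(zs, q=1e-3, r0=1.0, alpha=0.95):
    """1-D adaptive Kalman filter (constant-position model): the
    measurement-noise variance R is re-estimated from the innovation
    sequence with forgetting factor alpha (illustrative values)."""
    x, p, r = zs[0], 1.0, r0
    estimates = []
    for z in zs[1:]:
        p += q                                   # predict
        innov = z - x                            # innovation
        r = alpha * r + (1 - alpha) * max(innov**2 - p, 1e-6)
        k = p / (p + r)                          # Kalman gain
        x += k * innov                           # update state
        p *= (1 - k)                             # update covariance
        estimates.append(x)
    return estimates

est = akf_1d([0.0, 1.0, 1.1, 0.9, 1.0])
print(round(est[-1], 2))
```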
Vocabulary and Experiences to Develop a Center of Mass Model
ERIC Educational Resources Information Center
Kaar, Taylor; Pollack, Linda B.; Lerner, Michael E.; Engels, Robert J.
2017-01-01
The use of systems in many introductory courses is limited and often implicit. Modeling two or more objects as a system and tracking the center of mass of that system is usually not included. Thinking in terms of the center of mass facilitates problem solving while exposing the importance of using conservation laws. We present below three…
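Modeling two or more objects as a system reduces to a mass-weighted average of their positions. A minimal one-dimensional example of the idea the excerpt describes:

```python
def center_of_mass(objects):
    """objects: list of (mass, position) pairs; returns the system's
    center of mass along one axis."""
    total_mass = sum(m for m, _ in objects)
    return sum(m * x for m, x in objects) / total_mass

# two carts on a track: 2 kg at x = 0 m and 1 kg at x = 3 m
print(center_of_mass([(2.0, 0.0), (1.0, 3.0)]))  # 1.0
```

Tracking this quantity over time is what lets conservation of momentum do the work in problems with internal forces.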
A composite controller for trajectory tracking applied to the Furuta pendulum.
Aguilar-Avelar, Carlos; Moreno-Valenzuela, Javier
2015-07-01
In this paper, a new composite scheme is proposed, where the total control action is the sum of a feedback-linearization-based controller and an energy-based compensation. This proposition is applied to the rotary inverted pendulum, or Furuta pendulum, a well-known underactuated mechanical system with two degrees of freedom. The control objective in this case is the tracking of a desired periodic trajectory in the actuated joint, while the unactuated link is regulated at the upward position. The closed-loop system is analyzed, showing uniform ultimate boundedness of the error trajectories. The design procedure is presented in a constructive form, so that it may be applied to other underactuated mechanical systems with the proper definitions of the output function and the energy function. Numerical simulations and real-time experiments show the practical viability of the controller. Finally, the proposed algorithm is compared with a tracking controller previously reported in the literature; the new algorithm shows better performance in both arm trajectory tracking and pendulum regulation. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
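The composite structure, a feedback-linearization tracking term for the actuated joint plus an energy-based compensation for the unactuated link, can be sketched as a single control law. The gains, the implicit unit inertia, and the function signature below are illustrative assumptions, not the paper's design:

```python
def composite_control(q, dq, q_des, dq_des, ddq_des, E, E_des,
                      kp=25.0, kd=10.0, ke=0.5):
    """Total action = feedback-linearization tracking term (drives the
    actuated-joint error to zero) + energy-based term (pushes pendulum
    energy E toward the upright-equilibrium value E_des)."""
    u_fl = ddq_des + kd * (dq_des - dq) + kp * (q_des - q)  # tracking
    u_en = -ke * (E - E_des) * dq                           # energy shaping
    return u_fl + u_en

# at rest, unit position error, energy already at its target value:
print(composite_control(0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0))  # 25.0
```

When the energy error vanishes, only the tracking term acts, which matches the intended division of labor between the two components.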
NASA Technical Reports Server (NTRS)
Phillips, Veronica J.
2017-01-01
This STI product is a fact sheet on the Space Object Query Tool being created by the MDC. When planning launches, NASA must first factor in the tens of thousands of objects already in orbit around the Earth. The number of human-made objects, including nonfunctional spacecraft, abandoned launch vehicle stages, mission-related debris, and fragmentation debris orbiting Earth has grown steadily since Sputnik 1 was launched in 1957. Currently, the U.S. Department of Defense's Joint Space Operations Center, or JSpOC, tracks over 15,000 distinct objects and provides data for more than 40,000 objects via its Space-Track program, found at space-track.org.
Uninformative Prior Multiple Target Tracking Using Evidential Particle Filters
NASA Astrophysics Data System (ADS)
Worthy, J. L., III; Holzinger, M. J.
Space situational awareness requires the ability to initialize state estimation from short measurements and the reliable association of observations to support the characterization of the space environment. The electro-optical systems used to observe space objects cannot fully characterize the state of an object given a short, unobservable sequence of measurements. Further, it is difficult to associate these short-arc measurements when many such measurements are generated by observing a cluster of satellites, debris from a satellite break-up, or spurious detections of an object. An optimization-based, probabilistic short-arc observation association approach, coupled with a Dempster-Shafer-based evidential particle filter in a multiple target tracking framework, is developed and proposed to address these problems. The optimization-based approach is shown in the literature to be computationally efficient and can produce probabilities of association, state estimates, and covariances while accounting for systemic errors. Rigorous application of Dempster-Shafer theory is shown to be effective at allowing ignorance to be properly accounted for in estimation by augmenting probability with belief and plausibility. The proposed multiple hypothesis framework uses a non-exclusive hypothesis formulation of Dempster-Shafer theory to assign belief mass to candidate association pairs and generate tracks based on the belief-to-plausibility ratio. The proposed algorithm is demonstrated using simulated observations of a GEO satellite breakup scenario.
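Belief and plausibility in Dempster-Shafer theory bracket an ordinary probability: belief sums the mass committed to subsets of a hypothesis, while plausibility sums all mass not contradicting it, so mass assigned to the whole frame expresses ignorance. A minimal sketch over a two-element frame of discernment (the masses are made up):

```python
def belief_plausibility(masses, hypothesis):
    """masses: dict mapping frozenset focal elements to basic mass.
    Belief sums masses of subsets of the hypothesis; plausibility sums
    masses of focal elements that intersect it."""
    bel = sum(m for s, m in masses.items() if s <= hypothesis)
    pl = sum(m for s, m in masses.items() if s & hypothesis)
    return bel, pl

m = {frozenset({'A'}): 0.5,
     frozenset({'B'}): 0.2,
     frozenset({'A', 'B'}): 0.3}   # 0.3 committed to 'either' = ignorance
print(belief_plausibility(m, frozenset({'A'})))  # (0.5, 0.8)
```

The belief-to-plausibility ratio mentioned in the abstract would here be 0.5 / 0.8 for hypothesis A.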
Automated Tracking of Motion and Body Weight for Objective Monitoring of Rats in Colony Housing
Brenneis, Christian; Westhof, Andreas; Holschbach, Jeannine; Michaelis, Martin; Guehring, Hans; Kleinschmidt-Doerr, Kerstin
2017-01-01
Living together in large social communities within an enriched environment stimulates self-motivated activity in rats. We developed a modular housing system in which a single unit can accommodate as many as 48 rats and contains multiple functional areas. This rat colony cage further allowed us to remotely measure body weight and to continuously measure movement, including jumping and stair walking between areas. Compared with pair-housed, age-, strain-, and weight-matched rats in conventional cages, the colony-housed rats exhibited higher body mass indices, had more exploratory behavior, and were more cooperative during handling. Continuous activity tracking revealed that the amount of spontaneous locomotion, such as jumping between levels and running through the staircase, fell after surgery, blood sampling, injections, and behavioral tests to a similar extent regardless of the specific intervention. Data from the automated system allowed us to identify individual rats with significant differences (>2 SD) from other cohoused rats; these rats showed potential health problems, as verified using conventional health scoring. Thus, our rat colony cage permits social interaction and provides a variety of functional areas, thereby perhaps improving animal wellbeing. Furthermore, automated online tracking enabled continuous quantification of spontaneous motion, potentially providing objective measures of animal behavior in various disease models and reducing the need for experimental manipulation. Finally, health monitoring of individual rats was facilitated in an objective manner. PMID:28905711
A Deep-Structured Conditional Random Field Model for Object Silhouette Tracking
Shafiee, Mohammad Javad; Azimifar, Zohreh; Wong, Alexander
2015-01-01
In this work, we introduce a deep-structured conditional random field (DS-CRF) model for state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined from inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that change greatly over time, as well as under situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios, such as occlusion and multiple targets, showed that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering. PMID:26313943
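The flow-determined inter-layer connectivity can be thought of as linking each pixel in the state layer at time t to the pixel its optical-flow vector points at in the layer at t+1. A hedged sketch with sparse flow stored as a dict (the CRF potentials themselves, and the paper's exact connectivity rule, are omitted):

```python
def flow_links(flow, shape):
    """flow: dict mapping (row, col) to a displacement (dy, dx).
    Returns, for each pixel in layer t, the pixel it connects to in
    layer t+1, clamped to the image bounds."""
    h, w = shape
    links = {}
    for (i, j), (dy, dx) in flow.items():
        ti = min(max(i + dy, 0), h - 1)   # clamp row
        tj = min(max(j + dx, 0), w - 1)   # clamp col
        links[(i, j)] = (ti, tj)
    return links

# one pixel flows down-right; another would exit the frame and is clamped
print(flow_links({(0, 0): (1, 2), (3, 3): (2, 2)}, (4, 4)))
```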
Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K
2013-03-20
Keeping track of multiple moving objects is an essential ability of visual perception, yet the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent, randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were localized to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.
Integrated Eye Tracking and Neural Monitoring for Enhanced Assessment of Mild TBI
2016-04-01
but these delays are nearing resolution and we anticipate the initiation of the neuroimaging portion of the study early in Year 3. The fMRI task...resonance imaging (fMRI) and diffusion tensor imaging (DTI) to characterize the extent of functional cortical recruitment and white matter injury...respectively. The inclusion of fMRI and DTI will provide an objective basis for cross-validating the EEG and eye tracking system. Both the EEG and eye
2009-04-21
CAPE CANAVERAL, Fla. – On Launch Pad 17-B at Cape Canaveral Air Force Station, a worker attaches solid rocket boosters to a Delta II rocket for launch of the STSS Demonstrator spacecraft. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency on July 29. Photo credit: NASA/Kim Shiflett
2009-05-01
CAPE CANAVERAL, Fla. – The STSS Demonstrator SV-2 spacecraft arrives at the Astrotech payload processing facility in Titusville, Fla. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Jack Pfaller (Approved for Public Release 09-MDA-4616 [27 May 09])
2009-04-21
CAPE CANAVERAL, Fla. – On Launch Pad 17-B at Cape Canaveral Air Force Station, solid rocket boosters are attached to a Delta II rocket for launch of the STSS Demonstrator spacecraft. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency on July 29. Photo credit: NASA/Kim Shiflett
2009-04-21
CAPE CANAVERAL, Fla. – On Launch Pad 17-B at Cape Canaveral Air Force Station, solid rocket boosters are installed on a Delta II rocket for launch of the STSS Demonstrator spacecraft. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency on July 29. Photo credit: NASA/Kim Shiflett
Multi-object detection and tracking technology based on hexagonal opto-electronic detector
NASA Astrophysics Data System (ADS)
Song, Yong; Hao, Qun; Li, Xiang
2008-02-01
A novel multi-object detection and tracking technology based on a hexagonal opto-electronic detector is proposed, in which (1) a new hexagonal detector, composed of 6 linear CCDs, has been developed to achieve a 360-degree field of view, and (2) to achieve high-speed detection and tracking of multiple objects, the object recognition criteria of the Object Signal Width Criterion (OSWC) and the Horizontal Scale Ratio Criterion (HSRC) are proposed. In this paper, simulated experiments have been carried out to verify the validity of the proposed technology. They show that multiple objects can be detected and tracked at high speed using the proposed hexagonal detector and the OSWC and HSRC criteria, indicating that the technology offers significant advantages in photo-electric detection, computer vision, virtual reality, augmented reality, etc.
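The abstract names but does not define the Object Signal Width Criterion; a plausible reading is a gate on the width of the above-threshold signal run along a linear-CCD scan line. The run-length logic below is therefore an assumption for illustration, not the paper's definition:

```python
def oswc(profile, threshold, wmin, wmax):
    """Accept candidate objects on a 1-D CCD intensity profile whose
    above-threshold run length lies within [wmin, wmax] pixels."""
    runs, width = [], 0
    for v in profile:
        if v >= threshold:
            width += 1                 # extend current run
        elif width:
            runs.append(width)         # close a run
            width = 0
    if width:
        runs.append(width)             # run reaching the scan's end
    return [w for w in runs if wmin <= w <= wmax]

# a 3-pixel object passes; the 1-pixel spike is rejected as noise
print(oswc([0, 5, 5, 5, 0, 9, 0], threshold=4, wmin=2, wmax=4))  # [3]
```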
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.; Estes, Sue
2011-01-01
The NASA Applied Sciences Program's public health initiative began in 2004 to illustrate the potential benefits of using remote sensing in public health applications. Objectives/Purpose: The CDC initiated a study with NASA through the National Center for Environmental Health (NCEH) to establish a pilot effort to use remote sensing data as part of its Environmental Public Health Tracking Network (EPHTN). As a consequence, the NCEH and NASA developed a project called HELIX-Atlanta (Health and Environment Linkage for Information Exchange) to demonstrate a process for developing a local environmental public health tracking and surveillance network that integrates non-infectious health and environment systems for the Atlanta metropolitan area. Methods: As an ongoing, systematic integration, analysis and interpretation of data, an EPHTN focuses on: 1 -- environmental hazards; 2 -- human exposure to environmental hazards; and 3 -- health effects potentially related to exposure to environmental hazards. To satisfy the definition of a surveillance system the data must be disseminated to plan, implement, and evaluate environmental public health action. Results: A close working relationship developed with NCEH where information was exchanged to assist in the development of an EPHTN that incorporated NASA remote sensing data into a surveillance network for disseminating public health tracking information to users. This project's success provided NASA with the opportunity to work with other public health entities such as the University of Mississippi Medical Center, the University of New Mexico and the University of Arizona. Conclusions: HELIX-Atlanta became a functioning part of the national EPHTN for tracking environmental hazards and exposure, particularly as related to air quality over Atlanta.
Learning Objectives: 1 -- remote sensing data can be integral to an EPHTN; 2 -- public tracking objectives can be enhanced through remote sensing data; 3 -- NASA's involvement in public health applications can have wider benefits in the future.
NASA Technical Reports Server (NTRS)
Agurok, Llya
2013-01-01
The Hyperspectral Imager-Tracker (HIT) is a technique for visualization and tracking of low-contrast, fast-moving objects. The HIT architecture is based on an innovative and only recently developed concept in imaging optics. This architecture will give the Light Prescriptions Innovators (LPI) HIT the ability to simultaneously collect spectral band images (a hyperspectral cube) and IR images, and to operate with high light-gathering power and high magnification for multiple fast-moving objects. Adaptive spectral filtering algorithms will efficiently increase the contrast of low-contrast scenes. The most hazardous parts of a space mission are the first stage of a launch and the last 10 kilometers of the landing trajectory. In general, a close watch on spacecraft operation is required at distances up to 70 km. Tracking at such distances is usually associated with the use of radar, but radar's milliradian angular resolution translates to 100-m spatial resolution at a 70-km distance. With sufficient power, radar can track a spacecraft as a whole object, but will not provide detail in the case of an accident, particularly for small debris in the one-meter range, which can only be resolved optically. It will be important to track the debris, which could disintegrate further into more debris, all the way to the ground. Such fragmentation could cause ballistic predictions, based on observations using high-resolution but narrow-field optics for only the first few seconds of the event, to be inaccurate. No existing optical imager architecture satisfies NASA requirements. The HIT was developed for space vehicle tracking, in-flight inspection, and, in the case of an accident, a detailed recording of the event.
The system is a combination of five subsystems: (1) a roving fovea telescope with a wide 30° field of regard; (2) narrow, high-resolution fovea field optics; (3) a Coude optics system for telescope output beam stabilization; (4) a hyperspectral-multispectral imaging assembly; and (5) image analysis software with an effective adaptive spectral filtering algorithm for real-time contrast enhancement.
Method and apparatus for imaging through 3-dimensional tracking of protons
NASA Technical Reports Server (NTRS)
Ryan, James M. (Inventor); Macri, John R. (Inventor); McConnell, Mark L. (Inventor)
2001-01-01
A method and apparatus for creating density images of an object through the 3-dimensional tracking of protons that have passed through the object are provided. More specifically, the 3-dimensional tracking of the protons is accomplished by gathering and analyzing images of the ionization tracks of the protons in a closely packed stack of scintillating fibers.
Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking
Qu, Shiru
2016-01-01
Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the sparse representation framework tend to overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse coding method, which takes into consideration the spatial neighborhood information of the image patch and the computational burden, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained within a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness. PMID:27630710
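The particle filter framework that selects the most reliable tracker follows the usual reweight-and-resample cycle. A minimal one-dimensional step, with a stand-in Gaussian likelihood in place of the paper's sparse-coded appearance likelihood:

```python
import numpy as np

def pf_step(particles, weights, likelihood, rng):
    """One particle-filter iteration: reweight by the appearance
    likelihood, normalize, then resample with replacement."""
    w = weights * likelihood(particles)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(1)
parts = np.array([0.0, 1.0, 2.0])                 # candidate positions
lik = lambda p: np.exp(-(p - 1.0) ** 2)           # target near x = 1
new_parts, new_w = pf_step(parts, np.ones(3) / 3, lik, rng)
print(new_parts)
```

After resampling, particles cluster around high-likelihood positions, which is how the filter concentrates on the most reliable appearance hypothesis.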
New generation of naval IRST: example of EOMS NG
NASA Astrophysics Data System (ADS)
Maltese, Dominique; Deyla, Olivier; Vernet, Guillaume; Preux, Carole; Hilt, Gisèle; Nougues, Pierre-Olivier, II
2010-04-01
Modern warships ranging from Air Warfare Destroyers to Offshore Patrol Vessels (OPV) and Fast Patrol Boats have to deal with an ever increasing variety of threats, both symmetric and asymmetric, for self-protection. This last category has introduced new requirements for combat system sensors and effectors: situation awareness in the proximity of one's own ship has become a priority, as has the need for new, lethal or non-lethal effectors for timely and proportional response. Naval Combat Systems (CS) architects are then faced with an alternative: they can either use existing CS sensors, C2 and weapons, or rely on new, specialized equipment. Both approaches have their pros and cons, and the cost issue is not necessarily trivial to assess. In this paper, we present a multifunction system that is both a passive IRST (InfraRed Search and Track) sensor, designed to automatically detect and track air and surface threats, and an Electro Optical Director (EOD), capable of providing identification of objects as well as accurate 3D tracks. Following an introduction reviewing the design goals for the equipment, the EOMS NG processing architecture is described (image and tracking processes). Then, system performances are presented for different scenarios drawn from field tests.
Space Debris Measurements using the Advanced Modular Incoherent Scatter Radar
NASA Astrophysics Data System (ADS)
Nicolls, M.
The Advanced Modular Incoherent Scatter Radar (AMISR) is a modular, mobile UHF phased-array radar facility developed and used for scientific studies of the ionosphere. The radars are completely remotely operated and allow for pulse-to-pulse beam steering over the field-of-view. A satellite and debris tracking capability fully interleaved with scientific operations has been developed, and the AMISR systems are now used to routinely observe LEO space debris, with the ability to simultaneously track and detect multiple objects. The system makes use of wide-bandwidth radar pulses and coherent processing to detect objects as small as 5-10 cm in size through LEO, achieving a range resolution better than 20 meters for LEO targets. The interleaved operations allow for ionospheric effects on UHF space debris measurements, such as dispersion, to be assessed. The radar architecture, interleaved operations, and impact of space weather on the measurements will be discussed.
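The quoted range resolution follows from the coded pulse bandwidth via the usual relation ΔR = c / 2B, so resolution better than 20 m implies roughly 7.5 MHz or more of effective bandwidth (the actual AMISR waveform parameters are not stated in the abstract):

```python
def range_resolution_m(bandwidth_hz):
    """Nominal radar range resolution, delta_R = c / (2 * B)."""
    c = 299_792_458.0   # speed of light, m/s
    return c / (2.0 * bandwidth_hz)

# ~7.5 MHz of coded bandwidth already yields ~20 m range resolution
print(round(range_resolution_m(7.5e6), 1))  # 20.0
```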
Suppression of fixed pattern noise for infrared image system
NASA Astrophysics Data System (ADS)
Park, Changhan; Han, Jungsoo; Bae, Kyung-Hoon
2008-04-01
In this paper, we propose suppression of fixed pattern noise (FPN) and compensation of soft defects to improve object tracking in a cooled staring infrared focal plane array (IRFPA) imaging system. FPN appears in the observed image when non-uniformity compensation (NUC) varies with temperature. Soft defects appear as glittering black and white points caused by the time-varying non-uniformity characteristics of the IR detector. These are serious problems for object tracking as well as for image quality. The signal processing architecture in a cooled staring IRFPA imaging system consists of three tables of reference gain and offset values, for low, normal, and high temperatures. The proposed method maintains two offset tables for each, thus operating over six temperature ranges in total. The proposed soft defect compensation consists of three stages: (1) dividing an image into sub-images, (2) determining the motion distribution of objects between sub-images, and (3) analyzing the statistical characteristics of each stationary pixel. Based on experimental results, the proposed method produces an improved image that suppresses FPN due to changes in temperature distribution in real time.
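Stage (3), the statistical test on stationary pixels, can be sketched as flagging pixels whose temporal variance across a stack of frames is anomalously high, since a glittering soft defect toggles while true scene pixels stay stable. The threshold rule below is an illustrative assumption, not the paper's method:

```python
import numpy as np

def soft_defect_mask(frames, k=3.0):
    """Flag pixels whose temporal variance exceeds k times the median
    variance across the frame stack (stationary scene assumed)."""
    var = np.var(np.stack(frames), axis=0)   # per-pixel temporal variance
    return var > k * np.median(var)

frames = [np.zeros((4, 4)) for _ in range(10)]
for t, f in enumerate(frames):
    f[1, 1] = t % 2          # one pixel blinks from frame to frame
print(np.argwhere(soft_defect_mask(frames)))  # [[1 1]]
```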