Sample records for tracking moving objects

  1. Self-motion impairs multiple-object tracking.

    PubMed

    Thomas, Laura E; Seiffert, Adriane E

    2010-10-01

    Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement impairs the ability to keep track of other moving objects. Participants attempted to track multiple targets while either moving around the tracking area or remaining in a fixed location. Participants' tracking performance was impaired when they moved to a new location during tracking, even when they were passively moved and when they did not see a shift in viewpoint. Self-motion impaired multiple-object tracking in both an immersive virtual environment and a real-world analog, but did not interfere with a difficult non-spatial tracking task. These results suggest that people use a common mechanism to keep track of both the locations of moving objects around them and their own location.

  2. Real-time object detection, tracking and occlusion reasoning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Divakaran, Ajay; Yu, Qian; Tamrakar, Amir

    A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.

  3. A-Track: Detecting Moving Objects in FITS images

    NASA Astrophysics Data System (ADS)

    Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.

    2017-04-01

    A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.

  4. Detection and Tracking of Moving Objects with Real-Time Onboard Vision System

    NASA Astrophysics Data System (ADS)

    Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.

    2017-05-01

    Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimating and compensating geometric transformations of images, an algorithm for detecting moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be applied to onboard vision systems for aircraft, including small and unmanned aircraft.

  5. How Many Objects are You Worth? Quantification of the Self-Motion Load on Multiple Object Tracking

    PubMed Central

    Thomas, Laura E.; Seiffert, Adriane E.

    2011-01-01

    Perhaps walking and chewing gum is effortless, but walking and tracking moving objects is not. Multiple object tracking is impaired by walking from one location to another, suggesting that updating location of the self puts demands on object tracking processes. Here, we quantified the cost of self-motion in terms of the tracking load. Participants in a virtual environment tracked a variable number of targets (1–5) among distractors while either staying in one place or moving along a path that was similar to the objects’ motion. At the end of each trial, participants decided whether a probed dot was a target or distractor. As in our previous work, self-motion significantly impaired performance in tracking multiple targets. Quantifying tracking capacity for each individual under move versus stay conditions further revealed that self-motion during tracking produced a cost to capacity of about 0.8 (±0.2) objects. Tracking your own motion is worth about one object, suggesting that updating the location of the self is similar to, but perhaps slightly easier than, updating the locations of objects. PMID:21991259

  6. Upside-down: Perceived space affects object-based attention.

    PubMed

    Papenmeier, Frank; Meyerhoff, Hauke S; Brockhoff, Alisa; Jahn, Georg; Huff, Markus

    2017-07-01

    Object-based attention influences the subjective metrics of surrounding space. However, does perceived space influence object-based attention, as well? We used an attentive tracking task that required sustained object-based attention while objects moved within a tracking space. We manipulated perceived space through the availability of depth cues and varied the orientation of the tracking space. When rich depth cues were available (appearance of a voluminous tracking space), the upside-down orientation of the tracking space (objects appeared to move high on a ceiling) caused a pronounced impairment of tracking performance compared with an upright orientation of the tracking space (objects appeared to move on a floor plane). In contrast, this was not the case when reduced depth cues were available (appearance of a flat tracking space). With a preregistered second experiment, we showed that those effects were driven by scene-based depth cues and not object-based depth cues. We conclude that perceived space affects object-based attention and that object-based attention and perceived space are closely interlinked.

  7. Constraints on Multiple Object Tracking in Williams Syndrome: How Atypical Development Can Inform Theories of Visual Processing

    ERIC Educational Resources Information Center

    Ferrara, Katrina; Hoffman, James E.; O'Hearn, Kirsten; Landau, Barbara

    2016-01-01

    The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical…

  8. Brain Activation during Spatial Updating and Attentive Tracking of Moving Targets

    ERIC Educational Resources Information Center

    Jahn, Georg; Wendt, Julia; Lotze, Martin; Papenmeier, Frank; Huff, Markus

    2012-01-01

    Keeping aware of the locations of objects while one is moving requires the updating of spatial representations. As long as the objects are visible, attentional tracking is sufficient, but knowing where objects out of view went in relation to one's own body involves an updating of spatial working memory. Here, multiple object tracking was employed…

  9. Memory-Based Multiagent Coevolution Modeling for Robust Moving Object Tracking

    PubMed Central

    Wang, Yanjiang; Qi, Yujuan; Li, Yongping

    2013-01-01

    The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match to the appearance of an object, and a memory-based multiagent coevolution algorithm for robust tracking of moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system and its own experience. A number of such memory-based agents are randomly distributed near the located object region and then mapped onto a 2D lattice-like environment to predict the new location of the object through their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can locate the correct object after the appearance has changed or the occlusion has ended, and it outperforms traditional particle filter-based tracking methods. PMID:23843739

  10. Memory-based multiagent coevolution modeling for robust moving object tracking.

    PubMed

    Wang, Yanjiang; Qi, Yujuan; Li, Yongping

    2013-01-01

    The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match to the appearance of an object, and a memory-based multiagent coevolution algorithm for robust tracking of moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system and its own experience. A number of such memory-based agents are randomly distributed near the located object region and then mapped onto a 2D lattice-like environment to predict the new location of the object through their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can locate the correct object after the appearance has changed or the occlusion has ended, and it outperforms traditional particle filter-based tracking methods.

  11. Moving object detection and tracking in videos through turbulent medium

    NASA Astrophysics Data System (ADS)

    Halder, Kalyan Kumar; Tahtali, Murat; Anavatti, Sreenatha G.

    2016-06-01

    This paper addresses the problem of identifying and tracking moving objects in a video sequence having a time-varying background. This is a fundamental task in many computer vision applications, though a very challenging one because of turbulence that causes blurring and spatiotemporal movements of the background images. Our proposed approach involves two major steps. First, a moving-object detection algorithm separates real motions from turbulence-induced motions using a two-level thresholding technique. In the second step, a feature-based generalized regression neural network is applied to track the detected objects throughout the frames in the video sequence. The proposed approach uses the centroid and area features of the moving objects and creates the reference regions instantly by selecting the objects within a circle. Simulation experiments are carried out on several turbulence-degraded video sequences, and comparisons with an earlier method confirm that the proposed approach provides more effective tracking of the targets.
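The abstract does not spell out how the two-level thresholding separates real motion from turbulence-induced motion; a generic hysteresis-style sketch is one plausible reading (the function name, thresholds, and 4-connectivity are illustrative assumptions, not the authors' code):

```python
import numpy as np
from collections import deque

def two_level_motion_mask(diff, t_low, t_high):
    """Hysteresis-style two-level thresholding on a frame-difference map:
    pixels above t_high seed motion regions; pixels above t_low are kept
    only if connected to a seed, so weak turbulence jitter is discarded."""
    strong = diff >= t_high
    weak = diff >= t_low
    mask = np.zeros(diff.shape, dtype=bool)
    q = deque(zip(*np.nonzero(strong)))
    for r, c in q:
        mask[r, c] = True
    while q:  # grow seeds through connected weak pixels
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < diff.shape[0] and 0 <= nc < diff.shape[1]:
                if weak[nr, nc] and not mask[nr, nc]:
                    mask[nr, nc] = True
                    q.append((nr, nc))
    return mask
```

Isolated weak responses (turbulence) never enter the mask because they have no strong seed to connect to.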

  12. Evidence against a speed limit in multiple-object tracking.

    PubMed

    Franconeri, S L; Lin, J Y; Pylyshyn, Z W; Fisher, B; Enns, J T

    2008-08-01

    Everyday tasks often require us to keep track of multiple objects in dynamic scenes. Past studies show that tracking becomes more difficult as objects move faster. In the present study, we show that this trade-off may not be due to increased speed itself but may, instead, be due to the increased crowding that usually accompanies increases in speed. Here, we isolate changes in speed from variations in crowding, by projecting a tracking display either onto a small area at the center of a hemispheric projection dome or onto the entire dome. Use of the larger display increased retinal image size and object speed by a factor of 4 but did not increase interobject crowding. Results showed that tracking accuracy was equally good in the large-display condition, even when the objects traveled far into the visual periphery. Accuracy was also not reduced when we tested object speeds that limited performance in the small-display condition. These results, along with a reinterpretation of past studies, suggest that we might be able to track multiple moving objects as fast as we can a single moving object, once the effect of object crowding is eliminated.

  13. Another Way of Tracking Moving Objects Using Short Video Clips

    ERIC Educational Resources Information Center

    Vera, Francisco; Romanque, Cristian

    2009-01-01

    Physics teachers have long employed video clips to study moving objects in their classrooms and instructional labs. A number of approaches exist, both free and commercial, for tracking the coordinates of a point using video. The main characteristics of the method described in this paper are: it is simple to use; coordinates can be tracked using…

  14. Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features

    NASA Astrophysics Data System (ADS)

    Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique

    2011-12-01

    We propose a new multi-target tracking approach that reliably tracks multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. To obtain these 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model, and visual reliability measures of its attributes. These reliability measures properly weight the contribution of noisy, erroneous, or false data in order to better maintain the integrity of the object dynamics model. A new multi-target tracking algorithm then uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach can manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated on publicly accessible video-surveillance benchmarks. It runs in real time, and its results are competitive with other tracking algorithms, with minimal (or no) reconfiguration effort between different videos.

  15. The Role of Visual Working Memory in Attentive Tracking of Unique Objects

    ERIC Educational Resources Information Center

    Makovski, Tal; Jiang, Yuhong V.

    2009-01-01

    When tracking moving objects in space humans usually attend to the objects' spatial locations and update this information over time. To what extent do surface features assist attentive tracking? In this study we asked participants to track identical or uniquely colored objects. Tracking was enhanced when objects were unique in color. The benefit…

  16. A-Track: A new approach for detection of moving objects in FITS images

    NASA Astrophysics Data System (ADS)

    Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.

    2016-10-01

    We have developed a fast, open-source, cross-platform pipeline, called A-Track, for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The pipeline is coded in Python 3. The moving objects are detected using a modified line detection algorithm, called MILD. We tested the pipeline on astronomical data acquired by an SI-1100 CCD with a 1-meter telescope. We found that A-Track performs very well in terms of detection efficiency, stability, and processing time. The code is hosted on GitHub under the GNU GPL v3 license.
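The line-detection idea behind pipelines like this — flagging sources whose positions across equally spaced exposures lie on a line, i.e. are consistent with constant linear motion — can be sketched roughly as follows (the function, tolerance, and minimum-motion cutoff are illustrative assumptions, not A-Track's actual MILD implementation):

```python
import math

def find_linear_movers(frames, tol=1.0, min_motion=0.5):
    """frames: three lists of (x, y) source detections from equally spaced
    exposures. Returns triples of detections consistent with constant
    linear motion across the three frames."""
    movers = []
    f0, f1, f2 = frames
    for p0 in f0:
        for p1 in f1:
            if math.hypot(p1[0] - p0[0], p1[1] - p0[1]) < min_motion:
                continue  # stationary source (a star), not a mover
            # constant velocity => extrapolate to the third frame
            pred = (2 * p1[0] - p0[0], 2 * p1[1] - p0[1])
            for p2 in f2:
                if math.hypot(p2[0] - pred[0], p2[1] - pred[1]) <= tol:
                    movers.append((p0, p1, p2))
    return movers
```

Stars appear at (nearly) the same coordinates in every frame and are filtered by the minimum-motion cutoff; an asteroid advances by a roughly constant displacement per exposure.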

  17. A mathematical model for computer image tracking.

    PubMed

    Legters, G R; Young, T Y

    1982-06-01

    A mathematical model using an operator formulation for a moving object in a sequence of images is presented. Time-varying translation and rotation operators are derived to describe the motion. A variational estimation algorithm is developed to track the dynamic parameters of the operators. The occlusion problem is alleviated by using a predictive Kalman filter to keep the tracking on course during severe occlusion. The tracking algorithm (variational estimation in conjunction with Kalman filter) is implemented to track moving objects with occasional occlusion in computer-simulated binary images.
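A minimal sketch of the predictive Kalman filter idea from this abstract — coasting on the constant-velocity prediction when measurements are lost to occlusion — assuming a 1-D state and illustrative noise parameters (not the paper's actual operator model):

```python
import numpy as np

def kalman_track(measurements, q=1e-3, r=0.25):
    """1-D constant-velocity Kalman filter. `measurements` may contain
    None for occluded frames, where the filter skips the update step and
    coasts on its prediction, keeping the track on course."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise
    R = np.array([[r]])                      # measurement noise
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update only when the object is visible
        if z is not None:
            y = np.array([[z]]) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out
```

During the `None` frames the position estimate keeps advancing at the learned velocity, which is exactly what bridges a severe occlusion.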

  18. Exhausting Attentional Tracking Resources with a Single Fast-Moving Object

    ERIC Educational Resources Information Center

    Holcombe, Alex O.; Chen, Wei-Ying

    2012-01-01

    Driving on a busy road, eluding a group of predators, or playing a team sport involves keeping track of multiple moving objects. In typical laboratory tasks, the number of visual targets that humans can track is about four. Three types of theories have been advanced to explain this limit. The fixed-limit theory posits a set number of attentional…

  19. Sustained multifocal attentional enhancement of stimulus processing in early visual areas predicts tracking performance.

    PubMed

    Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K

    2013-03-20

    Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were localized to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.

  20. Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions

    NASA Astrophysics Data System (ADS)

    Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.

    2005-03-01

    The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system can also focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system employs two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects moving objects using change detection techniques. The detected objects are tracked over time, and their positions are indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics toward the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of an object detected in the image-plane reference system is translated into coordinates on the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The work's novelty and strength reside in the cooperative multi-sensor approach, in high-resolution long-distance tracking, and in the automatic collection of biometric data, such as a clip of a person's face, for recognition purposes.

  1. The research on the mean shift algorithm for target tracking

    NASA Astrophysics Data System (ADS)

    CAO, Honghong

    2017-06-01

    The traditional mean shift algorithm for target tracking is effective and fast, but it still has some shortcomings. It easily falls into a local optimum during tracking, and its effectiveness is weak when the object moves fast. Moreover, the size of the tracking window never changes, so the method fails when the size of the moving object changes. We therefore propose a new method: particle swarm optimization is used to optimize the mean shift algorithm for target tracking, while SIFT (scale-invariant feature transform) and an affine transformation make the size of the tracking window adaptive. Finally, we evaluate the method through comparative experiments. Experimental results indicate that the proposed method can effectively track the object, with the tracking window adapting in size.
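The core mean shift iteration that this paper builds on — repeatedly moving a fixed window to the centroid of a weight map (e.g. a histogram back-projection) until it stops shifting — can be sketched in plain NumPy (an illustration of the baseline algorithm, not the paper's PSO-augmented version):

```python
import numpy as np

def mean_shift(weights, start, win=5, max_iter=20):
    """Shift a square window of side `win` toward the centroid of
    `weights` until the window center stops moving. `weights` plays the
    role of a histogram back-projection map; `start` is (row, col)."""
    r, c = start
    h = win // 2
    for _ in range(max_iter):
        r0, r1 = max(0, r - h), min(weights.shape[0], r + h + 1)
        c0, c1 = max(0, c - h), min(weights.shape[1], c + h + 1)
        patch = weights[r0:r1, c0:c1]
        total = patch.sum()
        if total == 0:
            break  # no support under the window
        rows, cols = np.mgrid[r0:r1, c0:c1]
        nr = int(np.round((rows * patch).sum() / total))
        nc = int(np.round((cols * patch).sum() / total))
        if (nr, nc) == (r, c):
            break  # converged
        r, c = nr, nc
    return r, c
```

The fixed `win` is exactly the weakness the abstract points out: the window cannot adapt when the object's size changes, which motivates the SIFT/affine extension.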

  2. Visual attention is required for multiple object tracking.

    PubMed

    Tran, Annie; Hoffman, James E

    2016-12-01

    In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors, suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there, but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects.

  3. Visual Sensor Based Abnormal Event Detection with Moving Shadow Removal in Home Healthcare Applications

    PubMed Central

    Lee, Young-Sook; Chung, Wan-Young

    2012-01-01

    Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is an essential step for developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Visual sensor-based abnormal event detection using shape feature variation and 3-D trajectory is presented to overcome low fall detection rates. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities, such as forward falls, backward falls, and sideways falls, from normal activities. PMID:22368486

  4. Dynamic Binding of Identity and Location Information: A Serial Model of Multiple Identity Tracking

    ERIC Educational Resources Information Center

    Oksama, Lauri; Hyona, Jukka

    2008-01-01

    Tracking of multiple moving objects is commonly assumed to be carried out by a fixed-capacity parallel mechanism. The present study proposes a serial model (MOMIT) to explain performance accuracy in the maintenance of multiple moving objects with distinct identities. A serial refresh mechanism is postulated, which makes recourse to continuous…

  5. Super-resolution imaging applied to moving object tracking

    NASA Astrophysics Data System (ADS)

    Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi

    2017-10-01

    Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. High visual quality and precise localization of the tracked target are desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise; the causes include low-quality video, system noise, small object size, and other factors. In order to improve the precision of tracking, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step applies super-resolution imaging to the frame sequence, cropping several frames or all of them. The second step tracks the resulting super-resolution images. Super-resolution is a technique for obtaining high-resolution images from low-resolution images. In this research, a single-frame super-resolution technique is proposed for the tracking approach, as single-frame methods have the advantage of fast computation. The method used for tracking is Camshift, whose advantage is a simple calculation based on an HSV color histogram, which copes with varying object color. The computational complexity and large memory requirements of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and good lighting conditions.
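The abstract does not specify which single-frame super-resolution method is used, so plain bilinear upscaling stands in here as an illustrative first step (upscale the cropped region, then hand it to the tracker); the function name and interpolation choice are assumptions:

```python
import numpy as np

def upscale_bilinear(img, factor):
    """Single-frame upscaling by bilinear interpolation -- a simple
    stand-in for the (unspecified) single-frame super-resolution step
    applied to a cropped frame before tracking."""
    h, w = img.shape
    H, W = h * factor, w * factor
    # sample positions in the source image for each output pixel
    rs = np.linspace(0, h - 1, H)
    cs = np.linspace(0, w - 1, W)
    r0 = np.floor(rs).astype(int); r1 = np.minimum(r0 + 1, h - 1)
    c0 = np.floor(cs).astype(int); c1 = np.minimum(c0 + 1, w - 1)
    fr = (rs - r0)[:, None]  # fractional row offsets
    fc = (cs - c0)[None, :]  # fractional column offsets
    top = img[np.ix_(r0, c0)] * (1 - fc) + img[np.ix_(r0, c1)] * fc
    bot = img[np.ix_(r1, c0)] * (1 - fc) + img[np.ix_(r1, c1)] * fc
    return top * (1 - fr) + bot * fr
```

A real pipeline would use a learned or reconstruction-based super-resolution model here, but the interface is the same: small crop in, enlarged crop out, tracker runs on the enlarged crop.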

  6. Tracker Toolkit

    NASA Technical Reports Server (NTRS)

    Lewis, Steven J.; Palacios, David M.

    2013-01-01

    This software can track multiple moving objects within a video stream simultaneously, use visual features to aid in the tracking, and initiate tracks based on object detection in a subregion. A simple programmatic interface allows plugging into larger image chain modeling suites. It extracts unique visual features for aid in tracking and later analysis, and includes sub-functionality for extracting visual features about an object identified within an image frame. Tracker Toolkit utilizes a feature extraction algorithm to tag each object with metadata features about its size, shape, color, and movement. Its functionality is independent of the scale of objects within a scene. The only assumption made on the tracked objects is that they move. There are no constraints on size within the scene, shape, or type of movement. The Tracker Toolkit is also capable of following an arbitrary number of objects in the same scene, identifying and propagating the track of each object from frame to frame. Target objects may be specified for tracking beforehand, or may be dynamically discovered within a tripwire region. Initialization of the Tracker Toolkit algorithm includes two steps: Initializing the data structures for tracked target objects, including targets preselected for tracking; and initializing the tripwire region. If no tripwire region is desired, this step is skipped. The tripwire region is an area within the frames that is always checked for new objects, and all new objects discovered within the region will be tracked until lost (by leaving the frame, stopping, or blending in to the background).

  7. Detecting multiple moving objects in crowded environments with coherent motion regions

    DOEpatents

    Cheriyadat, Anil M.; Radke, Richard J.

    2013-06-11

    Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The algorithm enforces the constraint that selected coherent motion regions contain disjoint sets of tracks, defined in a three-dimensional space that includes a time dimension. It operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, where a trajectory similarity factor is a measure of the maximum distance between a pair of feature point tracks.
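A trajectory similarity factor of this kind — the maximum distance between a pair of feature point tracks over their common time span — could be computed as in this sketch (the function name is illustrative; the patent's exact formulation may differ):

```python
import math

def trajectory_similarity(track_a, track_b):
    """Maximum pointwise distance between two feature-point tracks over
    their common time span. Smaller values suggest the tracks belong to
    the same coherently moving object; tracks are lists of (x, y)."""
    n = min(len(track_a), len(track_b))
    return max(math.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(track_a[:n], track_b[:n]))
```

Thresholding this value over all track pairs is one simple way to group tracks into candidate coherent motion regions.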

  8. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could leave an object in partial or full view in one camera when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  9. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects.

    PubMed

    Kang, Ziho; Mandal, Saptarshi; Crutchfield, Jerry; Millan, Angel; McClung, Sarah N

    2016-01-01

    Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interests (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT) that controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance.
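The AGT idea described here — growing or shrinking a dynamic AOI around a multielement object and then testing fixation containment — can be sketched for the rectangular-AOI case (function names and the rectangle representation are illustrative assumptions, not the authors' implementation):

```python
def make_aoi(points, agt):
    """Rectangular dynamic AOI around a multielement object's element
    positions, grown (agt > 0) or shrunk (agt < 0) by the AOI gap
    tolerance to absorb eye-tracker visual angle error."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - agt, min(ys) - agt, max(xs) + agt, max(ys) + agt)

def fixation_in_aoi(fix, aoi):
    """True if the fixation coordinate falls inside the AOI rectangle."""
    x, y = fix
    x0, y0, x1, y1 = aoi
    return x0 <= x <= x1 and y0 <= y <= y1
```

The AOIs are recomputed every frame as the elements move, and the AGT value trades off missed fixations (too small) against overlap between neighboring objects' AOIs (too large) — hence the search for a near-optimal AGT.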

  10. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects

    PubMed Central

    Mandal, Saptarshi

    2016-01-01

    Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) the visual angle error of the eye tracker makes it impossible to obtain exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interest (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT), which controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near-optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations, where air traffic control specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ depending on how dynamic AOIs are defined to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance. PMID:27725830

  11. Tracking moving targets behind a scattering medium via speckle correlation.

    PubMed

    Guo, Chengfei; Liu, Jietao; Wu, Tengfei; Zhu, Lei; Shao, Xiaopeng

    2018-02-01

    Tracking moving targets behind a scattering medium is a challenge with many important applications in various fields. Owing to multiple scattering, only a random speckle pattern, rather than the object image, is received on the camera when light passes through highly scattering layers. Significantly, an important property of speckle patterns has been identified: target information can be derived from the speckle correlation. In this work, inspired by notions used in computer vision and deformation detection, we demonstrate through specific simulations and experiments a simple object tracking method in which, by using the speckle correlation, the movement of a hidden object can be tracked in the lateral and axial directions. In addition, the rotation state of the moving target can be recognized by utilizing the autocorrelation of a speckle pattern. This work will benefit biomedical applications in the quantitative analysis of the working mechanisms of micro-objects and the acquisition of dynamical information about micro-object motion.
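
    The lateral-tracking idea rests on the optical memory effect: a small transverse movement of the hidden object shifts the speckle pattern, so the displacement appears as the peak offset of the cross-correlation between two speckle frames. A one-dimensional sketch with synthetic data (not the authors' code) makes this concrete:

```python
import random

def cross_correlate_shift(a, b):
    """Estimate the circular shift of b relative to a as the lag that
    maximizes the cross-correlation of the mean-centered signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    ac = [x - ma for x in a]
    bc = [x - mb for x in b]
    best_lag, best_score = 0, float("-inf")
    for lag in range(-n // 2, n // 2):
        score = sum(ac[i] * bc[(i + lag) % n] for i in range(n))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic "speckle" intensity trace and the same trace shifted by
# 7 pixels, mimicking the memory-effect shift as the object moves.
random.seed(0)
speckle = [random.random() for _ in range(64)]
shifted = [speckle[(i - 7) % 64] for i in range(64)]

print(cross_correlate_shift(speckle, shifted))  # -> 7
```

    Real implementations work on 2-D camera frames and use FFT-based correlation for speed, but the peak-offset principle is the same.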

  12. Grouping and trajectory storage in multiple object tracking: impairments due to common item motions.

    PubMed

    Suganuma, Mutsumi; Yokosawa, Kazuhiko

    2006-01-01

    In our natural viewing, we notice that objects change their locations across space and time. However, there has been relatively little consideration of the role of motion information in the construction and maintenance of object representations. We investigated this question in the context of the multiple object tracking (MOT) paradigm, wherein observers must keep track of target objects as they move randomly amid featurally identical distractors. In three experiments, we observed impairments in tracking ability when the motions of the target and distractor items shared particular properties. Specifically, we observed impairments when the target and distractor items were in a chasing relationship or moved in a uniform direction. Surprisingly, tracking ability was impaired by these manipulations even when observers failed to notice them. Our results suggest that differentiable trajectory information is an important factor in successful performance of MOT tasks. More generally, these results suggest that various types of common motion can serve as cues to form more global object representations even in the absence of other grouping cues.

  13. Sensor modeling and demonstration of a multi-object spectrometer for performance-driven sensing

    NASA Astrophysics Data System (ADS)

    Kerekes, John P.; Presnar, Michael D.; Fourspring, Kenneth D.; Ninkov, Zoran; Pogorzala, David R.; Raisanen, Alan D.; Rice, Andrew C.; Vasquez, Juan R.; Patel, Jeffrey P.; MacIntyre, Robert T.; Brown, Scott D.

    2009-05-01

    A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to reflect light to a panchromatic imaging array. When an object of interest is detected, the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper will present example sensor performance from empirical data collected in laboratory experiments, as well as our approach in designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a high-fidelity, hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track moving vehicle targets.

  14. Tracking multiple objects is limited only by object spacing, not by speed, time, or capacity.

    PubMed

    Franconeri, S L; Jonathan, S V; Scimeca, J M

    2010-07-01

    In dealing with a dynamic world, people have the ability to maintain selective attention on a subset of moving objects in the environment. Performance in such multiple-object tracking is limited by three primary factors-the number of objects that one can track, the speed at which one can track them, and how close together they can be. We argue that this last limit, of object spacing, is the root cause of all performance constraints in multiple-object tracking. In two experiments, we found that as long as the distribution of object spacing is held constant, tracking performance is unaffected by large changes in object speed and tracking time. These results suggest that barring object-spacing constraints, people could reliably track an unlimited number of objects as fast as they could track a single object.

  15. A mobile agent-based moving objects indexing algorithm in location based service

    NASA Astrophysics Data System (ADS)

    Fang, Zhixiang; Li, Qingquan; Xu, Hong

    2006-10-01

    This paper extends the advantages of location-based services, specifically their ability to manage and index the positions of moving objects. With this objective in mind, a mobile agent-based moving objects indexing algorithm is proposed to efficiently process indexing requests and to adapt to the limitations of the location-based service environment. The prominent feature of this structure is that it views a moving object's behavior as a mobile agent's span; a unique mapping between the geographical position of a moving object and the span point of its mobile agent is built to maintain the close relationship between them, and this mapping serves as a significant clue for the mobile agent-based index when tracking moving objects.

  16. Real-time moving objects detection and tracking from airborne infrared camera

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2017-10-01

    Detecting and tracking moving objects in real-time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene and pursuing objects of interest. To fully exploit this potential, versatile solutions are needed, but the majority of those in the literature work only under specific conditions regarding the considered scenario, the characteristics of the moving objects or the aircraft movements. To overcome these limitations, we propose a novel approach to the problem based on a cheap inertial navigation system (INS) mounted on the aircraft. To jointly exploit the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages, performed iteratively on each acquired frame: the detection stage, in which a coarse detection map is computed using a local statistic that is both fast to calculate and robust to noise and to self-deletion of the targeted objects; the registration stage, in which the positions of the detected objects are coherently reported on a common reference frame by exploiting the INS data; and the tracking stage, in which steady objects are rejected, moving objects are tracked, and an estimate of their future position is computed for use in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false alarm rate, and in terms of accuracy in the estimation of the position and velocity of the objects. In addition, for each frame, the detection and tracking map was generated by the algorithm before the acquisition of the subsequent frame, proving its capability to work in real time.
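
    A toy sketch of the register/track stages (assumed names, a 1-D world, and index-aligned detections; data association and the detection statistic are elided): detections in camera coordinates are shifted into a common frame using the INS-reported platform position, steady objects are rejected, and a constant-velocity prediction is produced for the next frame.

```python
# Illustrative 1-D version of the register/track loop described above
# (assumed structure, not the authors' algorithm).

def register(detections_px, platform_pos):
    """Registration stage: shift camera-frame detections into a common
    ground frame using the INS-reported platform position."""
    return [d + platform_pos for d in detections_px]

def track(prev_positions, curr_positions, min_motion=0.5):
    """Tracking stage: reject steady objects and predict each mover's
    next position with a constant-velocity model."""
    tracks = []
    for prev, curr in zip(prev_positions, curr_positions):
        velocity = curr - prev
        if abs(velocity) < min_motion:
            continue  # steady object (e.g. ground clutter) is rejected
        tracks.append({"pos": curr, "vel": velocity,
                       "predicted": curr + velocity})
    return tracks

# Frame t: platform at 100 m; frame t+1: platform has moved to 104 m.
frame_t  = register([10.0, 20.0, 30.0], platform_pos=100.0)
frame_t1 = register([9.0, 18.0, 26.0], platform_pos=104.0)

movers = track(frame_t, frame_t1)  # third detection is steady: rejected
```

    Note how the third object appears to move in camera coordinates but is stationary once the INS data compensates for the platform's own motion, which is exactly why the registration stage precedes steady-object rejection.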

  17. Eye tracking a self-moved target with complex hand-target dynamics

    PubMed Central

    Landelle, Caroline; Montagnini, Anna; Madelain, Laurent

    2016-01-01

    Previous work has shown that the ability to track a moving target with the eye is substantially improved when the target is self-moved by the subject's hand compared with when it is externally moved. Here, we explored a situation in which the mapping between hand movement and target motion was perturbed by simulating an elastic relationship between the hand and target. Our objective was to determine whether the predictive mechanisms driving eye-hand coordination could be updated to accommodate these complex hand-target dynamics. To fully appreciate the behavioral effects of this perturbation, we compared eye tracking performance when self-moving a target with a rigid (simple) mapping and with a spring mapping, as well as when the subject tracked target trajectories that he/she had previously generated when using the rigid or spring mapping. Concerning the rigid mapping, our results confirmed that smooth pursuit was more accurate when the target was self-moved than externally moved. In contrast, with the spring mapping, eye tracking initially had similarly low spatial accuracy (though a shorter temporal lag) in the self-moved versus externally moved conditions. However, within ∼5 min of practice, smooth pursuit improved in the self-moved spring condition, up to a level similar to the self-moved rigid condition. Subsequently, when the mapping unexpectedly switched from spring to rigid, the eye initially followed the expected target trajectory and not the real one, thereby suggesting that subjects used an internal representation of the new hand-target dynamics. Overall, these results emphasize the stunning adaptability of smooth pursuit when self-maneuvering objects with complex dynamics. PMID:27466129

  18. Spatiotemporal motion boundary detection and motion boundary velocity estimation for tracking moving objects with a moving camera: a level sets PDEs approach with concurrent camera motion compensation.

    PubMed

    Feghali, Rosario; Mitiche, Amar

    2004-11-01

    The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method simultaneously estimates the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations simultaneously determine an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.

  19. Multiple-Object Tracking in Children: The "Catch the Spies" Task

    ERIC Educational Resources Information Center

    Trick, L.M.; Jaspers-Fayer, F.; Sethi, N.

    2005-01-01

    Multiple-object tracking involves simultaneously tracking positions of a number of target-items as they move among distractors. The standard version of the task poses special challenges for children, demanding extended concentration and the ability to distinguish targets from identical-looking distractors, and may thus underestimate children's…

  20. Normal aging delays and compromises early multifocal visual attention during object tracking.

    PubMed

    Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman

    2013-02-01

    Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.

  1. Developmental Profiles for Multiple Object Tracking and Spatial Memory: Typically Developing Preschoolers and People with Williams Syndrome

    ERIC Educational Resources Information Center

    O'Hearn, Kirsten; Hoffman, James E.; Landau, Barbara

    2010-01-01

    The ability to track moving objects, a crucial skill for mature performance on everyday spatial tasks, has been hypothesized to require a specialized mechanism that may be available in infancy (i.e. indexes). Consistent with the idea of specialization, our previous work showed that object tracking was more impaired than a matched spatial memory…

  2. Attentional Signatures of Perception: Multiple Object Tracking Reveals the Automaticity of Contour Interpolation

    ERIC Educational Resources Information Center

    Keane, Brian P.; Mettler, Everett; Tsoi, Vicky; Kellman, Philip J.

    2011-01-01

    Multiple object tracking (MOT) is an attentional task wherein observers attempt to track multiple targets among moving distractors. Contour interpolation is a perceptual process that fills-in nonvisible edges on the basis of how surrounding edges (inducers) are spatiotemporally related. In five experiments, we explored the automaticity of…

  3. Splitting attention reduces temporal resolution from 7 Hz for tracking one object to <3 Hz when tracking three.

    PubMed

    Holcombe, Alex O; Chen, Wei-Ying

    2013-01-09

    Overall performance when tracking moving targets is known to be poorer for larger numbers of targets, but the specific effect on tracking's temporal resolution has never been investigated. We document a broad range of display parameters for which visual tracking is limited by temporal frequency (the interval between when a target is at each location and a distracter moves in and replaces it) rather than by object speed. We tested tracking of one, two, and three moving targets while the eyes remained fixed. Variation of the number of distracters and their speed revealed both speed limits and temporal frequency limits on tracking. The temporal frequency limit fell from 7 Hz with one target to 4 Hz with two targets and 2.6 Hz with three targets. The large size of this performance decrease implies that in the two-target condition participants would have done better by tracking only one of the two targets and ignoring the other. These effects are predicted by serial models involving a single tracking focus that must switch among the targets, sampling the position of only one target at a time. If parallel processing theories are to explain why dividing the tracking resource reduces temporal resolution so markedly, supplemental assumptions will be required.

  4. Kalman filter-based tracking of moving objects using linear ultrasonic sensor array for road vehicles

    NASA Astrophysics Data System (ADS)

    Li, Shengbo Eben; Li, Guofa; Yu, Jiaying; Liu, Chang; Cheng, Bo; Wang, Jianqiang; Li, Keqiang

    2018-01-01

    Detection and tracking of objects in the side-near-field has attracted much attention in the development of advanced driver assistance systems. This paper presents a cost-effective approach to tracking moving objects around vehicles using linearly arrayed ultrasonic sensors. To understand the detection characteristics of a single sensor, an empirical detection model was developed considering the shapes and surface materials of various detected objects. Eight sensors were arrayed linearly to expand the detection range for further application in traffic environment recognition. Two types of tracking algorithms for the sensor array, an extended Kalman filter (EKF) and an unscented Kalman filter (UKF), were designed for dynamic object tracking. The ultrasonic sensor array was designed to support two firing sequences: mutual firing or serial firing. The effectiveness of the designed algorithms was verified in two typical driving scenarios: passing intersections with traffic sign poles or street lights, and overtaking another vehicle. Experimental results showed that both the EKF and the UKF yielded more precise tracked positions and smaller RMSE (root mean square error) than a traditional triangular positioning method. This effectiveness also encourages the application of cost-effective ultrasonic sensors for near-field environment perception in autonomous driving systems.
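
    For illustration, a minimal linear Kalman filter with a 1-D constant-velocity model (a generic textbook sketch with assumed noise parameters, simpler than the EKF/UKF variants the record describes) shows the predict/update cycle such a tracker runs on each range measurement:

```python
# Minimal 1-D constant-velocity Kalman filter (illustrative sketch).
# State: [position, velocity]; measurement: position only.

def kalman_step(x, P, z, dt=0.1, q=0.01, r=0.25):
    """One predict/update cycle. x = [pos, vel], P = 2x2 covariance,
    z = measured position. Returns the updated (x, P)."""
    # --- predict with the constant-velocity motion model ---
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    # --- update with the position measurement ---
    S = Pp[0][0] + r                  # innovation covariance
    K = [Pp[0][0] / S, Pp[1][0] / S]  # Kalman gain
    y = z - xp[0]                     # innovation
    x_new = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    P_new = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
             [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return x_new, P_new

# Track an object moving at 2 m/s from position readings.
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 201):
    x, P = kalman_step(x, P, z=2.0 * k * 0.1)
# x converges toward the true state [40.0, 2.0]
```

    The EKF and UKF in the paper extend this cycle to the nonlinear range geometry of an ultrasonic array; the predict/update structure is unchanged.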

  5. Robust feedback zoom tracking for digital video surveillance.

    PubMed

    Zou, Tengyue; Tang, Xiaoqi; Song, Bao; Wang, Jin; Chen, Jihong

    2012-01-01

    Zoom tracking is an important function in video surveillance, particularly in traffic management and security monitoring. It involves keeping an object of interest in focus during the zoom operation. Zoom tracking is typically achieved by moving the zoom and focus motors in lenses following the so-called "trace curve", which shows the in-focus motor positions versus the zoom motor positions for a specific object distance. The main task of a zoom tracking approach is to accurately estimate the trace curve for the specified object. Because a proportional integral derivative (PID) controller has historically been considered the best controller in the absence of knowledge of the underlying process, and because of its high-quality performance in motor control, we propose in this paper a novel feedback zoom tracking (FZT) approach based on geometric trace curve estimation and a PID feedback controller. The performance of this approach is compared with existing zoom tracking methods in digital video surveillance. The real-time implementation results obtained on an actual digital video platform indicate that the developed FZT approach not only solves the traditional one-to-many mapping problem without pre-training but also improves robustness for tracking moving or switching objects, which is the key challenge in video surveillance.
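
    As a generic illustration of the feedback idea (a textbook PID sketch with assumed gains and a toy first-order motor model, not the paper's FZT implementation), a PID loop can drive the focus motor toward the in-focus position predicted by the trace curve:

```python
# Generic PID loop driving a focus motor toward a trace-curve target
# (illustrative; gains, trace curve, and motor model are all assumed).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Toy trace curve: in-focus focus-motor position vs. zoom position.
def trace_curve(zoom_pos):
    return 0.5 * zoom_pos + 100.0

pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.02)
focus = 0.0
for _ in range(1000):
    command = pid.step(trace_curve(zoom_pos=400.0), focus)
    focus += command * pid.dt  # first-order motor response
```

    With these assumed gains the loop settles near the in-focus position of 300 predicted by the toy trace curve within the simulated 20 s; the FZT approach closes an analogous loop around focus measurements on the actual lens.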

  6. Study of moving object detecting and tracking algorithm for video surveillance system

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhang, Rongfu

    2010-10-01

    This paper describes a specific process for moving target detection and tracking in video surveillance. Obtaining a high-quality background is the key to achieving differential target detection in video surveillance. The paper uses a block segmentation method to build a clear background and background differencing to detect the moving target; after a series of treatments, a more complete object can be extracted from the original image, and the smallest bounding rectangle is then used to locate the object. In a video surveillance system, camera delay and other factors lead to tracking lag, so a Kalman filter model based on template matching is proposed. Using the predictive and estimation capabilities of the Kalman filter, with the center of the smallest bounding rectangle as the predicted value, the position where the object may appear at the next moment is predicted, and template matching then follows in the region centered on this position. By calculating the cross-correlation similarity of the current image and the reference image, the best matching center can be determined. As this narrows the scope of the search, the search time is reduced, thereby achieving fast tracking.
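
    The background-difference step can be sketched as follows (a minimal grayscale example with an assumed threshold; not the paper's block-segmentation method): subtract the background frame, threshold the difference, and take the smallest bounding rectangle of the foreground pixels.

```python
# Minimal background-difference detection with smallest bounding
# rectangle (illustrative; the threshold and frames are assumed).

def detect_object(frame, background, threshold=30):
    """Return the smallest bounding rectangle (xmin, ymin, xmax, ymax)
    of pixels that differ from the background, or None."""
    xs, ys = [], []
    for y, (row, bg_row) in enumerate(zip(frame, background)):
        for x, (p, b) in enumerate(zip(row, bg_row)):
            if abs(p - b) > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

# 5x5 background of zeros; a bright 2x2 "object" at rows 1-2, cols 2-3.
background = [[0] * 5 for _ in range(5)]
frame = [row[:] for row in background]
for y in (1, 2):
    for x in (2, 3):
        frame[y][x] = 200

print(detect_object(frame, background))  # -> (2, 1, 3, 2)
```

    In the paper's pipeline, the center of this rectangle feeds the Kalman predictor, and template matching is then restricted to a window around the predicted center.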

  7. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.

    PubMed

    Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-08-23

    Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems and beyond. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved, and on the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, Viewpoint Feature Histogram (VFH) feature extraction, object model building, and search matching of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter has the advantage of high efficiency, while the adaptive particle filter offers high robustness and high precision, when tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
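
    Euclidean clustering, one step of the pipeline above, groups points linked by chains of neighbors closer than a distance tolerance. A small 2-D sketch in pure Python (assumed tolerance and points; a real pipeline would use a k-d tree over 3-D points, as in PCL):

```python
import math

def euclidean_cluster(points, tolerance):
    """Group points into clusters: two points share a cluster if they
    are linked by a chain of neighbors closer than `tolerance`."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        frontier = [unvisited.pop()]
        cluster = list(frontier)
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) < tolerance]
            for j in near:
                unvisited.remove(j)
            frontier.extend(near)
            cluster.extend(near)
        clusters.append(sorted(cluster))
    return clusters

# Two well-separated groups of 2-D points, e.g. two candidate objects
# remaining after ground segmentation.
pts = [(0.0, 0.0), (0.3, 0.1), (0.1, 0.4), (5.0, 5.0), (5.2, 5.1)]
clusters = euclidean_cluster(pts, tolerance=1.0)
print(len(clusters))  # -> 2
```

    Each resulting cluster becomes a candidate object whose features (VFH in the paper) are then matched against the spherical target model.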

  8. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor

    PubMed Central

    Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-01-01

    Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems and beyond. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved, and on the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, Viewpoint Feature Histogram (VFH) feature extraction, object model building, and search matching of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter has the advantage of high efficiency, while the adaptive particle filter offers high robustness and high precision, when tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields. PMID:28832520

  9. Eye Movements during Multiple Object Tracking: Where Do Participants Look?

    ERIC Educational Resources Information Center

    Fehd, Hilda M.; Seiffert, Adriane E.

    2008-01-01

    Similar to the eye movements you might make when viewing a sports game, this experiment investigated where participants tend to look while keeping track of multiple objects. While eye movements were recorded, participants tracked either 1 or 3 of 8 red dots that moved randomly within a square box on a black background. Results indicated that…

  10. Determination of feature generation methods for PTZ camera object tracking

    NASA Astrophysics Data System (ADS)

    Doyle, Daniel D.; Black, Jonathan T.

    2012-06-01

    Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase the learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex, with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The select feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000) and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images such that the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.

  11. Cortical Circuit for Binding Object Identity and Location During Multiple-Object Tracking

    PubMed Central

    Nummenmaa, Lauri; Oksama, Lauri; Glerean, Erico; Hyönä, Jukka

    2017-01-01

    Sustained multifocal attention for moving targets requires binding object identities with their locations. The brain mechanisms of identity-location binding during attentive tracking have remained unresolved. In 2 functional magnetic resonance imaging experiments, we measured participants’ hemodynamic activity during attentive tracking of multiple objects with equivalent (multiple-object tracking) versus distinct (multiple identity tracking, MIT) identities. Task load was manipulated parametrically. Both tasks activated large frontoparietal circuits. MIT led to significantly increased activity in frontoparietal and temporal systems subserving object recognition and working memory. These effects were replicated when eye movements were prohibited. MIT was associated with significantly increased functional connectivity between lateral temporal and frontal and parietal regions. We propose that coordinated activity of this network subserves identity-location binding during attentive tracking. PMID:27913430

  12. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385

  13. Delineating the Neural Signatures of Tracking Spatial Position and Working Memory during Attentive Tracking

    PubMed Central

    Drew, Trafton; Horowitz, Todd S.; Wolfe, Jeremy M.; Vogel, Edward K.

    2015-01-01

    In the attentive tracking task, observers track multiple objects as they move independently and unpredictably among visually identical distractors. Although a number of models of attentive tracking implicate visual working memory as the mechanism responsible for representing target locations, no study has ever directly compared the neural mechanisms of the two tasks. In the current set of experiments, we used electrophysiological recordings to delineate similarities and differences between the neural processing involved in working memory and attentive tracking. We found that the contralateral electrophysiological response to the two tasks was similarly sensitive to the number of items attended in both tasks but that there was also a unique contralateral negativity related to the process of monitoring target position during tracking. This signal was absent for periods of time during tracking tasks when objects briefly stopped moving. These results provide evidence that, during attentive tracking, the process of tracking target locations elicits an electrophysiological response that is distinct and dissociable from neural measures of the number of items being attended. PMID:21228175

  14. Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm.

    PubMed

    Tombu, Michael; Seiffert, Adriane E

    2011-04-01

    People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target-distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking--one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone.
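
    The planet-and-moon motion described above, in which each target-distractor pair rotates about a local point while that point itself orbits the screen center, can be written as a sum of two circular motions (a hypothetical parameterization; the radii and rotation rates are assumptions, not the paper's values):

```python
import math

def pair_position(t, R=200.0, omega_global=0.5, r=40.0,
                  omega_local=3.0, phase=0.0):
    """Position of one item in a target-distractor pair: its local
    rotation (radius r, rate omega_local) rides on the pair's orbit
    about the screen center (radius R, rate omega_global)."""
    cx = R * math.cos(omega_global * t)             # orbiting local center
    cy = R * math.sin(omega_global * t)
    x = cx + r * math.cos(omega_local * t + phase)  # item around center
    y = cy + r * math.sin(omega_local * t + phase)
    return x, y

# Target and distractor sit on opposite sides of the same local center,
# so their separation (proximity) stays fixed at 2*r while speed is set
# by omega_local and omega_global independently.
tx, ty = pair_position(1.0, phase=0.0)
dx, dy = pair_position(1.0, phase=math.pi)
separation = math.hypot(tx - dx, ty - dy)
print(round(separation, 6))  # -> 80.0
```

    Because the separation is constant by construction, varying the rotation rates changes object speed without changing proximity, which is the independent manipulation the paradigm needs.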

  15. Phenomenal Permanence and the Development of Predictive Tracking in Infancy

    ERIC Educational Resources Information Center

    Bertenthal, Bennett I.; Longo, Matthew R.; Kenny, Sarah

    2007-01-01

    The perceived spatiotemporal continuity of objects depends on the way they appear and disappear as they move in the spatial layout. This study investigated whether infants' predictive tracking of a briefly occluded object is sensitive to the manner by which the object disappears and reappears. Five-, 7-, and 9-month-old infants were shown a ball…

  16. Robust Feedback Zoom Tracking for Digital Video Surveillance

    PubMed Central

    Zou, Tengyue; Tang, Xiaoqi; Song, Bao; Wang, Jin; Chen, Jihong

    2012-01-01

Zoom tracking is an important function in video surveillance, particularly in traffic management and security monitoring. It involves keeping an object of interest in focus during the zoom operation. Zoom tracking is typically achieved by moving the zoom and focus motors in lenses following the so-called “trace curve”, which shows the in-focus focus-motor positions versus the zoom-motor positions for a specific object distance. The main task of a zoom tracking approach is to accurately estimate the trace curve for the specified object. Because a proportional-integral-derivative (PID) controller performs well in the absence of knowledge of the underlying process and has a long record of high-quality motor control, we propose a novel feedback zoom tracking (FZT) approach based on geometric trace-curve estimation and a PID feedback controller. The performance of this approach is compared with existing zoom tracking methods in digital video surveillance. The real-time implementation results obtained on an actual digital video platform indicate that the developed FZT approach not only solves the traditional one-to-many mapping problem without pre-training but also improves robustness for tracking moving or switching objects, which is the key challenge in video surveillance. PMID:22969388
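A minimal sketch of the PID feedback loop the FZT approach builds on, assuming a toy integrator model of the focus motor; the gains and the plant model below are illustrative assumptions, not values from the paper:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def track_focus(target, steps=200, dt=0.01):
    """Drive a toy integrator model of the focus motor toward a trace-curve position."""
    pid = PID(kp=5.0, ki=0.1, kd=0.1, dt=dt)
    position = 0.0
    for _ in range(steps):
        # Controller output is treated as a velocity command to the motor.
        position += pid.update(target - position) * dt
    return position
```

Given a step change in the trace-curve target, the loop settles on the new focus position; the integral term is what removes steady-state offset when the plant is biased.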

  17. Robot Grasps Rotating Object

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.; Tso, Kam S.; Litwin, Todd E.; Hayati, Samad A.; Bon, Bruce B.

    1991-01-01

Experimental robotic system semiautomatically grasps rotating object, stops rotation, and pulls object to rest in fixture. Based on combination of advanced techniques for sensing and control, constructed to test concepts for robotic recapture of spinning artificial satellites. Potential terrestrial applications for technology developed with help of system include tracking and grasping of industrial parts on conveyor belts, tracking of vehicles and animals, and soft grasping of moving objects in general.

  18. Indoor Trajectory Tracking Scheme Based on Delaunay Triangulation and Heuristic Information in Wireless Sensor Networks.

    PubMed

    Qin, Junping; Sun, Shiwen; Deng, Qingxu; Liu, Limin; Tian, Yonghong

    2017-06-02

Object tracking and detection is one of the most significant research areas for wireless sensor networks. Existing indoor trajectory tracking schemes in wireless sensor networks are based on continuous localization and moving-object data mining. Indoor trajectory tracking based on the received signal strength indicator (RSSI) has received increased attention because it has low cost and requires no special infrastructure. However, RSSI tracking introduces uncertainty because of the inaccuracies of measurement instruments and the irregularities (instability, multipath, diffraction) of wireless signal transmission in indoor environments. Heuristic information includes some key factors for trajectory tracking procedures. This paper proposes a novel trajectory tracking scheme based on Delaunay triangulation and heuristic information (TTDH). In this scheme, the entire field is divided into a series of triangular regions. The common side of adjacent triangular regions is regarded as a regional boundary. Our scheme detects heuristic information related to a moving object's trajectory, including boundaries and triangular regions. Then, the trajectory is formed by means of a dynamic time-warping position-fingerprint-matching algorithm with heuristic-information constraints. Field experiments show that the average error distance of our scheme is less than 1.5 m, and that error does not accumulate across regions.
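The position-fingerprint matching step pairs a measured RSSI sequence with stored reference fingerprints under dynamic time warping. A minimal DTW sketch; the fingerprint values are invented, and the scheme's Delaunay/boundary heuristic constraints are omitted:

```python
def dtw_distance(seq_a, seq_b, dist=lambda a, b: abs(a - b)):
    """Classic dynamic-time-warping distance between two sequences."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]


def best_fingerprint(measured, references):
    """Index of the reference RSSI fingerprint closest to the measurement under DTW."""
    return min(range(len(references)), key=lambda k: dtw_distance(measured, references[k]))
```

DTW's warping is what makes the match robust to a tracked object moving faster or slower than the reference walk-through: a measurement can align to a stretched or compressed copy of a fingerprint at zero cost.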

  19. Hardware accelerator design for tracking in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

Smart cameras are important components in video analysis. For video analysis, a smart camera needs to detect interesting moving objects, track such objects from frame to frame, and perform analysis of object tracks in real time; real-time tracking is therefore prominent in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (like a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution grayscale video.

  20. Attentional enhancement during multiple-object tracking.

    PubMed

    Drew, Trafton; McCollough, Andrew W; Horowitz, Todd S; Vogel, Edward K

    2009-04-01

    What is the role of attention in multiple-object tracking? Does attention enhance target representations, suppress distractor representations, or both? It is difficult to ask this question in a purely behavioral paradigm without altering the very attentional allocation one is trying to measure. In the present study, we used event-related potentials to examine the early visual evoked responses to task-irrelevant probes without requiring an additional detection task. Subjects tracked two targets among four moving distractors and four stationary distractors. Brief probes were flashed on targets, moving distractors, stationary distractors, or empty space. We obtained a significant enhancement of the visually evoked P1 and N1 components (approximately 100-150 msec) for probes on targets, relative to distractors. Furthermore, good trackers showed larger differences between target and distractor probes than did poor trackers. These results provide evidence of early attentional enhancement of tracked target items and also provide a novel approach to measuring attentional allocation during tracking.

  1. System and method for tracking a signal source. [employing feedback control

    NASA Technical Reports Server (NTRS)

    Mogavero, L. N.; Johnson, E. G.; Evans, J. M., Jr.; Albus, J. S. (Inventor)

    1978-01-01

A system for tracking moving signal sources is disclosed which is particularly adaptable for use in tracking stage performers. A miniature transmitter is attached to the person or object to be tracked and emits a detectable signal of a predetermined frequency. A plurality of detectors positioned in a preset pattern sense the signal and supply output information to a phase detector, which applies signals representing the angular orientation of the transmitter to a computer. The computer provides command signals to a servo network which drives a device, such as a motor-driven mirror reflecting the beam of a spotlight, to track the moving transmitter.

  2. Tracking moving identities: after attending the right location, the identity does not come for free.

    PubMed

    Pinto, Yaïr; Scholte, H Steven; Lamme, V A F

    2012-01-01

Although tracking identical moving objects has been studied since the 1980s, the study of tracking moving objects with distinct identities (referred to as Multiple Identity Tracking, MIT) has begun only recently. So far, only behavioral studies of MIT have been undertaken. These studies have left a fundamental question regarding MIT unanswered: is MIT a one-stage or a two-stage process? According to the one-stage model, after a location has been attended, the identity is released without effort. However, according to the two-stage model, there are two effortful stages in MIT: attending to a location, and attending to the identity of the object at that location. In the current study we investigated this question by measuring brain activity in response to tracking familiar and unfamiliar targets. Familiarity is known to automate effortful processes, so if attention is needed to identify the object, this should become easier. However, if no such attention is needed, familiarity can only affect other processes (such as memory for the target set). Our results revealed that on unfamiliar trials neural activity was higher in both attentional networks and visual identification networks. These results suggest that familiarity in MIT automates attentional identification processes, thus suggesting that attentional identification is needed in MIT. This would imply that MIT is essentially a two-stage process, since after attending the location, the identity does not seem to come for free.

  3. Single and multiple object tracking using log-euclidean Riemannian subspace and block-division appearance model.

    PubMed

    Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei

    2012-12-01

    Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
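The core of the log-euclidean machinery is mapping symmetric positive-definite (SPD) covariance matrices into a vector space via the matrix logarithm, where ordinary Euclidean operations (means, subspaces) become valid. A minimal numpy sketch of just that mapping and the induced distance; the incremental subspace learning and the block-division appearance model are not reproduced:

```python
import numpy as np

def spd_log(mat):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, v = np.linalg.eigh(mat)          # real eigenvalues, orthonormal eigenvectors
    return v @ np.diag(np.log(w)) @ v.T

def log_euclidean_distance(a, b):
    """Log-euclidean Riemannian distance between two SPD (covariance) matrices."""
    return np.linalg.norm(spd_log(a) - spd_log(b), ord="fro")
```

Because the log-mapped matrices live in an ordinary vector space, covariance descriptors can be averaged and fed to linear subspace learning without leaving the SPD manifold, which is exactly what the incremental algorithm exploits.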

  4. Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm

    PubMed Central

    Tombu, Michael

    2014-01-01

    People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target–distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking—one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone. PMID:21264704

  5. Space debris tracking based on fuzzy running Gaussian average adaptive particle filter track-before-detect algorithm

    NASA Astrophysics Data System (ADS)

    Torteeka, Peerapong; Gao, Peng-Qi; Shen, Ming; Guo, Xiao-Zhang; Yang, Da-Tao; Yu, Huan-Huan; Zhou, Wei-Ping; Zhao, You

    2017-02-01

    Although tracking with a passive optical telescope is a powerful technique for space debris observation, it is limited by its sensitivity to dynamic background noise. Traditionally, in the field of astronomy, static background subtraction based on a median image technique has been used to extract moving space objects prior to the tracking operation, as this is computationally efficient. The main disadvantage of this technique is that it is not robust to variable illumination conditions. In this article, we propose an approach for tracking small and dim space debris in the context of a dynamic background via one of the optical telescopes that is part of the space surveillance network project, named the Asia-Pacific ground-based Optical Space Observation System or APOSOS. The approach combines a fuzzy running Gaussian average for robust moving-object extraction with dim-target tracking using a particle-filter-based track-before-detect method. The performance of the proposed algorithm is experimentally evaluated, and the results show that the scheme achieves a satisfactory level of accuracy for space debris tracking.
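A (non-fuzzy) running Gaussian average keeps a per-pixel mean and variance and flags pixels that deviate by more than k standard deviations. The sketch below shows this baseline only; the paper's fuzzy adaptation of the learning rate and the particle-filter track-before-detect stage are omitted, and all constants are illustrative:

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel running Gaussian average for moving-object extraction.

    The paper's fuzzy weighting of the learning rate is not reproduced here;
    alpha is a fixed, illustrative constant.
    """

    def __init__(self, first_frame, alpha=0.05, k=2.5, init_std=10.0):
        self.alpha = alpha
        self.k = k
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, init_std ** 2)

    def apply(self, frame):
        """Return a boolean foreground mask and update the background model."""
        frame = frame.astype(float)
        diff = frame - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var
        # Update the model only where the pixel still looks like background.
        a = np.where(foreground, 0.0, self.alpha)
        self.mean += a * diff
        self.var = (1 - a) * self.var + a * diff ** 2
        return foreground
```

Updating the model only at background pixels is what lets a slowly varying sky stay adaptive while a dim moving object is not absorbed into the background.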

  6. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking

    PubMed Central

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-01-01

This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images of the same object, observed at arbitrary viewpoints, by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second; it functions as a catadioptric active stereo system with virtual left and right pan-tilt tracking cameras, each capturing 8-bit color 512×512 images at 250 fps, to mechanically track a fast-moving object with sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483

  7. Multi-view video segmentation and tracking for video surveillance

    NASA Astrophysics Data System (ADS)

    Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj

    2009-05-01

Tracking moving objects is a critical step for smart video surveillance systems. Despite the complexity increase, multiple-camera systems exhibit the undoubted advantages of covering wide areas and handling occlusions by exploiting different viewpoints. The technical problems in multiple-camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple un-calibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object-labeling algorithm on all views. In the next step, we verify objects in each view separately for inconsistencies. Correspondent objects are extracted through a homography transform from one view to the other and vice versa. Having found the correspondent objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and mapped frame) a set of descriptors is extracted to find the best match between the two views based on region-descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance, and disappearance of objects are resolved using information from all views. This method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
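Transferring object positions between overlapping views reduces to applying a 3×3 homography to homogeneous image coordinates. A minimal numpy sketch; the homography and object positions below are invented for illustration, and estimating H from correspondences (e.g. by DLT) is not shown:

```python
import numpy as np

def apply_homography(H, points):
    """Map Nx2 image points through a 3x3 homography (projective transform)."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # back to Cartesian

def match_objects(H, objs_a, objs_b, tol=5.0):
    """Greedy label matching: pair each view-A object with the nearest view-B
    object after mapping through H, if within tol pixels."""
    mapped = apply_homography(H, objs_a)
    pairs = []
    for i, p in enumerate(mapped):
        d = np.linalg.norm(objs_b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= tol:
            pairs.append((i, j))
    return pairs
```

Running the same mapping in both directions, as the paper does, gives a consistency check: a pair is trusted only if A→B and B→A agree.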

  8. Astrometry with A-Track Using Gaia DR1 Catalogue

    NASA Astrophysics Data System (ADS)

    Kılıç, Yücel; Erece, Orhan; Kaplan, Murat

    2018-04-01

In this work, we built all-sky index files from the Gaia DR1 catalogue for high-precision astrometric field solutions and precise WCS coordinates of moving objects. For this, we used the build-astrometry-index program, part of the astrometry.net code suite. Additionally, we added astrometry.net's WCS solution tool to our previously developed software, a fast and robust pipeline for detecting moving objects such as asteroids and comets in sequential FITS images, called A-Track. Moreover, an MPC module was added to A-Track. This module is linked to an asteroid database to name the found objects and to prepare the MPC file reporting the results. After these additions, we tested the new version of the A-Track code on photometric data taken with the SI-1100 CCD on the 1-meter telescope at the TÜBİTAK National Observatory, Antalya. The pipeline can be used to analyse large data archives or daily sequential data. The code is hosted on GitHub under the GNU GPL v3 license.

  9. Long-term scale adaptive tracking with kernel correlation filters

    NASA Astrophysics Data System (ADS)

    Wang, Yueren; Zhang, Hong; Zhang, Lei; Yang, Yifan; Sun, Mingui

    2018-04-01

    Object tracking in video sequences has broad applications in both military and civilian domains. However, as the length of input video sequence increases, a number of problems arise, such as severe object occlusion, object appearance variation, and object out-of-view (some portion or the entire object leaves the image space). To deal with these problems and identify the object being tracked from cluttered background, we present a robust appearance model using Speeded Up Robust Features (SURF) and advanced integrated features consisting of the Felzenszwalb's Histogram of Oriented Gradients (FHOG) and color attributes. Since re-detection is essential in long-term tracking, we develop an effective object re-detection strategy based on moving area detection. We employ the popular kernel correlation filters in our algorithm design, which facilitates high-speed object tracking. Our evaluation using the CVPR2013 Object Tracking Benchmark (OTB2013) dataset illustrates that the proposed algorithm outperforms reference state-of-the-art trackers in various challenging scenarios.
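Correlation-filter trackers owe their speed to localization in the Fourier domain: evaluating the correlation over all translations costs only a pair of FFTs. The sketch below shows plain linear correlation; the kernel trick, FHOG/color-attribute features, SURF-based re-detection, and scale adaptation of the actual tracker are omitted:

```python
import numpy as np

def correlation_response(search, template):
    """Circular cross-correlation of a search window with a template, via FFT."""
    F = np.fft.fft2(search)
    G = np.fft.fft2(template, s=search.shape)  # zero-pad template to window size
    return np.real(np.fft.ifft2(F * np.conj(G)))

def locate(search, template):
    """(row, col) offset of the correlation peak, i.e. the target's position."""
    r = correlation_response(search, template)
    idx = np.unravel_index(int(np.argmax(r)), r.shape)
    return int(idx[0]), int(idx[1])
```

For an n-pixel window this is O(n log n) per frame regardless of how many shifts are evaluated, which is why correlation-filter trackers run at hundreds of frames per second.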

  10. Universal Ontology: Attentive Tracking of Objects and Substances across Languages and over Development

    ERIC Educational Resources Information Center

    Cacchione, Trix; Indino, Marcello; Fujita, Kazuo; Itakura, Shoji; Matsuno, Toyomi; Schaub, Simone; Amici, Federica

    2014-01-01

    Previous research has demonstrated that adults are successful at visually tracking rigidly moving items, but experience great difficulties when tracking substance-like "pouring" items. Using a comparative approach, we investigated whether the presence/absence of the grammatical count-mass distinction influences adults and children's…

  11. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  14. Real-time Human Activity Recognition

    NASA Astrophysics Data System (ADS)

    Albukhary, N.; Mustafah, Y. M.

    2017-11-01

The traditional Closed-circuit Television (CCTV) system requires humans to monitor the CCTV feed 24/7, which is inefficient and costly. Therefore, there is a need for a system which can recognize human activity effectively in real time. This paper concentrates on recognizing simple activities such as walking, running, sitting, standing and landing by using image processing techniques. Firstly, object detection is done by using background subtraction to detect moving objects. Then, object tracking and object classification are performed so that different people can be differentiated using feature detection. Geometrical attributes of the tracked object, namely its centroid and aspect ratio, are then used to detect simple activities.
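Classifying coarse posture from the geometrical attributes of a tracked foreground blob can be sketched with the bounding-box aspect ratio; the classes and thresholds below are illustrative assumptions, not the paper's:

```python
def bounding_box(points):
    """Bounding box (min_x, min_y, max_x, max_y) of (x, y) foreground points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def classify_posture(points):
    """Toy posture classifier: height/width ratio of the blob's bounding box.

    Classes and thresholds are illustrative, not from the paper.
    """
    x0, y0, x1, y1 = bounding_box(points)
    w, h = max(x1 - x0, 1), max(y1 - y0, 1)
    ratio = h / w
    if ratio > 1.5:
        return "standing"
    if ratio < 0.7:
        return "lying"
    return "sitting"
```

Walking versus running would additionally use the centroid's frame-to-frame displacement, i.e. a speed threshold on the tracked centroid.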

  15. Assessing the performance of a motion tracking system based on optical joint transform correlation

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Ben Haj Yahia, N.; Alam, M. S.

    2015-08-01

We present an optimized system specially designed for the tracking and recognition of moving subjects in a confined environment (such as an elderly person remaining at home). In the first step of our study, we use a VanderLugt correlator (VLC) with adapted pre-processing of the input plane and post-processing of the correlation plane via a nonlinear function, allowing us to make a robust decision. The second step is based on an optical joint transform correlation (JTC)-based system (NZ-NL-correlation JTC) for achieving improved detection and tracking of moving persons in a confined space. The proposed system has been found to have significantly superior discrimination and robustness, allowing it to detect an unknown target in an input scene and to determine the target's trajectory when the target is in motion. The system offers robust tracking of a moving target in several scenarios, such as rotational variation of input faces. Test results obtained using various real-life video sequences show that the proposed system is particularly suitable for real-time detection and tracking of moving objects.

  16. Lagrangian 3D tracking of fluorescent microscopic objects in motion

    NASA Astrophysics Data System (ADS)

    Darnige, T.; Figueroa-Morales, N.; Bohec, P.; Lindner, A.; Clément, E.

    2017-05-01

We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtain time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing, determining the displacement of an x, y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on the refocusing of the fluorescent object, determining the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of hertz. This device is particularly well adapted to obtain trajectories of motile micro-organisms in microfluidic devices, with or without flow.

  17. Lagrangian 3D tracking of fluorescent microscopic objects in motion.

    PubMed

    Darnige, T; Figueroa-Morales, N; Bohec, P; Lindner, A; Clément, E

    2017-05-01

We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtain time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing, determining the displacement of an x, y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on the refocusing of the fluorescent object, determining the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of hertz. This device is particularly well adapted to obtain trajectories of motile micro-organisms in microfluidic devices, with or without flow.

  18. Effects of sport expertise on representational momentum during timing control.

    PubMed

    Nakamoto, Hiroki; Mori, Shiro; Ikudome, Sachi; Unenaka, Satoshi; Imanaka, Kuniyasu

    2015-04-01

    Sports involving fast visual perception require players to compensate for delays in neural processing of visual information. Memory for the final position of a moving object is distorted forward along its path of motion (i.e., "representational momentum," RM). This cognitive extrapolation of visual perception might compensate for the neural delay in interacting appropriately with a moving object. The present study examined whether experienced batters cognitively extrapolate the location of a fast-moving object and whether this extrapolation is associated with coincident timing control. Nine expert and nine novice baseball players performed a prediction motion task in which a target moved from one end of a straight 400-cm track at a constant velocity. In half of the trials, vision was suddenly occluded when the target reached the 200-cm point (occlusion condition). Participants had to press a button concurrently with the target arrival at the end of the track and verbally report their subjective assessment of the first target-occluded position. Experts showed larger RM magnitude (cognitive extrapolation) than did novices in the occlusion condition. RM magnitude and timing errors were strongly correlated in the fast velocity condition in both experts and novices, whereas in the slow velocity condition, a significant correlation appeared only in experts. This suggests that experts can cognitively extrapolate the location of a moving object according to their anticipation and, as a result, potentially circumvent neural processing delays. This process might be used to control response timing when interacting with moving objects.

  19. Differential Contributions of Development and Learning to Infants' Knowledge of Object Continuity and Discontinuity

    ERIC Educational Resources Information Center

    Bertenthal, Bennett I.; Gredeback, Gustaf; Boyer, Ty W.

    2013-01-01

    Sixty infants divided evenly between 5 and 7 months of age were tested for their knowledge of object continuity versus discontinuity with a predictive tracking task. The stimulus event consisted of a moving ball that was briefly occluded for 20 trials. Both age groups predictively tracked the ball when it disappeared and reappeared via occlusion,…

  20. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

This research was initiated as a part of the effort at the NASA Ames Research Center to design a computer-vision-based system that can enhance the safety of navigation by aiding pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model-based tracking algorithm. Position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified, and possible solutions to build a practical working system are investigated.
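The paper estimates position and velocity with an extended Kalman filter; a linear constant-velocity Kalman filter in one dimension shows the same predict/update cycle (state transition, innovation, gain). All noise levels below are illustrative assumptions:

```python
import numpy as np

def kalman_cv_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over 1-D position measurements.

    State x = [position, velocity]. This is a linear stand-in for the paper's
    extended Kalman filter; q and r are illustrative noise levels.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (constant velocity)
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the innovation y = z - H x.
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append((float(x[0, 0]), float(x[1, 0])))
    return estimates
```

Even though only position is measured, the filter recovers velocity through the coupling in F, which is exactly what lets the tracker report both quantities for each detected object.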

  1. Interactive Multiple Object Tracking (iMOT)

    PubMed Central

    Thornton, Ian M.; Bülthoff, Heinrich H.; Horowitz, Todd S.; Rynning, Aksel; Lee, Seong-Whan

    2014-01-01

    We introduce a new task for exploring the relationship between action and attention. In this interactive multiple object tracking (iMOT) task, implemented as an iPad app, participants were presented with a display of multiple, visually identical disks which moved independently. The task was to prevent any collisions during a fixed duration. Participants could perturb object trajectories via the touchscreen. In Experiment 1, we used a staircase procedure to measure the ability to control moving objects. Object speed was set to 1°/s. On average participants could control 8.4 items without collision. Individual control strategies were quite variable, but did not predict overall performance. In Experiment 2, we compared iMOT with standard MOT performance using identical displays. Object speed was set to 2°/s. Participants could reliably control more objects (M = 6.6) than they could track (M = 4.0), but performance in the two tasks was positively correlated. In Experiment 3, we used a dual-task design. Compared to single-task baseline, iMOT performance decreased and MOT performance increased when the two tasks had to be completed together. Overall, these findings suggest: 1) There is a clear limit to the number of items that can be simultaneously controlled, for a given speed and display density; 2) participants can control more items than they can track; 3) task-relevant action appears not to disrupt MOT performance in the current experimental context. PMID:24498288

  2. Phenomenal permanence and the development of predictive tracking in infancy.

    PubMed

    Bertenthal, Bennett I; Longo, Matthew R; Kenny, Sarah

    2007-01-01

    The perceived spatiotemporal continuity of objects depends on the way they appear and disappear as they move in the spatial layout. This study investigated whether infants' predictive tracking of a briefly occluded object is sensitive to the manner by which the object disappears and reappears. Five-, 7-, and 9-month-old infants were shown a ball rolling across a visual scene and briefly disappearing via kinetic occlusion, instantaneous disappearance, implosion, or virtual occlusion. Three different measures converged to show that predictive tracking increased with age and that infants were most likely to anticipate the reappearance of the ball following kinetic occlusion. These results suggest that infants' knowledge of the permanence and nonpermanence of objects is embodied in their predictive tracking.

  3. Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus

    NASA Astrophysics Data System (ADS)

    Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.

    2014-09-01

    There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving and non-geostationary objects as drifting in the field of view. We would like to achieve high-sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" (RANSAC), and one that can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers, while identifying inliers. In the pre-processing phase, candidate blobs are selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17-inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center are used to assess the performance of the algorithm. In the second application, a visible imager operated in sidereal mode observes geostationary objects as moving, stars as fixed except for field rotation, and non-geostationary objects as drifting. RANSAC-MT is used to detect the drifter. In this set of data, the drifting space object was detected at a distance of 13800 km. The AFRL/RH set of data, collected in stare mode, contained the signatures of two geostationary satellites. The signature of a moving object was simulated and added to the sequence of frames to determine the sensitivity in magnitude. The performance compares well with the more intensive TBD algorithms reported in the literature.
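The core idea of RANSAC on (time, x, y) detections can be sketched compactly: sample pairs of detections, hypothesize a constant-velocity track, and keep the hypothesis with the most inliers. This is a generic illustration, not the authors' RANSAC-MT implementation; the tolerance and iteration count are assumed values.

```python
import random

def ransac_track(dets, iters=500, tol=1.5, seed=0):
    """dets: list of (t, x, y) candidate detections (target + clutter).
    Fit a constant-velocity track x = x0 + vx*t, y = y0 + vy*t by
    sampling detection pairs and keeping the largest inlier set."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        a, b = rng.sample(dets, 2)
        if a[0] == b[0]:
            continue  # two distinct times are needed to define a velocity
        vx = (b[1] - a[1]) / (b[0] - a[0])
        vy = (b[2] - a[2]) / (b[0] - a[0])
        # inliers lie within tol of the hypothesized track at their time
        inliers = [d for d in dets
                   if abs(a[1] + vx * (d[0] - a[0]) - d[1]) < tol
                   and abs(a[2] + vy * (d[0] - a[0]) - d[2]) < tol]
        if len(inliers) > len(best):
            best = inliers
    return best
```

On synthetic data with ten collinear target detections buried in thirty random clutter points, the winning model recovers the full target track; a star-rejection step, as in RANSAC-MT, would first discard candidates consistent with the known sidereal motion.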

  4. Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach

    PubMed Central

    Tian, Yuan; Guan, Tao; Wang, Cheng

    2010-01-01

    To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278

  5. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.

  6. Tracking Students' Eye-Movements When Reading Learning Objects on Mobile Phones: A Discourse Analysis of Luganda Language Teacher-Trainees' Reflective Observations

    ERIC Educational Resources Information Center

    Kabugo, David; Muyinda, Paul B.; Masagazi, Fred. M.; Mugagga, Anthony M.; Mulumba, Mathias B.

    2016-01-01

    Although eye-tracking technologies such as Tobii-T120/TX and Eye-Tribe are steadily becoming ubiquitous, and while their appropriation in education can aid teachers to collect robust information on how students move their eyes when reading and engaging with different learning objects, many teachers of Luganda language are yet to gain experiences…

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Michael; Nemati, Bijan; Zhai, Chengxing

    We present an approach that significantly increases the sensitivity for finding and tracking small and fast near-Earth asteroids (NEAs). This approach relies on a combined use of a new generation of high-speed cameras, which allow short, high frame-rate exposures of moving objects, effectively 'freezing' their motion, and a computationally enhanced implementation of the 'shift-and-add' data processing technique that helps to improve the signal-to-noise ratio (SNR) for detection of NEAs. The SNR of a single short exposure of a dim NEA is insufficient to detect it in one frame, but by computationally searching for an appropriate velocity vector, shifting successive frames relative to each other and then co-adding the shifted frames in post-processing, we synthetically create a long-exposure image as if the telescope were tracking the object. This approach, which we call 'synthetic tracking', enhances the familiar shift-and-add technique with the ability to do a wide blind search, detect, and track dim and fast-moving NEAs in near real time. We discuss also how synthetic tracking improves the astrometry of fast-moving NEAs. We apply this technique to observations of two known asteroids conducted on the Palomar 200 inch telescope and demonstrate improved SNR and a 10-fold improvement of astrometric precision over the traditional long-exposure approach. In the past 5 yr, about 150 NEAs with absolute magnitudes H = 28 (∼10 m in size) or fainter have been discovered. With an upgraded version of our camera and a field of view of (28 arcmin)² on the Palomar 200 inch telescope, synthetic tracking could allow detecting up to 180 such objects per night, including very small NEAs with sizes down to 7 m.
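The synthetic-tracking idea — shift each frame by a hypothesized velocity, co-add, and blind-search velocity space for the brightest co-added peak — can be sketched in a few lines. This is a toy integer-velocity version on tiny list-of-lists frames, purely to show the mechanics; the real pipeline works on large CCD frames with sub-pixel shifts and GPU acceleration.

```python
def shift_and_add(frames, vx, vy):
    """Co-add frames after shifting frame k by (vx*k, vy*k) pixels,
    synthesizing a long exposure that tracks velocity (vx, vy)."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for k, f in enumerate(frames):
        dx, dy = round(vx * k), round(vy * k)
        for y in range(h):
            for x in range(w):
                sy, sx = y + dy, x + dx
                if 0 <= sy < h and 0 <= sx < w:
                    acc[y][x] += f[sy][sx]
    return acc

def detect(frames, vmax=2):
    """Blind search over integer velocity hypotheses; return the peak
    value and the (vx, vy) whose co-added image is brightest."""
    best = None
    for vx in range(-vmax, vmax + 1):
        for vy in range(-vmax, vmax + 1):
            acc = shift_and_add(frames, vx, vy)
            peak = max(max(row) for row in acc)
            if best is None or peak > best[0]:
                best = (peak, vx, vy)
    return best
```

With eight frames containing a unit-brightness source drifting one pixel per frame, only the correct velocity hypothesis aligns all eight contributions, so the co-added peak equals the number of frames.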

  8. Electrically tunable lens speeds up 3D orbital tracking

    PubMed Central

    Annibale, Paolo; Dvornikov, Alexander; Gratton, Enrico

    2015-01-01

    3D orbital particle tracking is a versatile and effective microscopy technique that allows following fast-moving fluorescent objects within living cells and reconstructing complex 3D shapes using laser scanning microscopes. We demonstrated notable improvements in the range, speed and accuracy of 3D orbital particle tracking by replacing the commonly used piezoelectric stage with an Electrically Tunable Lens (ETL), which eliminates mechanical movement of the objective lens. This allowed tracking and reconstructing the shape of structures extending 500 microns in the axial direction. Using the ETL, we tracked fluorescently labeled genomic loci within the nucleus of living cells at high speed, with an unprecedented temporal resolution of 8 ms, using a 1.42 NA oil-immersion objective. The presented technology is cost-effective and allows easy upgrade of scanning microscopes for fast 3D orbital tracking. PMID:26114037

  9. Improved segmentation of occluded and adjoining vehicles in traffic surveillance videos

    NASA Astrophysics Data System (ADS)

    Juneja, Medha; Grover, Priyanka

    2013-12-01

    Occlusion in image processing refers to concealment of any part of an object, or the whole object, from the view of an observer. Real-time videos captured by static cameras on roads often contain overlapping and, hence, occluded vehicles. Occlusion in traffic surveillance videos usually occurs when an object being tracked is hidden by another object, which makes it difficult for object detection algorithms to distinguish all the vehicles efficiently. Morphological operations also tend to join vehicles in close proximity, resulting in the formation of a single bounding box around more than one vehicle. Such problems lead to errors in further video processing, such as counting the vehicles in a video. The proposed system brings forward an efficient moving-object detection and tracking approach to reduce such errors. The paper uses a successive frame subtraction technique for detection of moving objects. Further, it implements the watershed algorithm to segment overlapped and adjoining vehicles. The segmentation results have been improved by the use of noise-removal and morphological operations.
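The detection front end described above — successive frame subtraction followed by blob extraction — can be sketched as below. This is a generic illustration on list-of-lists grayscale frames, not the paper's implementation; the threshold value is an assumption, and the watershed splitting step is omitted.

```python
from collections import deque

def moving_object_mask(prev, curr, thresh=25):
    """Binary change mask from successive frame subtraction."""
    return [[1 if abs(curr[y][x] - prev[y][x]) > thresh else 0
             for x in range(len(curr[0]))] for y in range(len(curr))]

def connected_components(mask):
    """Label 4-connected foreground blobs; return a list of pixel lists.
    Each blob would then get a bounding box (and, per the paper, a
    watershed pass to split blobs that merge adjoining vehicles)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                q, blob = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                blobs.append(blob)
    return blobs
```

Two separated bright patches appearing between frames yield two blobs; when vehicles touch, the blobs merge into one, which is exactly the failure mode the paper's watershed stage addresses.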

  10. Approach for counting vehicles in congested traffic flow

    NASA Astrophysics Data System (ADS)

    Tan, Xiaojun; Li, Jun; Liu, Wei

    2005-02-01

    More and more image sensors are used in intelligent transportation systems. In practice, occlusion is always a problem when counting vehicles in congested traffic. This paper presents an approach to solve the problem, consisting of three main procedures. Firstly, a new background subtraction algorithm is performed, whose aim is to segment moving objects from an illumination-variant background. Secondly, object tracking is performed using the CONDENSATION algorithm, which avoids the problem of matching vehicles in successive frames. Thirdly, an inspecting procedure is executed to count the vehicles. When a bus first occludes a car and the bus then moves away a few frames later, the car will appear in the scene; the inspecting procedure should find the "new" car and add it as a tracking object.

  11. Weighted feature selection criteria for visual servoing of a telerobot

    NASA Technical Reports Server (NTRS)

    Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.

    1989-01-01

    Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.

  12. Feature-based interference from unattended visual field during attentional tracking in younger and older adults.

    PubMed

    Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman

    2011-02-01

    The ability to attend to multiple objects that move in the visual field is important for many aspects of daily functioning. The attentional capacity for such dynamic tracking, however, is highly limited and undergoes age-related decline. Several aspects of the tracking process can influence performance. Here, we investigated effects of feature-based interference from distractor objects that appear in unattended regions of the visual field with a hemifield-tracking task. Younger and older participants performed an attentional tracking task in one hemifield while distractor objects were concurrently presented in the unattended hemifield. Feature similarity between objects in the attended and unattended hemifields as well as motion speed and the number of to-be-tracked objects were parametrically manipulated. The results show that increasing feature overlap leads to greater interference from the unattended visual field. This effect of feature-based interference was only present in the slow speed condition, indicating that the interference is mainly modulated by perceptual demands. High-performing older adults showed a similar interference effect as younger adults, whereas low-performing adults showed poor tracking performance overall.

  13. Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search.

    PubMed

    Liu, Meiqin; Zhang, Duo; Zhang, Senlin; Zhang, Qunfei

    2017-12-04

    Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to limited computation and bandwidth resources, only a small subset of nodes is selected to track the target at each interval. How to improve tracking accuracy with a small number of nodes is a key problem. In recent years, node depth adjustment systems have been developed and applied to issues of network deployment and routing protocols. As far as we know, all existing tracking schemes keep underwater nodes static or moving with the water flow, and node depth adjustment has not yet been utilized for underwater target tracking. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since the Fisher Information Matrix (FIM) can quantify estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate node depth adjustment as an optimization problem that determines the moving depth of each activated node under a moving-range constraint; the value of the FIM is used as the objective function, which is minimized over the moving distances of the nodes. Thirdly, to efficiently solve the optimization problem, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve searching speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme.
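The optimizer named above is Harmony Search: keep a memory of candidate solutions, improvise new candidates from memory (with occasional pitch adjustment) or at random, and replace the worst member when the new candidate improves on it. Here is a basic single-variable sketch under assumed parameter values, not the paper's improved variant with its modified generating probability; the range clamp plays the role of the moving-range constraint.

```python
import random

def harmony_search(f, lo, hi, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=500, seed=1):
    """Minimize f over [lo, hi] with a basic Harmony Search.
    hms: harmony memory size; hmcr: memory-consideration rate;
    par: pitch-adjustment rate; bw: pitch-adjustment bandwidth."""
    rng = random.Random(seed)
    mem = [rng.uniform(lo, hi) for _ in range(hms)]
    for _ in range(iters):
        if rng.random() < hmcr:
            x = rng.choice(mem)               # consider harmony memory
            if rng.random() < par:
                x += rng.uniform(-bw, bw)     # pitch adjustment
        else:
            x = rng.uniform(lo, hi)           # random improvisation
        x = min(max(x, lo), hi)               # respect the allowed range
        worst = max(mem, key=f)
        if f(x) < f(worst):                   # replace worst harmony
            mem[mem.index(worst)] = x
    return min(mem, key=f)
```

Minimizing a toy depth objective such as (d − 3)² over [0, 10] converges near d = 3; in the paper the objective would instead be the FIM-derived accuracy metric for the activated node's depth.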

  14. Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search

    PubMed Central

    Zhang, Senlin; Zhang, Qunfei

    2017-01-01

    Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to limited computation and bandwidth resources, only a small subset of nodes is selected to track the target at each interval. How to improve tracking accuracy with a small number of nodes is a key problem. In recent years, node depth adjustment systems have been developed and applied to issues of network deployment and routing protocols. As far as we know, all existing tracking schemes keep underwater nodes static or moving with the water flow, and node depth adjustment has not yet been utilized for underwater target tracking. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since the Fisher Information Matrix (FIM) can quantify estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate node depth adjustment as an optimization problem that determines the moving depth of each activated node under a moving-range constraint; the value of the FIM is used as the objective function, which is minimized over the moving distances of the nodes. Thirdly, to efficiently solve the optimization problem, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve searching speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme. PMID:29207541

  15. Moving Particles Through a Finite Element Mesh

    PubMed Central

    Peskin, Adele P.; Hardin, Gary R.

    1998-01-01

    We present a new numerical technique for modeling the flow around multiple objects moving in a fluid. The method tracks the dynamic interaction between each particle and the fluid. The movements of the fluid and the object are directly coupled. A background mesh is designed to fit the geometry of the overall domain. The mesh is designed independently of the presence of the particles except in terms of how fine it must be to track particles of a given size. Each particle is represented by a geometric figure that describes its boundary. This figure overlies the mesh. Nodes are added to the mesh where the particle boundaries intersect the background mesh, increasing the number of nodes contained in each element whose boundary is intersected. These additional nodes are then used to describe and track the particle in the numerical scheme. Appropriate element shape functions are defined to approximate the solution on the elements with extra nodes. The particles are moved through the mesh by moving only the overlying nodes defining the particles. The regular finite element grid remains unchanged. In this method, the mesh does not distort as the particles move. Instead, only the placement of particle-defining nodes changes as the particles move. Element shape functions are updated as the nodes move through the elements. This method is especially suited for models of moderate numbers of moderate-size particles, where the details of the fluid-particle coupling are important. Both the complications of creating finite element meshes around appreciable numbers of particles, and extensive remeshing upon movement of the particles are simplified in this method. PMID:28009377

  16. Radar Detection of Marine Mammals

    DTIC Science & Technology

    2010-09-30

    associative tracker using the Munkres algorithm was used. This was then expanded to include a track-before-detect algorithm, the Bayesian Field...small, slow-moving objects (i.e. whales). In order to address the third concern (M2 mode), we have tested using a track-before-detect tracker termed

  17. A Fast MEANSHIFT Algorithm-Based Target Tracking System

    PubMed Central

    Sun, Jian

    2012-01-01

    Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme is studied in this paper. A fast mean-shift-based target tracking scheme is designed and realized, which is robust to partial occlusion and changes in object appearance. The physical simulation shows that the image signal processing speed is >50 frames/s. PMID:22969397
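Mean-shift tracking iteratively moves a window to the centroid of a per-pixel similarity map (typically back-projected color-histogram scores) until it stops moving. The following is a generic sketch of that iteration on a plain 2-D weight array, not the paper's accelerated variant; window radius and convergence tolerance are assumed values.

```python
def mean_shift(weights, cx, cy, radius=3, iters=20):
    """Shift a square window centered at (cx, cy) to the local centroid
    of `weights` (a 2-D list of per-pixel similarity scores) until the
    shift is negligible. Returns the converged center."""
    h, w = len(weights), len(weights[0])
    for _ in range(iters):
        m = mx = my = 0.0
        for y in range(max(0, int(cy) - radius), min(h, int(cy) + radius + 1)):
            for x in range(max(0, int(cx) - radius), min(w, int(cx) + radius + 1)):
                m += weights[y][x]
                mx += x * weights[y][x]
                my += y * weights[y][x]
        if m == 0:
            break                      # no target support inside the window
        nx, ny = mx / m, my / m        # weighted centroid = mean-shift step
        if abs(nx - cx) < 1e-3 and abs(ny - cy) < 1e-3:
            break                      # converged
        cx, cy = nx, ny
    return cx, cy
```

Started near a bright 3×3 patch, the window climbs onto the patch center in two or three iterations; per-frame cost stays low because only the window pixels are visited, which is why mean-shift suits real-time active-camera tracking.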

  18. A robust approach towards unknown transformation, regional adjacency graphs, multigraph matching, segmentation video frames from unmanned aerial vehicles (UAV)

    NASA Astrophysics Data System (ADS)

    Gohatre, Umakant Bhaskar; Patil, Venkat P.

    2018-04-01

    In computer vision applications, multiple object detection and tracking in real time is an important research field that has gained a lot of attention in recent years for finding non-stationary entities in image sequences. Object detection is the step that precedes following a moving object in video, and object representation is the step that enables tracking. Reliably identifying multiple objects in a video sequence is a challenging task. Image registration has long been used as a basis for the detection of moving objects: registration finds correspondences between consecutive frame pairs based on image appearance under rigid and affine transformations. However, image registration is not well suited to handling occlusion events, which can result in potentially missed objects. To address such problems, this paper proposes a novel approach. Video frames are segmented using a region adjacency graph of visual appearance and geometric properties; matching between graph sequences is then performed by multigraph matching, and region labels are obtained by a proposed graph coloring algorithm that assigns a foreground label to each respective region. The proposed design is robust to unknown transformations, with significant improvement over existing work related to detection of multiple moving objects with real-time parameters.

  19. Real-time model-based vision system for object acquisition and tracking

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd

    1987-01-01

    A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.

  20. Specialization of Perceptual Processes.

    DTIC Science & Technology

    1994-09-01

    population rose and fell, furniture was rearranged, a small mountain range was built in part of the lab (really), carpets were shampooed, and office lighting...common task is the tracking of moving objects. Coombs [22] implemented a system for fixating and tracking objects using a stereo eye/head system...be a person (person?). Finally, a motion unit is used to detect foot gestures. A pair of nod-of-the-head detectors were implemented and tested, but

  1. A framework for activity detection in wide-area motion imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, Reid B; Ruggiero, Christy E; Morrison, Jack D

    2009-01-01

    Wide-area persistent imaging systems are becoming increasingly cost effective and now large areas of the earth can be imaged at relatively high frame rates (1-2 fps). The efficient exploitation of the large geo-spatial-temporal datasets produced by these systems poses significant technical challenges for image and video analysis and data mining. In recent years there has been significant progress made on stabilization, moving object detection and tracking, and automated systems now generate hundreds to thousands of vehicle tracks from raw data with little human intervention. However, the tracking performance at this scale is unreliable and average track length is much smaller than the average vehicle route. This is a limiting factor for applications which depend heavily on track identity, i.e. tracking vehicles from their points of origin to their final destinations. In this paper we propose and investigate a framework for wide-area motion imagery (WAMI) exploitation that minimizes the dependence on track identity. In its current form this framework takes noisy, incomplete moving object detection tracks as input, and produces a small set of activities (e.g. multi-vehicle meetings) as output. The framework can be used to focus and direct human users and additional computation, and suggests a path towards high-level content extraction by learning from the human-in-the-loop.

  2. KSC-2009-5066

    NASA Image and Video Library

    2009-08-27

    CAPE CANAVERAL, Fla. – The enclosed Space Tracking and Surveillance System – Demonstrators, or STSS-Demo, spacecraft moves out of the Astrotech payload processing facility. It is being moved to Cape Canaveral Air Force Station's Launch Pad 17-B. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jack Pfaller

  3. Detection of dominant flow and abnormal events in surveillance video

    NASA Astrophysics Data System (ADS)

    Kwak, Sooyeong; Byun, Hyeran

    2011-02-01

    We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method, a kind of feature-based approach, so it does not detect moving objects individually. It identifies dominant flow without individual object tracking using a latent Dirichlet allocation model in crowded environments. It can also automatically detect and localize an abnormally moving object in real-life video. Performance tests were run on several real-life databases, and their results show that the proposed algorithm can efficiently detect abnormally moving objects in real time. The proposed algorithm can be applied to any situation in which motion in abnormal directions or at abnormal speeds must be detected.

  4. An open source framework for tracking and state estimation ('Stone Soup')

    NASA Astrophysics Data System (ADS)

    Thomas, Paul A.; Barr, Jordi; Balaji, Bhashyam; White, Kruger

    2017-05-01

    The ability to detect and unambiguously follow all moving entities in a state-space is important in multiple domains both in defence (e.g. air surveillance, maritime situational awareness, ground moving target indication) and the civil sphere (e.g. astronomy, biology, epidemiology, dispersion modelling). However, tracking and state estimation researchers and practitioners have difficulties recreating state-of-the-art algorithms in order to benchmark their own work. Furthermore, system developers need to assess which algorithms meet operational requirements objectively and exhaustively rather than intuitively or driven by personal favourites. We have therefore commenced the development of a collaborative initiative to create an open source framework for production, demonstration and evaluation of Tracking and State Estimation algorithms. The initiative will develop a (MIT-licensed) software platform for researchers and practitioners to test, verify and benchmark a variety of multi-sensor and multi-object state estimation algorithms. The initiative is supported by four defence laboratories, who will contribute to the development effort for the framework. The tracking and state estimation community will derive significant benefits from this work, including: access to repositories of verified and validated tracking and state estimation algorithms, a framework for the evaluation of multiple algorithms, standardisation of interfaces and access to challenging data sets.

  5. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-01-01

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978

  6. Object Tracking and Target Reacquisition Based on 3-D Range Data for Moving Vehicles

    PubMed Central

    Lee, Jehoon; Lankton, Shawn; Tannenbaum, Allen

    2013-01-01

In this paper, we propose an approach for tracking an object of interest based on 3-D range data. We employ particle filtering and active contours to simultaneously estimate the global motion of the object and its local deformations. The proposed algorithm takes advantage of range information to deal with the challenging (but common) situation in which the tracked object disappears from the image domain entirely and reappears later. To cope with this problem, a method based on principal component analysis (PCA) of shape information is proposed. In the proposed method, if the target disappears out of frame, shape similarity energy is used to detect target candidates that match a template shape learned online from previously observed frames. Thus, we require no a priori knowledge of the target’s shape. Experimental results show the practical applicability and robustness of the proposed algorithm in realistic tracking scenarios. PMID:21486717
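The PCA-of-shapes idea behind the reacquisition step can be sketched briefly: learn a linear shape subspace from previously observed frames and score a candidate by its reconstruction error. This is an illustrative sketch rather than the paper's implementation; `pca_basis`, `shape_similarity_energy`, and the toy shapes are hypothetical stand-ins.

```python
import numpy as np

def pca_basis(shapes, k=2):
    """Learn a PCA shape basis from rows of flattened contour coordinates."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # Right singular vectors of the centered data are the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def shape_similarity_energy(shape, mean, basis):
    """Reconstruction error of a candidate shape in the learned subspace."""
    coeffs = basis @ (shape - mean)
    recon = mean + basis.T @ coeffs
    return float(np.linalg.norm(shape - recon))

# Toy example: shapes drawn from a one-parameter family are reconstructed
# almost perfectly, while an unrelated shape yields a high energy.
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, np.pi, 20))
train = np.stack([s * base for s in rng.uniform(0.5, 1.5, 30)])
mean, basis = pca_basis(train, k=1)
good = shape_similarity_energy(1.1 * base, mean, basis)
bad = shape_similarity_energy(rng.normal(size=20), mean, basis)
print(good < bad)  # the in-family shape is the better reacquisition candidate
```

Candidates with low energy would then be accepted as the reappearing target.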

  7. Execution of saccadic eye movements affects speed perception

    PubMed Central

    Goettker, Alexander; Braun, Doris I.; Schütz, Alexander C.; Gegenfurtner, Karl R.

    2018-01-01

    Due to the foveal organization of our visual system we have to constantly move our eyes to gain precise information about our environment. Doing so massively alters the retinal input. This is problematic for the perception of moving objects, because physical motion and retinal motion become decoupled and the brain has to discount the eye movements to recover the speed of moving objects. Two different types of eye movements, pursuit and saccades, are combined for tracking. We investigated how the way we track moving targets can affect the perceived target speed. We found that the execution of corrective saccades during pursuit initiation modifies how fast the target is perceived compared with pure pursuit. When participants executed a forward (catch-up) saccade they perceived the target to be moving faster. When they executed a backward saccade they perceived the target to be moving more slowly. Variations in pursuit velocity without corrective saccades did not affect perceptual judgments. We present a model for these effects, assuming that the eye velocity signal for small corrective saccades gets integrated with the retinal velocity signal during pursuit. In our model, the execution of corrective saccades modulates the integration of these two signals by giving less weight to the retinal information around the time of corrective saccades. PMID:29440494

  8. Dynamic Projection Mapping onto Deforming Non-Rigid Surface Using Deformable Dot Cluster Marker.

    PubMed

    Narita, Gaku; Watanabe, Yoshihiro; Ishikawa, Masatoshi

    2017-03-01

    Dynamic projection mapping for moving objects has attracted much attention in recent years. However, conventional approaches have faced some issues, such as the target objects being limited to rigid objects, and the limited moving speed of the targets. In this paper, we focus on dynamic projection mapping onto rapidly deforming non-rigid surfaces with a speed sufficiently high that a human does not perceive any misalignment between the target object and the projected images. In order to achieve such projection mapping, we need a high-speed technique for tracking non-rigid surfaces, which is still a challenging problem in the field of computer vision. We propose the Deformable Dot Cluster Marker (DDCM), a novel fiducial marker for high-speed tracking of non-rigid surfaces using a high-frame-rate camera. The DDCM has three performance advantages. First, it can be detected even when it is strongly deformed. Second, it realizes robust tracking even in the presence of external and self occlusions. Third, it allows millisecond-order computational speed. Using DDCM and a high-speed projector, we realized dynamic projection mapping onto a deformed sheet of paper and a T-shirt with a speed sufficiently high that the projected images appeared to be printed on the objects.

  9. Obscura telescope with a MEMS micromirror array for space observation of transient luminous phenomena or fast-moving objects.

    PubMed

    Park, J H; Garipov, G K; Jeon, J A; Khrenov, B A; Kim, J E; Kim, M; Kim, Y K; Lee, C-H; Lee, J; Na, G W; Nam, S; Park, I H; Park, Y-S

    2008-12-08

    We introduce a novel telescope consisting of a pinhole-like camera with rotatable MEMS micromirrors substituting for pinholes. The design is ideal for observations of transient luminous phenomena or fast-moving objects, such as upper atmospheric lightning and bright gamma ray bursts. The advantage of the MEMS "obscura telescope" over conventional cameras is that it is capable both of searching for events over a wide field of view, and fast zooming to allow detailed investigation of the structure of events. It is also able to track the triggering object to investigate its space-time development, and to center the interesting portion of the image on the photodetector array. We present the proposed system and the test results for the MEMS obscura telescope which has a field of view of 11.3 degrees, sixteen times zoom-in and tracking within 1 ms. (c) 2008 Optical Society of America

  10. Simultaneous Detection and Tracking of Pedestrian from Panoramic Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Xiao, Wen; Vallet, Bruno; Schindler, Konrad; Paparoditis, Nicolas

    2016-06-01

Pedestrian traffic flow estimation is essential for public place design and construction planning. Traditional data collection by human investigation is tedious, inefficient and expensive. Panoramic laser scanners, e.g. Velodyne HDL-64E, which scan surroundings repetitively at a high frequency, have been increasingly used for 3D object tracking. In this paper, a simultaneous detection and tracking (SDAT) method is proposed for precise and automatic pedestrian trajectory recovery. First, the dynamic environment is detected using two different methods, Nearest-point and Max-distance. Then, all the points on moving objects are transferred into a space-time (x, y, t) coordinate system. Pedestrian detection and tracking then amount to assigning the points belonging to pedestrians to continuous trajectories in space-time. We formulate the point assignment task as an energy function which incorporates the point evidence, trajectory number, pedestrian shape and motion. A low-energy trajectory will explain the point observations well and have a plausible trend and length. The method inherently filters out points from other moving objects and false detections. The energy function is solved by a two-step optimization process: tracklet detection in a short temporal window, and global tracklet association through the whole time span. Results demonstrate that the proposed method can automatically recover the pedestrians' trajectories with accurate positions and few false detections and mismatches.
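The tracklet-detection step can be illustrated with a much-simplified greedy linker in the space-time (x, y, t) coordinate system. The paper's global energy minimisation over point evidence, shape and motion is not reproduced here; `link_tracklets` and its distance threshold are hypothetical.

```python
import numpy as np

def link_tracklets(detections, max_dist=1.5):
    """Greedily link per-frame detections into space-time trajectories.

    `detections` maps frame time t -> list of (x, y) points. A detection is
    appended to the trajectory whose last point is nearest (within max_dist);
    otherwise it starts a new trajectory.
    """
    tracks = []  # each track: list of (t, x, y)
    for t in sorted(detections):
        for x, y in detections[t]:
            best, best_d = None, max_dist
            for tr in tracks:
                lt, lx, ly = tr[-1]
                d = np.hypot(x - lx, y - ly)
                # Only extend tracks from earlier frames, by nearest distance.
                if lt < t and d < best_d:
                    best, best_d = tr, d
            if best is not None:
                best.append((t, x, y))
            else:
                tracks.append([(t, x, y)])
    return tracks

# Two pedestrians walking in opposite directions: two tracks are recovered.
dets = {t: [(0.5 * t, 0.0), (10 - 0.5 * t, 5.0)] for t in range(8)}
tracks = link_tracklets(dets)
print(len(tracks))  # 2
```

A real system would replace the greedy choice with the paper's two-step energy optimisation.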

  11. Robust multiple cue fusion-based high-speed and nonrigid object tracking algorithm for short track speed skating

    NASA Astrophysics Data System (ADS)

    Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min

    2016-01-01

This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Nonrigid skaters move at high speed, with severe occlusions frequently occurring among them. The camera is panned quickly in order to capture the skaters in a large and dynamic scene. Automatically tracking the skaters and precisely outputting their trajectories is thus a challenging object tracking task. We employ the global rink information to compensate for camera motion and obtain the global spatial information of skaters, utilize a random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to label pixels as belonging to a skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.

  12. A-Track: A New Approach for Detection of Moving Objects in FITS Images

    NASA Astrophysics Data System (ADS)

    Kılıç, Yücel; Karapınar, Nurdan; Atay, Tolga; Kaplan, Murat

    2016-07-01

Small planet and asteroid observations are important for understanding the origin and evolution of the Solar System. In this work, we have developed a fast and robust pipeline, called A-Track, for detecting asteroids and comets in sequential telescope images. The moving objects are detected using a modified line detection algorithm, called ILDA. We have coded the pipeline in Python 3, where we have made use of various scientific modules in Python to process the FITS images. We tested the code on photometric data taken by an SI-1100 CCD with a 1-meter telescope at TUBITAK National Observatory, Antalya. The pipeline can be used to analyze large data archives or daily sequential data. The code is hosted on GitHub under the GNU GPL v3 license.
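The premise behind line-based detection is that a real moving object (asteroid or comet) traces an approximately straight, constant-velocity line through sequential frames, while spurious detections do not. A toy least-squares linearity test conveys the idea; this is a stand-in sketch, not the ILDA algorithm itself, and `is_linear_motion` with its tolerance is an assumption.

```python
import numpy as np

def is_linear_motion(times, xs, ys, tol=0.5):
    """Flag a candidate as a moving object when its detected positions across
    sequential frames fit a constant-velocity straight line in time."""
    t, x, y = (np.asarray(v, float) for v in (times, xs, ys))
    A = np.stack([t, np.ones_like(t)], axis=1)   # model: pos = v * t + p0
    rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return bool(np.hypot(rx, ry).max() < tol)    # max residual in pixels

# A candidate drifting steadily across four frames passes the test;
# a scattered set of detections does not.
print(is_linear_motion([0, 1, 2, 3], [10.0, 12.1, 13.9, 16.0],
                       [5.0, 5.5, 6.1, 6.5]))   # True
print(is_linear_motion([0, 1, 2, 3], [10.0, 17.0, 11.0, 16.0],
                       [5.0, 9.0, 2.0, 6.5]))   # False
```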

  13. Method for targetless tracking subpixel in-plane movements.

    PubMed

    Espinosa, Julian; Perez, Jorge; Ferrer, Belen; Mas, David

    2015-09-01

We present a targetless motion tracking method for detecting planar movements with subpixel accuracy. The method is based on computing and tracking the intersection of two nonparallel straight-line segments in the image of a moving object in a scene. It is simple and easy to implement because no complex structures have to be detected. It has been tested and validated in a lab experiment consisting of a vibrating object recorded with a high-speed camera working at 1000 fps. We managed to track displacements with an accuracy of hundredths of a pixel, or even thousandths of a pixel in the case of tracking harmonic vibrations. The method is widely applicable because it allows a vision system to remotely measure the amplitude and frequency of vibrations.
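The core computation, intersecting two nonparallel lines, is a closed-form formula. A minimal sketch (the segment coordinates are made up for illustration):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through segments p1-p2 and p3-p4.
    Tracking this point frame to frame yields sub-pixel displacements,
    because each fitted line averages out pixel quantisation noise."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)  # zero if parallel
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

# Two segments whose lines cross at (0.25, 0.25); an in-plane shift of the
# object moves the intersection by the same, possibly sub-pixel, amount.
print(line_intersection((0, 0), (1, 1), (0, 0.5), (0.5, 0)))  # (0.25, 0.25)
```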

  14. Localization and tracking of moving objects in two-dimensional space by echolocation.

    PubMed

    Matsuo, Ikuo

    2013-02-01

Bats use frequency-modulated echolocation to identify and capture moving objects in real three-dimensional space. Experimental evidence indicates that bats are capable of locating static objects with a range accuracy of less than 1 μs. A previously introduced model estimates the ranges of multiple static objects using linear frequency modulation (LFM) sounds and Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates. The delay time for a single object was estimated with an accuracy of about 1.3 μs by measuring the echo at a low signal-to-noise ratio (SNR). The range accuracy depended not only on the SNR but also on the Doppler shift, which in turn depended on the object's movement. However, it was unclear whether this model could estimate a moving object's range at each timepoint. In this study, echoes were measured from a rotating pole at two receiving points by intermittently emitting LFM sounds. The model was shown to localize moving objects in two-dimensional space by accurately estimating the object's range at each timepoint.
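Once a range is available at each of the two receiving points, 2-D localization reduces to intersecting two circles. A geometric sketch under assumed receiver positions (`localize_2d` and the front-of-array convention are illustrative, not the paper's exact formulation):

```python
import numpy as np

def localize_2d(rx1, rx2, r1, r2):
    """Locate an object from its ranges to two receivers (circle intersection).
    The front/back ambiguity of a two-receiver array is resolved here by
    assuming the target lies on the positive-y side."""
    rx1, rx2 = np.asarray(rx1, float), np.asarray(rx2, float)
    d = np.linalg.norm(rx2 - rx1)
    a = (r1**2 - r2**2 + d**2) / (2 * d)  # distance along the baseline
    h = np.sqrt(r1**2 - a**2)             # offset perpendicular to baseline
    ex = (rx2 - rx1) / d                  # unit vector along the baseline
    ey = np.array([-ex[1], ex[0]])        # unit normal
    p = rx1 + a * ex + h * ey
    return p if p[1] >= 0 else rx1 + a * ex - h * ey

# Ranges measured from echo delays (r = c * t / 2) to a target at (1, 2):
target = np.array([1.0, 2.0])
r1 = np.linalg.norm(target - [0, 0])
r2 = np.linalg.norm(target - [2, 0])
print(localize_2d([0, 0], [2, 0], r1, r2))  # ≈ [1. 2.]
```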

  15. Target Selection by the Frontal Cortex during Coordinated Saccadic and Smooth Pursuit Eye Movements

    ERIC Educational Resources Information Center

    Srihasam, Krishna; Bullock, Daniel; Grossberg, Stephen

    2009-01-01

    Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth-pursuit eye movements. In particular, the saccadic and smooth-pursuit systems interact to often choose the same target, and to maximize its visibility through time. How do…

  16. Location detection and tracking of moving targets by a 2D IR-UWB radar system.

    PubMed

    Nguyen, Van-Han; Pyun, Jae-Young

    2015-03-19

In indoor environments, the Global Positioning System (GPS) and long-range tracking radar systems are not optimal because of signal propagation limitations. In recent years, the use of ultra-wide band (UWB) technology has become a possible solution for object detection, localization and tracking in indoor environments, because of its high range resolution, compact size and low cost. This paper presents improved target detection and tracking techniques for moving objects with impulse-radio UWB (IR-UWB) radar in a short-range indoor area. This is achieved through signal-processing steps such as clutter reduction, target detection, target localization and tracking. In this paper, we introduce a new combination of signal-processing procedures. In the clutter-reduction step, a filtering method that uses a Kalman filter (KF) is proposed. Then, in the target detection step, a modification of the conventional CLEAN algorithm, which estimates the impulse response of the observation region, is applied for the advanced elimination of false alarms. The output is fed into the target localization and tracking step, in which the target location and trajectory are determined and tracked using an unscented KF in two-dimensional coordinates. In each step, the proposed methods are compared to conventional methods to demonstrate the differences in performance. The experiments are carried out using actual IR-UWB radar under different scenarios. The results verify that the proposed methods can improve the probability and efficiency of target detection and tracking.
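The Kalman filter machinery used for clutter suppression and trajectory smoothing can be sketched with a generic constant-velocity filter over noisy 2-D positions. This is not the paper's exact KF design; the state model, `q`, and `r` values are illustrative assumptions.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.05):
    """Constant-velocity Kalman filter over noisy 2-D position measurements."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt            # state: [x, y, vx, vy]
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0    # we observe position only
    Q, R = q * np.eye(4), r * np.eye(2)
    x = np.array([*measurements[0], 0.0, 0.0])
    P = np.eye(4)
    out = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)          # update with measurement
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)

rng = np.random.default_rng(1)
truth = np.stack([np.arange(30) * 0.4, np.arange(30) * 0.2], axis=1)
noisy = truth + rng.normal(scale=0.2, size=truth.shape)
est = kalman_track(noisy)
# The filtered track is closer to the true trajectory than the raw data.
print(np.abs(est - truth).mean() < np.abs(noisy - truth).mean())
```

The unscented KF used for the final tracking step generalizes the same predict/update cycle to nonlinear models.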

  17. Tracking Algorithm of Multiple Pedestrians Based on Particle Filters in Video Sequences

    PubMed Central

    Liu, Yun; Wang, Chuanxu; Zhang, Shujun; Cui, Xuehong

    2016-01-01

Pedestrian tracking is a critical problem in the field of computer vision. Particle filters have proven very useful in pedestrian tracking for nonlinear and non-Gaussian estimation problems. However, pedestrian tracking in complex environments still faces many problems due to changes in pedestrian posture and scale, moving backgrounds, mutual occlusion, and the appearance of new pedestrians. To surmount these difficulties, this paper presents a tracking algorithm for multiple pedestrians based on particle filters in video sequences. The algorithm acquires confidence values of the object and the background by extracting a priori knowledge, thus achieving multi-pedestrian detection; it incorporates color and texture features into the particle filter to obtain better observation results and then automatically adjusts the weight of each feature according to the current tracking environment. During tracking, the algorithm handles severe occlusion to prevent the drift and loss caused by object occlusion, and associates detection results with the particle state in a discrimination method for object disappearance and emergence, thus achieving robust tracking of multiple pedestrians. Experimental verification and analysis on video sequences demonstrate that the proposed algorithm improves tracking performance. PMID:27847514
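The particle-filter backbone of such trackers is sequential importance resampling (SIR). In the paper's setting the particle weights come from color and texture likelihoods; in this hedged sketch a Gaussian likelihood of a noisy position measurement stands in, and all parameters are illustrative.

```python
import numpy as np

def particle_filter(measurements, n=500, motion_std=0.3, meas_std=0.5, seed=0):
    """SIR particle filter over 2-D positions: predict, weight, estimate, resample."""
    rng = np.random.default_rng(seed)
    parts = rng.normal(measurements[0], 1.0, size=(n, 2))  # init near first obs
    track = []
    for z in measurements:
        parts += rng.normal(scale=motion_std, size=parts.shape)   # predict
        d2 = ((parts - z) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / meas_std**2)                       # weight
        w /= w.sum()
        track.append(w @ parts)                                   # estimate
        idx = rng.choice(n, size=n, p=w)                          # resample
        parts = parts[idx]
    return np.array(track)

truth = np.stack([np.linspace(0, 5, 25), np.linspace(0, 2.5, 25)], axis=1)
obs = truth + np.random.default_rng(2).normal(scale=0.3, size=truth.shape)
est = particle_filter(obs)
print(np.abs(est - truth).mean() < np.abs(obs - truth).mean())
```

Feature-weight adaptation and the occlusion handling described above would modify how `w` is computed, not this overall cycle.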

  18. Calibration of asynchronous smart phone cameras from moving objects

    NASA Astrophysics Data System (ADS)

    Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel

    2015-04-01

    Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application on a pair of low-cost portable cameras with different parameters that are found in smart phones. This paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.

  19. Context effects on smooth pursuit and manual interception of a disappearing target.

    PubMed

    Kreyenmeier, Philipp; Fooken, Jolande; Spering, Miriam

    2017-07-01

    In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object ("ball") with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated "hit zone." In two experiments ( n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points. NEW & NOTEWORTHY In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared. 
Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points. Copyright © 2017 the American Physiological Society.

  20. Transfer of Learning between Hemifields in Multiple Object Tracking: Memory Reduces Constraints of Attention

    PubMed Central

    Lapierre, Mark; Howe, Piers D. L.; Cropper, Simon J.

    2013-01-01

    Many tasks involve tracking multiple moving objects, or stimuli. Some require that individuals adapt to changing or unfamiliar conditions to be able to track well. This study explores processes involved in such adaptation through an investigation of the interaction of attention and memory during tracking. Previous research has shown that during tracking, attention operates independently to some degree in the left and right visual hemifields, due to putative anatomical constraints. It has been suggested that the degree of independence is related to the relative dominance of processes of attention versus processes of memory. Here we show that when individuals are trained to track a unique pattern of movement in one hemifield, that learning can be transferred to the opposite hemifield, without any evidence of hemifield independence. However, learning is not influenced by an explicit strategy of memorisation of brief periods of recognisable movement. The findings lend support to a role for implicit memory in overcoming putative anatomical constraints on the dynamic, distributed spatial allocation of attention involved in tracking multiple objects. PMID:24349555

  1. A system for learning statistical motion patterns.

    PubMed

    Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve

    2006-09-01

    Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
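The fuzzy K-means step can be illustrated with a textbook fuzzy c-means sketch: each foreground pixel receives a graded membership in every cluster, and centroids are membership-weighted means. This is not the paper's accelerated variant; `fuzzy_kmeans` and the toy blobs are assumptions.

```python
import numpy as np

def fuzzy_kmeans(points, k=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means clustering with fuzziness exponent m."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), k))          # random initial memberships
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / d ** (2 / (m - 1))          # closer centers get more weight
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Two blobs of foreground pixels: each centroid lands on one blob centre,
# seeding one moving object's cluster as in the tracking algorithm.
pts = np.vstack([np.random.default_rng(3).normal(0, 0.1, (50, 2)),
                 np.random.default_rng(4).normal(5, 0.1, (50, 2))])
centers, u = fuzzy_kmeans(pts)
print(sorted(float(np.round(c.mean())) for c in centers))  # [0.0, 5.0]
```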

  2. Image-based tracking: a new emerging standard

    NASA Astrophysics Data System (ADS)

    Antonisse, Jim; Randall, Scott

    2012-06-01

    Automated moving object detection and tracking are increasingly viewed as solutions to the enormous data volumes resulting from emerging wide-area persistent surveillance systems. In a previous paper we described a Motion Imagery Standards Board (MISB) initiative to help address this problem: the specification of a micro-architecture for the automatic extraction of motion indicators and tracks. This paper reports on the development of an extended specification of the plug-and-play tracking micro-architecture, on its status as an emerging standard across DoD, the Intelligence Community, and NATO.

  3. Direction information in multiple object tracking is limited by a graded resource.

    PubMed

    Horowitz, Todd S; Cohen, Michael A

    2010-10-01

    Is multiple object tracking (MOT) limited by a fixed set of structures (slots), a limited but divisible resource, or both? Here, we answer this question by measuring the precision of the direction representation for tracked targets. The signature of a limited resource is a decrease in precision as the square root of the tracking load. The signature of fixed slots is a fixed precision. Hybrid models predict a rapid decrease to asymptotic precision. In two experiments, observers tracked moving disks and reported target motion direction by adjusting a probe arrow. We derived the precision of representation of correctly tracked targets using a mixture distribution analysis. Precision declined with target load according to the square-root law up to six targets. This finding is inconsistent with both pure and hybrid slot models. Instead, directional information in MOT appears to be limited by a continuously divisible resource.
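The competing predictions are easy to state numerically: a graded resource predicts precision falling as the square root of load, while fixed slots predict flat precision within capacity. A small sketch (the slot count and the behaviour beyond capacity are illustrative choices, not claims from the study):

```python
def resource_precision(p1, n):
    """Graded-resource model: precision falls as the square root of the load."""
    return p1 / n ** 0.5

def slot_precision(p1, n, slots=4):
    """Fixed-slot model: precision is constant while targets fit in slots
    (the fall-off beyond capacity shown here is one illustrative choice)."""
    return p1 if n <= slots else p1 * slots / n

# Relative precision predicted by the resource model for 1..6 targets,
# versus the flat prediction of a four-slot model within capacity:
print([round(resource_precision(1.0, n), 2) for n in range(1, 7)])
print([slot_precision(1.0, n) for n in range(1, 5)])
```

The data described above followed the first, continuously declining curve out to six targets.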

  4. The Retarding Force on a Fan-Cart Reversing Direction

    ERIC Educational Resources Information Center

    Aurora, Tarlok S.; Brunner, Bernard J.

    2011-01-01

    In introductory physics, students learn that an object tossed upward has a constant downward acceleration while going up, at the highest point and while falling down. To demonstrate this concept, a self-propelled fan cart system is used on a frictionless track. A quick push is given to the fan cart and it is allowed to move away on a track under…

  5. Doublet Pulse Coherent Laser Radar for Tracking of Resident Space Objects

    NASA Technical Reports Server (NTRS)

    Prasad, Narasimha S.; Rudd, Van; Shald, Scott; Sandford, Stephen; Dimarcantonio, Albert

    2014-01-01

In this paper, the development of a long-range ladar system known as ExoSPEAR at NASA Langley Research Center for tracking rapidly moving resident space objects is discussed. Based on a 100 W, nanosecond-class, near-IR laser, this ladar system with a coherent detection technique is currently being investigated for short-dwell-time measurements of resident space objects (RSOs) in LEO and beyond for space surveillance applications. This unique ladar architecture is configured using a continuously agile doublet-pulse waveform scheme coupled with a closed-loop tracking and control approach to simultaneously achieve mm-class range precision and mm/s velocity precision, and hence obtain unprecedented track accuracies. Salient features of the design architecture, followed by performance modeling and engagement simulations illustrating the dependence of range and velocity precision in LEO orbits on ladar parameters, are presented. Estimated limits on detectable optical cross sections of RSOs in LEO orbits are discussed.
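The principle behind a doublet-pulse coherent measurement can be sketched with the basic relations: range from round-trip delay, and radial velocity from the carrier phase change between the two pulses of a doublet. The numbers below are hypothetical illustrations; the actual ExoSPEAR waveform parameters are not given in the abstract.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_delay(t_round):
    """Target range from the round-trip time of a single pulse."""
    return C * t_round / 2

def velocity_from_doublet(dphi, wavelength, t_sep):
    """Radial velocity from the carrier phase change between the two pulses
    of a doublet separated by t_sep, as seen by a coherent receiver.
    A round-trip path change of d shifts the phase by 4*pi*d/wavelength."""
    return wavelength * dphi / (4 * math.pi * t_sep)

# Hypothetical case: a 7 km/s LEO object, 1.5 um laser, 100 us pulse spacing.
lam, t_sep, v_true = 1.5e-6, 1e-4, 7000.0
dphi = 4 * math.pi * v_true * t_sep / lam  # phase accrued between the pulses
print(round(velocity_from_doublet(dphi, lam, t_sep)))  # 7000
```

In practice the phase is measured modulo 2*pi, which is why the waveform agility and tracking loop matter for unambiguous velocity estimates.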

  6. Figure–ground discrimination behavior in Drosophila. I. Spatial organization of wing-steering responses

    PubMed Central

    Fox, Jessica L.; Aptekar, Jacob W.; Zolotova, Nadezhda M.; Shoemaker, Patrick A.; Frye, Mark A.

    2014-01-01

    The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input. PMID:24198267

  7. Research on infrared small-target tracking technology under complex background

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Wang, Xin; Chen, Jilu; Pan, Tao

    2012-10-01

In this paper, some basic principles and implementation flow charts of a series of target tracking algorithms are described. On this foundation, moving-target tracking software based on OpenCV is developed using the MFC software development platform. Three kinds of tracking algorithms are integrated into this software, including the Kalman filter tracking method and the Camshift tracking method. To explain the software clearly, its framework and functions are described in this paper. Finally, the implementation processes and results are analyzed, and the target tracking algorithms are evaluated both subjectively and objectively. This work is significant for applications of infrared target tracking technology.

  8. Automatic Tracking Algorithm in Coaxial Near-Infrared Laser Ablation Endoscope for Fetus Surgery

    NASA Astrophysics Data System (ADS)

    Hu, Yan; Yamanaka, Noriaki; Masamune, Ken

    2014-07-01

This article reports a stable vessel-tracking method for the treatment of twin-to-twin transfusion syndrome based on our previous 2-DOF endoscope. During laser coagulation it is necessary to focus on the exact position of the target object; however, the target moves with the mother's respiratory motion, and obtaining and tracking its position precisely remains a challenge. In this article, an algorithm that extracts features using the features-from-accelerated-segment-test (FAST) detector and tracks the object with optical flow is proposed to deal with this problem. Further, we experimentally simulate the movement due to the mother's respiration; the position-error and similarity results verify the effectiveness of the proposed tracking algorithm for laser ablation endoscopy in vitro and under water, considering two influential factors. On average, errors of about 10 pixels and a similarity above 0.92 were obtained in the experiments.

  9. Neural substrates of dynamic object occlusion.

    PubMed

    Shuwairi, Sarah M; Curtis, Clayton E; Johnson, Scott P

    2007-08-01

    In everyday environments, objects frequently go out of sight as they move and our view of them becomes obstructed by nearer objects, yet we perceive these objects as continuous and enduring entities. Here, we used functional magnetic resonance imaging with an attentive tracking paradigm to clarify the nature of perceptual and cognitive mechanisms subserving this ability to fill in the gaps in perception of dynamic object occlusion. Imaging data revealed distinct regions of cortex showing increased activity during periods of occlusion relative to full visibility. These regions may support active maintenance of a representation of the target's spatiotemporal properties ensuring that the object is perceived as a persisting entity when occluded. Our findings may shed light on the neural substrates involved in object tracking that give rise to the phenomenon of object permanence.

  10. Interplanetary Dust Observations by the Juno MAG Investigation

    NASA Astrophysics Data System (ADS)

    Jørgensen, John; Benn, Mathias; Denver, Troelz; Connerney, Jack; Jørgensen, Peter; Bolton, Scott; Brauer, Peter; Levin, Steven; Oliversen, Ronald

    2017-04-01

The spin-stabilized, solar-powered Juno spacecraft recently concluded a 5-year voyage through the solar system en route to Jupiter, arriving on July 4th, 2016. During the cruise phase from Earth to the Jovian system, the Magnetometer investigation (MAG) operated two magnetic field sensors and four co-located imaging systems designed to provide accurate attitude knowledge for the MAG sensors. One of these four imaging sensors - camera "D" of the Advanced Stellar Compass (ASC) - was operated in a mode designed to detect all luminous objects in its field of view, recording and characterizing those not found in the on-board star catalog. The capability to detect and track such objects ("non-stellar objects", or NSOs) provides a unique opportunity to sense and characterize interplanetary dust particles. The camera's detection threshold was set to MV9 to minimize false detections and discourage tracking of known objects. On-board filtering algorithms selected only those objects tracked through more than 5 consecutive images and moving with an apparent angular rate between 15"/s and 10,000"/s. The coordinates (RA, DEC), intensity, and apparent velocity of such objects were stored for eventual downlink. Direct detection of proximate dust particles is precluded by their large (10-30 km/s) relative velocity and extreme angular rates, but their presence may be inferred using the collecting area of Juno's large (≈55 m²) solar arrays. Dust particles impact the spacecraft at high velocity, creating an expanding plasma cloud and ejecta with modest (few m/s) velocities. These excavated particles are revealed in reflected sunlight and tracked moving away from the spacecraft from the point of impact. Application of this novel detection method during Juno's traversal of the solar system provides new information on the distribution of interplanetary (µm-sized) dust.
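The on-board selection logic described above (more than 5 consecutive images, angular rate between 15"/s and 10,000"/s) amounts to a simple filter over candidate tracks. A hedged sketch with made-up candidate records:

```python
def select_nso(tracks, min_frames=6, min_rate=15.0, max_rate=10_000.0):
    """Keep candidate non-stellar objects tracked through more than 5
    consecutive images with an apparent angular rate in ["/s] inside the
    accepted band, mirroring the described on-board filtering."""
    return [t for t in tracks
            if t["frames"] >= min_frames and min_rate <= t["rate"] <= max_rate]

# Hypothetical candidates: only the first survives the filter.
candidates = [
    {"frames": 8, "rate": 120.0},     # slow-moving ejecta particle -> kept
    {"frames": 3, "rate": 500.0},     # too few detections -> rejected
    {"frames": 9, "rate": 40_000.0},  # proximate dust, too fast -> rejected
]
print(len(select_nso(candidates)))  # 1
```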

  11. An inexpensive programmable illumination microscope with active feedback.

    PubMed

    Tompkins, Nathan; Fraden, Seth

    2016-02-01

    We have developed a programmable illumination system capable of tracking and illuminating numerous objects simultaneously using only low-cost and reused optical components. The active feedback control software allows for a closed-loop system that tracks and perturbs objects of interest automatically. Our system uses a static stage where the objects of interest are tracked computationally as they move across the field of view allowing for a large number of simultaneous experiments. An algorithmically determined illumination pattern can be applied anywhere in the field of view with simultaneous imaging and perturbation using different colors of light to enable spatially and temporally structured illumination. Our system consists of a consumer projector, camera, 35-mm camera lens, and a small number of other optical and scaffolding components. The entire apparatus can be assembled for under $4,000.
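The closed-loop scheme described above (track an object computationally, then steer the illumination to it) can be sketched minimally in Python. The threshold-plus-centroid detector and the function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def track_centroid(frame, threshold):
    """Locate a bright object as the intensity-weighted centroid of
    pixels above a fixed threshold; returns (row, col) or None."""
    mask = frame > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    weights = frame[rows, cols].astype(float)
    return (np.average(rows, weights=weights),
            np.average(cols, weights=weights))

def update_illumination(frame, threshold, pattern_radius):
    """One closed-loop step: aim a disk-shaped illumination pattern
    at the tracked position; returns a boolean projector mask."""
    pos = track_centroid(frame, threshold)
    if pos is None:
        return np.zeros(frame.shape, dtype=bool)
    rr, cc = np.ogrid[:frame.shape[0], :frame.shape[1]]
    return (rr - pos[0]) ** 2 + (cc - pos[1]) ** 2 <= pattern_radius ** 2
```

A real system would additionally map camera pixel coordinates into projector coordinates via a calibration transform before projecting the pattern.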

  12. High-performance object tracking and fixation with an online neural estimator.

    PubMed

    Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian

    2007-02-01

Vision-based target tracking and fixation to keep objects that move in three dimensions in view is important for many tasks in several fields including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. Accordingly, this paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimum sensory information. The procedure allows the designer to take into account the physical (Lagrangian dynamics) properties of the vision system in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system's dynamics into account, yet needs no knowledge of dynamic nonlinearities or joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee the uniform ultimate bounds of the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation in the presence of severe target motion changes.

  13. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  14. Dazzle camouflage, target tracking, and the confusion effect.

    PubMed

    Hogan, Benedict G; Cuthill, Innes C; Scott-Samuel, Nicholas E

    2016-01-01

    The influence of coloration on the ecology and evolution of moving animals in groups is poorly understood. Animals in groups benefit from the "confusion effect," where predator attack success is reduced with increasing group size or density. This is thought to be due to a sensory bottleneck: an increase in the difficulty of tracking one object among many. Motion dazzle camouflage has been hypothesized to disrupt accurate perception of the trajectory or speed of an object or animal. The current study investigates the suggestion that dazzle camouflage may enhance the confusion effect. Utilizing a computer game style experiment with human predators, we found that when moving in groups, targets with stripes parallel to the targets' direction of motion interact with the confusion effect to a greater degree, and are harder to track, than those with more conventional background matching patterns. The findings represent empirical evidence that some high-contrast patterns may benefit animals in groups. The results also highlight the possibility that orientation and turning may be more relevant in the mechanisms of dazzle camouflage than previously recognized.

  15. Moving object localization using optical flow for pedestrian detection from a moving vehicle.

    PubMed

    Hariyono, Joko; Hoang, Van-Dung; Jo, Kang-Hyun

    2014-01-01

This paper presents a pedestrian detection method from a moving vehicle using optical flow and histograms of oriented gradients (HOG). A moving object is extracted from the relative motion by segmenting the region showing the same optical flow after compensating for the egomotion of the camera. To obtain the optical flow, two consecutive images are divided into 14 × 14-pixel grid cells; each cell in the current frame is then tracked to find the corresponding cell in the next frame. Using at least three corresponding cells, an affine transformation between the consecutive images is estimated, so that conforming optical flows are extracted. The regions of moving objects are detected as transformed objects that differ from the previously registered background. A morphological process is applied to obtain candidate human regions. To recognize the object, HOG features are extracted from each candidate region and classified using a linear support vector machine (SVM), which labels the given input as pedestrian or non-pedestrian. The proposed method was tested in a moving vehicle and also confirmed through experiments using a pedestrian dataset. It shows a significant improvement over the original HOG method on the ETHZ pedestrian dataset.
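The affine step in the abstract (estimating a transformation from at least three corresponding grid cells to compensate egomotion) amounts to a least-squares fit over cell-center correspondences. The numpy sketch below illustrates that general technique; it is an assumption, not the paper's code.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src -> dst.
    src, dst: (N, 2) arrays of corresponding cell centers, N >= 3
    and not all collinear. Returns a 2x3 matrix A."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    X = np.hstack([src, np.ones((src.shape[0], 1))])    # (N, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)         # (3, 2)
    return A.T                                          # (2, 3)

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix to (N, 2) points."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ A.T
```

Cells whose flow disagrees with the estimated transform would then be flagged as belonging to independently moving objects.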

  16. Genetics Home Reference: horizontal gaze palsy with progressive scoliosis

    MedlinePlus

    ... to track moving objects. Up-and-down (vertical) eye movements are typically normal. In people with HGPPS , an ... the brainstem is the underlying cause of the eye movement abnormalities associated with the disorder. The cause of ...

  17. Accuracy assessment of the Precise Point Positioning method applied for surveys and tracking moving objects in GIS environment

    NASA Astrophysics Data System (ADS)

    Ilieva, Tamara; Gekov, Svetoslav

    2017-04-01

The Precise Point Positioning (PPP) method gives users the opportunity to determine point locations using a single GNSS receiver. The accuracy of point locations determined by PPP is better than that of standard point positioning, thanks to the precise satellite orbit and clock corrections developed and maintained by the International GNSS Service (IGS). The aim of our current research is an accuracy assessment of the PPP method applied to surveys and to tracking moving objects in a GIS environment. The PPP data are collected using a software application we developed, which allows different sets of attribute data for the measurements and their accuracy to be used. The results from the PPP measurements are compared directly within the geospatial database to other sets of terrestrial data: measurements obtained by total stations and by real-time kinematic and static GNSS.
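The comparison the authors describe reduces, per point set, to a positional error statistic between PPP coordinates and the terrestrial reference. A minimal sketch of such a metric, assuming both datasets are already in a common projected coordinate system in metres:

```python
import numpy as np

def horizontal_rmse(ppp_xy, ref_xy):
    """Root-mean-square horizontal error between PPP-derived positions
    and reference coordinates (e.g., total-station measurements).
    ppp_xy, ref_xy: (N, 2) arrays in a common projected CRS (metres)."""
    d = np.asarray(ppp_xy, float) - np.asarray(ref_xy, float)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))
```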

  18. A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors

    PubMed Central

    Mishra, Abhishek; Ghosh, Rohan; Principe, Jose C.; Thakor, Nitish V.; Kukreja, Sunil L.

    2017-01-01

Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event based sensors are low power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details at high temporal resolution and, hence, require significantly fewer calculations. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches for object tracking using neuromorphic sensors perform well while the sensor is static or a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates computation of scene statistics and characterization of the objects in the scene. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with a maximum accuracy of 92%. PMID:28316563

  19. An inexpensive programmable illumination microscope with active feedback

    PubMed Central

    Tompkins, Nathan; Fraden, Seth

    2016-01-01

    We have developed a programmable illumination system capable of tracking and illuminating numerous objects simultaneously using only low-cost and reused optical components. The active feedback control software allows for a closed-loop system that tracks and perturbs objects of interest automatically. Our system uses a static stage where the objects of interest are tracked computationally as they move across the field of view allowing for a large number of simultaneous experiments. An algorithmically determined illumination pattern can be applied anywhere in the field of view with simultaneous imaging and perturbation using different colors of light to enable spatially and temporally structured illumination. Our system consists of a consumer projector, camera, 35-mm camera lens, and a small number of other optical and scaffolding components. The entire apparatus can be assembled for under $4,000. PMID:27642182

  20. Detecting and Analyzing Multiple Moving Objects in Crowded Environments with Coherent Motion Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M.

Understanding the world around us from large-scale video data requires vision systems that can perform automatic interpretation. While human eyes can unconsciously perceive independent objects in crowded scenes and other challenging operating environments, automated systems have difficulty detecting, counting, and understanding their behavior in similar scenes. Computer scientists at ORNL have developed a technology termed "Coherent Motion Region Detection" that involves identifying multiple independent moving objects in crowded scenes by aggregating low-level motion cues extracted from moving objects. Humans and other species exploit such low-level motion cues seamlessly to perform perceptual grouping for visual understanding. The algorithm detects and tracks feature points on moving objects, resulting in partial trajectories that span a coherent 3D region in the space-time volume defined by the video. In the case of multi-object motion, many possible coherent motion regions can be constructed around the set of trajectories. The unique approach of the algorithm is to identify all possible coherent motion regions, then extract a subset of motion regions based on an innovative measure to automatically locate moving objects in crowded environments. The software reports a snapshot of each object, a count, and derived statistics (e.g., count over time) from input video streams. The software can directly process videos streamed over the internet or directly from a hardware device (camera).

  1. Robust skin color-based moving object detection for video surveillance

    NASA Astrophysics Data System (ADS)

    Kaliraj, Kalirajan; Manimaran, Sudha

    2016-07-01

Robust skin color-based moving object detection for video surveillance is proposed. The objective of the proposed algorithm is to detect and track the target under complex situations. The proposed framework comprises four stages: preprocessing, skin color-based feature detection, feature classification, and target localization and tracking. In the preprocessing stage, the input image frame is smoothed using an averaging filter and transformed into the YCrCb color space. In skin color detection, skin color regions are detected using Otsu's method of global thresholding. In feature classification, histograms of both skin and nonskin regions are constructed and the features are classified into foreground and background using a Bayesian skin color classifier. The foreground skin regions are localized by a connected component labeling process. The localized foreground skin regions are then confirmed as targets by verifying region properties, and nontarget regions are rejected using the Euler method. Finally, the target is tracked by enclosing a bounding box around the target region in all video frames. The experiment was conducted on various publicly available data sets and the performance was evaluated against baseline methods. The results show that the proposed algorithm works well against slowly varying illumination, target rotation and scaling, and fast, abrupt motion changes.
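Otsu's global thresholding, used here for the skin color stage, picks the gray level that maximizes between-class variance of the histogram. A generic numpy version (applied, for example, to the Cr channel of a YCrCb image) might look like this; it is a textbook sketch, not the authors' code:

```python
import numpy as np

def otsu_threshold(channel):
    """Otsu's global threshold on an 8-bit channel: choose the level
    that maximizes the between-class variance of the histogram."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # 0/0 at the histogram ends
    return int(np.argmax(sigma_b))
```

Pixels above (or below, depending on the channel) the returned level form the skin-candidate mask passed on to classification.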

  2. An autonomous robot inspired by insect neurophysiology pursues moving features in natural environments

    NASA Astrophysics Data System (ADS)

    Bagheri, Zahra M.; Cazzolato, Benjamin S.; Grainger, Steven; O'Carroll, David C.; Wiederman, Steven D.

    2017-08-01

    Objective. Many computer vision and robotic applications require the implementation of robust and efficient target-tracking algorithms on a moving platform. However, deployment of a real-time system is challenging, even with the computational power of modern hardware. Lightweight and low-powered flying insects, such as dragonflies, track prey or conspecifics within cluttered natural environments, illustrating an efficient biological solution to the target-tracking problem. Approach. We used our recent recordings from ‘small target motion detector’ neurons in the dragonfly brain to inspire the development of a closed-loop target detection and tracking algorithm. This model exploits facilitation, a slow build-up of response to targets which move along long, continuous trajectories, as seen in our electrophysiological data. To test performance in real-world conditions, we implemented this model on a robotic platform that uses active pursuit strategies based on insect behaviour. Main results. Our robot performs robustly in closed-loop pursuit of targets, despite a range of challenging conditions used in our experiments; low contrast targets, heavily cluttered environments and the presence of distracters. We show that the facilitation stage boosts responses to targets moving along continuous trajectories, improving contrast sensitivity and detection of small moving targets against textured backgrounds. Moreover, the temporal properties of facilitation play a useful role in handling vibration of the robotic platform. We also show that the adoption of feed-forward models which predict the sensory consequences of self-movement can significantly improve target detection during saccadic movements. Significance. Our results provide insight into the neuronal mechanisms that underlie biological target detection and selection (from a moving platform), as well as highlight the effectiveness of our bio-inspired algorithm in an artificial visual system.
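The facilitation mechanism (a slow build-up of response to targets on long, continuous trajectories) can be caricatured as a leaky integrator of past detector output that multiplicatively boosts the current response. The form and parameters below are illustrative assumptions, not the published neuronal model:

```python
import numpy as np

def facilitated_response(raw, decay=0.9, gain=0.5):
    """Toy facilitation stage: a leaky integrator of past detector
    output boosts the current response, so a target that persists
    along a trajectory grows stronger than transient clutter."""
    f = 0.0
    out = np.empty(len(raw))
    for t, r in enumerate(raw):
        out[t] = r * (1.0 + gain * f)
        f = decay * f + r          # build-up under sustained input
    return out
```

Under this caricature, a target present for several consecutive frames ends up amplified relative to its first appearance, while a single-frame distracter receives no boost.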

  3. Object classification for obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Regensburger, Uwe; Graefe, Volker

    1991-03-01

Object recognition is necessary for any mobile robot operating autonomously in the real world. This paper discusses an object classifier based on a 2-D object model. Obstacle candidates are tracked and analyzed; false alarms generated by the object detector are recognized and rejected. The methods have been implemented on a multi-processor system and tested in real-world experiments. They work reliably under favorable conditions, but problems sometimes occur, e.g., when objects contain many features (edges) or move in front of a structured background.

  4. Keep your eyes on the ball: smooth pursuit eye movements enhance prediction of visual motion.

    PubMed

    Spering, Miriam; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R

    2011-04-01

    Success of motor behavior often depends on the ability to predict the path of moving objects. Here we asked whether tracking a visual object with smooth pursuit eye movements helps to predict its motion direction. We developed a paradigm, "eye soccer," in which observers had to either track or fixate a visual target (ball) and judge whether it would have hit or missed a stationary vertical line segment (goal). Ball and goal were presented briefly for 100-500 ms and disappeared from the screen together before the perceptual judgment was prompted. In pursuit conditions, the ball moved towards the goal; in fixation conditions, the goal moved towards the stationary ball, resulting in similar retinal stimulation during pursuit and fixation. We also tested the condition in which the goal was fixated and the ball moved. Motion direction prediction was significantly better in pursuit than in fixation trials, regardless of whether ball or goal served as fixation target. In both fixation and pursuit trials, prediction performance was better when eye movements were accurate. Performance also increased with shorter ball-goal distance and longer presentation duration. A longer trajectory did not affect performance. During pursuit, an efference copy signal might provide additional motion information, leading to the advantage in motion prediction.

  5. Image Tracking for the High Similarity Drug Tablets Based on Light Intensity Reflective Energy and Artificial Neural Network

    PubMed Central

    Liang, Zhongwei; Zhou, Liang; Liu, Xiaochu; Wang, Xiaogang

    2014-01-01

Tablet image tracking exerts a notable influence on the efficiency and reliability of high-speed drug mass production, yet it has emerged as a difficult problem and a focus of production monitoring in recent years, owing to the highly similar shapes and random positions of the objects to be searched for. To track randomly distributed tablets accurately, a calibrated surface of light-intensity reflective energy is established using a surface-fitting approach and translational-vector determination, describing the shape topology and topographic details of the target tablet. On this basis, the mathematical properties of these surfaces are derived, and an artificial neural network (ANN) is then employed to classify the moving target tablets by recognizing their different surface properties, so that the instantaneous coordinates of the tablets in one image frame can be determined. By repeating the same pattern recognition on each subsequent frame, the real-time movements of the tablet templates are tracked in sequence. This paper provides reliable references and new research ideas for real-time object tracking in drug production practice. PMID:25143781

  6. A novel vehicle tracking algorithm based on mean shift and active contour model in complex environment

    NASA Astrophysics Data System (ADS)

    Cai, Lei; Wang, Lin; Li, Bo; Zhang, Libao; Lv, Wen

    2017-06-01

Vehicle tracking technology is currently one of the most active research topics in machine vision and an important part of intelligent transportation systems. However, in both theory and technology, it still faces many challenges, including real-time operation and robustness. In video surveillance, targets must be detected in real time and their positions calculated accurately in order to judge their motives. The contents of video sequence images and the target motion are complex, so the objects cannot be expressed by a unified mathematical model. Object tracking is defined as locating the moving target of interest in each frame of a video. Current tracking technology achieves reliable results in simple environments for targets with easily identified characteristics. In more complex environments, however, it is easy to lose the target because of the mismatch between the target appearance and its dynamic model. Moreover, targets usually have complex shapes, but traditional tracking algorithms represent results with simple geometric primitives such as rectangles or circles, and so cannot provide accurate information to subsequent higher-level applications. This paper combines a traditional object-tracking technique, the Mean-Shift algorithm, with an image segmentation algorithm, the Active-Contour model, to obtain object outlines during tracking and automatically handle topology changes; the outline information is in turn used to improve the tracking algorithm.
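The Mean-Shift component the paper builds on is a mode-seeking iteration: move a window to the mean of the samples that fall inside it until it stops moving. A minimal flat-kernel sketch of the standard algorithm (not the paper's implementation):

```python
import numpy as np

def mean_shift(points, start, bandwidth, n_iter=50, tol=1e-6):
    """Flat-kernel mean-shift mode seeking over (N, 2) sample points:
    repeatedly recenter the window on the mean of the points within
    `bandwidth` of the current center."""
    points = np.asarray(points, float)
    center = np.asarray(start, float)
    for _ in range(n_iter):
        d = np.linalg.norm(points - center, axis=1)
        inside = points[d <= bandwidth]
        if len(inside) == 0:
            break                      # window fell off the data
        new_center = inside.mean(axis=0)
        if np.linalg.norm(new_center - center) < tol:
            break                      # converged to a mode
        center = new_center
    return center
```

In a tracker, the samples are weighted by a color-histogram similarity rather than being raw points, but the iteration has the same shape.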

  7. Two-dimensional tracking of a motile micro-organism allowing high-resolution observation with various imaging techniques

    NASA Astrophysics Data System (ADS)

    Oku, H.; Ogawa, N.; Ishikawa, M.; Hashimoto, K.

    2005-03-01

In this article, a micro-organism tracking system using a high-speed vision system is reported. This system two-dimensionally tracks a freely swimming micro-organism within the field of an optical microscope by moving a chamber of target micro-organisms based on high-speed visual feedback. The system we developed could track a paramecium using various imaging techniques, including bright-field illumination, dark-field illumination, and differential interference contrast, at magnifications of 5× and 20×. A maximum tracking duration of 300 s was demonstrated. The system could also track an object with a velocity of up to 35,000 μm/s (175 diameters/s), which is significantly faster than swimming micro-organisms.

  8. Object tracking via background subtraction for monitoring illegal activity in crossroad

    NASA Astrophysics Data System (ADS)

    Ghimire, Deepak; Jeong, Sunghwan; Park, Sang Hyun; Lee, Joonwhoan

    2016-07-01

In the field of intelligent transportation systems, a great number of vision-based techniques have been proposed to prevent pedestrians from being hit by vehicles. This paper presents a system that can perform pedestrian and vehicle detection and monitor illegal activity at zebra crossings. At a zebra crossing, to fully avoid a collision, a driver or pedestrian should be warned early if either makes a move that is illegal given the traffic light status. In this research, we first detect the status of the pedestrian traffic light and monitor the crossing for vehicle and pedestrian movements. Background subtraction-based object detection and tracking is performed to detect pedestrians and vehicles at the crossing. Shadow removal, blob segmentation, and trajectory analysis are used to improve object detection and classification performance. We demonstrate the experiment on several video sequences recorded at different times and in different environments, such as daytime and nighttime, and sunny and rainy conditions. Our experimental results show that this simple and efficient technique can be used successfully as a traffic surveillance system to prevent accidents at zebra crossings.
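Background subtraction of the kind used here is often implemented as an exponentially weighted running average of the scene, with pixels far from the model flagged as foreground. A minimal sketch, assuming a grayscale stream and hypothetical parameter values:

```python
import numpy as np

class RunningAverageBackground:
    """Background subtraction via an exponentially weighted running
    average: pixels far from the background model are foreground."""

    def __init__(self, first_frame, alpha=0.05, threshold=25):
        self.bg = np.asarray(first_frame, float)
        self.alpha = alpha
        self.threshold = threshold

    def apply(self, frame):
        """Return a boolean foreground mask and update the model."""
        frame = np.asarray(frame, float)
        fg = np.abs(frame - self.bg) > self.threshold
        # Update only where the scene looks like background, so moving
        # objects are not absorbed into the model.
        self.bg = np.where(fg, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * frame)
        return fg
```

Shadow removal and blob segmentation, as in the paper, would then operate on the returned mask.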

  9. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    PubMed

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no-compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for target motion both parallel and perpendicular to the leaf travel direction) and no-compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. 
The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric accuracy and dosimetric accuracy of the moving average algorithm was between real-time tracking and no compensation, approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.
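The core of a moving average tracking scheme is a trailing average of the measured target position, which smooths perpendicular motion at the cost of lag. A minimal 1-D sketch (the window length and the simple trailing form are assumptions for illustration, not the clinical system's filter):

```python
import numpy as np

def trailing_moving_average(trace, window):
    """Trailing moving average of a 1-D target-position trace: each
    output is the mean of the most recent `window` samples, trading
    tracking lag for fewer repositioning demands (fewer beam holds)."""
    trace = np.asarray(trace, float)
    out = np.empty_like(trace)
    for i in range(len(trace)):
        out[i] = trace[max(0, i - window + 1):i + 1].mean()
    return out
```

The smoothed trace moves less than the raw one, which is exactly the efficiency/accuracy trade-off quantified in the abstract.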

  10. Tracking control of WMRs on loose soil based on mixed H2/H∞ control with longitudinal slip ratio estimation

    NASA Astrophysics Data System (ADS)

    Gao, Haibo; Chen, Chao; Ding, Liang; Li, Weihua; Yu, Haitao; Xia, Kerui; Liu, Zhen

    2017-11-01

Wheeled mobile robots (WMRs) often suffer from longitudinal slipping when moving on loose lunar soil during exploration. Longitudinal slip is the main cause of a WMR's delay in trajectory tracking. In this paper, a nonlinear extended state observer (NESO) is introduced to estimate the longitudinal velocity in order to estimate the slip ratio and the derivative of the velocity loss, which are used in modelled-disturbance compensation. Owing to the uncertainty and disturbance caused by estimation errors, a multi-objective controller using the mixed H2/H∞ method is employed to ensure the robust stability and performance of the WMR system. The final inputs for trajectory tracking consist of the feedforward compensation, compensation for the modelled disturbances, and the designed multi-objective control inputs. Finally, simulation results demonstrate the effectiveness of the controller, which exhibits satisfactory tracking performance.
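The longitudinal slip ratio of a driven wheel is commonly defined from the wheel's circumferential speed and its actual forward speed; one common convention is sketched below (the exact definition used in the paper may differ):

```python
def longitudinal_slip_ratio(wheel_radius, angular_velocity, forward_velocity):
    """Slip ratio of a driven wheel under one common convention:
    s = (r*omega - v) / (r*omega), so s = 0 is pure rolling and
    s -> 1 as the wheel spins with no forward progress (loose soil)."""
    circumferential = wheel_radius * angular_velocity
    if circumferential == 0:
        return 0.0
    return (circumferential - forward_velocity) / circumferential
```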

  11. Efficient Spatiotemporal Clutter Rejection and Nonlinear Filtering-based Dim Resolved and Unresolved Object Tracking Algorithms

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.

    2013-09-01

    We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. 
We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.
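The planar homography (projective transformation) assumed between frames maps image points through a 3x3 matrix followed by division by the homogeneous coordinate. A minimal sketch of applying such a transformation:

```python
import numpy as np

def apply_homography(H, pts):
    """Map (N, 2) image points through a 3x3 planar homography H,
    dividing by the homogeneous coordinate."""
    pts = np.asarray(pts, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # (N, 3)
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

The alignment algorithm described above would iteratively adjust the eight free parameters of H to minimize residual misalignment between frames.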

  12. Reallocating attention during multiple object tracking.

    PubMed

    Ericson, Justin M; Christensen, James C

    2012-07-01

    Wolfe, Place, and Horowitz (Psychonomic Bulletin & Review 14:344-349, 2007) found that participants were relatively unaffected by selecting and deselecting targets while performing a multiple object tracking task, such that maintaining tracking was possible for longer durations than the few seconds typically studied. Though this result was generally consistent with other findings on tracking duration (Franconeri, Jonathon, & Scimeca Psychological Science 21:920-925, 2010), it was inconsistent with research involving cuing paradigms, specifically precues (Pylyshyn & Annan Spatial Vision 19:485-504, 2006). In the present research, we broke down the addition and removal of targets into separate conditions and incorporated a simple performance model to evaluate the costs associated with the selection and deselection of moving targets. Across three experiments, we demonstrated evidence against a cost being associated with any shift in attention, but rather that varying the type of cue used for target deselection produces no additional cost to performance and that hysteresis effects are not induced by a reduction in tracking load.

  13. Position Affects Performance in Multiple-Object Tracking in Rugby Union Players

    PubMed Central

    Martín, Andrés; Sfer, Ana M.; D'Urso Villar, Marcela A.; Barraza, José F.

    2017-01-01

    We report an experiment that examines the performance of rugby union players and a control group, composed of graduate students with no sport experience, in a multiple-object tracking task. It compares the ability of 86 high-level rugby union players, grouped as Backs and Forwards, and the control group to track a subset of randomly moving targets amongst the same number of distractors. Several difficulties were included in the experimental design in order to evaluate possible interactions between the relevant variables. Results show that the performance of the Backs is better than that of the other groups, but the occurrence of interactions precludes an isolated analysis of the groups. We interpret the results within the framework of visual attention and discuss both the implications of our results and the practical consequences. PMID:28951725

  14. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
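
    As a rough illustration of the abstraction described (named tracked objects from several systems merged behind a single interface), one could keep, per object name, the freshest report from any source. This is a hypothetical sketch; class and field names are ours, not MetaTracker's API:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple
    timestamp: float

class MetaTrackerHub:
    """Merge named tracked objects from multiple tracking systems and
    present them as if they came from a single tracker (hypothetical sketch)."""

    def __init__(self):
        self._objects = {}  # name -> (source, Pose)

    def update(self, source, name, position, timestamp):
        current = self._objects.get(name)
        # Keep the freshest report regardless of which system produced it,
        # so an object transitions seamlessly between tracking areas.
        if current is None or timestamp >= current[1].timestamp:
            self._objects[name] = (source, Pose(position, timestamp))

    def get(self, name):
        entry = self._objects.get(name)
        return None if entry is None else entry[1]
```

    A consumer querying by name never needs to know which tracking system, driver, or data format produced the latest pose, which is the essence of the abstraction the paper proposes.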

  15. Event Management of RFID Data Streams: Fast Moving Consumer Goods Supply Chains

    NASA Astrophysics Data System (ADS)

    Mo, John P. T.; Li, Xue

    Radio Frequency Identification (RFID) is a wireless communication technology that uses radio-frequency waves to transfer information between tagged objects and readers without line of sight. This creates tremendous opportunities for linking real-world objects into an "Internet of things". Applying RFID to the Fast Moving Consumer Goods sector will introduce billions of RFID tags into the world, with almost everything tagged for tracking and identification purposes. This phenomenon will impose a new challenge not only to network capacity but also to the scalability of processing RFID events and data. This chapter uses two national demonstrator projects in Australia as case studies to introduce an event management framework that processes high-volume RFID data streams in real time and automatically transforms physical RFID observations into business-level events. The model handles various temporal event patterns, both simple and complex, with temporal constraints. It can be implemented in a data management architecture that allows global RFID item tracking and enables fast, large-scale RFID deployment.
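
    A toy example of turning raw RFID read events into a business-level event under a temporal constraint (the pattern, field names, and threshold are illustrative assumptions, not the chapter's model):

```python
def detect_transitions(readings, max_gap):
    """Toy complex-event detector: emit a 'moved' business event whenever the
    same tag is observed at two different readers within max_gap seconds.
    Each reading is a (tag, reader, timestamp) tuple."""
    last_seen = {}
    events = []
    for tag, reader, t in sorted(readings, key=lambda r: r[2]):
        if tag in last_seen:
            prev_reader, prev_t = last_seen[tag]
            # Temporal constraint: the two observations must be close in time.
            if prev_reader != reader and t - prev_t <= max_gap:
                events.append((tag, prev_reader, reader, t))
        last_seen[tag] = (reader, t)
    return events
```

    Real deployments process such patterns incrementally over unbounded streams rather than over a sorted batch, but the constraint-checking logic is of this general shape.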

  16. Technical Report of the Use of a Novel Eye Tracking System to Measure Impairment Associated with Mild Traumatic Brain Injury

    PubMed Central

    2017-01-01

    This technical report details the results of an uncontrolled study of EyeGuide Focus, a 10-second concussion management tool which relies on eye tracking to determine potential impairment of visual attention, often an indicator of mild traumatic brain injury (mTBI). Essentially, people who can keep steady and accurate visual attention on a moving object in their environment are unlikely to be impaired. However, if after a potential mTBI event subjects cannot keep attention on a moving object in a normal way, as judged against their previous healthy baseline tests, this may indicate possible neurological impairment. Now deployed at multiple locations across the United States, Focus (EyeGuide, Lubbock, Texas, United States) has to date recorded more than 4,000 test scores. Our analysis of these results shows the promise of Focus as a low-cost, ocular-based test for assessing potential neurological impairment caused by mTBI in subjects ages eight and older. PMID:28630809

  17. Technical Report of the Use of a Novel Eye Tracking System to Measure Impairment Associated with Mild Traumatic Brain Injury.

    PubMed

    Kelly, Michael

    2017-05-15

    This technical report details the results of an uncontrolled study of EyeGuide Focus, a 10-second concussion management tool which relies on eye tracking to determine potential impairment of visual attention, often an indicator of mild traumatic brain injury (mTBI). Essentially, people who can keep steady and accurate visual attention on a moving object in their environment are unlikely to be impaired. However, if after a potential mTBI event subjects cannot keep attention on a moving object in a normal way, as judged against their previous healthy baseline tests, this may indicate possible neurological impairment. Now deployed at multiple locations across the United States, Focus (EyeGuide, Lubbock, Texas, United States) has to date recorded more than 4,000 test scores. Our analysis of these results shows the promise of Focus as a low-cost, ocular-based test for assessing potential neurological impairment caused by mTBI in subjects ages eight and older.

  18. Laser Prevention of Earth Impact Disasters

    NASA Technical Reports Server (NTRS)

    Campbell, J.; Smalley, L.; Boccio, D.; Howell, Joe T. (Technical Monitor)

    2002-01-01

    We now believe that while there are about 2000 earth orbit crossing rocks greater than 1 kilometer in diameter, there may be as many as 100,000 or more objects in the 100m size range. Can anything be done about this fundamental existence question facing us? The answer is a resounding yes! We have the technology to prevent collisions. By using an intelligent combination of Earth and space based sensors coupled with an infrastructure of high-energy laser stations and other secondary mitigation options, we can deflect inbound asteroids, meteoroids, and comets and prevent them from striking the Earth. This can be accomplished by irradiating the surface of an inbound rock with sufficiently intense pulses so that ablation occurs. This ablation acts as a small rocket incrementally changing the shape of the rock's orbit around the Sun. One-kilometer size rocks can be moved sufficiently in a month while smaller rocks may be moved in a shorter time span. We recommend that the World's space objectives be immediately reprioritized to start us moving quickly towards a multiple option defense capability. While lasers should be the primary approach, all mitigation options depend on robust early warning, detection, and tracking resources to find objects sufficiently prior to Earth orbit passage in time to allow mitigation. Infrastructure options should include ground, LEO, GEO, Lunar, and libration point laser and sensor stations for providing early warning, tracking, and deflection. Other options should include space interceptors that will carry both laser and nuclear ablators for close range work. Response options must be developed to deal with the consequences of an impact should we move too slowly.

  19. Shared processing in multiple object tracking and visual working memory in the absence of response order and task order confounds

    PubMed Central

    Howe, Piers D. L.

    2017-01-01

    To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources. PMID:28410383

  20. Shared processing in multiple object tracking and visual working memory in the absence of response order and task order confounds.

    PubMed

    Lapierre, Mark D; Cropper, Simon J; Howe, Piers D L

    2017-01-01

    To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources.

  1. An analog retina model for detecting dim moving objects against a bright moving background

    NASA Technical Reports Server (NTRS)

    Searfus, R. M.; Colvin, M. E.; Eeckman, F. H.; Teeters, J. L.; Axelrod, T. S.

    1991-01-01

    We are interested in applications that require the ability to track a dim target against a bright, moving background. Since the target signal will be less than or comparable to the variations in the background signal intensity, sophisticated techniques must be employed to detect the target. We present an analog retina model that adapts to the motion of the background in order to enhance targets that have a velocity difference with respect to the background. Computer simulation results and our preliminary concept of an analog 'Z' focal plane implementation are also presented.

  2. Face landmark point tracking using LK pyramid optical flow

    NASA Astrophysics Data System (ADS)

    Zhang, Gang; Tang, Sikan; Li, Jiaquan

    2018-04-01

    LK pyramid optical flow is an effective method for object tracking in video, and in this paper it is used for face landmark point tracking. Seven landmark points are considered: the outer and inner corners of the left eye, the inner and outer corners of the right eye, the tip of the nose, and the left and right corners of the mouth. In the first frame the landmark points are marked by hand; tracking performance is then analyzed over subsequent frames. Two kinds of conditions are considered: single factors, such as the normalized case, pose variation with slow movement, expression variation, illumination variation, occlusion, frontal face moving rapidly, and pose face moving rapidly; and combinations of factors, such as pose and illumination variation, pose and expression variation, pose variation and occlusion, illumination and expression variation, and expression variation and occlusion. Global and local measures are introduced to evaluate tracking performance under the different factors or combinations of factors. The global measures comprise the number of images aligned successfully, the average alignment error, and the number of images aligned before failure; the local measures comprise the number of images aligned successfully for each facial component and the average alignment error for each component. To verify the performance of face landmark point tracking under the different cases, tests were carried out on image sequences gathered by us. Results show that the LK pyramid optical flow method can track face landmark points under the normalized case, expression variation, illumination variation that does not affect facial details, and pose variation, and that different factors or combinations of factors affect alignment performance differently for different landmark points.
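
    The core of each pyramid level in LK tracking is a least-squares solve for a small displacement from image gradients. Below is a single-window, single-level sketch in NumPy; the paper uses the full pyramidal algorithm (as implemented, for example, in OpenCV's calcOpticalFlowPyrLK), and this toy version is ours:

```python
import numpy as np

def lk_step(I, J):
    """One Lucas-Kanade step: estimate the small (row, col) displacement d
    such that J(x) ~ I(x - d), from image gradients over a single window.
    (Pure-NumPy toy version of the principle behind pyramidal LK tracking.)"""
    Iy, Ix = np.gradient(I)   # gradients along rows and columns
    It = J - I                # temporal difference between the two frames
    # Normal equations of the least-squares problem [Ix Iy] d = -It
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    dcol, drow = np.linalg.solve(G, b)
    return drow, dcol
```

    Pyramidal LK repeats this solve from coarse to fine resolution around each landmark point, which is what lets it follow the larger displacements of a rapidly moving face.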

  3. Segmentation and tracking in echocardiographic sequences: active contours guided by optical flow estimates

    NASA Technical Reports Server (NTRS)

    Mikic, I.; Krucinski, S.; Thomas, J. D.

    1998-01-01

    This paper presents a method for segmentation and tracking of cardiac structures in ultrasound image sequences. The developed algorithm is based on the active contour framework. This approach requires initial placement of the contour close to the desired position in the image, usually an object outline. Best contour shape and position are then calculated, assuming that at this configuration a global energy function, associated with a contour, attains its minimum. Active contours can be used for tracking by selecting a solution from a previous frame as an initial position in a present frame. Such an approach, however, fails for large displacements of the object of interest. This paper presents a technique that incorporates the information on pixel velocities (optical flow) into the estimate of initial contour to enable tracking of fast-moving objects. The algorithm was tested on several ultrasound image sequences, each covering one complete cardiac cycle. The contour successfully tracked boundaries of mitral valve leaflets, aortic root and endocardial borders of the left ventricle. The algorithm-generated outlines were compared against manual tracings by expert physicians. The automated method resulted in contours that were within the boundaries of intraobserver variability.

  4. Grasping rigid objects in zero-g

    NASA Astrophysics Data System (ADS)

    Anderson, Greg D.

    1993-12-01

    The extra vehicular activity helper/retriever (EVAHR) is a prototype for an autonomous free- flying robotic astronaut helper. The ability to grasp a moving object is a fundamental skill required for any autonomous free-flyer. This paper discusses an algorithm that couples resolved acceleration control with potential field based obstacle avoidance to enable a manipulator to track and capture a rigid object in (imperfect) zero-g while avoiding joint limits, singular configurations, and unintentional impacts between the manipulator and the environment.

  5. Learning an intrinsic-variable preserving manifold for dynamic visual tracking.

    PubMed

    Qiao, Hong; Zhang, Peng; Zhang, Bo; Zheng, Suiwu

    2010-06-01

    Manifold learning is a hot topic in the field of computer science, particularly since nonlinear dimensionality reduction based on manifold learning was proposed in Science in 2000. The work has achieved great success. The main purpose of current manifold-learning approaches is to search for independent intrinsic variables underlying high dimensional inputs which lie on a low dimensional manifold. In this paper, a new manifold is built up in the training step of the process, on which the input training samples are set to be close to each other if the values of their intrinsic variables are close to each other. Then, the process of dimensionality reduction is transformed into a procedure of preserving the continuity of the intrinsic variables. By utilizing the new manifold, the dynamic tracking of a human who can move and rotate freely is achieved. From the theoretical point of view, it is the first approach to transfer the manifold-learning framework to dynamic tracking. From the application point of view, a new and low dimensional feature for visual tracking is obtained and successfully applied to the real-time tracking of a free-moving object from a dynamic vision system. Experimental results from a dynamic tracking system which is mounted on a dynamic robot validate the effectiveness of the new algorithm.

  6. Use of inertial properties to orient tomatoes

    USDA-ARS?s Scientific Manuscript database

    Recent theoretical and experimental results have demonstrated that it is possible to orient quasi-round objects such as apples by taking advantage of inertial-effects during rotation. In practice, an apple rolled down a track consisting of two parallel rails tends to move to an orientation where the...

  7. Interpretation of the function of the striate cortex

    NASA Astrophysics Data System (ADS)

    Garner, Bernardette M.; Paplinski, Andrew P.

    2000-04-01

    Biological neural networks do not require retraining every time objects move in the visual field. Conventional computer neural networks do not share this shift-invariance. The brain compensates for movements in the head, body, eyes and objects by allowing the sensory data to be tracked across the visual field. The neurons in the striate cortex respond to objects moving across the field of vision, as is seen in many experiments. It is proposed that the neurons in the striate cortex allow the continuous angle changes needed to compensate for changes in orientation of the head, eyes and the motion of objects in the field of vision. It is hypothesized that the neurons in the striate cortex form a system that allows for the translation, some rotation and scaling of objects and provides a continuity of objects as they move relative to other objects. The neurons in the striate cortex respond to features which are fundamental to sight, such as orientation of lines, direction of motion, color and contrast. The neurons that respond to these features are arranged on the cortex in a way that depends on the features they are responding to and on the area of the retina from which they receive their inputs.

  8. Security Applications Of Computer Motion Detection

    NASA Astrophysics Data System (ADS)

    Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry

    1987-05-01

    An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, mean and median tests and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.

  9. Nearly automatic motion capture system for tracking octopus arm movements in 3D space.

    PubMed

    Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar

    2009-08-30

    Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects.

  10. Design of a real-time system of moving ship tracking on-board based on FPGA in remote sensing images

    NASA Astrophysics Data System (ADS)

    Yang, Tie-jun; Zhang, Shen; Zhou, Guo-qing; Jiang, Chuan-xian

    2015-12-01

    With the broad attention of countries to sea transportation and trade safety, the requirements on the efficiency and accuracy of moving ship tracking are becoming higher. We therefore propose a systematic design for on-board moving ship tracking based on FPGA, which uses an Adaptive Inter Frame Difference (AIFD) method to track ships moving at different speeds. The Frame Difference (FD) method is simple but computationally heavy, making it well suited to parallel implementation on an FPGA. However, the Frame Intervals (FIs) of the traditional FD method are fixed, and in remote sensing images a ship appears very small (depicted by only dozens of pixels) and moves slowly; with fixed FIs, the accuracy of FD for moving ship tracking is unsatisfactory and the calculation is highly redundant. We therefore adapt FD using adaptive extraction of key frames for moving ship tracking. A FPGA development board of the Xilinx Kintex-7 series is used for simulation. The experiments show that, compared with the traditional FD method, the proposed one achieves higher accuracy of moving ship tracking and can meet the requirement of real-time tracking at high image resolution.
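
    One plausible reading of adaptive key-frame extraction for frame differencing can be sketched as follows: difference each frame against a stored key frame, and promote a new key frame only once the slowly moving ship has produced enough change (the thresholds and update rule here are our assumptions, not the paper's):

```python
import numpy as np

def adaptive_frame_difference(frames, diff_threshold, update_fraction=0.01):
    """Frame differencing against an adaptively refreshed key frame: the key
    frame is replaced only once enough pixels have changed, so a slowly
    moving target accumulates a detectable difference (illustrative sketch).
    Returns one boolean change mask per frame after the first."""
    key = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float) - key)
        mask = diff > diff_threshold
        masks.append(mask)
        # Adapt the effective frame interval: promote the current frame to
        # key frame only when the changed-pixel fraction is significant.
        if mask.mean() > update_fraction:
            key = frame.astype(float)
    return masks
```

    Because each per-pixel difference is independent, this loop body maps naturally onto the parallel pipeline an FPGA provides.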

  11. KSC-2009-4622

    NASA Image and Video Library

    2009-07-23

    CAPE CANAVERAL, Fla. – In the Astrotech payload processing facility in Titusville, Fla., the STSS Demonstrator SV-1 spacecraft is being moved to a stand. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Tim Jacobs (Approved for Public Release 09-MDA-4800 [30 July 09])

  12. KSC-2009-3668

    NASA Image and Video Library

    2009-05-01

    CAPE CANAVERAL, Fla. – The STSS Demonstrator SV-2 spacecraft is moved inside a building at the Astrotech payload processing facility in Titusville, Fla. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Jack Pfaller (Approved for Public Release 09-MDA-4616 [27 May 09])

  13. KSC-2009-4618

    NASA Image and Video Library

    2009-06-25

    CAPE CANAVERAL, Fla. – The SV-1 cargo of the STSS Demonstrator spacecraft is moved into the Astrotech payload processing facility in Titusville, Fla. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Kim Shiflett (Approved for Public Release 09-MDA-4804 [4 Aug 09])

  14. KSC-2009-4624

    NASA Image and Video Library

    2009-07-23

    CAPE CANAVERAL, Fla. – In the Astrotech payload processing facility in Titusville, Fla., the STSS Demonstrator SV-1 spacecraft is moved toward the orbital insertion system. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Tim Jacobs (Approved for Public Release 09-MDA-4800 [30 July 09])

  15. Motion estimation of subcellular structures from fluorescence microscopy images.

    PubMed

    Vallmitjana, A; Civera-Tregon, A; Hoenicka, J; Palau, F; Benitez, R

    2017-07-01

    We present an automatic image processing framework to study moving intracellular structures from live cell fluorescence microscopy. The system includes the identification of static and dynamic structures from time-lapse images using data clustering as well as the identification of the trajectory of moving objects with a probabilistic tracking algorithm. The method has been successfully applied to study mitochondrial movement in neurons. The approach provides excellent performance under different experimental conditions and is robust to common sources of noise including experimental, molecular and biological fluctuations.

  16. Sustained attention to objects' motion sharpens position representations: Attention to changing position and attention to motion are distinct.

    PubMed

    Howard, Christina J; Rollings, Victoria; Hardie, Amy

    2017-06-01

    In tasks where people monitor moving objects, such as the multiple object tracking task (MOT), observers attempt to keep track of targets as they move amongst distracters. The literature is mixed as to whether observers make use of motion information to facilitate performance. We sought to address this by two means: first, by superimposing arrows on objects which varied in their informativeness about motion direction, and second, by asking observers to attend to motion direction. Using a position monitoring task, we calculated mean error magnitudes as a measure of the precision with which target positions are represented. We also calculated perceptual lags versus extrapolated reports, which are the times at which positions of targets best match position reports. We find that the presence of motion information in the form of superimposed arrows made no difference to position report precision or perceptual lag. However, when we explicitly instructed observers to attend to motion, we saw facilitatory effects on position reports and, in some cases for small set sizes, reports that best matched extrapolated rather than lagging positions. The results indicate that attention to changing positions does not automatically recruit attention to motion, showing a dissociation between sustained attention to changing positions and attention to motion. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Motion detection, novelty filtering, and target tracking using an interferometric technique with GaAs phase conjugate mirror

    NASA Technical Reports Server (NTRS)

    Cheng, Li-Jen (Inventor); Liu, Tsuen-Hsi (Inventor)

    1991-01-01

    A method and apparatus for detecting and tracking moving objects in a noisy environment cluttered with fast- and slow-moving objects and other time-varying background. A pair of phase conjugate light beams carrying the same spatial information commonly cancel each other out through an image subtraction process in a phase conjugate interferometer, wherein gratings are formed in a fast photorefractive phase conjugate mirror material. In the steady state, there is no output. When the optical path of one of the two phase conjugate beams is suddenly changed, the return beam loses its phase conjugate nature and the interferometer is out of balance, resulting in an observable output. The observable output lasts until the phase conjugate nature of the beam has recovered. The observable time of the output signal is roughly equal to the formation time of the grating. If the optical path changing time is slower than the formation time, the change of optical path becomes unobservable, because the index grating can follow the change. Thus, objects traveling at speeds which result in a path changing time which is slower than the formation time are not observable and do not clutter the output image view.

  18. Time-Domain Simulation of Along-Track Interferometric SAR for Moving Ocean Surfaces.

    PubMed

    Yoshida, Takero; Rheem, Chang-Kyu

    2015-06-10

    A time-domain simulation of along-track interferometric synthetic aperture radar (AT-InSAR) has been developed to support ocean observations. The simulation is in the time domain and based on Bragg scattering to be applicable for moving ocean surfaces. The time-domain simulation is suitable for examining velocities of moving objects. The simulation obtains the time series of microwave backscattering as raw signals for movements of ocean surfaces. In terms of realizing Bragg scattering, the computational grid elements for generating the numerical ocean surface are set to be smaller than the wavelength of the Bragg resonant wave. In this paper, the simulation was conducted for a Bragg resonant wave and irregular waves with currents. As a result, the phases of the received signals from two antennas differ due to the movement of the numerical ocean surfaces. The phase differences shifted by currents were in good agreement with the theoretical values. Therefore, the adaptability of the simulation to observe velocities of ocean surfaces with AT-InSAR was confirmed.

  19. Time-Domain Simulation of Along-Track Interferometric SAR for Moving Ocean Surfaces

    PubMed Central

    Yoshida, Takero; Rheem, Chang-Kyu

    2015-01-01

    A time-domain simulation of along-track interferometric synthetic aperture radar (AT-InSAR) has been developed to support ocean observations. The simulation is in the time domain and based on Bragg scattering to be applicable for moving ocean surfaces. The time-domain simulation is suitable for examining velocities of moving objects. The simulation obtains the time series of microwave backscattering as raw signals for movements of ocean surfaces. In terms of realizing Bragg scattering, the computational grid elements for generating the numerical ocean surface are set to be smaller than the wavelength of the Bragg resonant wave. In this paper, the simulation was conducted for a Bragg resonant wave and irregular waves with currents. As a result, the phases of the received signals from two antennas differ due to the movement of the numerical ocean surfaces. The phase differences shifted by currents were in good agreement with the theoretical values. Therefore, the adaptability of the simulation to observe velocities of ocean surfaces with AT-InSAR was confirmed. PMID:26067197

  20. Model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald

    1992-01-01

    This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model-based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels per image frame. This work has applications for docking, assembly, retrieval of floating objects, and a host of other space-related tasks.

  1. Detection and tracking of drones using advanced acoustic cameras

    NASA Astrophysics Data System (ADS)

    Busset, Joël; Perrodin, Florian; Wellig, Peter; Ott, Beat; Heutschi, Kurt; Rühl, Torben; Nussbaumer, Thomas

    2015-10-01

    Recent events of drones flying over city centers, official buildings and nuclear installations have stressed the growing threat of uncontrolled drone proliferation and the lack of real countermeasures. Indeed, detecting and tracking them can be difficult with traditional techniques. A system to acoustically detect and track small moving objects, such as drones or ground robots, using acoustic cameras is presented. The described sensor is completely passive and is composed of a 120-element microphone array and a video camera. The acoustic imaging algorithm determines in real time the sound power level coming from all directions, using the phase of the sound signals. A tracking algorithm is then able to follow the sound sources. Additionally, a beamforming algorithm selectively extracts the sound coming from each tracked sound source. This extracted sound signal can be used to identify sound signatures and determine the type of object. The described techniques can detect and track any object that produces noise (engines, propellers, tires, etc.). It is a good complementary approach to more traditional techniques such as (i) optical and infrared cameras, for which the object may only represent a few pixels and may be hidden by the blooming of a bright background, and (ii) radar or other echo-localization techniques, which suffer from the weakness of the echo signal returning to the sensor. The detection distance depends on the type (frequency range) and volume of the noise emitted by the object, and on the background noise of the environment. Detection range and resilience to background noise were tested in both laboratory environments and outdoor conditions. It was determined that drones can be tracked at ranges of 160 to 250 meters, depending on their type. Speech extraction was also experimentally investigated: the speech signal of a person 80 to 100 meters away can be captured with acceptable speech intelligibility.
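The core of the acoustic imaging step is delay-and-sum beamforming: align the microphone signals for one look direction and measure the resulting power. A minimal integer-sample sketch follows (the function name and array layout are assumptions; a real acoustic camera uses fractional-delay filtering and far more look directions):

```python
import numpy as np

def delay_and_sum_power(signals, mic_positions, direction, fs, c=343.0):
    """Delay-and-sum beamformer power for one look direction.

    signals: (n_mics, n_samples) array; mic_positions: (n_mics, 3) in metres;
    direction: unit vector from the array toward the source; fs in Hz.
    Delays are applied as integer sample shifts, a simplification of the
    fractional-delay filtering a real acoustic camera would use.
    """
    # Arrival delay at each mic: mics farther along the look direction
    # (closer to a far-field source) receive the wavefront earlier.
    arrival = -(mic_positions @ direction) / c
    shifts = np.round((arrival - arrival.min()) * fs).astype(int)
    n = signals.shape[1] - shifts.max()
    aligned = np.stack([s[k:k + n] for s, k in zip(signals, shifts)])
    beam = aligned.mean(axis=0)          # coherent sum toward `direction`
    return float(np.mean(beam ** 2))     # beam power for this direction
```

Scanning `direction` over a grid of bearings and plotting the returned power is exactly the "sound power level coming from all directions" image the abstract describes.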

  2. Design and Development of a High Speed Sorting System Based on Machine Vision Guiding

    NASA Astrophysics Data System (ADS)

    Zhang, Wenchang; Mei, Jiangping; Ding, Yabin

    In this paper, a vision-based control strategy to perform high-speed pick-and-place tasks on an automated product line is proposed, and the relevant control software is developed. A Delta robot controls a suction cup to grasp disordered objects from one moving conveyor and place them on another in order. A CCD camera captures one image every time the conveyor moves a distance ds. Object positions and shapes are obtained after image processing. A target tracking method based on "servo motor + synchronous conveyor" is used to perform the high-speed sorting operation in real time. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-based control strategy.
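The "servo motor + synchronous conveyer" tracking described above amounts to dead-reckoning each object along the belt axis between the camera snapshot and the pick, using the conveyor encoder as the distance reference. A minimal sketch (the function name, units and numbers are invented for illustration):

```python
def predict_object_position(x_at_capture_mm, encoder_at_capture, encoder_now, mm_per_count):
    """Predict where a part photographed earlier sits on the belt now.

    The encoder count difference, scaled by the belt's mm-per-count
    calibration, gives the distance the part has travelled since capture.
    """
    return x_at_capture_mm + (encoder_now - encoder_at_capture) * mm_per_count

# A part seen at 100 mm, after 500 encoder counts at 0.2 mm/count,
# is now predicted at 200 mm along the belt.
x_now = predict_object_position(100.0, 0, 500, 0.2)
```

The robot's pick target is then this predicted position plus the further travel expected during the robot's own motion time.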

  3. Error analysis of motion correction method for laser scanning of moving objects

    NASA Astrophysics Data System (ADS)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned must be static. The need to scan moving objects has resulted in the development of new methods capable of generating the correct 3D geometry of moving objects. Only a few methods capable of handling object motion during scanning have been reported, each using its own models or sensors, and studies on error modelling or analysis of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such "motion correction" method. This method assumes availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, thus recovering the correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or parked at sea, and in scanning other objects like hot air balloons or aerostats. It is to be noted that the other "motion correction" methods described in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to obtain insights into the optimal utilization of available components for achieving the best results.
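The correction step being analyzed, pulling each time-tagged laser return back through the object's instantaneous pose, can be sketched as follows. The `pose_fn` interface is an assumption for this illustration; the paper's actual formulation and error model are richer:

```python
import numpy as np

def motion_correct(points, times, pose_fn):
    """Re-express time-tagged scanner returns in the moving object's frame.

    pose_fn(t) -> (R, T): the object's rotation matrix and position in the
    scanner/world frame at time t (e.g. interpolated from an onboard POS).
    Each return is pulled back into the body frame, p_body = R^T (p - T),
    so the assembled cloud is rigid despite the object's motion.
    """
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, times)):
        R, T = pose_fn(t)
        out[i] = R.T @ (p - T)
    return out
```

Errors in R and T propagate directly through this transform, which is why the paper's error budget is dominated by the POS/tracking sensor terms.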

  4. Simultaneous 3D-vibration measurement using a single laser beam device

    NASA Astrophysics Data System (ADS)

    Brecher, Christian; Guralnik, Alexander; Baümler, Stephan

    2012-06-01

    Today's commercial solutions for vibration measurement and modal analysis are 3D-scanning laser Doppler vibrometers, mainly used for open surfaces in the automotive and aerospace industries, and classic triaxial accelerometers, used in civil engineering, in most industrial applications in manufacturing environments, and particularly for partially closed structures. This paper presents a novel measurement approach using a single laser beam device and optical reflectors to simultaneously perform 3D dynamic measurement as well as geometry measurement of the investigated object. We show the application of this so-called laser tracker for modal testing of structures on a mechanical manufacturing shop floor. A holistic measurement method is developed comprising manual reflector placement, semi-automated geometric modeling of the investigated objects, and fully automated vibration measurement up to 1000 Hz and down to amplitudes of a few microns. Additionally, a fast-setup dynamic measurement of moving objects using a tracking technique is presented that uses only the device's own functionalities and requires neither a predefined moving path of the target nor electronic synchronization to the moving object.

  5. Detection and tracking of a moving target using SAR images with the particle filter-based track-before-detect algorithm.

    PubMed

    Gao, Han; Li, Jingwen

    2014-06-19

    A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking the low signal-to-noise ratio (SNR) moving target with SAR systems, which the traditional track-after-detect (TAD) approach is inadequate for. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to the true estimation. With the sub-area substituted for the whole area to calculate the likelihood ratio and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in the detection and tracking performance. The feasibility of the approach is validated and the performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with an SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB.
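As an illustration of the track-before-detect idea only (not the paper's SAR-specific algorithm: the likelihood model, state space and numbers below are simplified assumptions), a minimal particle-filter TBD over a stack of noisy frames might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_tbd(frames, n_particles=3000, noise_sigma=1.0):
    """Minimal particle-filter track-before-detect on a stack of 2D frames.

    Particles carry (row, col, v_row, v_col). Each frame, particles are
    propagated with a near-constant-velocity model and reweighted by the
    pixel intensity under them (a crude stand-in for the local likelihood
    ratio; the paper's SAR signal model is far richer). Returns the
    weighted-mean position per frame.
    """
    h, w = frames[0].shape
    # Uniform initialisation over position, small random velocities.
    parts = np.column_stack([
        rng.uniform(0, h, n_particles),
        rng.uniform(0, w, n_particles),
        rng.normal(0, 1, n_particles),
        rng.normal(0, 1, n_particles),
    ])
    est = []
    for frame in frames:
        # Predict: constant velocity plus process noise.
        parts[:, 0] += parts[:, 2] + rng.normal(0, 0.3, n_particles)
        parts[:, 1] += parts[:, 3] + rng.normal(0, 0.3, n_particles)
        r = np.clip(parts[:, 0], 0, h - 1).astype(int)
        c = np.clip(parts[:, 1], 0, w - 1).astype(int)
        # Update: weight by exp(intensity), a likelihood-ratio proxy.
        wts = np.exp(frame[r, c] / noise_sigma)
        wts /= wts.sum()
        est.append((parts[:, :2] * wts[:, None]).sum(axis=0))
        # Resample (multinomial) to concentrate on the target.
        parts = parts[rng.choice(n_particles, n_particles, p=wts)]
    return np.array(est)
```

The defining TBD property is visible here: no per-frame detection threshold is ever applied; the energy of a sub-threshold target is integrated across frames through the particle weights until the track emerges.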

  6. Detection and Tracking of a Moving Target Using SAR Images with the Particle Filter-Based Track-Before-Detect Algorithm

    PubMed Central

    Gao, Han; Li, Jingwen

    2014-01-01

    A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking the low signal-to-noise ratio (SNR) moving target with SAR systems, which the traditional track-after-detect (TAD) approach is inadequate for. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to the true estimation. With the sub-area substituted for the whole area to calculate the likelihood ratio and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in the detection and tracking performance. The feasibility of the approach is validated and the performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with an SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB. PMID:24949640

  7. Hyperspectral Imager-Tracker

    NASA Technical Reports Server (NTRS)

    Agurok, Ilya

    2013-01-01

    The Hyperspectral Imager-Tracker (HIT) is a technique for visualization and tracking of low-contrast, fast-moving objects. The HIT architecture is based on an innovative and only recently developed concept in imaging optics. This architecture gives the Light Prescriptions Innovators (LPI) HIT the ability to simultaneously collect spectral band images (the hyperspectral cube) and IR images, and to operate with high light-gathering power and high magnification for multiple fast-moving objects. Adaptive spectral filtering algorithms efficiently increase the contrast of low-contrast scenes. The most hazardous parts of a space mission are the first stage of a launch and the last 10 kilometers of the landing trajectory. In general, a close watch on spacecraft operation is required at distances up to 70 km. Tracking at such distances is usually associated with the use of radar, but its milliradian angular resolution translates to 100-m spatial resolution at 70-km distance. With sufficient power, radar can track a spacecraft as a whole object, but will not provide detail in the case of an accident, particularly for small debris in the one-meter range, which can only be achieved optically. It will be important to track the debris, which could disintegrate further into more debris, all the way to the ground. Such fragmentation could cause ballistic predictions, based on observations using high-resolution but narrow-field optics for only the first few seconds of the event, to be inaccurate. No existing optical imager architecture satisfies NASA requirements. The HIT was developed for space vehicle tracking, in-flight inspection, and, in the case of an accident, a detailed recording of the event.
The system is a combination of five subsystems: (1) a roving fovea telescope with a wide 30° field of regard; (2) narrow, high-resolution fovea field optics; (3) a Coude optics system for telescope output beam stabilization; (4) a hyperspectral-multispectral imaging assembly; and (5) image analysis software with an effective adaptive spectral filtering algorithm for real-time contrast enhancement.

  8. Tracking with the mind's eye

    NASA Technical Reports Server (NTRS)

    Krauzlis, R. J.; Stone, L. S.

    1999-01-01

    The two components of voluntary tracking eye-movements in primates, pursuit and saccades, are generally viewed as relatively independent oculomotor subsystems that move the eyes in different ways using independent visual information. Although saccades have long been known to be guided by visual processes related to perception and cognition, only recently have psychophysical and physiological studies provided compelling evidence that pursuit is also guided by such higher-order visual processes, rather than by the raw retinal stimulus. Pursuit and saccades also do not appear to be entirely independent anatomical systems, but involve overlapping neural mechanisms that might be important for coordinating these two types of eye movement during the tracking of a selected visual object. Given that the recovery of objects from real-world images is inherently ambiguous, guiding both pursuit and saccades with perception could represent an explicit strategy for ensuring that these two motor actions are driven by a single visual interpretation.

  9. Inside the Black Box

    ERIC Educational Resources Information Center

    Kao, Yvonne S.; Cina, Anthony; Gimm, J. Aura

    2006-01-01

    Scientists often have to observe and study surfaces that are impossible or impractical to see directly, such as the ocean floor or the atomic surfaces of objects. Early in the history of oceanography scientists dropped weighted cables to the bottom of the ocean. By moving across the ocean at regular intervals and keeping track of how deep the…

  10. TrackTable Trajectory Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Andrew T.

    Tracktable is designed for analysis and rendering of the trajectories of moving objects such as planes, trains, automobiles and ships. Its purpose is to operate on large sets of trajectories (millions) to help a user detect, analyze and display patterns. It will also be used to disseminate trajectory research results from Sandia's PANTHER Grand Challenge LDRD.

  11. Foil Artists

    ERIC Educational Resources Information Center

    Szekely, George

    2010-01-01

    Foil can be shaped into almost anything--it is the all-purpose material for children's art. Foil is a unique drawing surface. It reflects, distorts and plays with light and imagery as young artists draw over it. Foil permits quick impressions of a model or object to be sketched. Foil allows artists to track their drawing moves, seeing the action…

  12. Detection of a faint fast-moving near-Earth asteroid using the synthetic tracking technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Chengxing; Shao, Michael; Nemati, Bijan

    We report a detection of a faint near-Earth asteroid (NEA) using our synthetic tracking technique and the CHIMERA instrument on the Palomar 200 inch telescope. With an apparent magnitude of 23 (H = 29, assuming detection at 20 lunar distances), the asteroid was moving at 6.°32 day⁻¹ and was detected at a signal-to-noise ratio (S/N) of 15 using 30 s of data taken at a 16.7 Hz frame rate. The detection was confirmed by a second observation 77 minutes later at the same S/N. Because of its high proper motion, the NEA moved 7 arcsec over the 30 s of observation. Synthetic tracking avoided image degradation due to trailing loss that affects conventional techniques relying on 30 s exposures; the trailing loss would have degraded the surface brightness of the NEA image on the CCD down to an approximate magnitude of 25, making the object undetectable. This detection was a result of our 12 hr blind search conducted on the Palomar 200 inch telescope over two nights, scanning twice over six (5.°3 × 0.°046) fields. Detecting only one asteroid is consistent with Harris's estimates for the distribution of the asteroid population, which were used to predict a detection of 1.2 NEAs in the H-magnitude range 28-31 for the two nights. The experimental design, data analysis methods, and algorithms are presented. We also demonstrate milliarcsecond-level astrometry using observations of two known bright asteroids on the same system with synthetic tracking. We conclude by discussing strategies for scheduling observations to detect and characterize small and fast-moving NEAs using the new technique.
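Synthetic tracking is, at its core, a shift-and-add search over velocity hypotheses: the fast frame stack is co-added along each candidate motion, and only the correct hypothesis concentrates the target's flux into a single bright pixel, recovering the SNR lost to trailing. A hedged sketch with integer-pixel shifts (names and the hypothesis grid are illustrative; a real pipeline interpolates sub-pixel shifts and handles noise statistics carefully):

```python
import numpy as np

def synthetic_track(frames, vr_range, vc_range, dt=1.0):
    """Shift-and-add a frame stack over a grid of velocity hypotheses.

    For each candidate (vr, vc) in pixels per frame interval, every frame
    is shifted back along that velocity and the stack is summed; a moving
    point source snaps into one bright pixel only at the correct velocity.
    Integer shifts via np.roll keep the sketch short.
    """
    best = None
    for vr in vr_range:
        for vc in vc_range:
            acc = np.zeros_like(frames[0])
            for k, f in enumerate(frames):
                shift = (-int(round(vr * k * dt)), -int(round(vc * k * dt)))
                acc += np.roll(f, shift, axis=(0, 1))
            peak = acc.max()
            if best is None or peak > best[0]:
                best = (peak, vr, vc, np.unravel_index(acc.argmax(), acc.shape))
    return best  # (peak value, vr, vc, (row, col) of the source at t=0)
```

The cost grows with the number of velocity hypotheses times frames, which is why such searches are typically run on GPUs for blind surveys.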

  13. KSC-2009-4614

    NASA Image and Video Library

    2009-06-25

    CAPE CANAVERAL, Fla. – At NASA Kennedy Space Center's Shuttle Landing Facility, the SV-1 cargo of the STSS Demonstrator spacecraft is moved onto a flatbed truck for transfer to the Astrotech payload processing facility in Titusville, Fla. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Kim Shiflett (Approved for Public Release 09-MDA-4804 [4 Aug 09] )

  14. KSC-2009-4615

    NASA Image and Video Library

    2009-06-25

    CAPE CANAVERAL, Fla. – At NASA Kennedy Space Center's Shuttle Landing Facility, the flatbed truck with the SV-1 cargo of the STSS Demonstrator spacecraft begins moving to the Astrotech payload processing facility in Titusville, Fla. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Kim Shiflett (Approved for Public Release 09-MDA-4804 [4 Aug 09] )

  15. Vision-based object detection and recognition system for intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Ran, Bin; Liu, Henry X.; Martono, Wilfung

    1999-01-01

    Recently, a proactive crash mitigation system has been proposed to enhance the crash avoidance and survivability of intelligent vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as system component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects by using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems: the vehicle detection and recognition sub-system and the traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames. The analysis of the single image frame is performed every ten full-size images. The information model obtains information related to the object, such as time to collision for the object vehicle and relative distance from the traffic signs. Experimental results demonstrated a robust and accurate system in real-time object detection and recognition over thousands of image frames.

  16. Mid-course multi-target tracking using continuous representation

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Toomarian, Nikzad

    1991-01-01

    The thrust of this paper is to present a new approach to multi-target tracking for the mid-course stage of the Strategic Defense Initiative (SDI). This approach is based upon a continuum representation of a cluster of flying objects. We assume that the velocities of the flying objects can be embedded into a smooth velocity field. This assumption is based upon the impossibility of encounters between the flying objects in a high-density cluster. Therefore, the problem is reduced to the identification of a moving continuum based upon consecutive time frame observations. In contradistinction to previous approaches, here each target is considered as the center of a small continuous neighborhood subjected to a local affine transformation, and therefore the target trajectories do not mix; their apparent mixing in the plane of sensor view is only a projection effect. The approach is illustrated by an example.
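Locally, the continuum assumption means that target positions in consecutive frames are related by an affine map, which neighboring observations determine by least squares. A small sketch under that reading (the function and variable names are invented for illustration, not taken from the paper):

```python
import numpy as np

def fit_affine(p0, p1):
    """Least-squares affine map p1 ≈ A p0 + b between two time frames.

    p0, p1: (n, 2) arrays of corresponding 2D target positions. Treating
    each target as a point of a smooth field, the field is locally affine,
    so A and b are recoverable from a handful of neighboring targets.
    """
    n = p0.shape[0]
    X = np.hstack([p0, np.ones((n, 1))])        # (n, 3) homogeneous inputs
    M, *_ = np.linalg.lstsq(X, p1, rcond=None)  # (3, 2): rows of A^T and b
    A, b = M[:2].T, M[2]
    return A, b
```

With A and b in hand, each target's neighborhood can be propagated forward without ever matching individual detections across frames, which is the point of the continuum view.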

  17. Flower tracking in hawkmoths: behavior and energetics.

    PubMed

    Sprayberry, Jordanna D H; Daniel, Thomas L

    2007-01-01

    As hovering feeders, hawkmoths cope with flower motions by tracking those motions to maintain contact with the nectary. This study examined the tracking, feeding and energetic performance of Manduca sexta feeding from flowers moving at varied frequencies and in different directions. In general we found that tracking performance decreased as frequency increased; M. sexta tracked flowers moving at 1 Hz best. While feeding rates were highest for stationary flowers, they remained relatively constant for all tested frequencies of flower motion. Calculations of net energy gain showed that energy expenditure to track flowers is minimal compared to energy intake; therefore, patterns of net energy gain mimicked patterns of feeding rate. The direction effects of flower motion were greater than the frequency effects. While M. sexta appeared equally capable of tracking flowers moving in the horizontal and vertical motion axes, they demonstrated poor ability to track flowers moving in the looming axis. Additionally, both feeding rates and net energy gain were lower for looming axis flower motions.

  18. Encrypted Three-dimensional Dynamic Imaging using Snapshot Time-of-flight Compressed Ultrafast Photography

    PubMed Central

    Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.

    2015-01-01

    Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium. PMID:26503834

  19. KSC-2009-5060

    NASA Image and Video Library

    2009-08-22

    CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the upper segment of the transportation canister is moved toward the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft, at left. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett

  20. KSC-2009-5061

    NASA Image and Video Library

    2009-08-22

    CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., the upper segment of the transportation canister is moved toward the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, spacecraft, at bottom left. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Kim Shiflett

  1. Tracking the impact of depression in a perspective-taking task.

    PubMed

    Ferguson, Heather J; Cane, James

    2017-11-01

    Research has identified impairments in Theory of Mind (ToM) abilities in depressed patients, particularly in relation to tasks involving empathetic responses and belief reasoning. We aimed to build on this research by exploring the relationship between depressed mood and cognitive ToM, specifically visual perspective-taking ability. Participants with high and low levels of depression were eye-tracked as they completed a perspective-taking task, in which they followed the instructions of a 'director' to move target objects (e.g. a "teapot with spots on") around a grid, in the presence of a temporarily ambiguous competitor object (e.g. a "teapot with stars on"). Importantly, some of the objects in the grid were occluded from the director's (but not the participant's) view. Results revealed no group-based difference in participants' ability to use perspective cues to identify the target object. All participants were faster to select the target object when the competitor was only available to the participant, compared to when the competitor was mutually available to the participant and director. Eye-tracking measures supported this pattern, revealing that perspective directed participants' visual search immediately upon hearing the ambiguous object's name (e.g. "teapot"). We discuss how these results fit with previous studies that have shown a negative relationship between depression and ToM.

  2. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch it in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  3. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    PubMed Central

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-01-01

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684

  4. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units.

    PubMed

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-02-12

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions.

  5. Motion detection, novelty filtering, and target tracking using an interferometric technique with a GaAs phase conjugate mirror

    NASA Technical Reports Server (NTRS)

    Cheng, Li-Jen (Inventor); Liu, Tsuen-Hsi (Inventor)

    1990-01-01

    A method and apparatus is disclosed for detecting and tracking moving objects in a noise environment cluttered with fast- and slow-moving objects and other time-varying background. A pair of phase conjugate light beams carrying the same spatial information cancel each other out through an image subtraction process in a phase conjugate interferometer, wherein gratings are formed in a fast photorefractive phase conjugate mirror material. In the steady state, there is no output. When the optical path of one of the two phase conjugate beams is suddenly changed, the return beam loses its phase conjugate nature and the interferometer is out of balance, resulting in an observable output. The observable output lasts until the phase conjugate nature of the beam has recovered; the observable time of the output signal is roughly equal to the formation time of the grating. If the optical path changing time is slower than the formation time, the change of optical path becomes unobservable, because the index grating can follow the change. Thus, objects traveling at speeds which result in a path changing time slower than the formation time are not observable and do not clutter the output image view.

  6. Gaia-GBOT asteroid finding programme (gbot.obspm.fr)

    NASA Astrophysics Data System (ADS)

    Bouquillon, Sébastien; Altmann, Martin; Taris, Francois; Barache, Christophe; Carlucci, Teddy; Tanga, Paolo; Thuillot, William; Marchant, Jon; Steele, Iain; Lister, Tim; Berthier, Jerome; Carry, Benoit; David, Pedro; Cellino, Alberto; Hestroffer, Daniel J.; Andrei, Alexandre Humberto; Smart, Ricky

    2016-10-01

    The Ground Based Optical Tracking group (GBOT) consists of about ten scientists involved in the Gaia mission by ESA. Its main task is the optical tracking of the Gaia satellite itself [1]. This novel tracking method, in addition to standard radiometric ones, is necessary to ensure that the Gaia mission goal in terms of astrometric precision is reached for all objects. The optical tracking is based on daily observations performed throughout the mission using the optical CCDs of ESO's VST in Chile, of the Liverpool Telescope in La Palma, and of the two LCOGT Faulkes Telescopes in Hawaii and Australia. Each night, GBOT attempts to obtain a sequence of frames covering a 20 min total period close to the Gaia meridian transit time. In each sequence, Gaia is seen as a faint moving object (Rmag ~ 21, speed > 1"/min) and its daily astrometric accuracy has to be better than 0.02" to meet the Gaia mission requirements. The GBOT Astrometric Reduction Pipeline (GARP) [2] has been specifically developed to reach this precision. More recently, a secondary task has been assigned to GBOT, which consists of detecting and analysing Solar System Objects (SSOs) serendipitously recorded in the GBOT data. Indeed, since Gaia oscillates around the Sun-Earth L2 point, the fields of GBOT observations are near the Ecliptic and roughly located opposite the Sun, which is advantageous for SSO observations and studies. In particular, these SSO data can potentially be very useful in determining their absolute magnitudes, with important applications to the scientific exploitation of the WISE and Gaia missions. For these reasons, an automatic SSO detection system has been created to identify moving objects in GBOT sequences of observations. Since the beginning of 2015, this SSO detection system, added to GARP for performing high-precision astrometry for SSOs, has been fully operational. To date, around 9000 asteroids have been detected. The mean delay between the time of observation and the submission of the SSO reduction results to the MPC is less than 12 hours, allowing rapid follow-up of new objects. [1] Altmann et al. 2014, SPIE, 9149. [2] Bouquillon et al. 2014, SPIE, 9152.

  7. Automated segmentation and tracking of non-rigid objects in time-lapse microscopy videos of polymorphonuclear neutrophils.

    PubMed

    Brandes, Susanne; Mokhtari, Zeinab; Essig, Fabian; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-02-01

    Time-lapse microscopy is an important technique to study the dynamics of various biological processes. The labor-intensive manual analysis of microscopy videos is increasingly replaced by automated segmentation and tracking methods. These methods are often limited to certain cell morphologies and/or cell stainings. In this paper, we present an automated segmentation and tracking framework that does not have these restrictions. In particular, our framework handles highly variable cell shapes and does not rely on any cell stainings. Our segmentation approach is based on a combination of spatial and temporal image variations to detect moving cells in microscopy videos. This method yields a sensitivity of 99% and a precision of 95% in object detection. The tracking of cells consists of different steps, starting with single-cell tracking based on a nearest-neighbor approach, followed by detection of cell-cell interactions and splitting of cell clusters, and finally combining tracklets using methods from graph theory. The segmentation and tracking framework was applied to synthetic as well as experimental datasets with varying cell densities implying different numbers of cell-cell interactions. We established a validation framework to measure the performance of our tracking technique. The cell tracking accuracy was found to be >99% for all datasets, indicating a high accuracy for connecting the detected cells between different time points. Copyright © 2014 Elsevier B.V. All rights reserved.
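
    The first tracking stage described above, nearest-neighbor linking of detections between consecutive frames, can be sketched as follows. The gating distance and greedy matching order are simplifying assumptions, and the paper's later stages (cluster splitting, graph-based tracklet merging) are omitted.

```python
import math

def link_frames(prev, curr, max_dist=5.0):
    """Greedily link each detection in curr to the nearest unused one in prev."""
    links, used = [], set()
    for j, c in enumerate(curr):
        best, best_d = None, max_dist
        for i, p in enumerate(prev):
            d = math.dist(p, c)
            if d <= best_d and i not in used:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            links.append((best, j))
    return links

frame_a = [(0.0, 0.0), (10.0, 10.0)]                 # cell centroids at time t
frame_b = [(1.0, 0.5), (10.5, 9.5), (30.0, 30.0)]    # time t+1; third cell is new
links = link_frames(frame_a, frame_b)
print(links)   # [(0, 0), (1, 1)] -- the unmatched cell would start a new tracklet
```

    A detection in the new frame that finds no partner within the gate (the third centroid here) would seed a fresh tracklet for later graph-based merging.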

  8. "SMALLab": Virtual Geology Studies Using Embodied Learning with Motion, Sound, and Graphics

    ERIC Educational Resources Information Center

    Johnson-Glenberg, Mina C.; Birchfield, David; Usyal, Sibel

    2009-01-01

    We present a new and innovative interface that allows the learner's body to move freely in a multimodal learning environment. The Situated Multimedia Arts Learning Laboratory ("SMALLab") uses 3D object tracking, real time graphics, and surround-sound to enhance embodied learning. Our hypothesis is that optimal learning and retention occur when…

  9. Digital-Difference Processing For Collision Avoidance.

    NASA Technical Reports Server (NTRS)

    Shores, Paul; Lichtenberg, Chris; Kobayashi, Herbert S.; Cunningham, Allen R.

    1988-01-01

    Digital system for automotive crash avoidance measures and displays difference in frequency between two sinusoidal input signals of slightly different frequencies. Designed for use with Doppler radars. Characterized as digital mixer coupled to frequency counter measuring difference frequency in mixer output. Technique determines target path mathematically. Used for tracking cars, missiles, bullets, baseballs, and other fast-moving objects.
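
    The "digital mixer coupled to frequency counter" idea can be illustrated numerically: multiply two sampled sinusoids (the mixer), low-pass the product to keep the beat component, and count zero crossings (the frequency counter). The sample rate, filter, and signal frequencies below are illustrative assumptions, not the system's actual parameters.

```python
import math

fs = 10_000                        # sample rate, Hz (assumed)
f1, f2 = 980.0, 1020.0             # two Doppler returns, 40 Hz apart (assumed)
n = fs                             # one second of samples
t = [i / fs for i in range(n)]
mixed = [math.sin(2 * math.pi * f1 * x) * math.sin(2 * math.pi * f2 * x) for x in t]

# crude moving-average low-pass: keeps the |f1 - f2| beat, rejects the f1 + f2 term
w = 50
lp = [sum(mixed[i:i + w]) / w for i in range(n - w)]

# frequency counter: a sinusoid has two zero crossings per cycle
crossings = sum(1 for a, b in zip(lp, lp[1:]) if a * b < 0)
beat_hz = crossings / 2 / ((n - w) / fs)
print(round(beat_hz))              # difference frequency, ~40 Hz
```

    The product sin(2πf₁t)·sin(2πf₂t) equals ½cos(2π(f₂−f₁)t) − ½cos(2π(f₁+f₂)t); the averaging window is chosen so the sum frequency falls on a null of the boxcar filter, leaving a clean beat tone to count.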

  10. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. In recent years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be suitably integrated into current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, where we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  11. Vehicle Detection with Occlusion Handling, Tracking, and OC-SVM Classification: A High Performance Vision-Based System

    PubMed Central

    Velazquez-Pupo, Roxana; Sierra-Romero, Alberto; Torres-Roman, Deni; Shkvarko, Yuriy V.; Romero-Delgado, Misael

    2018-01-01

    This paper presents a high performance vision-based system with a single static camera for traffic surveillance, for moving vehicle detection with occlusion handling, tracking, counting, and One Class Support Vector Machine (OC-SVM) classification. In this approach, moving objects are first segmented from the background using the adaptive Gaussian Mixture Model (GMM). After that, several geometric features are extracted, such as vehicle area, height, width, centroid, and bounding box. As occlusion is present, an algorithm was implemented to reduce it. The tracking is performed with adaptive Kalman filter. Finally, the selected geometric features: estimated area, height, and width are used by different classifiers in order to sort vehicles into three classes: small, midsize, and large. Extensive experimental results in eight real traffic videos with more than 4000 ground truth vehicles have shown that the improved system can run in real time under an occlusion index of 0.312 and classify vehicles with a global detection rate or recall, precision, and F-measure of up to 98.190%, and an F-measure of up to 99.051% for midsize vehicles. PMID:29382078
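
    As a rough stand-in for the paper's pipeline front end, the sketch below replaces the adaptive GMM with a per-pixel running-average background model and then extracts the same geometric features the paper lists (area, height, width, centroid, bounding box). The learning rate and threshold are assumptions for illustration.

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background update (simplified stand-in for the adaptive GMM)."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    return [[abs(f - b) > thresh for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def blob_features(mask):
    """Area, height, width, centroid and bounding box of the foreground pixels."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    if not pts:
        return None
    rs, cs = [p[0] for p in pts], [p[1] for p in pts]
    box = (min(rs), min(cs), max(rs), max(cs))
    return {"area": len(pts),
            "height": box[2] - box[0] + 1,
            "width": box[3] - box[1] + 1,
            "centroid": (sum(rs) / len(pts), sum(cs) / len(pts)),
            "bbox": box}

bg = [[0.0] * 6 for _ in range(5)]
frame = [row[:] for row in bg]
for r in range(1, 4):              # paint a 3-row by 2-column "vehicle"
    for c in range(2, 4):
        frame[r][c] = 200.0

feats = blob_features(foreground_mask(bg, frame))
print(feats["area"], feats["height"], feats["width"])   # 6 3 2
bg = update_background(bg, frame)  # background slowly absorbs the static scene
```

    These per-blob features are exactly what the paper then feeds to the Kalman tracker and the size-based classifiers.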

  12. An automated data exploitation system for airborne sensors

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2014-06-01

    Advanced wide area persistent surveillance (WAPS) sensor systems on manned or unmanned airborne vehicles are essential for wide-area urban security monitoring in order to protect our people and our warfighter from terrorist attacks. Currently, human (imagery) analysts process huge data collections from full motion video (FMV) for data exploitation and analysis (real-time and forensic), providing slow and inaccurate results. An Automated Data Exploitation System (ADES) is urgently needed. In this paper, we present a recently developed ADES for airborne vehicles under heavy urban background clutter conditions. This system includes four processes: (1) fast image registration, stabilization, and mosaicking; (2) advanced non-linear morphological moving target detection; (3) robust multiple target (vehicles, dismounts, and human) tracking (up to 100 target tracks); and (4) moving or static target/object recognition (super-resolution). Test results with real FMV data indicate that our ADES can reliably detect, track, and recognize multiple vehicles under heavy urban background clutters. Furthermore, our example shows that ADES as a baseline platform can provide capability for vehicle abnormal behavior detection to help imagery analysts quickly trace down potential threats and crimes.

  13. Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter

    NASA Astrophysics Data System (ADS)

    Murphy, T.; Holzinger, M.

    2016-09-01

    Both space situational awareness (SSA) and space domain awareness (SDA) necessitate uncued, partially informed detection and orbit determination efforts for small space objects which often produce only low-strength electro-optical signatures. General frame-to-frame detection and tracking of objects includes methods such as moving target indication, multiple hypothesis testing, direct track-before-detect methods, and random finite set based multi-object tracking. This paper applies the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary novel contribution of this paper is a detailed analysis of the existing state-of-the-art likelihood functions and of a likelihood function, based on a binary hypothesis, previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5m Raven-class telescope and a twenty-degree field of view high frame rate CMOS sensor. In particular, a data set of an extended pass of the Hitomi (Astro-H) satellite approximately 3 days after loss of communication and potential break-up is examined.
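
    The binary-hypothesis likelihood the abstract mentions can be illustrated with a per-pixel Gaussian model: compare the probability of a measured intensity under H1 (object present) versus H0 (background only). The means, noise level, and example intensities below are illustrative assumptions, not the authors' calibrated values.

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def log_likelihood_ratio(intensity, mu0=100.0, mu1=110.0, sigma=20.0):
    """log p(z | H1) - log p(z | H0); positive favours 'object present'."""
    return math.log(gauss_pdf(intensity, mu1, sigma) /
                    gauss_pdf(intensity, mu0, sigma))

# A dim pixel only slightly above background still yields positive evidence,
# which a track-before-detect filter can accumulate across frames.
print(log_likelihood_ratio(108.0) > 0)   # True
print(log_likelihood_ratio(95.0) > 0)    # False
```

    A multi-Bernoulli filter would accumulate such per-frame evidence along candidate tracks, which is what makes detection feasible below single-frame SNR thresholds.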

  14. Effects of railway track design on the expected degradation: Parametric study on energy dissipation

    NASA Astrophysics Data System (ADS)

    Sadri, Mehran; Steenbergen, Michaël

    2018-04-01

    This paper studies the effect of railway track design parameters on the expected long-term degradation of track geometry. The study assumes a geometrically perfect and straight track along with spatial invariability, except for the presence of discrete sleepers. A frequency-domain two-layer model of a discretely supported rail coupled with a moving unsprung mass is used. The susceptibility of the track to degradation is objectively quantified by calculating the mechanical energy dissipated in the substructure under a moving train axle for variations of different track parameters. Results show that, apart from the operational train speed, the ballast/substructure stiffness is the most significant parameter influencing energy dissipation. Generally, the degradation increases with the train speed and with softer substructures. However, stiff subgrades appear more sensitive to particular train velocities, in a regime which is mostly relevant for conventional trains (100-200 km/h) and less for high-speed operation, where a stiff subgrade is always favorable and can reduce the sensitivity to degradation substantially, by roughly a factor of up to 7. Railpad stiffness, sleeper distance, and rail cross-sectional properties are also found to have a considerable effect, with higher expected degradation rates for increasing railpad stiffness, increasing sleeper distance, and decreasing rail profile bending stiffness. Unsprung vehicle mass and sleeper mass have no significant influence, however, only against the background of the assumption of an idealized (invariant and straight) track. Apart from dissipated mechanical energy, the suitability of the dynamic track stiffness is explored as an engineering parameter to assess the sensitivity to degradation. It is found that this quantity is inappropriate to assess the design of an idealized track.
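
    As a toy illustration of the degradation metric, the dissipated energy E = ∫ c·v² dt can be computed for a single-degree-of-freedom track model excited by a passing axle load. The one-mass model and all parameter values below are assumptions for illustration (the paper uses a frequency-domain two-layer model with discrete sleepers); the example only reproduces the qualitative finding that a softer substructure dissipates more energy.

```python
import math

def dissipated_energy(k, c=4.0e4, m=300.0, peak=1.0e5, pulse=0.02, dt=1e-5, T=0.2):
    """Energy dissipated in the damper for a half-sine axle-load pulse."""
    x = v = energy = 0.0
    for i in range(int(T / dt)):
        t = i * dt
        f = peak * math.sin(math.pi * t / pulse) if t < pulse else 0.0
        a = (f - k * x - c * v) / m        # Newton: m a = F - k x - c v
        v += a * dt                        # semi-implicit Euler step
        x += v * dt
        energy += c * v * v * dt           # power dissipated in the damper
    return energy

soft, stiff = dissipated_energy(k=2.0e7), dissipated_energy(k=8.0e7)
print(soft > stiff)   # softer substructure dissipates more energy: True
```

    With the stiffer spring the pulse is slow compared to the natural period, so the response stays quasi-static and little net energy enters the damper; the softer spring responds near resonance and dissipates far more.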

  15. Using articulated scene models for dynamic 3d scene analysis in vista spaces

    NASA Astrophysics Data System (ADS)

    Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven

    2010-09-01

    In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The arising information is important for mobile robots solving tasks in the area of household robotics. In our work, a mobile robot builds an articulated scene model by observing the environment in the visual field, or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots, and finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons and movable objects. This information of the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and, finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.

  16. Three-dimensional microscope tracking system using the astigmatic lens method and a profile sensor

    NASA Astrophysics Data System (ADS)

    Kibata, Hiroki; Ishii, Katsuhiro

    2018-03-01

    We developed a three-dimensional microscope tracking system using the astigmatic lens method and a profile sensor, which provides three-dimensional position detection over a wide range at the rate of 3.2 kHz. First, we confirmed the range of target detection of the developed system, where the range of target detection was shown to be ± 90 µm in the horizontal plane and ± 9 µm in the vertical plane for a 10× objective lens. Next, we attempted to track a motion-controlled target. The developed system kept the target at the center of the field of view and in focus up to a target speed of 50 µm/s for a 20× objective lens. Finally, we tracked a freely moving target. We successfully demonstrated the tracking of a 10-µm-diameter polystyrene bead suspended in water for 40 min. The target was kept in the range of approximately 4.9 µm around the center of the field of view. In addition, the vertical direction was maintained in the range of ± 0.84 µm, which was sufficiently within the depth of focus.

  17. Steering Angle Control of Car for Dubins Path-tracking Using Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Kusuma Rahma Putri, Dian; Subchan; Asfihani, Tahiyatul

    2018-03-01

    As a mode of transportation, the car is inseparable from technological development. For about ten years there has been extensive research and development on lane keeping systems (LKS), which automatically control the steering to keep a vehicle, especially a car, on its track. Such systems can be developed further for unmanned cars. An unmanned car requires navigation, guidance, and control that are able to direct the vehicle toward the desired path. Here the guidance system is represented using a Dubins path, which is tracked using Model Predictive Control. The control objective is to keep the car's movement, represented by a dynamic lateral motion model, following the path appropriately. Control simulations on four types of trajectories generate values for the steering angle and the steering angle changes within the specified interval.
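
    A minimal sketch of the path-tracking idea follows, with a kinematic bicycle model and a simple proportional heading/cross-track controller standing in for the MPC described in the paper. The wheelbase, speed, gains, and the straight-line reference segment (a degenerate Dubins segment) are all illustrative assumptions.

```python
import math

def simulate(steps=300, dt=0.05, L=2.5, v=5.0, k_heading=1.5, k_cross=0.5):
    """Kinematic bicycle model tracking the reference line y = 0, heading east."""
    x, y, theta = 0.0, 2.0, 0.0           # start 2 m off the reference line
    for _ in range(steps):
        heading_err = -theta
        cross_err = -y
        # steering angle, clamped to +/- 0.5 rad (actuator limit assumption)
        delta = max(-0.5, min(0.5, k_heading * heading_err + k_cross * cross_err))
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += v / L * math.tan(delta) * dt
        theta = math.atan2(math.sin(theta), math.cos(theta))
    return y

final_offset = simulate()
print(abs(final_offset) < 0.1)   # converges close to the path: True
```

    An MPC controller would instead optimize the steering sequence over a prediction horizon subject to the same clamp; the proportional law here is only the simplest stable stand-in.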

  18. Dissociable Frontal Controls during Visible and Memory-guided Eye-Tracking of Moving Targets

    PubMed Central

    Ding, Jinhong; Powell, David; Jiang, Yang

    2009-01-01

    When tracking visible or occluded moving targets, several frontal regions including the frontal eye fields (FEF), dorsal-lateral prefrontal cortex (DLPFC), and Anterior Cingulate Cortex (ACC) are involved in smooth pursuit eye movements (SPEM). To investigate how these areas play different roles in predicting future locations of moving targets, twelve healthy college students participated in a smooth pursuit task of visual and occluded targets. Their eye movements and brain responses measured by event-related functional MRI were simultaneously recorded. Our results show that different visual cues resulted in time discrepancies between physical and estimated pursuit time only when the moving dot was occluded. Visible phase velocity gain was higher than that of occlusion phase. We found bilateral FEF association with eye-movement whether moving targets are visible or occluded. However, the DLPFC and ACC showed increased activity when tracking and predicting locations of occluded moving targets, and were suppressed during smooth pursuit of visible targets. When visual cues were increasingly available, less activation in the DLPFC and the ACC was observed. Additionally, there was a significant hemisphere effect in DLPFC, where right DLPFC showed significantly increased responses over left when pursuing occluded moving targets. Correlation results revealed that DLPFC, the right DLPFC in particular, communicates more with FEF during tracking of occluded moving targets (from memory). The ACC modulates FEF more during tracking of visible targets (likely related to visual attention). Our results suggest that DLPFC and ACC modulate FEF and cortical networks differentially during visible and memory-guided eye tracking of moving targets. PMID:19434603

  19. A new user-assisted segmentation and tracking technique for an object-based video editing system

    NASA Astrophysics Data System (ADS)

    Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark

    2004-03-01

    This paper presents a semi-automatic segmentation method which can be used to generate the video object plane (VOP) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around the object boundaries, and the selected objects are then continuously separated from the unselected areas through time evolution in the image sequence. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the complete, meaningful visual object of interest to be segmented and decides the precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results that could be suitable for many digital video applications such as multimedia content authoring, content-based coding, and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.

  20. More About The Video Event Trigger

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1996-01-01

    Report presents additional information about system described in "Video Event Trigger" (LEW-15076). Digital electronic system processes video-image data to generate trigger signal when image shows significant change, such as motion, or appearance, disappearance, change in color, brightness, or dilation of object. Potential uses include monitoring of hallways, parking lots, and other areas during hours when supposed unoccupied, looking for fires, tracking airplanes or other moving objects, identification of missing or defective parts on production lines, and video recording of automobile crash tests.

  1. KSC-2009-2716

    NASA Image and Video Library

    2009-04-16

    CAPE CANAVERAL, Fla. – On Launch Complex 17-B at Cape Canaveral Air Force Station, the mobile service tower at right moves toward the first stage of the Delta II rocket. The boosters in the tower will be attached to the rocket for launch of the STSS Demonstrator spacecraft. The STSS Demonstrator is a midcourse tracking technology demonstrator and is part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency on July 29. Photo credit: NASA/Kim Shiflett

  2. KSC-2009-2717

    NASA Image and Video Library

    2009-04-16

    CAPE CANAVERAL, Fla. – On Launch Complex 17-B at Cape Canaveral Air Force Station, the mobile service tower at right moves closer to the first stage of the Delta II rocket. The boosters in the tower will be attached to the rocket for launch of the STSS Demonstrator spacecraft. The STSS Demonstrator is a midcourse tracking technology demonstrator and is part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency on July 29. Photo credit: NASA/Kim Shiflett

  3. Combined magnetic resonance imaging and ultrasound echography guidance for motion compensated HIFU interventions

    NASA Astrophysics Data System (ADS)

    Ries, Mario; de Senneville, Baudouin Denis; Regard, Yvan; Moonen, Chrit

    2012-11-01

    The objective of this study is to evaluate the feasibility of integrating ultrasound echography as an additional imaging modality for continuous target tracking while simultaneously performing real-time MR thermometry to guide a High Intensity Focused Ultrasound (HIFU) ablation. Experiments on a moving phantom were performed with MRI-guided HIFU during continuous ultrasound echography. Real-time US echography-based target tracking during MR-guided HIFU heating was achieved with heated-area dimensions similar to those obtained for a static target. The combination of both imaging modalities shows great potential for real-time beam steering and MR thermometry.

  4. KSC-2009-3662

    NASA Image and Video Library

    2009-05-01

    CAPE CANAVERAL, Fla. – At NASA Kennedy Space Center's Shuttle Landing Facility, the shipping container with the STSS Demonstrator SV-2 spacecraft moves out of the U.S. Air Force C-17 aircraft. The spacecraft will be transferred to the Astrotech payload processing facility in Titusville, Fla. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Jack Pfaller (Approved for Public Release 09-MDA-4616 [27 May 09])

  5. KSC-2009-2666

    NASA Image and Video Library

    2009-04-15

    CAPE CANAVERAL, Fla. – On Cape Canaveral Air Force Station's Launch Complex 17-B in Florida, the first stage of a Delta II rocket is raised to vertical before it can be moved into the mobile service tower for processing. The rocket is the launch vehicle for the STSS Demonstrators Program, a midcourse tracking technology demonstration that is part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency on July 29. Photo credit: NASA/Jack Pfaller

  6. KSC-2009-4612

    NASA Image and Video Library

    2009-06-25

    CAPE CANAVERAL, Fla. – At NASA Kennedy Space Center's Shuttle Landing Facility, the SV-1 cargo of the STSS Demonstrator spacecraft moves out of the U.S. Air Force C-17. The cargo will be transferred to the Astrotech payload processing facility in Titusville, Fla. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Kim Shiflett (Approved for Public Release 09-MDA-4804 [4 Aug 09] )

  7. KSC-2009-2665

    NASA Image and Video Library

    2009-04-15

    CAPE CANAVERAL, Fla. – On Cape Canaveral Air Force Station's Launch Complex 17-B in Florida, the first stage of a Delta II rocket is raised to vertical before it can be moved into the mobile service tower for processing. The rocket is the launch vehicle for the STSS Demonstrators Program, a midcourse tracking technology demonstration that is part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency on July 29. Photo credit: NASA/Jack Pfaller

  8. KSC-2009-4613

    NASA Image and Video Library

    2009-06-25

    CAPE CANAVERAL, Fla. – At NASA Kennedy Space Center's Shuttle Landing Facility, the SV-1 cargo of the STSS Demonstrator spacecraft moves out of the U.S. Air Force C-17. The cargo will be transferred to the Astrotech payload processing facility in Titusville, Fla. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Kim Shiflett (Approved for Public Release 09-MDA-4804 [4 Aug 09] )

  9. Automated multiple target detection and tracking in UAV videos

    NASA Astrophysics Data System (ADS)

    Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie

    2010-04-01

    In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
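
    The overlap-rate-based data association step can be sketched as greedy intersection-over-union matching between tracked-target boxes and new detections. The box format, greedy (rather than optimal) matching, and the 0.3 threshold are simplifying assumptions, and the splitting/merging handling described in the paper is omitted.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, min_iou=0.3):
    """Greedy matching by descending overlap: returns {track_index: detection_index}."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)),
                   reverse=True)
    matched, used_t, used_d = {}, set(), set()
    for score, ti, di in pairs:
        if score >= min_iou and ti not in used_t and di not in used_d:
            matched[ti] = di
            used_t.add(ti)
            used_d.add(di)
    return matched

tracks = [(0, 0, 10, 10), (50, 50, 60, 60)]         # Kalman-predicted target boxes
detections = [(52, 51, 61, 62), (1, 0, 11, 10)]     # blobs from the current frame
print(associate(tracks, detections))   # {0: 1, 1: 0}
```

    Matched detections would then serve as the Kalman measurement update for their tracks; unmatched detections become tracking candidates.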

  10. Image computing techniques to extrapolate data for dust tracking in case of an experimental accident simulation in a nuclear fusion plant.

    PubMed

    Camplani, M; Malizia, A; Gelfusa, M; Barbato, F; Antonelli, L; Poggi, L A; Ciparisse, J F; Salgado, L; Richetta, M; Gaudio, P

    2016-01-01

    In this paper, a preliminary shadowgraph-based analysis of dust particle re-suspension due to a loss of vacuum accident (LOVA) in ITER-like nuclear fusion reactors is presented. Dust particles are produced through different mechanisms in nuclear fusion devices; one of the main issues is that dust particles can be re-suspended by events such as a LOVA. Shadowgraphy is based on an expanded collimated beam of light emitted by a laser or a lamp that emits light transversely to the flow field direction. In the STARDUST facility, the dust moves in the flow and causes variations of refractive index that can be detected by using a CCD camera. The STARDUST fast camera setup makes it possible to detect and track dust particles moving in the vessel and then to obtain information about the velocity field of the mobilized dust. In particular, the acquired images are processed such that in each frame the moving dust particles are detected by applying a background subtraction technique based on the mixture of Gaussians algorithm. The obtained foreground masks are then filtered with morphological operations. Finally, a multi-object tracking algorithm is used to track the detected particles along the experiment. For each particle, a Kalman filter-based tracker is applied; the particle dynamics are described by taking position, velocity, and acceleration as state variables. The results demonstrate that it is possible to obtain the dust particles' velocity field during a LOVA by automatically processing the data obtained with the shadowgraph approach.
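
    The per-particle tracker described above, a Kalman filter with position, velocity, and acceleration as state variables, can be sketched in one dimension. The noise levels, time step, and the synthetic constant-acceleration track below are illustrative assumptions.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def kalman_track(measurements, dt=1.0, q=1e-4, r=0.25):
    """3-state (position, velocity, acceleration) Kalman filter, 1-D."""
    F = [[1.0, dt, 0.5 * dt * dt],
         [0.0, 1.0, dt],
         [0.0, 0.0, 1.0]]
    Ft = [list(col) for col in zip(*F)]
    x = [measurements[0], 0.0, 0.0]
    P = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    for z in measurements[1:]:
        # predict: x <- F x,  P <- F P F^T + Q  (Q = q on the diagonal)
        x = [sum(F[i][k] * x[k] for k in range(3)) for i in range(3)]
        P = matmul(matmul(F, P), Ft)
        for i in range(3):
            P[i][i] += q
        # update with a scalar position measurement, H = [1, 0, 0]
        innov = z - x[0]
        S = P[0][0] + r                    # innovation variance (scalar)
        K = [P[i][0] / S for i in range(3)]
        x = [x[i] + K[i] * innov for i in range(3)]
        P = [[P[i][j] - K[i] * P[0][j] for j in range(3)] for i in range(3)]
    return x

# synthetic dust track with constant acceleration 0.2 (position z = 0.1 * t^2)
truth = [0.1 * t * t for t in range(20)]
state = kalman_track(truth)
print(state[2])   # estimated acceleration, close to the true 0.2
```

    Because the measurement is scalar, the innovation covariance is a number and no matrix inversion is needed, which keeps the sketch dependency-free; per-frame positions from the foreground masks would play the role of `measurements`.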

  11. Millimeter wave radar system on a rotating platform for combined search and track functionality with SAR imaging

    NASA Astrophysics Data System (ADS)

    Aulenbacher, Uwe; Rech, Klaus; Sedlmeier, Johannes; Pratisto, Hans; Wellig, Peter

    2014-10-01

    Ground-based millimeter wave radar sensors offer the potential for weather-independent automatic ground surveillance at day and night, e.g., for camp protection applications. The basic principle and the experimental verification of a radar system concept are described which, by means of an extreme off-axis positioning of the antenna(s), combines azimuthal mechanical beam steering with the formation of a circular-arc-shaped synthetic aperture (SA). In automatic ground surveillance, the search for and detection of moving ground targets is performed by the conventional mechanical scan mode. The rotating antenna structure, designed as a small array with two or more RX antenna elements and simultaneous receiver chains, allows instantaneous tracking of multiple moving targets (monopulse principle). The simultaneously operated SAR mode yields areal images of the distribution of stationary scatterers. For ground surveillance applications, this SAR mode is best suited to identifying possible threats by means of change detection. The feasibility of this concept was tested with an experimental radar system comprising a 94 GHz (W-band) FM-CW module with 1 GHz bandwidth and two RX antennas with parallel receiver channels, placed off-axis on a rotating platform. SAR mode and search/track mode were tested during an outdoor measurement campaign. A scene of two persons walking along a road and partially through forest served as a test of the capability to track multiple moving targets. For SAR mode verification, an image of the area, composed of roads, grassland, woodland and several man-made objects, was reconstructed from the measured data.
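
    For the FM-CW module described above, target range follows from the measured beat frequency via the textbook relation R = c f_b T / (2B). The sketch below assumes a 1 ms sweep duration; only the 1 GHz bandwidth comes from the record.

```python
# Range from the FM-CW beat frequency: R = c * f_b * T / (2 * B).
# B = 1 GHz comes from the record; the sweep time T is an assumed value.

C = 3.0e8    # speed of light, m/s
B = 1.0e9    # sweep bandwidth, Hz
T = 1.0e-3   # sweep duration, s (assumed)

def beat_to_range(f_beat):
    """Map a measured beat frequency (Hz) to target range (m)."""
    return C * f_beat * T / (2.0 * B)
```

    Under these settings a 1 MHz beat corresponds to a target at 150 m; the 1 GHz bandwidth also fixes the range resolution at c/(2B) = 0.15 m.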

  12. Image computing techniques to extrapolate data for dust tracking in case of an experimental accident simulation in a nuclear fusion plant

    NASA Astrophysics Data System (ADS)

    Camplani, M.; Malizia, A.; Gelfusa, M.; Barbato, F.; Antonelli, L.; Poggi, L. A.; Ciparisse, J. F.; Salgado, L.; Richetta, M.; Gaudio, P.

    2016-01-01

    In this paper, a preliminary shadowgraph-based analysis of dust particle re-suspension due to a loss of vacuum accident (LOVA) in ITER-like nuclear fusion reactors is presented. Dust particles are produced through different mechanisms in nuclear fusion devices; one of the main issues is that they can be re-suspended by events such as a LOVA. Shadowgraphy is based on an expanded collimated beam of light, emitted by a laser or a lamp, directed transversely to the flow field. In the STARDUST facility, dust moving in the flow causes variations of refractive index that can be detected with a CCD camera. The STARDUST fast-camera setup makes it possible to detect and track dust particles moving in the vessel and thereby obtain the velocity field of the mobilized dust. In particular, the acquired images are processed so that in each frame the moving dust particles are detected by a background subtraction technique based on the mixture-of-Gaussians algorithm. The resulting foreground masks are then filtered with morphological operations. Finally, a multi-object tracking algorithm tracks the detected particles throughout the experiment. A Kalman filter-based tracker is applied to each particle; the particle dynamics are described using position, velocity, and acceleration as state variables. The results demonstrate that the dust particle velocity field during a LOVA can be obtained by automatically processing the data from the shadowgraph approach.

  13. Distributed multirobot sensing and tracking: a behavior-based approach

    NASA Astrophysics Data System (ADS)

    Parker, Lynne E.

    1995-09-01

    An important issue that arises in the automation of many large-scale surveillance and reconnaissance tasks is that of tracking the movements of (or maintaining passive contact with) objects navigating in a bounded area of interest. Oftentimes in these problems, the area to be monitored will move over time or will not permit fixed sensors, thus requiring a team of mobile sensors--or robots--to monitor the area collectively. In these situations, the robots must not only have mechanisms for determining how to track objects and how to fuse information from neighboring robots, but they must also have distributed control strategies for ensuring that the entire area of interest is continually covered to the greatest extent possible. This paper focuses on the distributed control issue by describing a proposed decentralized control mechanism that allows a team of robots to collectively track and monitor objects in an uncluttered area of interest. The approach is based upon an extension to the ALLIANCE behavior-based architecture that generalizes from the domain of loosely-coupled, independent applications to the domain of strongly cooperative applications, in which the action selection of a robot is dependent upon the actions selected by its teammates. We conclude the paper by describing our ongoing implementation of the proposed approach on a team of four mobile robots.

  14. KSC-2009-5015

    NASA Image and Video Library

    2009-08-03

    CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., a crane moves the SV1 spacecraft, which will be mated with the SV2 at right. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. The spacecraft is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann

  15. KSC-2009-5017

    NASA Image and Video Library

    2009-08-03

    CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., a crane moves the SV1 spacecraft toward the SV2 at right. The two spacecraft, which will be mated, are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann

  16. KSC-2009-5195

    NASA Image and Video Library

    2009-09-12

    CAPE CANAVERAL, Fla. – The two halves of the fairing are moved into the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station in Florida. The two-part fairing will be placed around the Space Tracking and Surveillance System – Demonstrator spacecraft for protection during launch. STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-4934 (09-22-09) Photo credit: NASA/Cory Huston

  17. KSC-2009-5018

    NASA Image and Video Library

    2009-08-03

    CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers help guide the movement of the SV1 spacecraft as it is moved toward the SV2 at right. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann

  18. KSC-2009-5016

    NASA Image and Video Library

    2009-08-03

    CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers help guide the movement of the SV1 spacecraft as it is moved toward the SV2 behind it. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann

  19. KSC-2009-5019

    NASA Image and Video Library

    2009-08-03

    CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., workers help guide the movement of the SV1 spacecraft as it is moved toward the SV2 at right. The two spacecraft are part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann

  20. KSC-2009-5043

    NASA Image and Video Library

    2009-08-20

    CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., an overhead crane with a scale is being moved to attach to the SV1-SV2 spacecraft, which will be weighed. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann

  1. KSC-2009-5044

    NASA Image and Video Library

    2009-08-20

    CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., an overhead crane with a scale is being moved to attach to the SV1-SV2 spacecraft, which will be weighed. The two spacecraft are known as the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, which is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann

  2. Person detection and tracking with a 360° lidar system

    NASA Astrophysics Data System (ADS)

    Hammer, Marcus; Hebel, Marcus; Arens, Michael

    2017-10-01

    Today it is easily possible to generate dense point clouds of the sensor environment using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects such as persons or vehicles are of particular interest, for instance in automatic collision avoidance, for mobile sensor platforms, or in surveillance tasks. In the literature there are several approaches to automated person detection in point clouds. While most techniques show acceptable results in object detection, the computation time is often critical. The runtime can be problematic, especially due to the amount of data in panoramic 360° point clouds. On the other hand, most applications need object detection and classification in real time. The paper presents a proposal for a fast, real-time-capable algorithm for person detection, classification and tracking in panoramic point clouds.
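
    A person detector on panoramic scans needs a grouping step before classification. The following naive single-link clustering sketch illustrates the idea; it is O(n²) and is not the paper's real-time algorithm, whose details the record does not give.

```python
# Naive single-link Euclidean clustering of 2D scan points via BFS.
# Points closer than eps (directly or transitively) end up in one cluster.

from math import hypot

def cluster(points, eps=0.5):
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        group, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if hypot(points[i][0] - points[j][0],
                             points[i][1] - points[j][1]) < eps]
            for j in near:
                unvisited.discard(j)
            group.extend(near)
            frontier.extend(near)
        clusters.append(sorted(group))
    return clusters
```

    A real-time system would replace the inner scan with a spatial index (grid or k-d tree) and then classify each cluster by size and shape.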

  3. Determining the material type of man-made orbiting objects using low-resolution reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Jorgensen, Kira; Africano, John L.; Stansbery, Eugene G.; Kervin, Paul W.; Hamada, Kris M.; Sydney, Paul F.

    2001-12-01

    The purpose of this research is to improve knowledge of the physical properties of orbital debris, specifically the material type. Combining the fast-tracking United States Air Force Research Laboratory (AFRL) telescopes with a common astronomical technique, spectroscopy, and NASA resources was a natural step toward determining the material type of orbiting objects remotely. Currently operating at the AFRL Maui Optical Site (AMOS) is a 1.6-meter telescope designed to track fast-moving objects like those found in low Earth orbit (LEO). Using the spectral range of 0.4 - 0.9 microns (4000 - 9000 angstroms), researchers can separate materials into classification ranges. Within this range, aluminum, paints, plastics, and other metals show different absorption features as well as different slopes in their respective spectra. The spectrograph used on this telescope yields a three-angstrom resolution, fine enough to see the smaller features mentioned and thus determine the material type of the object. The results of the NASA AMOS Spectral Study (NASS) are presented herein.
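
    The slope-based separation of materials mentioned above can be illustrated with a least-squares slope of reflectance over the 0.4 - 0.9 micron band. The sketch is generic; the actual NASS classification ranges are not given in the record.

```python
# Ordinary least-squares slope of reflectance vs. wavelength (per micron).

def spectral_slope(wavelengths_um, reflectance):
    n = len(wavelengths_um)
    mx = sum(wavelengths_um) / n
    my = sum(reflectance) / n
    num = sum((x - mx) * (y - my)
              for x, y in zip(wavelengths_um, reflectance))
    den = sum((x - mx) ** 2 for x in wavelengths_um)
    return num / den
```

    Materials would then be binned by comparing this slope (together with absorption features) against laboratory reference spectra.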

  4. Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model

    PubMed Central

    Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal

    2016-01-01

    In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomously tracking and chasing a moving target. The first main component of this novel algorithm is a global matching and local tracking approach: the algorithm initially finds feature correspondences, in which an improved binary descriptor is developed for global feature matching and an iterative Lucas–Kanade optical flow algorithm is employed for local feature tracking. The second main module is an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, utilized for the first time to deal with outlier features in a visual tracking application), which further maximizes the representation of the target object; here we formulate outlier feature detection as a binary classification problem over the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor with 640 × 512 image resolution and compares favorably with the most popular state-of-the-art trackers in terms of robustness, efficiency and accuracy. PMID:27589769
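
    The forward-backward idea above can be illustrated with the standard forward-backward consistency check for optical-flow tracks: a feature tracked from frame A to B and then back to A should land near where it started. This sketch simplifies the paper's pairwise dissimilarity measure, and the threshold is an assumed value.

```python
# Forward-backward consistency filter for feature tracks.
# starts[i] is the feature's original position in frame A; roundtrips[i] is
# its position after tracking A -> B and back B -> A. Large round-trip
# error means the track is unreliable and is discarded.

from math import hypot

def fb_filter(starts, roundtrips, max_err=1.0):
    """Return indices of features whose round-trip error is small."""
    keep = []
    for i, (p, q) in enumerate(zip(starts, roundtrips)):
        if hypot(p[0] - q[0], p[1] - q[1]) <= max_err:
            keep.append(i)
    return keep
```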

  5. Neural coding in barrel cortex during whisker-guided locomotion

    PubMed Central

    Sofroniew, Nicholas James; Vlasov, Yurii A; Hires, Samuel Andrew; Freeman, Jeremy; Svoboda, Karel

    2015-01-01

    Animals seek out relevant information by moving through a dynamic world, but sensory systems are usually studied under highly constrained and passive conditions that may not probe important dimensions of the neural code. Here, we explored neural coding in the barrel cortex of head-fixed mice that tracked walls with their whiskers in tactile virtual reality. Optogenetic manipulations revealed that barrel cortex plays a role in wall-tracking. Closed-loop optogenetic control of layer 4 neurons can substitute for whisker-object contact to guide behavior resembling wall tracking. We measured neural activity using two-photon calcium imaging and extracellular recordings. Neurons were tuned to the distance between the animal snout and the contralateral wall, with monotonic, unimodal, and multimodal tuning curves. This rich representation of object location in the barrel cortex could not be predicted based on simple stimulus-response relationships involving individual whiskers and likely emerges within cortical circuits. DOI: http://dx.doi.org/10.7554/eLife.12559.001 PMID:26701910

  6. Objective and automated measurement of dynamic vision functions

    NASA Technical Reports Server (NTRS)

    Flom, M. C.; Adams, A. J.

    1976-01-01

    A phoria stimulus array and electro-oculographic (EOG) arrangements for measuring motor and sensory responses of subjects exposed to stress or drug conditions are described, along with experimental procedures. Heterophoria (as oculomotor function) and glare recovery time (the time required for photochemical and neural recovery after exposure to a flash stimulus) are measured, in research aimed at developing automated objective measurement of dynamic vision functions. Onset of involuntary optokinetic nystagmus in subjects attempting to track moving stripes (while viewing through head-mounted binocular eyepieces) after exposure to glare serves as an objective measure of glare recovery time.

  7. Projection of controlled repeatable real-time moving targets to test and evaluate motion imagery quality

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen D.; Mendez, Michael; Trent, Randall

    2015-05-01

    The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for Motion Imagery. This presentation will discuss several implementations of target projectors with moving targets or apparent moving targets creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons lasting fewer than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. The traditional approach of gimbal-mounting the camera, in combination with the moving target projector, is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide a corresponding test to MTF (resolution), SNR and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (a measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development, as well as to compare various systems by presenting exactly the same scenes to the cameras in a repeatable way.

  8. Image registration of naval IR images

    NASA Astrophysics Data System (ADS)

    Rodland, Arne J.

    1996-06-01

    In a real-world application, an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between the sensor and the reference Inertial Navigation Unit, servo inaccuracies, etc. For a high-resolution imaging sensor, this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high-contrast points distributed over the whole image. As long as moving objects in the scene cover only a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is then searched for points whose movement diverges from the estimated stabilization error; these points are assumed to be located on moving objects and are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points rather than on the complete image, so the algorithm is very fast and well suited for real-time implementation. The algorithm has been tested on images from an experimental IR scanner, with stabilization errors added artificially so that the output of the algorithm could be compared with the artificially added errors.
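
    The frame-to-frame point matching described above can be sketched as a median-displacement estimate: the median shift over all matched points approximates the stabilization error, and points diverging from it become candidate moving objects. The divergence threshold below is an assumed value.

```python
# Global-shift estimate from matched point pairs between two frames.
# The median displacement is robust to a minority of points sitting on
# moving objects; those outliers are returned separately.

from statistics import median

def estimate_shift(pairs, thresh=2.0):
    """pairs: list of ((x0, y0), (x1, y1)) matches between two frames."""
    dxs = [b[0] - a[0] for a, b in pairs]
    dys = [b[1] - a[1] for a, b in pairs]
    shift = (median(dxs), median(dys))
    movers = [i for i, (dx, dy) in enumerate(zip(dxs, dys))
              if abs(dx - shift[0]) > thresh or abs(dy - shift[1]) > thresh]
    return shift, movers
```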

  9. Measuring Positions of Objects using Two or More Cameras

    NASA Technical Reports Server (NTRS)

    Klinko, Steve; Lane, John; Nelson, Christopher

    2008-01-01

    An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer-aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras, because ultimately the positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.
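
    The perspective (point) camera model named above reduces to the affine model when the camera-to-object distance dwarfs the object size, since the depth Z is then nearly constant across the object. A minimal pinhole projection sketch, with an assumed focal length:

```python
# Pinhole (perspective) projection: u = f * X / Z, v = f * Y / Z.
# The focal length f is an assumed illustrative value in pixel units.

def project(point, f=1000.0):
    """Project a 3D camera-frame point onto the image plane."""
    x, y, z = point
    return (f * x / z, f * y / z)
```

    For a distant, small object, Z barely varies across the object, so the division by Z is nearly a constant scale, which is the affine approximation.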

  10. Grip force control during virtual object interaction: effect of force feedback, accuracy demands, and training.

    PubMed

    Gibo, Tricia L; Bastian, Amy J; Okamura, Allison M

    2014-03-01

    When grasping and manipulating objects, people are able to efficiently modulate their grip force according to the experienced load force. Effective grip force control involves providing enough grip force to prevent the object from slipping, while avoiding excessive force to avoid damage and fatigue. During indirect object manipulation via teleoperation systems or in virtual environments, users often receive limited somatosensory feedback about objects with which they interact. This study examines the effects of force feedback, accuracy demands, and training on grip force control during object interaction in a virtual environment. The task required subjects to grasp and move a virtual object while tracking a target. When force feedback was not provided, subjects failed to couple grip and load force, a capability fundamental to direct object interaction. Subjects also exerted larger grip force without force feedback and when accuracy demands of the tracking task were high. In addition, the presence or absence of force feedback during training affected subsequent performance, even when the feedback condition was switched. Subjects' grip force control remained reminiscent of their employed grip during the initial training. These results motivate the use of force feedback during telemanipulation and highlight the effect of force feedback during training.

  11. Cross-Modal Attention Effects in the Vestibular Cortex during Attentive Tracking of Moving Objects.

    PubMed

    Frank, Sebastian M; Sun, Liwei; Forster, Lisa; Tse, Peter U; Greenlee, Mark W

    2016-12-14

    The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking. In this study we investigate cross-modal attention effects in the human vestibular cortex. We applied the visual multiple-object tracking task because it is known to evoke attentional load effects on neural activity in visual motion-processing and attention-processing areas. Here we demonstrate a load-dependent effect of attention on the activation in the vestibular cortex, despite constant visual motion stimulation. We find that activity in the parietoinsular vestibular cortex is more strongly suppressed the greater the attentional load on the visual tracking task. These findings suggest cross-modal attentional modulation in the vestibular cortex. Copyright © 2016 the authors 0270-6474/16/3612720-09$15.00/0.

  12. A biological hierarchical model based underwater moving object detection.

    PubMed

    Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen

    2014-01-01

    Underwater moving object detection is key to many underwater computer vision tasks, such as object recognition, localization, and tracking. Given the remarkable visual sensing abilities of underwater animals, the visual mechanisms of aquatic species are generally taken as cues for building bionic models that are better adapted to underwater environments. However, low accuracy rates and the absence of prior knowledge learning limit their use in underwater applications. To address the problems caused by inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing mechanisms of the frog eye are imitated to produce a hierarchical background model for detecting underwater objects. First, the image is segmented into several sub-blocks. Intensity information is extracted to establish a background model that roughly separates the object and background regions. The texture feature of each pixel in the rough object region is then analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method performs better: compared to the traditional Gaussian background model, the completeness of the object detection is 97.92%, with only 0.94% of the background region included in the detection results.
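
    The first, intensity-based stage of the hierarchical model can be sketched as a block-mean comparison against a background frame; the texture-based contour refinement stage is omitted here, and the block size and threshold are illustrative values.

```python
# Rough object localization by block intensity: the frame is cut into
# bs x bs sub-blocks, and blocks whose mean intensity departs from the
# background model by more than a threshold are flagged as object regions.

def block_means(frame, bs):
    h, w = len(frame), len(frame[0])
    return {(i, j): sum(frame[r][c]
                        for r in range(i, i + bs)
                        for c in range(j, j + bs)) / (bs * bs)
            for i in range(0, h, bs) for j in range(0, w, bs)}

def rough_objects(background, frame, bs=2, thresh=20.0):
    """Return top-left corners of blocks flagged as rough object regions."""
    bg, cur = block_means(background, bs), block_means(frame, bs)
    return [blk for blk in cur if abs(cur[blk] - bg[blk]) > thresh]
```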

  13. A Biological Hierarchical Model Based Underwater Moving Object Detection

    PubMed Central

    Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen

    2014-01-01

    Underwater moving object detection is key to many underwater computer vision tasks, such as object recognition, localization, and tracking. Given the remarkable visual sensing abilities of underwater animals, the visual mechanisms of aquatic species are generally taken as cues for building bionic models that are better adapted to underwater environments. However, low accuracy rates and the absence of prior knowledge learning limit their use in underwater applications. To address the problems caused by inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing mechanisms of the frog eye are imitated to produce a hierarchical background model for detecting underwater objects. First, the image is segmented into several sub-blocks. Intensity information is extracted to establish a background model that roughly separates the object and background regions. The texture feature of each pixel in the rough object region is then analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method performs better: compared to the traditional Gaussian background model, the completeness of the object detection is 97.92%, with only 0.94% of the background region included in the detection results. PMID:25140194

  14. Moving Object Detection in Heterogeneous Conditions in Embedded Systems.

    PubMed

    Garbo, Alessandro; Quer, Stefano

    2017-07-01

    This paper presents a system for moving object detection, focusing on pedestrians, in external, unfriendly, and heterogeneous environments. The system accurately merges information from subsequent video frames while making only a small computational effort on each single frame. Its main characterizing feature is that it combines several well-known movement detection and tracking techniques and orchestrates them in a smart way to obtain good results in diversified scenarios. It uses dynamically adjusted thresholds to characterize different regions of interest, and it also adopts techniques to efficiently track movements and to detect and correct false positives. Accuracy and reliability depend mainly on the overall recipe, i.e., on how the software system is designed and implemented, on how the different algorithmic phases communicate information and collaborate with each other, and on how concurrency is organized. The application is specifically designed to work with inexpensive hardware devices, such as off-the-shelf video cameras and small embedded computational units, potentially forming an intelligent urban grid. The major contribution of the paper is thus the presentation of a tool for real-time applications on embedded devices with finite computational (time and memory) resources. We present experimental results on several video sequences (both home-made and publicly available), showing the robustness and accuracy of the overall detection strategy. Comparisons with state-of-the-art strategies show that our application has similar tracking accuracy but much higher frame-per-second rates.

  15. Moving Object Detection in Heterogeneous Conditions in Embedded Systems

    PubMed Central

    Garbo, Alessandro

    2017-01-01

    This paper presents a system for moving object detection, focusing on pedestrians, in external, unfriendly, and heterogeneous environments. The system accurately merges information from subsequent video frames while making only a small computational effort on each single frame. Its main characterizing feature is that it combines several well-known movement detection and tracking techniques and orchestrates them in a smart way to obtain good results in diversified scenarios. It uses dynamically adjusted thresholds to characterize different regions of interest, and it also adopts techniques to efficiently track movements and to detect and correct false positives. Accuracy and reliability depend mainly on the overall recipe, i.e., on how the software system is designed and implemented, on how the different algorithmic phases communicate information and collaborate with each other, and on how concurrency is organized. The application is specifically designed to work with inexpensive hardware devices, such as off-the-shelf video cameras and small embedded computational units, potentially forming an intelligent urban grid. The major contribution of the paper is thus the presentation of a tool for real-time applications on embedded devices with finite computational (time and memory) resources. We present experimental results on several video sequences (both home-made and publicly available), showing the robustness and accuracy of the overall detection strategy. Comparisons with state-of-the-art strategies show that our application has similar tracking accuracy but much higher frame-per-second rates. PMID:28671582
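    The paper's full pipeline is not reproduced here, but the frame-differencing core that such systems typically build on, with a per-frame adaptive (mean + k * std) threshold standing in for the "dynamically adjusted thresholds" the abstract mentions, might be sketched as follows (function name and parameters are hypothetical):

```python
def motion_mask(prev, curr, nxt, k=2.0):
    """Three-frame differencing with dynamically adjusted thresholds.

    Each difference image gets its own threshold (mean + k * std), so
    sensitivity adapts to the noise level of the scene instead of being
    fixed globally. A pixel counts as moving only if it changed in both
    consecutive difference images, which suppresses the ghost an object
    leaves behind in plain two-frame differencing.
    """
    h, w = len(curr), len(curr[0])

    def diff(a, b):
        return [[abs(a[y][x] - b[y][x]) for x in range(w)] for y in range(h)]

    def adaptive_threshold(d):
        flat = [v for row in d for v in row]
        mu = sum(flat) / len(flat)
        std = (sum((v - mu) ** 2 for v in flat) / len(flat)) ** 0.5
        return mu + k * std

    d1, d2 = diff(curr, prev), diff(nxt, curr)
    t1, t2 = adaptive_threshold(d1), adaptive_threshold(d2)
    return [[d1[y][x] > t1 and d2[y][x] > t2 for x in range(w)]
            for y in range(h)]
```

    This is deliberately cheap per frame, in the spirit of the paper's "small computational efforts in each single frame"; the actual system layers tracking and false-positive correction on top of a stage like this.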

  16. System for Thermal Imaging of Hot Moving Objects

    NASA Technical Reports Server (NTRS)

    Weinstein, Leonard; Hundley, Jason

    2007-01-01

    The High Altitude/Re-Entry Vehicle Infrared Imaging (HARVII) system is a portable instrumentation system for tracking and thermal imaging of a possibly distant and moving object. The HARVII is designed specifically for measuring the changing temperature distribution on a space shuttle as it reenters the atmosphere. The HARVII system or other systems based on its design could also be used for such purposes as determining temperature distributions in fires, on volcanoes, and on surfaces of hot models in wind tunnels. In yet another potential application, the HARVII or a similar system would be used to infer atmospheric pollution levels from images of the Sun acquired at multiple wavelengths over regions of interest. The HARVII system includes the Ratio Intensity Thermography System (RITS) and a tracking subsystem that keeps the RITS aimed at the moving object of interest. The subsystem of primary interest here is the RITS, which acquires and digitizes images of the same scene at different wavelengths in rapid succession. Assuming that the time interval between successive measurements is short enough that temperatures do not change appreciably, the digitized image data at the different wavelengths are processed to extract temperatures according to the principle of ratio-intensity thermography: The temperature at a given location in a scene is inferred from the ratios between or among intensities of infrared radiation from that location at two or more wavelengths. This principle, based on the Planck radiation law for the intensity of electromagnetic radiation as a function of wavelength and temperature, is valid as long as the observed body is a gray or black body and there is minimal atmospheric absorption of radiation.
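    The ratio principle can be made concrete: for a gray body the (unknown) emissivity cancels in the ratio of spectral radiances at two wavelengths, leaving a function of temperature alone, which can be inverted numerically. The sketch below uses Planck's law and bisection; it illustrates the principle only and is not the HARVII processing code:

```python
import math

# Physical constants (SI)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength, temperature):
    """Black-body spectral radiance (Planck's law), W * sr^-1 * m^-3."""
    a = 2.0 * H * C ** 2 / wavelength ** 5
    return a / (math.exp(H * C / (wavelength * KB * temperature)) - 1.0)

def temperature_from_ratio(ratio, lam1, lam2, t_lo=300.0, t_hi=4000.0):
    """Invert the two-wavelength radiance ratio for temperature.

    With lam1 < lam2 the ratio I(lam1)/I(lam2) increases monotonically
    with temperature, so simple bisection suffices.
    """
    def f(t):
        return planck_radiance(lam1, t) / planck_radiance(lam2, t) - ratio

    for _ in range(80):
        mid = 0.5 * (t_lo + t_hi)
        if f(t_lo) * f(mid) <= 0:
            t_hi = mid
        else:
            t_lo = mid
    return 0.5 * (t_lo + t_hi)
```

    For example, synthesizing the radiance ratio of a 1500 K body at 2 um and 4 um and feeding it back through `temperature_from_ratio` recovers 1500 K, regardless of any common emissivity factor multiplying both radiances.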

  17. Tracking in a ground-to-satellite optical link: effects due to lead-ahead and aperture mismatch, including temporal tracking response.

    PubMed

    Basu, Santasri; Voelz, David

    2008-07-01

    Establishing a link between a ground station and a geosynchronous orbiting satellite can be aided greatly with the use of a beacon on the satellite. A tracker, or even an adaptive optics system, can use the beacon during communication or tracking activities to correct beam pointing for atmospheric turbulence and mount jitter effects. However, the pointing lead-ahead required to illuminate the moving object and an aperture mismatch between the tracking and the pointing apertures can limit the effectiveness of the correction, as the sensed tilt will not be the same as the tilt required for optimal transmission to the satellite. We have developed an analytical model that addresses the combined impact of these tracking issues in a ground-to-satellite optical link. We present these results for different tracker/pointer configurations. By setting the low-pass cutoff frequency of the tracking servo properly, the tracking errors can be minimized. The analysis considers geosynchronous Earth orbit satellites as well as low Earth orbit satellites.
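    The role of the tracking servo's low-pass cutoff can be illustrated with a single-pole filter: tilt components above the cutoff (fast jitter) are attenuated, while slower drifts are tracked. This is a minimal discrete-time sketch, not the authors' analytical model; all names and values are illustrative:

```python
import math

def lowpass(samples, cutoff_hz, sample_hz):
    """Single-pole IIR low-pass filter as a toy model of the tracking
    servo's temporal response: y[n] = y[n-1] + alpha * (x[n] - y[n-1]),
    with alpha set from the desired cutoff frequency."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_hz)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out
```

    A constant (slow) tilt input passes through essentially unchanged, while a rapidly alternating disturbance is strongly attenuated; choosing the cutoff trades jitter rejection against responsiveness, which is the tuning question the abstract raises.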

  18. Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature

    PubMed Central

    Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat

    2014-01-01

    It is a challenge to represent the target appearance model for moving object tracking in complex environments. This study presents a novel method in which the appearance model is described by double templates based on the timed motion history image with HSV color histogram feature (tMHI-HSV). The main components include offline and online template initialization, calculation of tMHI-HSV-based candidate patch feature histograms, double templates matching (DTM) for object location, and template updating. Firstly, we initialize the target object region and calculate its HSV color histogram feature as the offline template and the online template. Secondly, the tMHI-HSV is used to segment the motion region and calculate the candidate object patches' color histograms to represent their appearance models. Finally, we utilize the DTM method to track the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle scale variation and pose change of rigid and nonrigid objects, even under illumination change and occlusion. PMID:24592185
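    The appearance matching at the heart of such a method is a comparison between a template's color histogram and each candidate patch's histogram. As an illustrative sketch only (a hue-only histogram and a Bhattacharyya similarity; the actual tMHI-HSV feature is richer and the names here are invented):

```python
import colorsys
import math

def hsv_hue_histogram(pixels, bins=16):
    """Normalized histogram of hue values for (r, g, b) pixels in [0, 1]."""
    hist = [0.0] * bins
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r, g, b)
        hist[min(int(h * bins), bins - 1)] += 1.0
    total = sum(hist) or 1.0
    return [v / total for v in hist]

def bhattacharyya(p, q):
    """Similarity of two normalized histograms: 1.0 = identical, 0.0 = disjoint."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))
```

    A tracker would score every candidate patch against both the offline and online templates with a measure like this and pick the best match before updating the templates.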

  19. Data-to-Decisions S&T Priority Initiative

    DTIC Science & Technology

    2011-11-08

    Context Mapping − Track Performance Model  Multi-Source Tracking − Track Fusion − Track through Gaps − Move-Stop-Move  Performance Based ...Decisions S&T Priority Initiative Dr. Carey Schwartz PSC Lead Office of Naval Research NDIA Disruptive Technologies Conference November 8-9, 2011...PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) Office of Naval Research, 875 North Randolph Street, Arlington, VA 2217 8. PERFORMING ORGANIZATION REPORT

  20. Heterogeneous Vision Data Fusion for Independently Moving Cameras

    DTIC Science & Technology

    2010-03-01

    target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image...fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The...moving target detection and classification. 15. SUBJECT TERMS Image Fusion, Target Detection, Moving Cameras, IR Camera, EO Camera 16. SECURITY

  1. An analysis of the precision and reliability of the leap motion sensor and its suitability for static and dynamic tracking.

    PubMed

    Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka

    2014-02-21

    We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot be used as a professional tracking system.

  2. An Analysis of the Precision and Reliability of the Leap Motion Sensor and Its Suitability for Static and Dynamic Tracking

    PubMed Central

    Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka

    2014-01-01

    We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot be used as a professional tracking system. PMID:24566635

  3. Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.

    PubMed

    Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena

    2014-11-01

    A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system is endowed with hardware- and software-synchronous image acquisition with timestamp embedding in the captured images, brain-surface-oriented feature detection, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed by tracking a customized object in different positions, orientations, and linear and angular speeds. The system is able to detect an immobile object's position and orientation with maximum errors of 0.5 mm and 1.6° over the full depth of field, and to track an object moving at up to 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domains. The amplitude of motion of the center of mass of the cloud of features was below 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis, related to the sympathovagal balance (0.03-0.05 Hz), breathing (0.2 Hz), and blood pressure (1 Hz). The stereo vision system presented is a precise and robust means of measuring brain shift and pulsatility, with an accuracy superior to other reported systems.

  4. Additivity of Feature-Based and Symmetry-Based Grouping Effects in Multiple Object Tracking

    PubMed Central

    Wang, Chundi; Zhang, Xuemin; Li, Yongna; Lyu, Chuang

    2016-01-01

    Multiple object tracking (MOT) is an attentional process wherein people track several moving targets among several distractors. Symmetry, an important indicator of regularity, is a general spatial pattern observed in natural and artificial scenes. According to the "laws of perceptual organization" proposed by Gestalt psychologists, regularity is a principle of perceptual grouping, like similarity and closure. A great deal of research has reported that feature-based similarity grouping (e.g., grouping based on color, size, or shape) among targets in MOT tasks can improve tracking performance. However, no additive feature-based grouping effects have been reported where the tracked objects had two or more features. An "additive effect" refers to a greater grouping effect produced by grouping based on multiple cues instead of one. Can spatial symmetry produce a grouping effect similar to that of feature similarity in MOT tasks? Are the grouping effects based on symmetry and feature similarity additive? This study includes four experiments to address these questions. The results of Experiments 1 and 2 demonstrated automatic symmetry-based grouping effects. More importantly, an additive grouping effect of symmetry and feature similarity was observed in Experiments 3 and 4. Our findings indicate that symmetry can produce an enhanced grouping effect in MOT and facilitate the grouping effect based on color or shape similarity. The "where" and "what" pathways might have played an important role in the additive grouping effect. PMID:27199875

  5. Development of a Field-Deployable Psychomotor Vigilance Test to Monitor Helicopter Pilot Performance.

    PubMed

    McMahon, Terry W; Newman, David G

    2016-04-01

    Flying a helicopter is a complex psychomotor skill. Fatigue is a serious threat to operational safety, particularly for sustained helicopter operations involving high levels of cognitive information processing and sustained time on task. As part of ongoing research into this issue, the objective of this study was to develop a field-deployable helicopter-specific psychomotor vigilance test (PVT) for the purpose of daily performance monitoring of pilots. The PVT consists of a laptop computer, a hand-operated joystick, and a set of rudder pedals. Screen-based compensatory tracking task software includes a tracking ball (operated by the joystick) which moves randomly in all directions, and a second tracking ball which moves horizontally (operated by the rudder pedals). The 5-min test requires the pilot to keep both tracking balls centered. This helicopter-specific PVT's portability and integrated data acquisition and storage system enable daily field monitoring of the performance of individual helicopter pilots. The inclusion of a simultaneous foot-operated tracking task ensures divided attention for helicopter pilots, as the movement of both tracking balls requires simultaneous inputs. This PVT is quick, economical, easy to use, and specific to the operational flying task. It can be used for performance monitoring purposes, and as a general research tool for investigating the psychomotor demands of helicopter operations. While reliability and validity testing is warranted, data acquired from this test could help further our understanding of the effect of various factors (such as fatigue) on helicopter pilot performance, with the potential of contributing to helicopter operational safety.

  6. Self-Development Handbook

    DTIC Science & Technology

    2008-01-01

    Self-initiated learning where you define the objective, pace, and process. How to Use This Handbook The contents of this handbook will help you...Your Strengths & Weaknesses Learning to Learn Move Forward & Measure Progress Where Should I Go? The Self-Development Process For further...or for a different career track altogether. Maybe you lack skills or knowledge. Or, maybe there is something you've just always wanted to learn or

  7. Compact 3D Camera for Shake-the-Box Particle Tracking

    NASA Astrophysics Data System (ADS)

    Hesseling, Christina; Michaelis, Dirk; Schneiders, Jan

    2017-11-01

    Time-resolved 3D-particle tracking usually requires the time-consuming optical setup and calibration of 3 to 4 cameras. Here, a compact four-camera housing has been developed. The performance of the system using Shake-the-Box processing (Schanz et al. 2016) is characterized. It is shown that the stereo-base is large enough for sensible 3D velocity measurements. Results from successful experiments in water flows using LED illumination are presented. For large-scale wind tunnel measurements, an even more compact version of the system is mounted on a robotic arm. Once calibrated for a specific measurement volume, the necessity for recalibration is eliminated even when the system moves around. Co-axial illumination is provided through an optical fiber in the middle of the housing, illuminating the full measurement volume from one viewing direction. Helium-filled soap bubbles are used to ensure sufficient particle image intensity. This way, the measurement probe can be moved around complex 3D-objects. By automatic scanning and stitching of recorded particle tracks, the detailed time-averaged flow field of a full volume of cubic meters in size is recorded and processed. Results from an experiment at TU-Delft of the flow field around a cyclist are shown.

  8. Tracking without perceiving: a dissociation between eye movements and motion perception.

    PubMed

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.

  9. Tracking Without Perceiving: A Dissociation Between Eye Movements and Motion Perception

    PubMed Central

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-01-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept. PMID:21189353

  10. KSC-2009-5014

    NASA Image and Video Library

    2009-08-03

    CAPE CANAVERAL, Fla. – At the Astrotech payload processing facility in Titusville, Fla., a crane is attached to the SV1 spacecraft, part of the Space Tracking and Surveillance System – Demonstrators, or STSS Demo, Program. The SV1 will be lifted and moved to mate with the SV2 on another stand nearby. STSS-Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. The spacecraft is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jim Grossmann

  11. A long-term tropical mesoscale convective systems dataset based on a novel objective automatic tracking algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Xiaomeng; Hu, Chenqi; Huang, Xing; Chu, Yang; Tseng, Yu-heng; Zhang, Guang Jun; Lin, Yanluan

    2018-01-01

    Mesoscale convective systems (MCSs) are important components of tropical weather systems and the climate system. Long-term MCS data are of great significance for weather and climate research. Using long-term (1985-2008) global satellite infrared (IR) data, we developed a novel objective automatic tracking algorithm, which combines a Kalman filter (KF) with the conventional area-overlapping method, to generate a comprehensive MCS dataset. The new algorithm can effectively track small and fast-moving MCSs and thus obtain more realistic and complete tracking results than previous studies. A few examples are provided to illustrate the potential application of the dataset, with a focus on the diurnal variations of MCSs over land and ocean regions. We find that MCSs occurring over land tend to initiate in the afternoon with greater intensity, but oceanic MCSs are more likely to initiate in the early morning with weaker intensity. A double peak in the maximum spatial coverage is noted over the western Pacific, especially over the southwestern Pacific during the austral summer. Oceanic MCSs also persist for approximately 1 h longer than their continental counterparts.
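    The combination of Kalman-filter prediction with area overlap can be sketched in miniature. Below, a constant-velocity extrapolation stands in for the full Kalman predict step, and the function names, box format, and distance gate are illustrative assumptions rather than the paper's algorithm:

```python
def predict_centroid(prev, curr):
    """Constant-velocity extrapolation of the next centroid, playing the
    role the Kalman filter prediction plays in the tracking algorithm."""
    return (2 * curr[0] - prev[0], 2 * curr[1] - prev[1])

def overlap_area(a, b):
    """Intersection area of two axis-aligned boxes (x0, y0, x1, y1)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0.0

def match(track_box, prev_c, curr_c, candidates, max_dist=5.0):
    """Associate a track with one of the candidate boxes in the next frame.

    Prefer the conventional area-overlap test; for small, fast-moving
    systems with no overlap between frames, fall back to distance from
    the predicted centroid, which is what lets the hybrid scheme keep
    tracking systems the overlap method alone would lose.
    """
    best = max(candidates, key=lambda b: overlap_area(track_box, b))
    if overlap_area(track_box, best) > 0:
        return best
    px, py = predict_centroid(prev_c, curr_c)

    def dist(b):
        cx, cy = (b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0
        return ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5

    best = min(candidates, key=dist)
    return best if dist(best) <= max_dist else None
```

    The fallback branch is the point of the hybrid: a fast mover whose successive boxes never overlap is still associated, provided it lands near its predicted position.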

  12. Robot tracking system improvements and visual calibration of orbiter position for radiator inspection

    NASA Technical Reports Server (NTRS)

    Tonkay, Gregory

    1990-01-01

    The following separate topics are addressed: (1) improving a robotic tracking system; and (2) providing insights into orbiter position calibration for radiator inspection. The objective of the tracking system project was to provide the capability to track moving targets more accurately by adjusting parameters in the control system and implementing a predictive algorithm. A computer model was developed to emulate the tracking system. Using this model as a test bed, a self-tuning algorithm was developed to tune the system gains. The model yielded important findings concerning factors that affect the gains. The self-tuning algorithms will provide the concepts to write a program to automatically tune the gains in the real system. The section concerning orbiter position calibration provides a comparison to previous work that had been performed for plant growth. It provided the conceptualized routines required to visually determine the orbiter position and orientation. Furthermore, it identified the types of information which are required to flow between the robot controller and the vision system.

  13. Benefit from NASA

    NASA Image and Video Library

    1999-06-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical directions, corrects rotation and zoom effects, produces clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise, or "snow." VISAR could also have applications in medical and meteorological imaging. It could steady ultrasound images, which are infamous for their grainy, blurred quality, and it would be especially useful in tornado studies, tracking whirling objects and helping to determine a tornado's wind speed. This image shows the two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  14. Application of Hybrid Along-Track Interferometry/Displaced Phase Center Antenna Method for Moving Human Target Detection in Forest Environments

    DTIC Science & Technology

    2016-10-01

    ARL-TR-7846 ● OCT 2016 US Army Research Laboratory Application of Hybrid Along-Track Interferometry/Displaced Phase Center Antenna Method for Moving Human Target Detection...TYPE Technical Report 3. DATES COVERED (From - To) 2015–2016

  15. KSC-2009-3663

    NASA Image and Video Library

    2009-05-01

    CAPE CANAVERAL, Fla. – At NASA Kennedy Space Center's Shuttle Landing Facility, the shipping container with the STSS Demonstrator SV-2 spacecraft has been moved out of the U.S. Air Force C-17 aircraft. The spacecraft will be transferred to the Astrotech payload processing facility in Titusville, Fla. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Jack Pfaller (Approved for Public Release 09-MDA-4616 [27 May 09])

  16. KSC-2009-3659

    NASA Image and Video Library

    2009-05-01

    CAPE CANAVERAL, Fla. – At NASA Kennedy Space Center's Shuttle Landing Facility, workers move STSS Demonstrator SV-2 spacecraft equipment out of the cargo hold of the U.S. Air Force C-17 aircraft. The spacecraft will be transferred to the Astrotech payload processing facility in Titusville, Fla. The spacecraft is a midcourse tracking technology demonstrator, part of an evolving ballistic missile defense system. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency in late summer. Photo credit: NASA/Jack Pfaller (Approved for Public Release 09-MDA-4616 [27 May 09])

  17. Tracking of "Moving" Fused Auditory Images by Children.

    ERIC Educational Resources Information Center

    Cranford, Jerry L.; And Others

    1993-01-01

    This study evaluated the ability of 30 normally developing children (ages 6-12) to report the perceived location of a stationary fused auditory image (FAI) or track a "moving" FAI. Although subjects performed at normal adult levels with the stationary sound measure, they exhibited a significant age-related trend with the moving sound…

  18. Two applications of time reversal mirrors: seismic radio and seismic radar.

    PubMed

    Hanafy, Sherif M; Schuster, Gerard T

    2011-10-01

    Two seismic applications of time reversal mirrors (TRMs) are introduced and tested with field experiments. The first is sending, receiving, and decoding coded messages, similar to a radio except that seismic waves are used. The second, similar to radar surveillance, is detecting and tracking moving objects in a remote area, including determining an object's speed of movement. Both applications require the prior recording of calibration Green's functions in the area of interest. These reference Green's functions are used as a codebook to decrypt the coded message in the first application and as a moving sensor in the second. Field tests show that seismic radar can detect the moving coordinates (x(t), y(t), z(t)) of a person running through a calibration site. This information also allows for a calculation of his velocity as a function of location. Results with the seismic radio are successful in seismically detecting and decoding coded pulses produced by a hammer. Both seismic radio and radar are highly robust to signals in high-noise environments due to the super-stacking property of TRMs. © 2011 Acoustical Society of America
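    The "codebook" use of calibration Green's functions amounts to matched filtering: correlate the recording against the Green's function recorded at each calibration point and pick the best match. A toy sketch of that lookup (the function names and the dictionary layout are illustrative, not from the paper):

```python
def xcorr_peak(signal, template):
    """Maximum cross-correlation value of `template` slid over `signal`."""
    n, m = len(signal), len(template)
    best = 0.0
    for lag in range(n - m + 1):
        c = sum(signal[lag + i] * template[i] for i in range(m))
        best = max(best, c)
    return best

def locate(recording, greens_functions):
    """Pick the calibration point whose Green's function best matches
    the recording -- the TRM 'codebook' lookup. `greens_functions` maps
    a calibration-point label to its recorded waveform."""
    scores = {point: xcorr_peak(recording, g)
              for point, g in greens_functions.items()}
    return max(scores, key=scores.get)
```

    Repeating the lookup over successive time windows yields the moving coordinates over time, from which speed follows; the correlation stacking is also what gives the method its robustness in high-noise environments.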

  19. Comparison of different detection methods for persistent multiple hypothesis tracking in wide area motion imagery

    NASA Astrophysics Data System (ADS)

    Hartung, Christine; Spraul, Raphael; Schuchert, Tobias

    2017-10-01

    Wide area motion imagery (WAMI) acquired by an airborne multicamera sensor enables continuous monitoring of large urban areas. Each image can cover regions of several square kilometers and contain thousands of vehicles. Reliable vehicle tracking in this imagery is an important prerequisite for surveillance tasks, but remains challenging due to the low frame rate and small object size. Most WAMI tracking approaches rely on moving-object detections generated by frame differencing or background subtraction; these detection methods fail when objects slow down or stop. Recent approaches for persistent tracking compensate for missing motion detections by combining a detection-based tracker with a second tracker based on appearance or local context. To avoid the additional complexity introduced by combining two trackers, we employ an alternative single-tracker framework that is based on multiple hypothesis tracking and recovers missing motion detections with a classifier-based detector. We integrate an appearance-based similarity measure, merge handling, vehicle-collision tests, and clutter handling to adapt the approach to the specific context of WAMI tracking. We apply the tracking framework on a region of interest of the publicly available WPAFB 2009 dataset for quantitative evaluation; a comparison to other persistent WAMI trackers demonstrates state-of-the-art performance of the proposed approach. Furthermore, we analyze in detail the impact of different object detection methods and detector settings on the quality of the output tracking results. For this purpose, we choose four motion-based detection methods that vary in detection performance and computation time to generate the input detections. As detector parameters can be adjusted to achieve different precision and recall performance, we combine each detection method with detector settings that yield (1) high precision and low recall, (2) high recall and low precision, and (3) the best f-score. Comparing the tracking performance achieved with all generated sets of input detections allows us to quantify the sensitivity of the tracker to different types of detector errors and to derive recommendations for detector and parameter choice.
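
    The comparison of detector operating points described above can be illustrated with a short sketch. The counts below are hypothetical, invented only to show how precision, recall, and f-score trade off when choosing a detector setting:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 score from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical (TP, FP, FN) counts for three detector settings
settings = {
    "high_precision": (80, 5, 60),
    "high_recall": (130, 90, 10),
    "balanced": (110, 30, 30),
}

scores = {name: precision_recall_f1(*counts) for name, counts in settings.items()}
best = max(scores, key=lambda name: scores[name][2])  # setting with best f-score
```

    A tracker would then be run once per operating point, revealing whether it degrades more under the extra false positives of the high-recall setting or the missed detections of the high-precision one.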

  20. Detecting method of subjects' 3D positions and experimental advanced camera control system

    NASA Astrophysics Data System (ADS)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality, or of tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide such pictures. To develop an intelligent robot camera with these abilities, we need to understand clearly how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and gaze movements in studios and during sports broadcasts. Here, we have developed a method of detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates; (2) the system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results are also reported.
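
    The two-camera position detection in feature (1) amounts to stereo triangulation. The following sketch, with hypothetical camera matrices (identity intrinsics, a 1 m baseline), recovers a subject's 3D coordinates from its image positions in the two sensor cameras via linear (DLT) triangulation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point from two pixel observations."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null-space vector of A
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenise

# Two hypothetical sensor cameras: identity intrinsics, 1 m baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])            # subject position (metres)
x1 = X_true[:2] / X_true[2]                    # projection in camera 1
x2 = (X_true - [1.0, 0.0, 0.0])[:2] / X_true[2]  # projection in camera 2

X_est = triangulate(P1, P2, x1, x2)
```

    In the real system the colour detections supply x1 and x2, and the camera matrices come from calibration rather than being assumed.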

  1. CPV for the rooftop market: novel approaches to tracking integration in photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Apostoleris, Harry; Stefancich, Marco; Alexander-Katz, Alfredo; Chiesa, Matteo

    2016-03-01

    Concentrated photovoltaics (CPV) has long been recognized as an effective approach to enabling the use of high-cost, high-efficiency solar cells for enhanced solar energy conversion, but is excluded from the domestic rooftop market by the requirement that solar concentrators track the sun. This market may be opened up by integrating the tracking mechanism into the module itself. Tracking integration may take the form of a miniaturization of a conventional tracking apparatus, or of optical tracking, in which tracking is achieved through variation of optical properties such as refractive index or transparency rather than mechanical movement of the receiver. We have demonstrated a simple system using a heat-responsive transparency-switching material to create a moving aperture that tracks the position of a moving light spot. We use this behavior to create a concentrating light trap whose moving aperture reactively tracks the sun. Taking the other approach, we have fabricated 3D-printed parabolic mini-concentrators which can track the sun using small motors in a low-profile geometry. We characterize the performance of the concentrators and consider the impact of tracking integration on the broader PV market.

  2. Online tracking of outdoor lighting variations for augmented reality with moving cameras.

    PubMed

    Liu, Yanli; Granier, Xavier

    2012-04-01

    In augmented reality, one of the key tasks in achieving convincing visual consistency between virtual objects and video scenes is maintaining coherent illumination along the whole sequence. As outdoor illumination is largely dependent on the weather, the lighting conditions may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras. Our key idea is to estimate the relative intensities of sunlight and skylight via a sparse set of planar feature points extracted from each frame. To address the inevitable feature misalignments, a set of constraints is introduced to select the most reliable ones. Exploiting the spatial and temporal coherence of illumination, the relative intensities of sunlight and skylight are finally estimated via an optimization process. We validate our technique on a set of real-life videos and show that the results with our estimations are visually coherent along the video sequences.
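
    The core estimation step, recovering relative sunlight and skylight intensities from per-feature brightness, can be sketched as a least-squares fit. The data below are synthetic, and the model (brightness = sun intensity times the cosine of the sun angle, plus a skylight ambient term) is a deliberate simplification of the paper's optimization:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic per-feature data: cosine of the sun angle at each planar feature
cos_sun = rng.uniform(0.0, 1.0, 50)
sun_true, sky_true = 0.8, 0.3
brightness = sun_true * cos_sun + sky_true + rng.normal(0.0, 0.01, 50)

# Linear model B = sun * cos_theta + sky, solved in least squares
A = np.column_stack([cos_sun, np.ones_like(cos_sun)])
(sun_est, sky_est), *_ = np.linalg.lstsq(A, brightness, rcond=None)
```

    The real method additionally filters unreliable features and enforces temporal smoothness across frames; this sketch shows only the per-frame fit.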

  3. An intelligent, free-flying robot

    NASA Technical Reports Server (NTRS)

    Reuter, G. J.; Hess, C. W.; Rhoades, D. E.; Mcfadin, L. W.; Healey, K. J.; Erickson, J. D.

    1988-01-01

    The ground-based demonstration of EVA Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out, (2) searches for and acquires the target, (3) plans and executes a rendezvous while continuously tracking the target, (4) avoids stationary and moving obstacles, (5) reaches for and grapples the target, (6) returns to transfer the object, and (7) returns to base.

  4. Evaluation of Moving Object Detection Based on Various Input Noise Using Fixed Camera

    NASA Astrophysics Data System (ADS)

    Kiaee, N.; Hashemizadeh, E.; Zarrinpanjeh, N.

    2017-09-01

    Detecting and tracking objects in video has been a research area of interest in the fields of image processing and computer vision. This paper evaluates the performance of a novel object detection algorithm on video sequences. The proposed framework compares the percentages of correct and incorrect detections of the algorithm, clarifying the advantages of the method in use. The method was evaluated on data collected in the field of urban transport, including cars and pedestrians viewed from a fixed camera. The results show that the accuracy of the algorithm decreases as image resolution is reduced.

  5. An intelligent, free-flying robot

    NASA Technical Reports Server (NTRS)

    Reuter, G. J.; Hess, C. W.; Rhoades, D. E.; Mcfadin, L. W.; Healey, K. J.; Erickson, J. D.; Phinney, Dale E.

    1989-01-01

    The ground-based demonstration of the extensive extravehicular activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out; (2) searches for and acquires the target; (3) plans and executes a rendezvous while continuously tracking the target; (4) avoids stationary and moving obstacles; (5) reaches for and grapples the target; (6) returns to transfer the object; and (7) returns to base.

  6. Array-based infra-red detection: an enabling technology for people counting, sensing, tracking, and intelligent detection

    NASA Astrophysics Data System (ADS)

    Stogdale, Nick; Hollock, Steve; Johnson, Neil; Sumpter, Neil

    2003-09-01

    A 16x16 element un-cooled pyroelectric detector array has been developed which, when allied with advanced tracking and detection algorithms, has created a universal detector with multiple applications. Low-cost manufacturing techniques are used to fabricate a hybrid detector, intended for economic use in commercial markets. The detector has found extensive application in accurate people counting, detection, tracking, secure area protection, directional sensing and area violation; topics which are all pertinent to the provision of Homeland Security. The detection and tracking algorithms have, when allied with interpolation techniques, allowed a performance much higher than might be expected from a 16x16 array. This paper reviews the technology, with particular attention to the array structure, algorithms and interpolation techniques and outlines its application in a number of challenging market areas. Viewed from above, moving people are seen as 'hot blobs' moving through the field of view of the detector; background clutter or stationary objects are not seen and the detector works irrespective of lighting or environmental conditions. Advanced algorithms detect the people and extract size, shape, direction and velocity vectors allowing the number of people to be detected and their trajectories of motion to be tracked. Provision of virtual lines in the scene allows bi-directional counting of people flowing in and out of an entrance or area. Definition of a virtual closed area in the scene allows counting of the presence of stationary people within a defined area. Definition of 'counting lines' allows the counting of people, the ability to augment access control devices by confirming a 'one swipe one entry' judgement and analysis of the flow and destination of moving people. For example, passing the 'wrong way' up a denied passageway can be detected. 
    Counting stationary people within a 'defined area' allows the behaviour and size of groups of stationary people to be analysed and counted; an alarm condition can also be generated when people stray into such areas.
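
    The virtual counting line described above can be sketched in a few lines of code. The trajectories below are hypothetical y-coordinates of tracked 'hot blobs' over successive frames:

```python
LINE_Y = 5.0  # virtual counting line across the field of view

def count_crossings(tracks, line_y=LINE_Y):
    """Count bi-directional crossings of a virtual line by blob trajectories.

    tracks: dict of track_id -> list of y positions over successive frames.
    Returns (entries, exits): crossings in the +y and -y directions.
    """
    entries = exits = 0
    for ys in tracks.values():
        for prev, cur in zip(ys, ys[1:]):
            if prev < line_y <= cur:
                entries += 1
            elif prev >= line_y > cur:
                exits += 1
    return entries, exits

# Hypothetical blob trajectories from the tracking algorithm
tracks = {
    1: [2.0, 4.0, 6.0, 8.0],   # walks in (+y direction)
    2: [9.0, 6.0, 4.5, 3.0],   # walks out (-y direction)
    3: [4.0, 4.8, 4.2, 3.5],   # approaches but never crosses
}
```

    A defined closed area for presence counting works the same way, with a point-in-polygon test replacing the line-crossing test.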

  7. High-Accuracy Measurement of Small Movement of an Object behind Cloth Using Airborne Ultrasound

    NASA Astrophysics Data System (ADS)

    Hoshiba, Kotaro; Hirata, Shinnosuke; Hachiya, Hiroyuki

    2013-07-01

    The acoustic measurement of vital information, such as breathing and heartbeat, in a standing subject wearing clothes is a difficult problem. In this paper, we present basic experimental results on measuring the small movement of an object behind cloth. We measured the acoustic characteristics of various types of cloth to obtain the transmission loss through cloth. To observe the relationship between measurement error and target speed under a low signal-to-noise ratio (SNR), we measured the movement of an object behind cloth, with the target placed apart from the cloth so that the target reflection could be separated from the cloth reflection. We found that a small movement of less than 6 mm/s could be observed using an M-sequence, a moving target indicator (MTI) filter, and phase-difference tracking, even when the SNR was less than 0 dB. We also present the results of a theoretical error analysis of the MTI filter and phase tracking for high-accuracy measurement, clarifying the characteristics of the systematic error.
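
    The MTI filter referred to above is, in its simplest form, a two-pulse canceller: subtracting successive echoes removes stationary reflections (such as the cloth) while preserving anything that moves. A minimal sketch with synthetic echo profiles:

```python
import numpy as np

def mti_filter(echoes):
    """Two-pulse canceller: subtract successive echoes to suppress
    stationary reflections and keep moving targets."""
    return np.diff(echoes, axis=0)

# Synthetic echoes: 3 pulses, 200 range samples each
n_samples = 200
clutter_pos, target_pos = 50, 120
echoes = np.zeros((3, n_samples))
for i in range(3):
    echoes[i, clutter_pos] = 1.0           # stationary cloth reflection
    echoes[i, target_pos + 2 * i] = 0.5    # target moves 2 samples per pulse

filtered = mti_filter(echoes)  # cloth reflection cancels, target remains
```

    The real measurement then tracks the phase difference of the surviving target echo to resolve sub-millimetre motion.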

  8. Moving Object Detection Using a Parallax Shift Vector Algorithm

    NASA Astrophysics Data System (ADS)

    Gural, Peter S.; Otto, Paul R.; Tedesco, Edward F.

    2018-07-01

    There are various algorithms currently in use to detect asteroids from ground-based observatories, but they are generally restricted to linear or mildly curved movement of the target object across the field of view. Space-based sensors in high-inclination, low Earth orbits can induce significant parallax in a collected sequence of images, especially for objects at the typical distances of asteroids in the inner solar system. This results in a highly nonlinear motion pattern of the asteroid across the sensor, which requires a more sophisticated search pattern for detection processing. Both the classical pattern matching used in ground-based asteroid searches and the more sensitive matched filtering and synthetic tracking techniques can be adapted to account for highly complex parallax motion. A new shift vector generation methodology is discussed along with its impacts on commonly used detection algorithms, processing load, and responsiveness of asteroid track reporting. The matched filter, template generator, and pattern matcher source code for the software described herein are available via GitHub.
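
    Synthetic tracking along a hypothesised shift vector can be sketched as shift-and-stack co-addition: frames are shifted to undo the candidate motion and summed, so a faint mover adds coherently only when the shifts match its track. The frames and motion below are synthetic; for parallax motion the shift vector would be nonlinear rather than the linear one used here:

```python
import numpy as np

def shift_and_stack(frames, shifts):
    """Co-add frames after undoing a hypothesised per-frame shift;
    a moving object adds coherently only if the shifts match its motion."""
    stack = np.zeros_like(frames[0])
    for frame, (dy, dx) in zip(frames, shifts):
        stack += np.roll(frame, (-dy, -dx), axis=(0, 1))
    return stack

rng = np.random.default_rng(7)
h = w = 64
start, motion = (20, 15), (1, 2)  # start pixel and per-frame (dy, dx)

frames = []
for k in range(5):
    frame = rng.normal(0.0, 0.2, (h, w))                              # noise
    frame[start[0] + k * motion[0], start[1] + k * motion[1]] += 1.0  # faint mover
    frames.append(frame)

# Shift vector matching the true motion (generated per candidate track)
shifts = [(k * motion[0], k * motion[1]) for k in range(5)]
stack = shift_and_stack(frames, shifts)
peak = np.unravel_index(np.argmax(stack), stack.shape)  # recovered position
```

    A full search evaluates many candidate shift vectors, which is where the processing-load considerations discussed in the abstract arise.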

  9. Low-cost solar tracking system

    NASA Technical Reports Server (NTRS)

    Miller, C. G.; Stephens, J. B.

    1975-01-01

    Smaller heat-collector is moved to stay in focus with the sun, instead of moving reflector. Tracking can be controlled by storing data of predicted solar positions or by applying conventional sun-sensing devices to follow solar movement.

  10. Real and virtual explorations of the environment and interactive tracking of movable objects for the blind on the basis of tactile-acoustical maps and 3D environment models.

    PubMed

    Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas

    2008-01-01

    Purpose: This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments, and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning system (GPS) sensor; the user then establishes an exact position at a specific landmark, such as a door. This location initialises indoor navigation, based on an inertial sensor, a step-recognition algorithm and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on the touch screen of a small portable tablet PC. The names of rooms, building features and hazards, modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.

  11. Anticipatory smooth eye movements with random-dot kinematograms

    PubMed Central

    Santos, Elio M.; Gnang, Edinah K.; Kowler, Eileen

    2012-01-01

    Anticipatory smooth eye movements were studied in response to expectations of motion of random-dot kinematograms (RDKs). Dot lifetime was limited (52–208 ms) to prevent selection and tracking of the motion of local elements and to disrupt the perception of an object moving across space. Anticipatory smooth eye movements were found in response to cues signaling the future direction of global RDK motion, either prior to the onset of the RDK or prior to a change in its direction of motion. Cues signaling the lifetime of the dots were not effective. These results show that anticipatory smooth eye movements can be produced by expectations of global motion and do not require a sustained representation of an object or set of objects moving across space. At the same time, certain properties of global motion (direction) were more sensitive to cues than others (dot lifetime), suggesting that the rules by which prediction operates to influence pursuit may go beyond simple associations between cues and the upcoming motion of targets. PMID:23027686

  12. General features of the retinal connectome determine the computation of motion anticipation

    PubMed Central

    Johnston, Jamie; Lagnado, Leon

    2015-01-01

    Motion anticipation allows the visual system to compensate for the slow speed of phototransduction so that a moving object can be accurately located. This correction is already present in the signal that ganglion cells send from the retina but the biophysical mechanisms underlying this computation are not known. Here we demonstrate that motion anticipation is computed autonomously within the dendritic tree of each ganglion cell and relies on feedforward inhibition. The passive and non-linear interaction of excitatory and inhibitory synapses enables the somatic voltage to encode the actual position of a moving object instead of its delayed representation. General rather than specific features of the retinal connectome govern this computation: an excess of inhibitory inputs over excitatory, with both being randomly distributed, allows tracking of all directions of motion, while the average distance between inputs determines the object velocities that can be compensated for. DOI: http://dx.doi.org/10.7554/eLife.06250.001 PMID:25786068

  13. Off-Range Beaked Whale Studies (ORBS): Baseline Data and Tagging Development for Northern Bottlenose Whales (Hyperoodon ampulatus) off Jan Mayen, Norway

    DTIC Science & Technology

    2015-09-30

    02.003’N, 07°01.981’W) To be recovered in 2016 Ranging code #08D1; releasing code #0803 In collaboration with Rune Hansen of the University of...the animal with PTT 134760 was tracked moving all the way south to the Azores Archipelago. Figure courtesy of Rune Hansen. Objective 4. conduct

  14. Near-Earth Asteroid Tracking (NEAT): First Year Results

    NASA Astrophysics Data System (ADS)

    Helin, E. F.; Rabinowitz, D. L.; Pravdo, S. H.; Lawrence, K. J.

    1997-07-01

    The successful detection of Near-Earth Asteroids (NEAs) has been demonstrated by the Near-Earth Asteroid Tracking (NEAT) program at the Jet Propulsion Laboratory during its first year of operation. The NEAT CCD camera system is installed on the U.S. Air Force 1-m GEODSS telescope in Maui. Using state-of-the-art software and hardware, the system executes a nightly observing script transmitted from JPL, moves the telescope for successive exposures of the selected fields, detects moving objects as faint as V=20.5 in 40 s exposures, determines their astrometric positions, and downloads the data for review at JPL in the morning. The NEAT system is detecting NEAs larger than 200 m, comets, and other unique objects at a rate competitive with currently operating systems, and bright enough for important physical studies on moderate-sized telescopes. NEAT has detected over 10,000 asteroids over a wide range of magnitudes, demonstrating the excellent capability of the system. Fifty-five percent of the detections are new objects, and over 900 of them have been followed on a second night to receive designations from the Minor Planet Center. Fourteen NEAs (9 Amors, 4 Apollos, and 1 Aten) have been discovered since March 1996, along with 2 long-period comets and 1996 PW, an asteroidal object with the orbit of a long-period comet (eccentricity 0.992, orbital period 5900 years). Program discoveries are reviewed along with analysis of results pertaining to the discovery efficiency, distribution on the sky, and range of orbits and magnitudes. Related abstract: Lawrence, K., et al., 1997 DPS

  15. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. Combining the two in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to satisfy both requirements together by implementing an accurate and time-efficient tracking system. We introduce an eye-to-hand visual system that automatically tracks a moving target. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was selected by a new colour-selection process that combines a colour segmentation method (Delta E) with the CHT algorithm to find the proper colour for the tracked target, which was attached to the end-effector of a six-degree-of-freedom (DOF) robot performing a pick-and-place task. Two cooperating eye-to-hand cameras with image-averaging filters are used to obtain clear and steady images. The paper also examines a new technique, named Controllable Region of interest based on Circular Hough Transform (CRCHT), for generating and controlling the observation search window in order to increase the computational speed of the tracking system. Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during object tracking. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results show that the proposed approach tracks the moving robot with an overall tracking error of 0.25 mm, and that the CRCHT technique saves up to 60% of the overall time required for image processing. PMID:28067860
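
    The Circular Hough Transform at the heart of this tracker votes, for each edge pixel, for every centre that could produce a circle of a given radius through that pixel. A numpy-only sketch on a synthetic edge image (the radius is assumed known, as it would be for a spherical target of fixed size):

```python
import numpy as np

def circular_hough(edges, radius):
    """Vote for circle centres of a known radius given a binary edge image."""
    h, w = edges.shape
    acc = np.zeros((h, w))
    thetas = np.linspace(0.0, 2 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # Each edge pixel votes along a circle of candidate centres
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)

# Synthetic edge image: a circle of radius 20 centred at (row 40, col 50)
edges = np.zeros((100, 100), dtype=bool)
t = np.linspace(0.0, 2 * np.pi, 200)
edges[np.round(40 + 20 * np.sin(t)).astype(int),
      np.round(50 + 20 * np.cos(t)).astype(int)] = True

center = circular_hough(edges, radius=20)  # accumulator peak near (40, 50)
```

    The CRCHT speed-up then restricts this voting to a small search window predicted from the previous frame instead of the whole image.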

  16. Design of a Holonic Control Architecture for Distributed Sensor Management

    DTIC Science & Technology

    2009-09-01

    Tracking tasks require only intermittent access to the sensors to maintain a given track quality. The higher the specified quality, the more often...resolution of the sensor (i.e., sensor mode), which can be adjusted to compensate for fast moving targets tracked over long ranges, or slower moving...but provides higher data update rates that are beneficial when tracking fast agile targets (i.e., a fighter). Table A.2 illustrates the dependence of

  17. A low cost fMRI-compatible tracking system using the Nintendo Wii remote.

    PubMed

    Modroño, Cristián; Rodríguez-Hernández, Antonio F; Marcano, Francisco; Navarrete, Gorka; Burunat, Enrique; Ferrer, Marta; Monserrat, Raquel; González-Mora, José L

    2011-11-15

    It is sometimes necessary during functional magnetic resonance imaging (fMRI) experiments to capture different movements made by the subjects, e.g. to enable them to control an item or to analyze its kinematics. The aim of this work is to present an inexpensive hand tracking system suitable for use in a high-field MRI environment. It works by introducing only one light-emitting diode (LED) in the magnet room and receiving its signal with a Nintendo Wii remote (the primary controller for the Nintendo Wii console) placed outside in the control room. Thus, it is possible to make high spatial- and temporal-resolution recordings of a moving point that, in this case, is held by the hand. We tested it using a ball-and-racket virtual game inside a 3 Tesla MRI scanner to demonstrate the usefulness of the system. The results show the involvement of a number of areas (mainly occipital and frontal, but also parietal and temporal) when subjects are trying to stop an object approaching from a first-person perspective, matching previous studies performed with related visuomotor tasks. The system presented here is easy to implement, easy to operate, and does not produce important head movements or artifacts in the acquired images. Given its low cost and ready availability, the method described here is ideal for use in basic and clinical fMRI research to track one or more moving points that can correspond to limbs, fingers or any other object whose position needs to be known. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Measurement of vertical track deflection from a moving rail car.

    DOT National Transportation Integrated Search

    2013-02-01

    The University of Nebraska has been conducting research sponsored by the Federal Railroad Administration's Office of Research and Development to develop a system that measures vertical track deflection/modulus from a moving rail car. Previous work ...

  19. 4D Optimization of Scanned Ion Beam Tracking Therapy for Moving Tumors

    PubMed Central

    Eley, John Gordon; Newhauser, Wayne David; Lüchtenborg, Robert; Graeff, Christian; Bert, Christoph

    2014-01-01

    Motion mitigation strategies are needed to fully realize the theoretical advantages of scanned ion beam therapy for patients with moving tumors. The purpose of this study was to determine whether a new four-dimensional (4D) optimization approach for scanned-ion-beam tracking could reduce dose to avoidance volumes near a moving target while maintaining target dose coverage, compared to an existing 3D-optimized beam tracking approach. We tested these approaches computationally using a simple 4D geometrical phantom and a complex anatomic phantom, that is, a 4D computed tomogram of the thorax of a lung cancer patient. We also validated our findings using measurements of carbon-ion beams with a motorized film phantom. Relative to 3D-optimized beam tracking, 4D-optimized beam tracking reduced the maximum predicted dose to avoidance volumes by 53% in the simple phantom and by 13% in the thorax phantom. 4D-optimized beam tracking provided similar target dose homogeneity in the simple phantom (standard deviation of target dose was 0.4% versus 0.3%) and dramatically superior homogeneity in the thorax phantom (D5-D95 was 1.9% versus 38.7%). Measurements demonstrated that delivery of 4D-optimized beam tracking was technically feasible and confirmed a 42% decrease in maximum film exposure in the avoidance region compared with 3D-optimized beam tracking. In conclusion, we found that 4D-optimized beam tracking can reduce the maximum dose to avoidance volumes near a moving target while maintaining target dose coverage, compared with 3D-optimized beam tracking. PMID:24889215

  20. 4D optimization of scanned ion beam tracking therapy for moving tumors

    NASA Astrophysics Data System (ADS)

    Eley, John Gordon; Newhauser, Wayne David; Lüchtenborg, Robert; Graeff, Christian; Bert, Christoph

    2014-07-01

    Motion mitigation strategies are needed to fully realize the theoretical advantages of scanned ion beam therapy for patients with moving tumors. The purpose of this study was to determine whether a new four-dimensional (4D) optimization approach for scanned-ion-beam tracking could reduce dose to avoidance volumes near a moving target while maintaining target dose coverage, compared to an existing 3D-optimized beam tracking approach. We tested these approaches computationally using a simple 4D geometrical phantom and a complex anatomic phantom, that is, a 4D computed tomogram of the thorax of a lung cancer patient. We also validated our findings using measurements of carbon-ion beams with a motorized film phantom. Relative to 3D-optimized beam tracking, 4D-optimized beam tracking reduced the maximum predicted dose to avoidance volumes by 53% in the simple phantom and by 13% in the thorax phantom. 4D-optimized beam tracking provided similar target dose homogeneity in the simple phantom (standard deviation of target dose was 0.4% versus 0.3%) and dramatically superior homogeneity in the thorax phantom (D5-D95 was 1.9% versus 38.7%). Measurements demonstrated that delivery of 4D-optimized beam tracking was technically feasible and confirmed a 42% decrease in maximum film exposure in the avoidance region compared with 3D-optimized beam tracking. In conclusion, we found that 4D-optimized beam tracking can reduce the maximum dose to avoidance volumes near a moving target while maintaining target dose coverage, compared with 3D-optimized beam tracking.
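
    The D5-D95 homogeneity metric quoted above can be computed directly from the percentiles of a dose distribution. The dose arrays below are synthetic, standing in for sampled target-voxel doses from a homogeneous delivery and a motion-degraded one:

```python
import numpy as np

def d5_minus_d95(doses, prescription):
    """Target homogeneity metric D5 - D95 as a percentage of prescription.
    D5 (D95) is the dose received by at least 5% (95%) of the target voxels."""
    d5 = np.percentile(doses, 95)   # only 5% of voxels exceed this dose
    d95 = np.percentile(doses, 5)   # 95% of voxels exceed this dose
    return 100.0 * (d5 - d95) / prescription

rng = np.random.default_rng(0)
prescription = 2.0  # Gy, illustrative value
homogeneous = rng.normal(prescription, 0.01, 10000)  # tight dose distribution
degraded = rng.normal(prescription, 0.30, 10000)     # motion-blurred distribution
```

    A small D5-D95 (such as the 1.9% reported for 4D-optimized tracking) indicates a nearly uniform target dose; a large value (such as 38.7%) indicates severe hot and cold spots.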

  1. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion horizontally and vertically as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise such as 'snow'. VISAR could also have applications in medical and meteorological imaging. It could steady ultrasound images, which are infamous for their grainy, blurred quality, and would be especially useful for studying tornadoes, tracking whirling objects and helping to determine a tornado's wind speed. This image shows the two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  2. Infrared dim moving target tracking via sparsity-based discriminative classifier and convolutional network

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong

    2017-11-01

    Infrared dim and small target tracking is a challenging task. The main difficulty is accounting for appearance changes of an object submerged in a cluttered background. An efficient appearance model that exploits both a global template and a local representation over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse-representation-based algorithm is adopted to calculate a confidence value that assigns more weight to target templates than to negative background templates. In the CNGM model, simple cell feature maps are obtained by computing the convolution between target templates and fixed filters, which are extracted from the target region in the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, thereby encoding its local structural information. All the maps then form a representation preserving the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model, and the same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the migration problem. Finally, collaborative confidence values of particles are utilized to generate the particles' importance weights. Experiments on various infrared sequences have validated the tracking capability of the presented algorithm. Experimental results show that the algorithm runs in real time and provides higher accuracy than state-of-the-art algorithms.
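
    A 'simple cell' feature map in the sense used above is just the correlation of a fixed filter, extracted from the target region in the first frame, with an image patch. A minimal numpy sketch on a hypothetical template (the template layout and filter size are invented for illustration):

```python
import numpy as np

def feature_map(patch, filt):
    """Valid cross-correlation of a fixed filter with an image patch,
    yielding a 'simple cell' feature map encoding local structure."""
    ph, pw = patch.shape
    fh, fw = filt.shape
    out = np.zeros((ph - fh + 1, pw - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(patch[i:i + fh, j:j + fw] * filt)
    return out

# Hypothetical 12x12 target template with a bright 3x3 structure at (4, 6)
template = np.zeros((12, 12))
template[4:7, 6:9] = 1.0
filt = template[4:7, 6:9].copy()   # fixed filter taken at the first frame

fmap = feature_map(template, filt)
peak = np.unravel_index(np.argmax(fmap), fmap.shape)  # strongest local response
```

    Stacking one such map per filter preserves the geometric layout of the candidate, which is what lets the generative model compare candidates structurally rather than pixel-by-pixel.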

  3. SU-G-BRA-14: Dose in a Rigidly Moving Phantom with Jaw and MLC Compensation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, E; Lucas, D

    Purpose: To validate dose calculation for a rigidly moving object with jaw motion and MLC shifts to compensate for the motion in a TomoTherapy™ treatment delivery. Methods: An off-line version of the TomoTherapy dose calculator was extended to perform dose calculations for rigidly moving objects. A variety of motion traces were added to treatment delivery plans, along with corresponding jaw compensation and MLC shift compensation profiles. Jaw compensation profiles were calculated by shifting the jaws such that the center of the treatment beam moved by an amount equal to the motion in the longitudinal direction. Similarly, MLC compensation profiles were calculated by shifting the MLC leaves by an amount that most closely matched the motion in the transverse direction. The same jaw and MLC compensation profiles were used during simulated treatment deliveries on a TomoTherapy system, and film measurements were obtained in a rigidly moving phantom. Results: The off-line TomoTherapy dose calculator accurately predicted dose profiles for a rigidly moving phantom along with jaw motion and MLC shifts to compensate for the motion. Calculations matched film measurements to within 2%/1 mm. Jaw and MLC compensation substantially reduced the discrepancy between the delivered dose distribution and the calculated dose with no motion. For axial motion, the compensated dose matched the no-motion dose within 2%/1 mm. For transverse motion, the dose matched within 2%/3 mm (approximately half the width of an MLC leaf). Conclusion: The off-line TomoTherapy dose calculator accurately computes dose delivered to a rigidly moving object, and accurately models the impact of moving the jaws and shifting the MLC leaf patterns to compensate for the motion. Jaw tracking and MLC leaf shifting can effectively compensate for the dosimetric impact of motion during a TomoTherapy treatment delivery.

  4. Beyond Group: Multiple Person Tracking via Minimal Topology-Energy-Variation.

    PubMed

    Gao, Shan; Ye, Qixiang; Xing, Junliang; Kuijper, Arjan; Han, Zhenjun; Jiao, Jianbin; Ji, Xiangyang

    2017-12-01

Tracking multiple persons is a challenging task when persons move in groups and occlude each other. Existing group-based methods have extensively investigated how to divide groups more accurately in a tracking-by-detection framework; however, few of them quantify the group dynamics from the perspective of targets' spatial topology or consider the group in a dynamic view. Inspired by the sociological properties of pedestrians, we propose a novel socio-topology model with a topology-energy function to factor the group dynamics of moving persons and groups. In this model, minimizing the topology-energy variation in a two-level energy form is expected to produce smooth topology transitions, stable group tracking, and accurate target association. To search for the strong minimum in energy variation, we design discrete group-tracklet jump moves embedded in the gradient descent method, which ensures that the moves reduce the energy variation of group and trajectory alternately in the varying topology dimension. Experimental results on both RGB and RGB-D data sets show the superiority of our proposed model for multiple person tracking in crowd scenes.

  5. Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target

    PubMed Central

    Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji

    2009-01-01

In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN), in which the capability of each sensor is relatively limited, to construct large-scale WSNs at a reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. Based on these results, we present guidelines for sensor density, tracking probability, and the number of monitoring sensors that satisfy a variety of application demands. PMID:22412326
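The kind of relationship the authors derive between sensor density and tracking probability can be illustrated with a small Monte Carlo sketch. This is not the paper's simulator; the region size, sensing radius, and uniform random deployment are illustrative assumptions. With sensors at density λ and sensing radius r, the number of sensors in range of a target is approximately Poisson with mean λπr²:

```python
import math
import random

def tracking_probability(density, radius, k, trials=3000, seed=1):
    """Monte Carlo estimate of the probability that at least k sensors
    lie within sensing range of a randomly placed target."""
    rng = random.Random(seed)
    side = 10.0  # 10 x 10 survey region (illustrative)
    n_sensors = int(density * side * side)
    hits = 0
    for _ in range(trials):
        # keep the target away from edges so its sensing disc is inside the region
        tx = rng.uniform(radius, side - radius)
        ty = rng.uniform(radius, side - radius)
        in_range = 0
        for _ in range(n_sensors):
            sx, sy = rng.uniform(0, side), rng.uniform(0, side)
            if (sx - tx) ** 2 + (sy - ty) ** 2 <= radius ** 2:
                in_range += 1
        hits += in_range >= k
    return hits / trials

def poisson_approx(density, radius, k):
    """Analytic approximation: sensors in range ~ Poisson(density * pi * r^2)."""
    m = density * math.pi * radius ** 2
    return 1.0 - sum(math.exp(-m) * m ** i / math.factorial(i) for i in range(k))
```

The closeness of the simulated and Poisson values is the kind of approximate expression the abstract refers to: given a demanded tracking probability and number of monitoring sensors k, the formula can be inverted for the required density.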

  6. A comparative study on the motion of various objects inside an air tunnel

    NASA Astrophysics Data System (ADS)

    Shibani, Wanis Mustafa E.; Zulkafli, Mohd Fadhli; Basuno, Bambang

    2017-04-01

This paper presents a comparative study of the movement of various rigid bodies through an air tunnel for both two- and three-dimensional flow problems. The three objects under investigation are box, ball, and wedge shapes. The investigation was carried out using the commercial CFD software Fluent, in order to determine the aerodynamic forces acting on each object as well as to track its movement. The adopted numerical scheme solves the time-averaged Navier-Stokes equations with k-ɛ turbulence modeling, using the SIMPLE algorithm. A triangular-element grid was used in the 2D case and tetrahedral elements in the 3D case. Grid independence studies were performed for each problem, from a coarse to a fine grid. The motion of each object is restricted to one direction only and is found by tracking its center of mass at every time step. The results indicate that the speed of each object increases as the flow moves downstream, and that the box moves fastest of the three shapes in both the 2D and 3D cases.

  7. A novel infrared small moving target detection method based on tracking interest points under complicated background

    NASA Astrophysics Data System (ADS)

    Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Bai, Shengjian; Xu, Wanying

    2014-07-01

Infrared moving target detection is an important part of infrared technology. We introduce a novel infrared small moving target detection method based on tracking interest points under complicated backgrounds. First, Difference of Gaussians (DoG) filters are used to detect a group of interest points (including the moving targets). Second, a small-target tracking method inspired by the Human Visual System (HVS) is used to track these interest points over several frames, yielding the correlations between interest points in the first and last frames. Finally, a new clustering method, named R-means, is proposed to divide these interest points into two groups according to the correlations: target points and background points. The target-to-clutter ratio (TCR) and receiver operating characteristic (ROC) curves are computed experimentally to compare the performance of the proposed method with five other sophisticated methods. The results show that the proposed method discriminates targets from clutter better and has a lower false alarm rate than existing moving target detection methods.
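The first stage, DoG filtering for candidate interest points, can be sketched as follows. This is a generic numpy illustration, not the authors' implementation; the two scales and the detection threshold are arbitrary choices:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalised 1-D Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode="same")

def dog_detect(img, sigma_small=1.0, sigma_large=2.0, thresh=0.1):
    """Difference-of-Gaussians band-pass response: small bright blobs
    (candidate targets) survive, smooth background structure is suppressed."""
    dog = blur(img, sigma_small) - blur(img, sigma_large)
    ys, xs = np.where(dog > thresh)
    return list(zip(ys.tolist(), xs.tolist())), dog
```

In the method described above, the points returned here would then be tracked over several frames, and their frame-to-frame correlations clustered into target versus background groups.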

  8. Quantitative analysis of the improvement in omnidirectional maritime surveillance and tracking due to real-time image enhancement

    NASA Astrophysics Data System (ADS)

    de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.

    2011-05-01

Tracking targets in a panoramic image is in many senses the inverse problem of tracking targets with a narrow field of view camera on a pan-tilt pedestal. In a narrow field of view camera tracking a moving target, the object is constant and the background is changing. A panoramic camera is able to model the entire scene, or background, and those areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground truth data-set was created by geo-aligning the camera array and placing a differential global positioning system receiver on a small target boat, thus allowing its position in the array's field of view to be determined. Common tracking techniques including level-sets, Kalman filters and particle filters were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea-glint, low contrast visibility and sea clutter such as whitecaps is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.
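The enhancement stage the study evaluates includes interpolated local histogram equalisation on the GPU; the simpler global variant of that idea can be sketched in a few lines of numpy (a CPU illustration of the principle, not the paper's GPU implementation):

```python
import numpy as np

def equalize(img):
    """Global histogram equalisation of an 8-bit image: remap grey levels
    so the cumulative distribution becomes approximately uniform.
    Assumes the image is not constant-valued."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # CDF at the lowest occurring grey level
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]                     # apply the lookup table per pixel
```

Stretching a low-contrast sea scene this way is what gives the trackers more usable gradient and texture information in conditions such as haze or low-contrast visibility.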

  9. Tracking Objects with Networked Scattered Directional Sensors

    NASA Astrophysics Data System (ADS)

    Plarre, Kurt; Kumar, P. R.

    2007-12-01

We study the problem of object tracking using highly directional sensors—sensors whose field of vision is a line or a line segment. A network of such sensors monitors a certain region of the plane. Sporadically, objects moving in straight lines and at a constant speed cross the region. A sensor detects an object when it crosses its line of sight, and records the time of the detection. No distance or angle measurements are available. The task of the sensors is to estimate the directions and speeds of the objects, and the sensor lines, which are unknown a priori. This estimation problem involves the minimization of a highly nonconvex cost function. To overcome this difficulty, we introduce an algorithm, which we call the "adaptive basis algorithm." This algorithm is divided into three phases: in the first phase, the algorithm is initialized using data from six sensors and four objects; in the second phase, the estimates are updated as data from more sensors and objects are incorporated. The third phase is an optional coordinate transformation. The estimation is done in an "ad-hoc" coordinate system, which we call an "adaptive coordinate system." When more information is available, for example, the location of six sensors, the estimates can be transformed to the "real-world" coordinate system. This constitutes the third phase.

  10. The Impact Imperative: A Space Infrastructure Enabling a Multi-Tiered Earth Defense

    NASA Technical Reports Server (NTRS)

    Campbell, Jonathan W.; Phipps, Claude; Smalley, Larry; Reilly, James; Boccio, Dona

    2003-01-01

Impacting at hypervelocity, an asteroid struck the Earth approximately 65 million years ago in the Yucatan Peninsula area. This triggered the extinction of almost 70% of the species of life on Earth, including the dinosaurs. Other impacts prior to this one have caused even greater extinctions. Preventing collisions with the Earth by hypervelocity asteroids, meteoroids, and comets is the most important immediate space challenge facing human civilization. This is the Impact Imperative. We now believe that while there are about 2,000 Earth-orbit-crossing rocks greater than 1 kilometer in diameter, there may be as many as 200,000 or more objects in the 100 m size range. Can anything be done about this fundamental existence question facing our civilization? The answer is a resounding yes! By using an intelligent combination of Earth- and space-based sensors coupled with an infrastructure of high-energy laser stations and other secondary mitigation options, we can deflect inbound asteroids, meteoroids, and comets and prevent them from striking the Earth. This can be accomplished by irradiating the surface of an inbound rock with sufficiently intense pulses so that ablation occurs. This ablation acts as a small rocket, incrementally changing the shape of the rock's orbit around the Sun. One-kilometer-size rocks can be moved sufficiently in about a month, while smaller rocks may be moved in a shorter time span. We recommend that space objectives be immediately reprioritized to start us moving quickly towards an infrastructure that will support a multiple-option defense capability. Planning and development for a lunar laser facility should be initiated immediately in parallel with other options. All mitigation options are greatly enhanced by robust early warning, detection, and tracking resources to find objects sufficiently prior to Earth orbit passage in time to allow significant intervention.
Infrastructure options should include ground, LEO, GEO, Lunar, and libration point laser and sensor stations for providing early warning, tracking, and deflection. Other options should include space interceptors that will carry both laser and nuclear ablators for close range work. Response options must be developed to deal with the consequences of an impact should we move too slowly.

  11. Vehicle and cargo container inspection system for drugs

    NASA Astrophysics Data System (ADS)

    Verbinski, Victor V.; Orphan, Victor J.

    1999-06-01

A vehicle and cargo container inspection system has been developed which uses gamma-ray radiography to produce digital images useful for detection of drugs and other contraband. The system comprises a 1 Ci Cs-137 gamma-ray source collimated into a fan beam which is aligned with a linear array of NaI gamma-ray detectors located on the opposite side of the container. The NaI detectors are operated in the pulse-counting mode. A digital image of the vehicle or container is obtained by moving the aligned source and detector array relative to the object. Systems have been demonstrated in which the object is stationary (source and detector array move on parallel tracks) and in which the object moves past a stationary source and detector array. Scanning speeds of ~30 cm/s with a pixel size (at the object) of ~1 cm have been achieved. Faster scanning speeds of ~2 m/s have been demonstrated on railcars with more modest spatial resolution (4 cm pixels). Digital radiographic images are generated from the detector count rates. These images, recorded on a PC-based data acquisition and display system, are shown from several applications: 1) inspection of trucks and containers at a border crossing, 2) inspection of railcars at a border crossing, 3) inspection of outbound cargo containers for stolen automobiles, and 4) inspection of trucks and cars for terrorist bombs.

  12. Human image tracking technique applied to remote collaborative environments

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Suzuki, Gen

    1993-10-01

To support various kinds of collaborations over long distances by using visual telecommunication, it is necessary to transmit visual information related to the participants and topical materials. When people collaborate in the same workspace, they use visual cues such as facial expressions and eye movement. The realization of coexistence in a collaborative workspace requires the support of these visual cues. Therefore, it is important that the facial images be large enough to be useful. During collaborations, especially dynamic collaborative activities such as equipment operation or lectures, the participants often move within the workspace. When the people move frequently or over a wide area, the necessity for automatic human tracking increases. Using the movement area of the human being or the resolution of the extracted area, we have developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results using a real-time tracking system show that the extracted area closely follows the movement of the human head.

  13. Feature Quantization and Pooling for Videos

    DTIC Science & Technology

    2014-05-01

…does not score high on this metric. The exceptions are videos where objects move - for example, the ice skaters ("ice") and the tennis player, tracked… BMW enables interpretation of similar regions across videos (tennis)… Common motion words across videos with large camera…

  14. Prediction processes during multiple object tracking (MOT): involvement of dorsal and ventral premotor cortices

    PubMed Central

    Atmaca, Silke; Stadler, Waltraud; Keitel, Anne; Ott, Derek V M; Lepsien, Jöran; Prinz, Wolfgang

    2013-01-01

Background The multiple object tracking (MOT) paradigm is a cognitive task that requires parallel tracking of several identical, moving objects following nongoal-directed, arbitrary motion trajectories. Aims The current study aimed to investigate the employment of prediction processes during MOT. As an indicator for the involvement of prediction processes, we targeted the human premotor cortex (PM). The PM has repeatedly been implicated in the internal modeling of future actions and action effects, as well as of purely perceptual events, by means of predictive feedforward functions. Materials and methods Using functional magnetic resonance imaging (fMRI), BOLD activations recorded during MOT were contrasted with those recorded during the execution of a cognitive control task that used an identical stimulus display and demanded a similar attentional load. A particular effort was made to identify and exclude previously found activation in the PM-adjacent frontal eye fields (FEF). Results We replicated prior results, revealing occipitotemporal, parietal, and frontal areas to be engaged in MOT. Discussion The activation in frontal areas is interpreted to originate from dorsal and ventral premotor cortices. The results are discussed in light of our assumption that MOT engages prediction processes. Conclusion We propose that our results provide the first clues that MOT involves not only visuospatial perception and attention processes, but prediction processes as well. PMID:24363971

  15. Multiple Drosophila Tracking System with Heading Direction

    PubMed Central

    Sirigrivatanawong, Pudith; Arai, Shogo; Thoma, Vladimiros; Hashimoto, Koichi

    2017-01-01

Machine vision systems have been widely used for image analysis, especially that which is beyond human ability. In biology, studies of behavior help scientists to understand the relationship between sensory stimuli and animal responses. This typically requires the analysis and quantification of animal locomotion. In our work, we focus on the analysis of the locomotion of the fruit fly Drosophila melanogaster, a widely used model organism in biological research. Our system consists of two components: fly detection and tracking. The system extracts a group of flies as the objects of concern and furthermore determines the heading direction of each fly. As each fly moves, the system states are refined with a Kalman filter to obtain the optimal estimation. For the tracking step, combining information such as position and heading direction with assignment algorithms gives a successful tracking result. The use of heading direction increases the system efficiency when dealing with identity loss and fly-swapping situations. The system also operates on videos with a variety of light intensities. PMID:28067800
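The gain from combining position with heading direction during assignment can be sketched as follows. This is a schematic illustration rather than the paper's Kalman-filter pipeline; the additive cost and its weighting are assumptions:

```python
import math
from itertools import permutations

def angle_diff(a, b):
    """Smallest absolute difference between two angles in radians."""
    d = (a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def assign(tracks, detections, w_heading=1.0):
    """Assign detections to tracks by minimising the summed cost of
    Euclidean distance plus a weighted heading-difference term.
    Exhaustive over permutations, which is fine for the handful of
    flies typically present in one arena."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(detections))):
        cost = 0.0
        for t, d in zip(tracks, perm):
            det = detections[d]
            dist = math.hypot(det["pos"][0] - t["pos"][0],
                              det["pos"][1] - t["pos"][1])
            cost += dist + w_heading * angle_diff(det["heading"], t["heading"])
        if cost < best_cost:
            best, best_cost = perm, cost
    return best  # best[i] = index of the detection assigned to track i
```

When two flies nearly overlap, position alone is ambiguous, but the body-axis orientation term still separates their identities, which is exactly the fly-swapping case the abstract describes.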

  16. Modeling peripheral vision for moving target search and detection.

    PubMed

    Yang, Ji Hyun; Huston, Jesse; Day, Michael; Balogh, Imre

    2012-06-01

Most target search and detection models focus on foveal vision. In reality, peripheral vision plays a significant role, especially in detecting moving objects. Twenty-three subjects participated in experiments simulating target detection tasks in urban and rural environments while their gaze parameters were tracked. Button responses associated with foveal object and peripheral object (PO) detection and recognition were recorded. In an urban scenario, pedestrians appearing in the periphery holding guns were threats and pedestrians with empty hands were non-threats. In a rural scenario, non-U.S. unmanned aerial vehicles (UAVs) were considered threats and U.S. UAVs non-threats. On average, subjects missed detecting 2.48 POs among 50 POs in the urban scenario and 5.39 POs in the rural scenario. Both saccade reaction time and button reaction time can be predicted by peripheral angle and entrance speed of POs. Fast-moving objects were detected faster than slower objects, and POs appearing at wider angles took longer to detect than those closer to the gaze center. A second-order mixed-effect model was applied to provide each subject's prediction model for peripheral target detection performance as a function of eccentricity angle and speed. About half the subjects used active search patterns while the other half used passive search patterns. An interactive 3-D visualization tool was developed to provide a representation of macro-scale head and gaze movement in the search and target detection task. An experimentally validated stochastic model of peripheral vision in realistic target detection scenarios was developed.

  17. A hybrid multi-objective imperialist competitive algorithm and Monte Carlo method for robust safety design of a rail vehicle

    NASA Astrophysics Data System (ADS)

    Nejlaoui, Mohamed; Houidi, Ajmi; Affi, Zouhaier; Romdhane, Lotfi

    2017-10-01

This paper deals with the robust safety design optimization of a rail vehicle system moving in short-radius curved tracks. A combined multi-objective imperialist competitive algorithm and Monte Carlo method is developed and used for the robust multi-objective optimization of the rail vehicle system. This robust optimization of rail vehicle safety considers simultaneously the derailment angle and its standard deviation, where the design parameter uncertainties are taken into account. The obtained results showed that the robust design significantly reduces the sensitivity of rail vehicle safety to the design parameter uncertainties compared to the deterministic design and to literature results.

  18. Predictive encoding of moving target trajectory by neurons in the parabigeminal nucleus

    PubMed Central

    Ma, Rui; Cui, He; Lee, Sang-Hun; Anastasio, Thomas J.

    2013-01-01

    Intercepting momentarily invisible moving objects requires internally generated estimations of target trajectory. We demonstrate here that the parabigeminal nucleus (PBN) encodes such estimations, combining sensory representations of target location, extrapolated positions of briefly obscured targets, and eye position information. Cui and Malpeli (Cui H, Malpeli JG. J Neurophysiol 89: 3128–3142, 2003) reported that PBN activity for continuously visible tracked targets is determined by retinotopic target position. Here we show that when cats tracked moving, blinking targets the relationship between activity and target position was similar for ON and OFF phases (400 ms for each phase). The dynamic range of activity evoked by virtual targets was 94% of that of real targets for the first 200 ms after target offset and 64% for the next 200 ms. Activity peaked at about the same best target position for both real and virtual targets. PBN encoding of target position takes into account changes in eye position resulting from saccades, even without visual feedback. Since PBN response fields are retinotopically organized, our results suggest that activity foci associated with real and virtual targets at a given target position lie in the same physical location in the PBN, i.e., a retinotopic as well as a rate encoding of virtual-target position. We also confirm that PBN activity is specific to the intended target of a saccade and is predictive of which target will be chosen if two are offered. A Bayesian predictor-corrector model is presented that conceptually explains the differences in the dynamic ranges of PBN neuronal activity evoked during tracking of real and virtual targets. PMID:23365185

  19. Measuring attention using induced motion.

    PubMed

    Gogel, W C; Sharkey, T J

    1989-01-01

    Attention was measured by means of its effect upon induced motion. Perceived horizontal motion was induced in a vertically moving test spot by the physical horizontal motion of inducing objects. All stimuli were in a frontoparallel plane. The induced motion vectored with the physical motion to produce a clockwise or counterclockwise tilt in the apparent path of motion of the test spot. Either a single inducing object or two inducing objects moving in opposite directions were used. Twelve observers were instructed to attend to or to ignore the single inducing object while fixating the test object and, when the two opposing inducing objects were present, to attend to one inducing object while ignoring the other. Tracking of the test spot was visually monitored. The tilt of the path of apparent motion of the test spot was measured by tactile adjustment of a comparison rod. It was found that the measured tilt was substantially larger when the single inducing object was attended rather than ignored. For the two inducing objects, attending to one while ignoring the other clearly increased the effectiveness of the attended inducing object. The results are analyzed in terms of the distinction between voluntary and involuntary attention. The advantages of measuring attention by its effect on induced motion as compared with the use of a precueing procedure, and a hypothesis regarding the role of attention in modifying perceived spatial characteristics are discussed.

  20. GEO Optical Data Association with Concurrent Metric and Photometric Information

    NASA Astrophysics Data System (ADS)

    Dao, P.; Monet, D.

Data association in a congested area of the GEO belt with occasional visits by non-resident objects can be treated as a Multi-Target-Tracking (MTT) problem. For a stationary sensor surveilling the GEO belt, geosynchronous and near-GEO objects are not completely motionless in the earth-fixed frame and can be observed as moving targets. In some clusters, metric or positional information is insufficiently accurate or up-to-date to associate the measurements. In the presence of measurements with uncertain origin, star tracks (residuals) and other sensor artifacts, heuristic techniques based on hard decision assignment do not perform adequately. In the MTT community, Bar-Shalom [2009] was the first to introduce the use of measurements to update the state of the target of interest in the tracking filter, e.g. the Kalman filter. Following Bar-Shalom's idea, we use the Probabilistic Data Association Filter (PDAF), but to make use of all information obtainable in the measurement of three-axis-stabilized GEO satellites, we combine photometric with metric measurements to update the filter. Therefore, our technique, Concurrent Spatio-Temporal and Brightness (COSTB), has the stand-alone ability of associating a track with its identity for resident objects. That is possible because the light curve of a stabilized GEO satellite changes minimally from night to night. We exercised COSTB on camera cadence data to associate measurements, correct mistags and detect non-residents in a simulated near real time cadence. Data on GEO clusters were used.
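The PDAF update underlying this approach can be illustrated in one dimension (a textbook-style sketch after Bar-Shalom, not the authors' COSTB implementation; the detection probability and clutter density values are illustrative). Instead of hard-assigning one measurement, every validated measurement is weighted by its association probability:

```python
import math

def pdaf_update(x_pred, P_pred, measurements, R, p_detect=0.9, clutter_density=0.1):
    """Single-target 1-D PDAF update with a scalar state (H = 1)."""
    S = P_pred + R                                  # innovation covariance
    innovations = [z - x_pred for z in measurements]
    # Gaussian likelihood that each measurement is target-originated
    lik = [p_detect * math.exp(-v * v / (2 * S)) / math.sqrt(2 * math.pi * S)
           for v in innovations]
    b0 = (1 - p_detect) * clutter_density           # "all clutter" hypothesis
    norm = b0 + sum(lik)
    betas = [l / norm for l in lik]                 # association probabilities
    beta0 = b0 / norm
    K = P_pred / S                                  # Kalman gain
    v_comb = sum(b * v for b, v in zip(betas, innovations))
    x_new = x_pred + K * v_comb
    # spread-of-innovations term inflates covariance for association uncertainty
    spread = sum(b * v * v for b, v in zip(betas, innovations)) - v_comb ** 2
    P_new = (beta0 * P_pred
             + (1 - beta0) * (P_pred - K * S * K)
             + K * spread * K)
    return x_new, P_new
```

In COSTB the likelihood would additionally fold in the photometric (brightness) agreement with a satellite's night-to-night light curve; here only the metric term is shown.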

  1. Robust Arm and Hand Tracking by Unsupervised Context Learning

    PubMed Central

    Spruyt, Vincent; Ledda, Alessandro; Philips, Wilfried

    2014-01-01

    Hand tracking in video is an increasingly popular research field due to the rise of novel human-computer interaction methods. However, robust and real-time hand tracking in unconstrained environments remains a challenging task due to the high number of degrees of freedom and the non-rigid character of the human hand. In this paper, we propose an unsupervised method to automatically learn the context in which a hand is embedded. This context includes the arm and any other object that coherently moves along with the hand. We introduce two novel methods to incorporate this context information into a probabilistic tracking framework, and introduce a simple yet effective solution to estimate the position of the arm. Finally, we show that our method greatly increases robustness against occlusion and cluttered background, without degrading tracking performance if no contextual information is available. The proposed real-time algorithm is shown to outperform the current state-of-the-art by evaluating it on three publicly available video datasets. Furthermore, a novel dataset is created and made publicly available for the research community. PMID:25004155

  2. Tracking and characterizing the head motion of unanaesthetized rats in positron emission tomography

    PubMed Central

    Kyme, Andre; Meikle, Steven; Baldock, Clive; Fulton, Roger

    2012-01-01

    Positron emission tomography (PET) is an important in vivo molecular imaging technique for translational research. Imaging unanaesthetized rats using motion-compensated PET avoids the confounding impact of anaesthetic drugs and enables animals to be imaged during normal or evoked behaviour. However, there is little published data on the nature of rat head motion to inform the design of suitable marker-based motion-tracking set-ups for brain imaging—specifically, set-ups that afford close to uninterrupted tracking. We performed a systematic study of rat head motion parameters for unanaesthetized tube-bound and freely moving rats with a view to designing suitable motion-tracking set-ups in each case. For tube-bound rats, using a single appropriately placed binocular tracker, uninterrupted tracking was possible greater than 95 per cent of the time. For freely moving rats, simulations and measurements of a live subject indicated that two opposed binocular trackers are sufficient (less than 10% interruption to tracking) for a wide variety of behaviour types. We conclude that reliable tracking of head pose can be achieved with marker-based optical-motion-tracking systems for both tube-bound and freely moving rats undergoing PET studies without sedation. PMID:22718992

  3. An examination of along-track interferometry for detecting ground moving targets

    NASA Technical Reports Server (NTRS)

    Chen, Curtis W.; Chapin, Elaine; Muellerschoen, Ron; Hensley, Scott

    2005-01-01

Along-track interferometry (ATI) is an interferometric synthetic aperture radar technique primarily used to measure Earth-surface velocities. We present results from an airborne experiment demonstrating phenomenology specific to the context of observing discrete ground targets moving amidst a stationary clutter background.

  4. Research on the algorithm of infrared target detection based on the frame difference and background subtraction method

    NASA Astrophysics Data System (ADS)

    Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian

    2015-09-01

As an important branch of infrared imaging technology, infrared target tracking and detection has great scientific value and a wide range of applications in both military and civilian areas. For infrared imagery, which is characterized by low SNR and serious background noise, an effective target detection algorithm is proposed in this paper, exploiting the frame-to-frame correlation of a moving target and the lack of correlation of noise in sequential images, implemented with OpenCV. First, since temporal differencing and background subtraction are complementary, we use a combined detection method of frame differencing and background subtraction based on adaptive background updating. Results indicate that it is simple and stably extracts the foreground moving target from the video sequence. Because the background updating mechanism continuously updates each pixel, the infrared moving target is detected more accurately. This paves the way for real-time infrared target detection and tracking once the OpenCV algorithms are ported to a DSP platform. Next, we use optimal thresholding to segment the image, transforming gray images into binary images to improve detection over the image sequence. Finally, using the correlation of moving objects across frames and mathematical morphology processing, we eliminate noise and smooth region boundaries. Experimental results demonstrate that the algorithm rapidly and precisely detects small infrared targets.
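The combination of frame differencing and background subtraction with adaptive background updating can be sketched as follows. This is a numpy illustration of the general scheme, not the paper's OpenCV/DSP code; the thresholds and learning rate are illustrative:

```python
import numpy as np

def detect_moving(frames, alpha=0.05, t_diff=25, t_bg=25):
    """Combined frame differencing and running-average background
    subtraction. A pixel is foreground only when BOTH tests flag it,
    which suppresses the 'ghost' a target leaves at its old position."""
    bg = frames[0].astype(np.float32)    # background model, seeded from frame 0
    prev = frames[0].astype(np.float32)
    masks = []
    for f in frames[1:]:
        f = f.astype(np.float32)
        diff_mask = np.abs(f - prev) > t_diff   # inter-frame motion
        bg_mask = np.abs(f - bg) > t_bg         # deviation from background model
        fg = diff_mask & bg_mask
        # adaptively update the background only where no motion is detected
        bg = np.where(fg, bg, (1 - alpha) * bg + alpha * f)
        prev = f
        masks.append(fg)
    return masks
```

The intersection of the two masks is what makes the methods complementary: frame differencing alone flags both the old and new positions of a moving blob, while the background test rejects the vacated pixels, and the adaptive update keeps the model valid as illumination drifts.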

  5. Substructure method in high-speed monorail dynamic problems

    NASA Astrophysics Data System (ADS)

    Ivanchenko, I. I.

    2008-12-01

    The study of actions of high-speed moving loads on bridges and elevated tracks remains a topical problem for transport. In the present study, we propose a new method for moving load analysis of elevated tracks (monorail structures or bridges), which permits studying the interaction between two strained objects consisting of rod systems and rigid bodies with viscoelastic links; one of these objects is the moving load (monorail rolling stock), and the other is the carrying structure (monorail elevated track or bridge). The methods for moving load analysis of structures were developed in numerous papers [1-15]. At the first stage, when solving the problem about a beam under the action of the simplest moving load such as a moving weight, two fundamental methods can be used; the same methods are realized for other structures and loads. The first method is based on the use of a generalized coordinate in the expansion of the deflection in the natural shapes of the beam, and the problem is reduced to solving a system of ordinary differential equations with variable coefficients [1-3]. In the second method, after the "beam-weight" system is decomposed, just as in the problem with the weight impact on the beam [4], solving the problem is reduced to solving an integral equation for the dynamic weight reaction [6, 7]. In [1-3], an increase in the number of retained forms leads to an increase in the order of the system of equations; in [6, 7], difficulties arise when solving the integral equations related to the conditional stability of the step procedures. The method proposed in [9, 14] for beams and rod systems combines the above approaches and eliminates their drawbacks, because it permits retaining any necessary number of shapes in the deflection expansion and has a resolving system of equations with an unconditionally stable integration scheme and with a minimum number of unknowns, just as in the method of integral equations [6, 7]. 
This method is further developed here for combined schemes modeling a strained elastic compound moving structure and a monorail elevated track. Developing methods for the dynamic analysis of monorails is especially topical because of the increasing speeds of rolling stock. Such structures are studied in [16-18]. In the present paper, the above problem is solved by using the method for moving load analysis and a step procedure of integration with respect to time proposed in [9] and [19], respectively. These components are then used to extend the capabilities of the substructure method in problems of dynamics. In the proposed approach to the moving load analysis of structures, for a substructure (having the form of a boundary element or a superelement) we choose an object moving at constant speed (a monorail rolling stock); in this case, we use rod boundary elements of large length, which are assembled into a system modeling these objects. In particular, sets of such elements form a model of a monorail rolling stock, namely, carriage hulls, wheeled carts, elements of the wheel spring suspension, models of continuous beams of monorail ways, and piers with foundations admitting emergency subsidence and unilateral links. These specialized rigid finite elements with linear and nonlinear links, added to the set of earlier proposed finite elements [14, 19], permit studying unsteady vibrations in the "monorail train-elevated track" (MTET) system taking into account various irregularities of the beam-rail, emergency pier subsidence, and their elastic support by the basement. In this case, a high degree of spatial discretization of the structure is obtained by using rods with distributed parameters in the analysis. 
The displacements are approximated by linear functions and trigonometric Fourier series, which, as already noted, permits increasing the number of degrees of freedom of the system under study while preserving the order of the resolving system of equations. This approach permits studying the stress-strain state in the MTET system and determining accelerations at the desired points of the rolling stock. The proposed numerical procedure permits uniquely solving the linear and nonlinear differential equations describing the operation of the model, which replaces the system by a monorail rolling stock consisting of several specialized mutually connected cars and a system of continuous beams on elastic inertial supports. This approach (based on the use of a moving substructure, which is also modeled by a system of boundary rod elements) minimizes the number of unknowns in the resolving system of equations at each step of its solution [11]. In preceding investigations of the simultaneous vibrations of bridges and moving loads, only the case was considered in which the rolling stock was represented by rather complicated systems of rigid bodies connected by viscoelastic links [3-18] and the rolling stock motion was described by systems of ordinary differential equations. A specific advantage of the proposed method is the convenience of deriving the equations of motion of both the rolling stock and the bridge structure. The method [9, 14] permits obtaining the equations of interaction between the structures as two separate finite-element structures. 
Hence the researcher need not, as is traditional, write out the system of equations of motion, for example, for the rolling stock (the cars) with finitely many degrees of freedom [3-18]. We note several papers where the simultaneous vibrations of an elastic moving load and an elastic carrying structure are considered; these treat a rather narrow class of problems of a specific character. For example, the motion of an elastic rod along an elastic infinite rod on an elastic foundation is studied in [20], and the body of a car moving along a beam is considered as a rod with ten concentrated masses in [21].

  6. Machine vision application in animal trajectory tracking.

    PubMed

    Koniar, Dušan; Hargaš, Libor; Loncová, Zuzana; Duchoň, František; Beňo, Peter

    2016-04-01

This article was motivated by physicians' request for technical support in research on pathologies of the gastrointestinal tract [10], based on machine vision tools. The proposed solution is intended as a less expensive alternative to existing RF (radio frequency) methods. The objective of the whole experiment was to evaluate the amount of animal motion as a function of the degree of pathology (gastric ulcer). In the theoretical part of the article, several methods of animal trajectory tracking are presented: two differential methods based on background subtraction, thresholding methods based on global and local thresholds, and, finally, color matching against a chosen template containing the searched spectrum of colors. The methods were tested offline on five video samples. Each sample showed a moving guinea pig confined in a cage under various lighting conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
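The background-subtraction variant described above can be illustrated with a minimal sketch (not the authors' implementation; the global threshold value and the synthetic frames are assumptions):

```python
import numpy as np

def track_centroid(frames, background, thresh=30):
    """Locate a moving animal per frame via background subtraction.

    frames: iterable of 2-D uint8 grayscale arrays.
    background: 2-D uint8 array of the empty cage.
    Returns a list of (row, col) centroids (None if no motion)."""
    trajectory = []
    for frame in frames:
        # Absolute difference against the static background model.
        diff = np.abs(frame.astype(int) - background.astype(int))
        mask = diff > thresh                      # global threshold
        if mask.any():
            rows, cols = np.nonzero(mask)
            trajectory.append((rows.mean(), cols.mean()))
        else:
            trajectory.append(None)
    return trajectory

# Toy example: a bright "animal" moving across a dark cage.
bg = np.zeros((10, 10), dtype=np.uint8)
frames = []
for x in (2, 5, 8):
    f = bg.copy()
    f[4, x] = 255
    frames.append(f)
print(track_centroid(frames, bg))  # centroids drift along the row
```

A local-threshold variant would compute `thresh` per image neighborhood instead of globally, which is what makes the method robust to the uneven lighting mentioned in the abstract.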

  7. KSC-2009-5198

    NASA Image and Video Library

    2009-09-12

    CAPE CANAVERAL, Fla. – Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station in Florida, workers check the progress of the fairing being moved toward the Space Tracking and Surveillance System – Demonstrator spacecraft for encapsulation. The fairing is a two-part molded structure that fits flush with the outside surface of the rocket and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detection, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-4934 (09-22-09) Photo credit: NASA/Cory Huston

  8. KSC-2009-5205

    NASA Image and Video Library

    2009-09-12

    CAPE CANAVERAL, Fla. – Inside the mobile service tower on Launch Pad 17-B at Cape Canaveral Air Force Station in Florida, the second half of the fairing is being moved toward the Space Tracking and Surveillance System – Demonstrator spacecraft. The fairing is a two-part molded structure that fits flush with the outside surface of the rocket and forms an aerodynamically smooth nose cone, protecting the spacecraft during launch and ascent. STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detection, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-4934 (09-22-09) Photo credit: NASA/Cory Huston

  9. Research on target tracking in coal mine based on optical flow method

    NASA Astrophysics Data System (ADS)

    Xue, Hongye; Xiao, Qingwei

    2015-03-01

To recognize, track and count bolting machines in coal mine video images, a real-time target tracking method based on Lucas-Kanade sparse optical flow is proposed in this paper. The method judges whether the moving target deviates from its trajectory, and predicts and corrects the target's position. It thereby addresses tracking failures and target loss caused by weak light, uneven illumination and occlusion. The recognition and tracking were implemented on the VC++ platform with the OpenCV library. The validity of the method is verified by experimental results.
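A minimal single-point Lucas-Kanade step, in the spirit of the sparse optical flow the paper builds on (a textbook sketch, not the authors' VC++/OpenCV code; the window size and the pure-translation test image are assumptions):

```python
import numpy as np

def lucas_kanade(I0, I1, pt, win=2):
    """One-step Lucas-Kanade flow estimate at integer point pt = (r, c).

    Solves the least-squares system [Iy Ix] @ d = -It over a
    (2*win+1)^2 window. Returns the (dr, dc) displacement estimate."""
    r, c = pt
    sl = np.s_[r - win:r + win + 1, c - win:c + win + 1]
    Iy, Ix = np.gradient(I0.astype(float))
    It = I1.astype(float) - I0.astype(float)
    A = np.stack([Iy[sl].ravel(), Ix[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # Fails gracefully via least squares if the window is textureless.
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (dr, dc)

# Toy test: a smooth blob translated by exactly one pixel along columns.
yy, xx = np.mgrid[0:21, 0:21]
I0 = np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 18.0)
I1 = np.roll(I0, 1, axis=1)          # true flow: (0, +1)
dr, dc = lucas_kanade(I0, I1, (10, 10), win=4)
print(dr, dc)
```

In the paper's setting this per-point estimate would be run on a sparse set of corner points and compared against the predicted trajectory to detect and correct drift.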

  10. Analysis of alternative means of transporting heavy tracked vehicles at Fort Hood, Texas

    DOT National Transportation Integrated Search

    1987-08-01

    The problem addressed in this report is a transportation problem--Given that a volume of heavy tracked vehicles must be moved from storage and maintenance locations to field training and other locations, what is the best way to move them? The options...

  11. Real-Time Adaptation of Decision Thresholds in Sensor Networks for Detection of Moving Targets (PREPRINT)

    DTIC Science & Technology

    2010-01-01

target kinematics for multiple sensor detections is referred to as the track-before-detect strategy, and is commonly adopted in multi-sensor surveillance...of moving targets. Wettergren [4] presented an application of track-before-detect strategies to undersea distributed sensor networks. In designing...the deployment of a distributed passive sensor network that employs this track-before-detect procedure, it is imperative that the placement of

  12. Influence of gait mode and body orientation on following a walking avatar.

    PubMed

    Meerhoff, L Rens A; de Poel, Harjo J; Jowett, Tim W D; Button, Chris

    2017-08-01

    Regulating distance with a moving object or person is a key component of human movement and of skillful interpersonal coordination. The current set of experiments aimed to assess the role of gait mode and body orientation on distance regulation using a cyclical locomotor tracking task in which participants followed a virtual leader. In the first experiment, participants moved in the backward-forward direction while the body orientation of the virtual leader was manipulated (i.e., facing towards, or away from the follower), hence imposing an incongruence in gait mode between leader and follower. Distance regulation was spatially less accurate when followers walked backwards. Additionally, a clear trade-off was found between spatial leader-follower accuracy and temporal synchrony. Any perceptual effects were overshadowed by the effect of one's gait mode. In the second experiment we examined lateral following. The results suggested that lateral following was also constrained strongly by perceptual information presented by the leader. Together, these findings demonstrated how locomotor tracking depends on gait mode, but also on the body orientation of whoever is being followed. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Digital-Electronic/Optical Apparatus Would Recognize Targets

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1994-01-01

    Proposed automatic target-recognition apparatus consists mostly of digital-electronic/optical cross-correlator that processes infrared images of targets. Infrared images of unknown targets correlated quickly with images of known targets. Apparatus incorporates some features of correlator described in "Prototype Optical Correlator for Robotic Vision System" (NPO-18451), and some of correlator described in "Compact Optical Correlator" (NPO-18473). Useful in robotic system; to recognize and track infrared-emitting, moving objects as variously shaped hot workpieces on conveyor belt.

  14. A ground moving target emergency tracking method for catastrophe rescue

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Li, D.; Li, G.

    2014-11-01

In recent years, major disasters have occurred repeatedly, and disaster management tests the emergency response capability of governments and societies all over the world. Immediately after a great disaster (e.g., an earthquake) occurs, a massive nationwide rescue and relief operation must be kicked off instantly. To improve the organizational efficiency of the emergency rescue, the organizers need up-to-date information on the rescue teams, including their real-time locations, the equipment they carry, the technical skills of the rescuers, and so on. One of the key factors for the success of emergency operations is knowing the real-time location of the rescuers. Real-time tracking methods are currently used to track professional rescue teams. But volunteer participation plays an increasingly important role in great disasters, and continuous real-time tracking of volunteers causes many problems, e.g., privacy leakage and expensive data consumption. These problems may reduce volunteers' enthusiasm for participating in catastrophe rescue. In fact, a great disaster is a small-probability event, so it is not necessary to track the volunteers (or even the rescue teams) at all times. To solve this problem, a ground moving target emergency tracking method for catastrophe rescue is presented in this paper. 
In this method, handheld devices that provide the user's location via GPS (e.g., smart phones) serve as the positioning equipment. An emergency tracking information database, including the ID of each ground moving target (rescue teams and volunteers), the communication number of the target's handheld device, the usual living region, etc., is built in advance by registration. When a catastrophe happens, the ground moving targets living close to the disaster area are filtered by their usual living region; an activation short message is then sent to the selected targets through the communication numbers of their handheld devices. The handheld devices receive and identify the activation short message and send their current location information to the server, triggering the emergency tracking mode. The real-time locations of the filtered targets can be shown on the organizer's screen, and the organizer can assign rescue tasks to the rescue teams and volunteers based on their real-time locations. The ground moving target emergency tracking prototype system is implemented using Oracle 11g, Visual Studio 2010 C#, Android, SMS Modem, and the Google Maps API.
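The region-based filtering step might be sketched as follows (illustrative only; the registry fields, radius, and coordinates are hypothetical, and the SMS activation step is omitted):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS-84 points, in kilometres."""
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def select_targets(registry, epicenter, radius_km=100.0):
    """Pick registered targets whose usual living region lies near the disaster.

    Only these targets would then receive the activation short message."""
    lat0, lon0 = epicenter
    return [t["id"] for t in registry
            if haversine_km(t["lat"], t["lon"], lat0, lon0) <= radius_km]

# Hypothetical registry: one team near the epicenter, one distant volunteer.
registry = [
    {"id": "team-1", "lat": 31.0, "lon": 103.4},
    {"id": "vol-7",  "lat": 39.9, "lon": 116.4},
]
print(select_targets(registry, (31.0, 103.3)))
```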

  15. Interaction between a railway track and uniformly moving tandem wheels

    NASA Astrophysics Data System (ADS)

    Belotserkovskiy, P. M.

    2006-12-01

Interaction among loaded wheels via the railway track is studied. The vertical parametric oscillations of an infinite row of identical, equally spaced wheels, bearing constant loads and uniformly moving over a railway track, are calculated by means of a Fourier series technique. If the distance between two consecutive wheels is large enough, their interaction via the railway track can be disregarded and every wheel considered as a single one. In this case, however, the Fourier series technique represents an appropriate, computation-time-saving approximation to the Fourier integral transformation technique that describes the oscillations of a single moving wheel. Two schemes are considered. In the first scheme, every wheel bears the same load. In the second, consecutive wheels bear contrarily directed loads of the same magnitude. The second scheme leads to simpler calculations and is therefore recommended for modeling the wheel-track interaction. The railway track periodicity due to sleeper spacing is taken into account. Each period is the track segment between two adjacent sleepers. A partial differential equation with constant coefficients governs the vertical oscillations of each segment. Boundary conditions bind the oscillations of two neighbouring segments and provide periodicity to the track. The shear deformation in the rail cross-section strongly influences the parametric oscillations. It also causes discontinuity of the rail centre-line slope at any point where a concentrated transverse force is applied. Therefore, the properties of the Timoshenko beam relevant to this topic are discussed. Interaction between a railway track and a bogie moving at moderate speed is also studied. The study points to the influence of the bogie frame oscillations on the variation in the wheel-rail contact force over the sleeper span. The simplified bogie model considered includes only the primary suspension. A static load applied to the bogie frame centre represents the vehicle body.

  16. Contrast and assimilation in motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2007-09-01

    The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
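The two computations contrasted in this abstract can be written as one-line models (an idealized sketch; the weight `w` and the linear forms are assumptions for illustration, not the paper's fitted model):

```python
def perceived_velocity(target, context, w=0.3):
    """Motion contrast: context motion is subtracted from target motion,
    so a faster context makes the target look slower (image segmentation)."""
    return target - w * context

def pursuit_velocity(target, context, w=0.3):
    """Motion assimilation: steady-state pursuit follows a weighted average
    of target and context motion (velocity estimation for the eye)."""
    return (1 - w) * target + w * context

# Same stimulus (target 10 deg/s, context perturbed to +2 deg/s),
# opposite effects on the two systems.
print(perceived_velocity(10.0, 2.0), pursuit_velocity(10.0, 2.0))
```

With these toy forms, increasing context speed lowers perceived target speed but raises pursuit speed, which is exactly the dissociation the experiment reports.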

  17. Complex Versus Simple Ankle Movement Training in Stroke Using Telerehabilitation: A Randomized Controlled Trial

    PubMed Central

    Deng, Huiqiong; Durfee, William K.; Nuckley, David J.; Rheude, Brandon S.; Severson, Amy E.; Skluzacek, Katie M.; Spindler, Kristen K.; Davey, Cynthia S.

    2012-01-01

    Background Telerehabilitation allows rehabilitative training to continue remotely after discharge from acute care and can include complex tasks known to create rich conditions for neural change. Objectives The purposes of this study were: (1) to explore the feasibility of using telerehabilitation to improve ankle dorsiflexion during the swing phase of gait in people with stroke and (2) to compare complex versus simple movements of the ankle in promoting behavioral change and brain reorganization. Design This study was a pilot randomized controlled trial. Setting Training was done in the participant's home. Testing was done in separate research labs involving functional magnetic resonance imaging (fMRI) and multi-camera gait analysis. Patients Sixteen participants with chronic stroke and impaired ankle dorsiflexion were assigned randomly to receive 4 weeks of telerehabilitation of the paretic ankle. Intervention Participants received either computerized complex movement training (track group) or simple movement training (move group). Measurements Behavioral changes were measured with the 10-m walk test and gait analysis using a motion capture system. Brain reorganization was measured with ankle tracking during fMRI. Results Dorsiflexion during gait was significantly larger in the track group compared with the move group. For fMRI, although the volume, percent volume, and intensity of cortical activation failed to show significant changes, the frequency count of the number of participants showing an increase versus a decrease in these values from pretest to posttest measurements was significantly different between the 2 groups, with the track group decreasing and the move group increasing. Limitations Limitations of this study were that no follow-up test was conducted and that a small sample size was used. 
Conclusions The results suggest that telerehabilitation, emphasizing complex task training with the paretic limb, is feasible and can be effective in promoting further dorsiflexion in people with chronic stroke. PMID:22095209

  18. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; Macleod, Todd; Gagliano, Larry

    2015-01-01

On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew; small debris poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on board a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in the proximity of the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. Using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
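Stereo ranging with a twin-camera pair follows the standard triangulation relation R = f * B / d (a generic sketch; the baseline, focal length, and disparity values below are hypothetical illustrations, not STSS camera specifications):

```python
def stereo_range(baseline_m, focal_px, disparity_px):
    """Triangulated range from a stereo pair.

    baseline_m:   separation between the two cameras, metres.
    focal_px:     focal length expressed in pixels.
    disparity_px: pixel offset of the target spot between the two images."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: target at or beyond infinity")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 0.5 m baseline, 8000 px focal length, 4 px disparity.
print(stereo_range(0.5, 8000.0, 4.0), "m")
```

The relation also shows the design trade: halving the measurable disparity (smaller or more distant debris) doubles the range uncertainty, which is why a long baseline and telephoto optics matter.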

  19. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2016-01-01

On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew; small debris poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on board a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in the proximity of the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. Using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  20. Aerial video mosaicking using binary feature tracking

    NASA Astrophysics Data System (ADS)

    Minnehan, Breton; Savakis, Andreas

    2015-05-01

    Unmanned Aerial Vehicles are becoming an increasingly attractive platform for many applications, as their cost decreases and their capabilities increase. Creating detailed maps from aerial data requires fast and accurate video mosaicking methods. Traditional mosaicking techniques rely on inter-frame homography estimations that are cascaded through the video sequence. Computationally expensive keypoint matching algorithms are often used to determine the correspondence of keypoints between frames. This paper presents a video mosaicking method that uses an object tracking approach for matching keypoints between frames to improve both efficiency and robustness. The proposed tracking method matches local binary descriptors between frames and leverages the spatial locality of the keypoints to simplify the matching process. Our method is robust to cascaded errors by determining the homography between each frame and the ground plane rather than the prior frame. The frame-to-ground homography is calculated based on the relationship of each point's image coordinates and its estimated location on the ground plane. Robustness to moving objects is integrated into the homography estimation step through detecting anomalies in the motion of keypoints and eliminating the influence of outliers. The resulting mosaics are of high accuracy and can be computed in real time.
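The locality-constrained binary matching step might look like the following sketch (not the paper's implementation; the search radius and Hamming-distance threshold are assumptions):

```python
import numpy as np

def match_binary(desc_a, pts_a, desc_b, pts_b, radius=20.0, max_dist=40):
    """Match packed binary descriptors between two frames.

    Spatial locality: for each keypoint in frame A, only frame-B keypoints
    within `radius` pixels are candidates, which keeps matching cheap.
    Returns a list of (index_a, index_b) pairs."""
    matches = []
    for i, (d, p) in enumerate(zip(desc_a, pts_a)):
        near = [j for j, q in enumerate(pts_b)
                if np.hypot(*(q - p)) <= radius]
        if not near:
            continue
        # Hamming distance = popcount of the XOR of the packed descriptors.
        dists = [np.unpackbits(d ^ desc_b[j]).sum() for j in near]
        k = int(np.argmin(dists))
        if dists[k] <= max_dist:
            matches.append((i, near[k]))
    return matches

# Toy data: one frame-A keypoint, two nearby frame-B candidates,
# one with an identical descriptor and one with a different descriptor.
desc_a = np.array([[0xF0, 0x0F]], dtype=np.uint8)
pts_a = np.array([[10.0, 10.0]])
desc_b = np.array([[0xF0, 0x0F], [0x00, 0xFF]], dtype=np.uint8)
pts_b = np.array([[12.0, 11.0], [11.0, 9.0]])
matches = match_binary(desc_a, pts_a, desc_b, pts_b)
print(matches)
```

The surviving matches would then feed a RANSAC-style homography fit, where motion outliers (moving objects) are rejected as the paper describes.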

  1. Object acquisition and tracking for space-based surveillance

    NASA Astrophysics Data System (ADS)

    1991-11-01

    This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase 1) and N00014-89-C-0015 (Phase 2). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time dependent, object dependent, and data dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
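The core advantage claimed for track-before-detect, integrating evidence over multiple frames to relax the per-frame signal-to-noise requirement, can be demonstrated numerically (a generic illustration of the sqrt(N) noise-averaging gain, not the report's processing chain):

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 16

# A noise-only pixel (unit-sigma Gaussian noise) observed over many
# independent trials of N frames each. Averaging N motion-compensated
# frames shrinks the noise sigma by sqrt(N), so a target too faint for
# single-frame detection becomes detectable after integration.
noise = rng.normal(0.0, 1.0, size=(4000, n_frames))
sigma_single = noise[:, 0].std()
sigma_integrated = noise.mean(axis=1).std()
print(sigma_single, sigma_integrated)  # ratio approaches sqrt(16) = 4
```

This is why the report's approach can use greater detection range or smaller optics: the effective SNR after N-frame integration grows roughly as sqrt(N) for a well-compensated target track.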

  2. Enhanced object-based tracking algorithm for convective rain storms and cells

    NASA Astrophysics Data System (ADS)

    Muñoz, Carlos; Wang, Li-Pen; Willems, Patrick

    2018-03-01

This paper proposes a new object-based storm tracking algorithm based upon TITAN (Thunderstorm Identification, Tracking, Analysis and Nowcasting). TITAN is a widely used convective storm tracking algorithm, but it has limitations in handling small-scale yet high-intensity storm entities due to its single-threshold identification approach. It also has difficulty effectively tracking fast-moving storms because its matching approach relies largely on the overlapping areas between successive storm entities. To address these deficiencies, a number of modifications are proposed and tested in this paper. These include a two-stage multi-threshold storm identification, a new formulation for characterizing storms' physical features, and an enhanced matching technique working in synergy with an optical-flow storm field tracker, along with a more elaborate merging and splitting scheme that follows from these modifications. High-resolution (5-min and 529-m) radar reflectivity data for 18 storm events over Belgium are used to calibrate and evaluate the algorithm. The performance of the proposed algorithm is compared with that of the original TITAN. The results suggest that the proposed algorithm can better isolate and match convective rainfall entities and provide more reliable and detailed motion estimates. Furthermore, the improvement is found to be more significant for higher rainfall intensities. The new algorithm has the potential to serve as a basis for further applications, such as storm nowcasting and long-term stochastic spatial and temporal rainfall generation.
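The two-stage multi-threshold identification idea can be sketched in one dimension (an illustrative reduction of the paper's 2-D scheme; the dBZ thresholds are hypothetical):

```python
import numpy as np

def two_stage_identify(dbz, low=35.0, high=45.0):
    """Two-stage storm identification along a 1-D reflectivity transect.

    Stage 1 finds contiguous runs of reflectivity above `low`; stage 2
    keeps only runs that also contain a core exceeding `high`, so small
    high-intensity cells survive without flooding the field with weak
    echoes. Returns (start, stop) index pairs (half-open)."""
    above = (dbz >= low).astype(np.int8)
    edges = np.flatnonzero(np.diff(np.concatenate(([0], above, [0]))))
    runs = edges.reshape(-1, 2)
    return [(int(s), int(e)) for s, e in runs if dbz[s:e].max() >= high]

# Two candidate entities above 35 dBZ; only the first has a >= 45 dBZ core.
dbz = np.array([20, 40, 50, 40, 20, 38, 39, 20], dtype=float)
storms = two_stage_identify(dbz)
print(storms)
```

In two dimensions the runs become connected components, but the filter logic, identify at the low threshold, validate at the high threshold, is the same.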

  3. Object acquisition and tracking for space-based surveillance. Final report, Dec 88-May 90

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-11-27

This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase I) and N00014-89-C-0015 (Phase II). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time-dependent, object-dependent, and data-dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.

  4. Laminated track design for inductrack maglev systems

    DOEpatents

    Post, Richard F.

    2004-07-06

    A magnet configuration comprising a pair of Halbach arrays magnetically and structurally connected together are positioned with respect to each other so that a first component of their fields substantially cancels at a first plane between them, and a second component of their fields substantially adds at this first plane. A track is located between the pair of Halbach arrays and a propulsion mechanism is provided for moving the pair of Halbach arrays along the track. When the pair of Halbach arrays move along the track and the track is not located at the first plane, a current is induced in the windings and a restoring force is exerted on the pair of Halbach arrays.

  5. Tracking Object Existence From an Autonomous Patrol Vehicle

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Scharenbroich, Lucas

    2011-01-01

    An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. 
A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the uncertainty arising from errors in sensors and upstream processes. However, traditional target tracking methods typically assume a stationary detection volume of interest, whereas in this case, one must make adjustments for being able to see only a small portion of the region of interest and understand when an alert situation has occurred. To track object existence inside and outside the vehicle's sensor range, a probability of existence was defined for each hypothesized object, and this value was updated at every time step in a Bayesian manner based on expected characteristics of the sensor and object and on whether that object was detected in the most recent time step. This value then feeds into a sequential probability ratio test (SPRT) to determine the status of the object (suspected, confirmed, or deleted). Alerts are sent upon selected status transitions. Additionally, in order to track objects that move in and out of sensor range and update the probability of existence appropriately, a variable probability of detection has been defined and the hypothesis probability equations have been re-derived to accommodate this change. Unsupervised object tracking is a pervasive issue in automated perception systems. This work could apply to any mobile platform (ground vehicle, sea vessel, air vehicle, or orbiter) that intermittently revisits regions of interest and needs to determine whether anything interesting has changed.
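    The existence-tracking idea above can be sketched as a simple Bayesian recursion followed by threshold tests standing in for the SPRT. This is a hedged illustration, not the article's actual equations: the detection probability p_d, the clutter term p_fa, and the confirm/delete thresholds are all assumed values.

```python
# Sketch only: Bayesian update of a per-object "probability of existence",
# with fixed thresholds standing in for the SPRT decision. p_d, p_fa, and
# the thresholds are illustrative assumptions, not values from the article.

def update_existence(p_exist, detected, p_d=0.9, p_fa=0.05):
    """One Bayesian step: P(object exists | detection outcome this step)."""
    if detected:
        like_exists = p_d          # object exists and was detected
        like_clutter = p_fa        # no object; detection was clutter
    else:
        like_exists = 1.0 - p_d    # object exists but was missed
        like_clutter = 1.0 - p_fa  # no object and no detection
    num = like_exists * p_exist
    return num / (num + like_clutter * (1.0 - p_exist))

def sprt_status(p_exist, confirm=0.95, delete=0.05):
    """Map the existence probability to a track status."""
    if p_exist >= confirm:
        return "confirmed"
    if p_exist <= delete:
        return "deleted"
    return "suspected"

# A run of mostly-successful detections drives the probability up:
p = 0.5
for hit in [True, True, False, True]:
    p = update_existence(p, hit)
status = sprt_status(p)
```

    Starting from p = 0.5, the detection run above ends with the object confirmed; a run of misses would instead drive the probability toward the deletion threshold, triggering a "disappeared object" alert on the status transition.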

  6. Independent motion detection with a rival penalized adaptive particle filter

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Hübner, Wolfgang; Arens, Michael

    2014-10-01

Aggregation of pixel-based motion detection into regions of interest, each containing the view of a single moving object in the scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or form the basis for action recognition. Further, motion is an essential saliency measure that can effectively support high-level image analysis. For static cameras, background subtraction methods achieve good results; motion aggregation on freely moving cameras, however, remains a largely unsolved problem. The image flow measured by a freely moving camera results from two major motion types: first, the ego-motion of the camera, and second, object motion that is independent of the camera motion. When a scene is captured with such a camera, these two motion types are inseparably blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. In this, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion-compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is concentrated on image regions showing strong amplitudes in the difference image that are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used.
In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure improved multi-modality. Further, the filter design helps to generate a particle distribution that remains homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
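    A minimal sketch of the kind of filter described above, assuming a 1-D strip of an ego-motion-compensated difference image and a plain bootstrap (SIR) particle filter: difference amplitudes act as the measurement likelihood. The paper's rival-penalization competition scheme and full 2-D handling are deliberately omitted, and all numbers are illustrative.

```python
import random

# Sketch: one predict-weight-resample step of a bootstrap particle filter
# over a single row of a difference image. The rival-penalization scheme
# from the paper is NOT implemented here; this is only the baseline filter.

def pf_step(particles, diff_row, motion_std=2.0):
    width = len(diff_row)
    # Predict: random-walk motion model, clamped to the image.
    pred = [min(width - 1, max(0, int(round(p + random.gauss(0, motion_std)))))
            for p in particles]
    # Weight: difference-image amplitude as likelihood of independent motion.
    w = [diff_row[p] + 1e-9 for p in pred]
    total = sum(w)
    w = [x / total for x in w]
    # Systematic resampling keeps the particle count fixed.
    n = len(pred)
    cdf, c = [], 0.0
    for x in w:
        c += x
        cdf.append(c)
    u0 = random.random() / n
    out, i = [], 0
    for k in range(n):
        u = u0 + k / n
        while i < n - 1 and cdf[i] < u:
            i += 1
        out.append(pred[i])
    return out

# Toy difference row: strong amplitudes around column 30 (a moving object).
random.seed(0)
diff_row = [0.01] * 60
for j in range(28, 33):
    diff_row[j] = 1.0
particles = [random.randrange(60) for _ in range(200)]
for _ in range(5):
    particles = pf_step(particles, diff_row)
```

    After a few iterations the particle cloud concentrates on the high-amplitude region; without a competition scheme such a baseline filter tends to collapse onto a single mode when several movers are present, which is exactly the failure the rival-penalized design addresses.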

  7. Haptic Tracking Permits Bimanual Independence

    ERIC Educational Resources Information Center

    Rosenbaum, David A.; Dawson, Amanda A.; Challis, John H.

    2006-01-01

    This study shows that in a novel task--bimanual haptic tracking--neurologically normal human adults can move their 2 hands independently for extended periods of time with little or no training. Participants lightly touched buttons whose positions were moved either quasi-randomly in the horizontal plane by 1 or 2 human drivers (Experiment 1), in…

  8. Compensating For Movement Of Eye In Laser Surgery

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1991-01-01

    Conceptual system for laser surgery of retina includes subsystem that tracks position of retina. Tracking signal used to control galvanometer-driven mirrors keeping laser aimed at desired spot on retina as eye moves. Alternatively or additionally, indication of position used to prevent firing of laser when eye moved too far from proper aiming position.

  9. Out of Reach, Out of Mind? Infants' Comprehension of References to Hidden Inaccessible Objects.

    PubMed

    Osina, Maria A; Saylor, Megan M; Ganea, Patricia A

    2017-09-01

    This study investigated the nature of infants' difficulty understanding references to hidden inaccessible objects. Twelve-month-old infants (N = 32) responded to the mention of objects by looking at, pointing at, or approaching them when the referents were visible or accessible, but not when they were hidden and inaccessible (Experiment I). Twelve-month-olds (N = 16) responded robustly when a container with the hidden referent was moved from a previously inaccessible position to an accessible position before the request, but failed to respond when the reverse occurred (Experiment II). This suggests that infants might be able to track the hidden object's dislocations and update its accessibility as it changes. Knowing the hidden object is currently inaccessible inhibits their responding. Older, 16-month-old (N = 17) infants' performance was not affected by object accessibility. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.

  10. Observations of interplanetary dust by the Juno magnetometer investigation

    NASA Astrophysics Data System (ADS)

    Benn, M.; Jorgensen, J. L.; Denver, T.; Brauer, P.; Jorgensen, P. S.; Andersen, A. C.; Connerney, J. E. P.; Oliversen, R.; Bolton, S. J.; Levin, S.

    2017-05-01

    One of the Juno magnetometer investigation's star cameras was configured to search for unidentified objects during Juno's transit en route to Jupiter. This camera detects and registers luminous objects to magnitude 8. Objects persisting in more than five consecutive images and moving with an apparent angular rate of between 2 and 18,000 arcsec/s were recorded. Among the objects detected were a small group of objects tracked briefly in close proximity to the spacecraft. The trajectory of these objects demonstrates that they originated on the Juno spacecraft, evidently excavated by micrometeoroid impacts on the solar arrays. The majority of detections occurred just prior to and shortly after Juno's transit of the asteroid belt. This rather novel detection technique utilizes the Juno spacecraft's prodigious 60 m2 of solar array as a dust detector and provides valuable information on the distribution and motion of interplanetary (>μm sized) dust.

  11. Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Pretto, N.; Poiesi, F.

    2017-11-01

We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use a Google Cardboard as the HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.

  12. Motion tracing system for ultrasound guided HIFU

    NASA Astrophysics Data System (ADS)

    Xiao, Xu; Jiang, Tingyi; Corner, George; Huang, Zhihong

    2017-03-01

One main limitation of HIFU treatment is the abdominal movement of the liver and kidney caused by respiration. This study set up a tracking model that mainly comprises a target-carrying box and a motion-driving balloon. A real-time B-mode ultrasound guidance method suitable for tracking abdominal organ motion in 2D was established and tested. For the setup, phantoms mimicking moving organs were carefully prepared, with agar surrounding round-shaped egg white as the target of focused ultrasound ablation. Physiological phantoms and animal tissues were driven to move reciprocally along the main axial direction of the ultrasound imaging probe, with slight motion perpendicular to the axial direction. The moving speed and range could be adjusted by controlling the inflation and deflation speed and volume of the balloon, which was driven by a medical ventilator. A 6-DOF robotic arm was used to position the focused ultrasound transducer. The overall system was designed to simulate the actual movement caused by human respiration. HIFU ablation experiments using phantoms and animal organs were conducted to test the tracking performance. Ultrasound strain elastography was used afterwards to estimate the efficiency of the tracking algorithms and system. In the moving state, the axial size of the lesion (perpendicular to the movement direction) averaged 4 mm, one third larger than the lesion obtained when the target was not moving. This demonstrates the possibility of developing a low-cost real-time method for tracking organ motion during HIFU treatment of the liver or kidney.

  13. Modulation of high-frequency vestibuloocular reflex during visual tracking in humans

    NASA Technical Reports Server (NTRS)

    Das, V. E.; Leigh, R. J.; Thomas, C. W.; Averbuch-Heller, L.; Zivotofsky, A. Z.; Discenna, A. O.; Dell'Osso, L. F.

    1995-01-01

    1. Humans may visually track a moving object either when they are stationary or in motion. To investigate visual-vestibular interaction during both conditions, we compared horizontal smooth pursuit (SP) and active combined eye-head tracking (CEHT) of a target moving sinusoidally at 0.4 Hz in four normal subjects while the subjects were either stationary or vibrated in yaw at 2.8 Hz. We also measured the visually enhanced vestibuloocular reflex (VVOR) during vibration in yaw at 2.8 Hz over a peak head velocity range of 5-40 degrees/s. 2. We found that the gain of the VVOR at 2.8 Hz increased in all four subjects as peak head velocity increased (P < 0.001), with minimal phase changes, such that mean retinal image slip was held below 5 degrees/s. However, no corresponding modulation in vestibuloocular reflex gain occurred with increasing peak head velocity during a control condition when subjects were rotated in darkness. 3. During both horizontal SP and CEHT, tracking gains were similar, and the mean slip speed of the target's image on the retina was held below 5.5 degrees/s whether subjects were stationary or being vibrated at 2.8 Hz. During both horizontal SP and CEHT of target motion at 0.4 Hz, while subjects were vibrated in yaw, VVOR gain for the 2.8-Hz head rotations was similar to or higher than that achieved during fixation of a stationary target. This is in contrast to the decrease of VVOR gain that is reported while stationary subjects perform CEHT.(ABSTRACT TRUNCATED AT 250 WORDS).

  14. Comparison of Predictable Smooth Ocular and Combined Eye-Head Tracking Behaviour in Patients with Lesions Affecting the Brainstem and Cerebellum

    NASA Technical Reports Server (NTRS)

    Grant, Michael P.; Leigh, R. John; Seidman, Scott H.; Riley, David E.; Hanna, Joseph P.

    1992-01-01

We compared the ability of eight normal subjects and 15 patients with brainstem or cerebellar disease to follow a moving visual stimulus smoothly with either the eyes alone or with combined eye-head tracking. The visual stimulus was either a laser spot (horizontal and vertical planes) or a large rotating disc (torsional plane), which moved at one sinusoidal frequency for each subject. The visually enhanced Vestibulo-Ocular Reflex (VOR) was also measured in each plane. In the horizontal and vertical planes, we found that if tracking gain (gaze velocity/target velocity) for smooth pursuit was close to 1, the gain of combined eye-head tracking was similar. If the tracking gain during smooth pursuit was less than about 0.7, combined eye-head tracking was usually superior. Most patients, irrespective of diagnosis, showed combined eye-head tracking that was superior to smooth pursuit; only two patients showed the converse. In the torsional plane, in which optokinetic responses were weak, combined eye-head tracking was much superior, and this was the case in both subjects and patients. We found that a linear model, in which an internal ocular tracking signal cancelled the VOR, could account for our findings in most normal subjects in the horizontal and vertical planes, but not in the torsional plane. The model failed to account for tracking behaviour in most patients in any plane, and suggested that the brain may use additional mechanisms to reduce the internal gain of the VOR during combined eye-head tracking. Our results confirm that certain patients who show impairment of smooth-pursuit eye movements preserve their ability to smoothly track a moving target with combined eye-head tracking.

  15. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas, and they knew the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind it, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling that he is operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  16. A data processing method based on tracking light spot for the laser differential confocal component parameters measurement system

    NASA Astrophysics Data System (ADS)

    Shao, Rongjun; Qiu, Lirong; Yang, Jiamiao; Zhao, Weiqian; Zhang, Xin

    2013-12-01

We have proposed a component parameters measuring method based on the differential confocal focusing theory. In order to improve the positioning precision of the laser differential confocal component parameters measurement system (LDDCPMS), this paper provides a data processing method based on tracking the light spot. To reduce the error caused by the light spot moving while the axial intensity signal is collected, an image centroiding algorithm is used to find and track the center of the Airy disk in the images collected by the laser differential confocal system. To weaken the influence of higher-harmonic noise during the measurement, a Gaussian filter is used to process the axial intensity signal. Ultimately, the zero point corresponding to the focus of the objective in the differential confocal system is obtained by linear fitting of the differential confocal axial intensity data. Preliminary experiments indicate that the method based on tracking the light spot can accurately collect the axial intensity response signal of the virtual pinhole and improve the anti-interference ability of the system. Thus it improves the system's positioning accuracy.
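    The centroiding step can be sketched as an intensity-weighted mean over thresholded pixels. This is a generic illustration of image centroiding, not the LDDCPMS implementation; the array layout and threshold value are assumptions.

```python
# Sketch: intensity-weighted centroid of a bright spot (e.g. an Airy disk)
# in a 2-D intensity array. The threshold suppresses background pixels so
# they do not bias the center estimate; its value here is illustrative.

def centroid(image, threshold=0.1):
    """Intensity-weighted centroid (row, col) of pixels above threshold."""
    m = sy = sx = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > threshold:
                m += v
                sy += v * y
                sx += v * x
    return (sy / m, sx / m)

# Toy frame: a small symmetric spot centred on pixel (2, 3).
frame = [[0.0] * 6 for _ in range(5)]
frame[2][3] = 1.0
for y, x in [(1, 3), (3, 3), (2, 2), (2, 4)]:
    frame[y][x] = 0.5

centroid(frame)  # → (2.0, 3.0)
```

    Because the centroid is a weighted average, it can resolve the spot center to sub-pixel precision once the spot spans several pixels, which is what allows the axial intensity signal to keep sampling the virtual pinhole position as the spot drifts.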

  17. Dynamical features of hazardous near-Earth objects

    NASA Astrophysics Data System (ADS)

    Emel'yanenko, V. V.; Naroenkov, S. A.

    2015-07-01

We discuss the dynamical features of near-Earth objects moving in dangerous proximity to Earth. We report the computation results for the motions of all observed near-Earth objects over a 600-year-long time period: 300 years in the past and 300 years in the future. We analyze the dynamical features of Earth-approaching objects. In particular, we established that the observed distribution of geocentric velocities of dangerous objects depends on their size. No bodies with geocentric velocities smaller than 5 km/s have been found among hazardous objects with absolute magnitudes H <18, whereas 9% of observed objects with H <27 pass near Earth moving at such velocities. On the other hand, we found a tendency for geocentric velocities to increase at H >29. We estimated the distribution of absolute magnitudes of hazardous objects based on our analysis of the data for the asteroids that have passed close to Earth. We inferred the Earth-impact frequencies for objects of different sizes. Impacts of objects with H <18 with Earth occur on average once every 0.53 Myr, and impacts of objects with H <27—once every 130-240 years. We show that currently about 0.1% of all near-Earth objects with diameters greater than 10 m have been discovered. We point out the discrepancies between the estimates of impact rates of Chelyabinsk-type objects, determined from fireball observations and from the data of telescopic asteroid tracking surveys. These estimates can be reconciled assuming that Chelyabinsk-sized asteroids have very low albedos (about 0.02 on average).

  18. A stochastic model for tropical cyclone tracks based on Reanalysis data and GCM output

    NASA Astrophysics Data System (ADS)

    Ito, K.; Nakano, S.; Ueno, G.

    2014-12-01

In the present study, we try to express the probability distribution of tropical cyclone (TC) trajectories estimated on the basis of GCM output. TC tracks are mainly controlled by the atmospheric circulation, such as the trade winds and the Westerlies, and are also influenced to move northward by the beta effect. The TC tracks calculated with trajectory analysis would thus correspond to the movement of TCs due to the atmospheric circulation. Comparing the result of the trajectory analysis from reanalysis data with the Best Track (BT) of TCs in the present climate, the structure of the trajectories appears similar to the BT. However, there is a significant problem in calculating a trajectory in the reanalysis wind field, because the reanalysis data contain many rotational elements, including the TCs themselves. We assume that a TC moves along the steering current and that the rotations do not greatly influence the direction of motion. We are designing a state-space model based on the trajectory analysis and introduce an adjustment parameter for the moving vector. Here, a simple track generation model is developed. This model offers the possibility of obtaining probability distributions of calculated TC tracks by fitting to the BT using data assimilation. This work was conducted under the framework of the "Development of Basic Technology for Risk Information on Climate Change" supported by the SOUSEI Program of the Ministry of Education, Culture, Sports, Science, and Technology.

  19. Intelligence-aided multitarget tracking for urban operations - a case study: counter terrorism

    NASA Astrophysics Data System (ADS)

    Sathyan, T.; Bharadwaj, K.; Sinha, A.; Kirubarajan, T.

    2006-05-01

    In this paper, we present a framework for tracking multiple mobile targets in an urban environment based on data from multiple sources of information, and for evaluating the threat these targets pose to assets of interest (AOI). The motivating scenario is one where we have to track many targets, each with different (unknown) destinations and/or intents. The tracking algorithm is aided by information about the urban environment (e.g., road maps, buildings, hideouts), and strategic and intelligence data. The tracking algorithm needs to be dynamic in that it has to handle a time-varying number of targets and the ever-changing urban environment depending on the locations of the moving objects and AOI. Our solution uses the variable structure interacting multiple model (VS-IMM) estimator, which has been shown to be effective in tracking targets based on road map information. Intelligence information is represented as target class information and incorporated through a combined likelihood calculation within the VS-IMM estimator. In addition, we develop a model to calculate the probability that a particular target can attack a given AOI. This model for the calculation of the probability of attack is based on the target kinematic and class information. Simulation results are presented to demonstrate the operation of the proposed framework on a representative scenario.

  20. Development of a four-axis moving phantom for patient-specific QA of surrogate signal-based tracking IMRT.

    PubMed

    Mukumoto, Nobutaka; Nakamura, Mitsuhiro; Yamada, Masahiro; Takahashi, Kunio; Akimoto, Mami; Miyabe, Yuki; Yokota, Kenji; Kaneko, Shuji; Nakamura, Akira; Itasaka, Satoshi; Matsuo, Yukinori; Mizowaki, Takashi; Kokubo, Masaki; Hiraoka, Masahiro

    2016-12-01

    The purposes of this study were two-fold: first, to develop a four-axis moving phantom for patient-specific quality assurance (QA) in surrogate signal-based dynamic tumor-tracking intensity-modulated radiotherapy (DTT-IMRT), and second, to evaluate the accuracy of the moving phantom and perform patient-specific dosimetric QA of the surrogate signal-based DTT-IMRT. The four-axis moving phantom comprised three orthogonal linear actuators for target motion and a fourth one for surrogate motion. The positional accuracy was verified using four laser displacement gauges under static conditions (±40 mm displacements along each axis) and moving conditions [eight regular sinusoidal and fourth-power-of-sinusoidal patterns with peak-to-peak motion ranges (H) of 10-80 mm and a breathing period (T) of 4 s, and three irregular respiratory patterns with H of 1.4-2.5 mm in the left-right, 7.7-11.6 mm in the superior-inferior, and 3.1-4.2 mm in the anterior-posterior directions for the target motion, and 4.8-14.5 mm in the anterior-posterior direction for the surrogate motion, and T of 3.9-4.9 s]. Furthermore, perpendicularity, defined as the vector angle between any two axes, was measured using an optical measurement system. The reproducibility of the uncertainties in DTT-IMRT was then evaluated. Respiratory motions from 20 patients acquired in advance were reproduced and compared three-dimensionally with the originals. Furthermore, patient-specific dosimetric QAs of DTT-IMRT were performed for ten pancreatic cancer patients. The doses delivered to Gafchromic films under tracking and moving conditions were compared with those delivered under static conditions without dose normalization. Positional errors of the moving phantom under static and moving conditions were within 0.05 mm. The perpendicularity of the moving phantom was within 0.2° of 90°. 
The differences in prediction errors between the original and reproduced respiratory motions were -0.1 ± 0.1 mm for the lateral direction, -0.1 ± 0.2 mm for the superior-inferior direction, and -0.1 ± 0.1 mm for the anterior-posterior direction. The dosimetric accuracy showed significant improvements, of 92.9% ± 4.0% with tracking versus 69.8% ± 7.4% without tracking, in the passing rates of γ with the criterion of 3%/1 mm (p < 0.001). Although the dosimetric accuracy of IMRT without tracking showed a significant negative correlation with the 3D motion range of the target (r = - 0.59, p < 0.05), there was no significant correlation for DTT-IMRT (r = 0.03, p = 0.464). The developed four-axis moving phantom had sufficient accuracy to reproduce patient respiratory motions, allowing patient-specific QA of the surrogate signal-based DTT-IMRT under realistic conditions. Although IMRT without tracking decreased the dosimetric accuracy as the target motion increased, the DTT-IMRT achieved high dosimetric accuracy.

  1. Management of three-dimensional intrafraction motion through real-time DMLC tracking.

    PubMed

    Sawant, Amit; Venkat, Raghu; Srivastava, Vikram; Carlson, David; Povzner, Sergey; Cattell, Herb; Keall, Paul

    2008-05-01

    Tumor tracking using a dynamic multileaf collimator (DMLC) represents a promising approach for intrafraction motion management in thoracic and abdominal cancer radiotherapy. In this work, we develop, empirically demonstrate, and characterize a novel 3D tracking algorithm for real-time, conformal, intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT)-based radiation delivery to targets moving in three dimensions. The algorithm obtains real-time information of target location from an independent position monitoring system and dynamically calculates MLC leaf positions to account for changes in target position. Initial studies were performed to evaluate the geometric accuracy of DMLC tracking of 3D target motion. In addition, dosimetric studies were performed on a clinical linac to evaluate the impact of real-time DMLC tracking for conformal, step-and-shoot (S-IMRT), dynamic (D-IMRT), and VMAT deliveries to a moving target. The efficiency of conformal and IMRT delivery in the presence of tracking was determined. Results show that submillimeter geometric accuracy in all three dimensions is achievable with DMLC tracking. Significant dosimetric improvements were observed in the presence of tracking for conformal and IMRT deliveries to moving targets. A gamma index evaluation with a 3%-3 mm criterion showed that deliveries without DMLC tracking exhibit between 1.7 (S-IMRT) and 4.8 (D-IMRT) times more dose points that fail the evaluation compared to corresponding deliveries with tracking. The efficiency of IMRT delivery, as measured in the lab, was observed to be significantly lower in case of tracking target motion perpendicular to MLC leaf travel compared to motion parallel to leaf travel. Nevertheless, these early results indicate that accurate, real-time DMLC tracking of 3D tumor motion is feasible and can potentially result in significant geometric and dosimetric advantages leading to more effective management of intrafraction motion.

  2. Object detection and tracking system

    DOEpatents

    Ma, Tian J.

    2017-05-30

    Methods and apparatuses for analyzing a sequence of images for an object are disclosed herein. In a general embodiment, the method identifies a region of interest in the sequence of images. The object is likely to move within the region of interest. The method divides the region of interest in the sequence of images into sections and calculates signal-to-noise ratios for a section in the sections. A signal-to-noise ratio for the section is calculated using the section in the image, a prior section in a prior image to the image, and a subsequent section in a subsequent image to the image. The signal-to-noise ratios are for potential velocities of the object in the section. The method also selects a velocity from the potential velocities for the object in the section using a potential velocity in the potential velocities having a highest signal-to-noise ratio in the signal-to-noise ratios.
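    The velocity-selection idea can be illustrated with a 1-D sketch: for each candidate velocity, shift the prior and subsequent frames so that a target moving at that velocity would align across all three, then score the alignment with a signal-to-noise-like ratio. The scoring formula below is an illustrative stand-in for the patent's actual calculation.

```python
# Sketch of velocity-hypothesis testing over three consecutive frames,
# reduced to 1-D signals. The SNR-style score rewards hypotheses under
# which the shifted prior frame, the current frame, and the shifted
# subsequent frame agree; the exact formula is an assumption.

def shift(sig, v):
    """Circularly shift a 1-D signal forward by v samples."""
    n = len(sig)
    return [sig[(i - v) % n] for i in range(n)]

def snr_for_velocity(prior, current, nxt, v):
    # Align neighbours under the hypothesis "v pixels per frame".
    a, b, c = shift(prior, v), current, shift(nxt, -v)
    stacked = [(x + y + z) / 3.0 for x, y, z in zip(a, b, c)]
    signal = max(stacked)                      # coherent peak if aligned
    noise = sum(abs(x - y) + abs(y - z)        # residual misalignment
                for x, y, z in zip(a, b, c)) / len(b) + 1e-9
    return signal / noise

def best_velocity(prior, current, nxt, candidates):
    return max(candidates,
               key=lambda v: snr_for_velocity(prior, current, nxt, v))

# Dim point target moving 2 pixels per frame: position 3 → 5 → 7.
n = 12
prior = [0.0] * n;   prior[3] = 1.0
current = [0.0] * n; current[5] = 1.0
nxt = [0.0] * n;     nxt[7] = 1.0

best_velocity(prior, current, nxt, range(-3, 4))  # → 2
```

    Stacking frames along a hypothesized velocity in this way boosts a dim mover coherently while noise adds incoherently, which is why the highest-SNR hypothesis identifies the object's velocity even when no single frame shows it clearly.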

  3. Radar signature generation for feature-aided tracking research

    NASA Astrophysics Data System (ADS)

    Piatt, Teri L.; Sherwood, John U.; Musick, Stanton H.

    2005-05-01

Accurately associating sensor kinematic reports to known tracks, new tracks, or clutter is one of the greatest obstacles to effective track estimation. Feature-aiding is one technology that is emerging to address this problem, and it is expected that adding target features will aid report association by enhancing track accuracy and lengthening track life. The Sensors Directorate of the Air Force Research Laboratory is sponsoring a challenge problem called Feature-Aided Tracking of Stop-move Objects (FATSO). The long-range goal of this research is to provide a full suite of public data and software to encourage researchers from government, industry, and academia to participate in radar-based feature-aided tracking research. The FATSO program is currently releasing a vehicle database coupled to a radar signature generator. The completed FATSO system will incorporate this database/generator into a Monte Carlo simulation environment for evaluating multiplatform/multitarget tracking scenarios. The currently released data and software contain the following: eight target models, including a tank, ammo hauler, and self-propelled artillery vehicles; and a radar signature generator capable of producing SAR and HRR signatures of all eight modeled targets in almost any configuration or articulation. In addition, the signature generator creates Z-buffer data, label map data, and radar cross-section prediction and allows the user to add noise to an image while varying sensor-target geometry (roll, pitch, yaw, squint). Future capabilities of this signature generator, such as scene models and EO signatures as well as details of the complete FATSO testbed, are outlined.

  4. Solar tracking system

    DOEpatents

    Okandan, Murat; Nielson, Gregory N.

    2016-07-12

    Solar tracking systems, as well as methods of using such solar tracking systems, are disclosed. More particularly, embodiments of the solar tracking systems include lateral supports horizontally positioned between uprights to support photovoltaic modules. The lateral supports may be raised and lowered along the uprights or translated to cause the photovoltaic modules to track the moving sun.

  5. SU-G-BRA-17: Tracking Multiple Targets with Independent Motion in Real-Time Using a Multi-Leaf Collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ge, Y; Keall, P; Poulsen, P

    Purpose: Multiple targets with large intrafraction independent motion are often involved in advanced prostate, lung, abdominal, and head and neck cancer radiotherapy. The current standard of care treats these with the originally planned fields, jeopardizing the treatment outcomes. A real-time multi-leaf collimator (MLC) tracking method has been developed to address this problem for the first time. This study evaluates the geometric uncertainty of the multi-target tracking method. Methods: Four treatment scenarios are simulated based on a prostate IMAT plan to treat a moving prostate target and a static pelvic node target: 1) real-time multi-target MLC tracking; 2) real-time prostate-only MLC tracking; 3) correcting for prostate interfraction motion at setup only; and 4) no motion correction. The geometric uncertainty of the treatment is assessed by the sum of the erroneously underexposed target area and overexposed healthy tissue areas for each individual target. Two patient-measured prostate trajectories of average 2 and 5 mm motion magnitude are used for simulations. Results: Real-time multi-target tracking accumulates the least uncertainty overall. As expected, it covers the static nodal target about as well as treatment with no motion correction, and covers the moving prostate about as well as real-time prostate-only tracking. Multi-target tracking reduces >90% of uncertainty for the static nodal target compared to real-time prostate-only tracking or interfraction motion correction. For the prostate target, depending on the motion trajectory, which affects the uncertainty due to leaf-fitting, multi-target tracking may or may not perform better than correcting for interfraction prostate motion by shifting the patient at setup, but it reduces ∼50% of uncertainty compared to no motion correction. Conclusion: The developed real-time multi-target MLC tracking can adapt for independently moving targets better than other available treatment adaptations.
This will enable PTV margin reduction to minimize healthy tissue toxicity while maintaining tumor coverage when treating advanced disease with independently moving targets. The authors acknowledge funding support from the Australian NHMRC Australia Fellowship and NHMRC Project Grant No. APP1042375.

  6. Vehicle track interaction safety standards

    DOT National Transportation Integrated Search

    2014-04-02

    Vehicle/Track Interaction (VTI) Safety Standards aim to reduce the risk of derailments and other accidents attributable to the dynamic interaction between moving vehicles and the track over which they operate. On March 13, 2013, the Federal R...

  7. Feature-aided multiple target tracking in the image plane

    NASA Astrophysics Data System (ADS)

    Brown, Andrew P.; Sullivan, Kevin J.; Miller, David J.

    2006-05-01

    Vast quantities of EO and IR data are collected on airborne platforms (manned and unmanned) and terrestrial platforms (including fixed installations, e.g., at street intersections), and can be exploited to aid in the global war on terrorism. However, intelligent preprocessing is required to enable operator efficiency and to provide commanders with actionable target information. To this end, we have developed an image plane tracker which automatically detects and tracks multiple targets in image sequences using both motion and feature information. The effects of platform and camera motion are compensated via image registration, and a novel change detection algorithm is applied for accurate moving target detection. The contiguous pixel blob on each moving target is segmented for use in target feature extraction and model learning. Feature-based target location measurements are used for tracking through move-stop-move maneuvers, close target spacing, and occlusion. Effective clutter suppression is achieved using joint probabilistic data association (JPDA), and confirmed target tracks are indicated for further processing or operator review. In this paper we describe the algorithms implemented in the image plane tracker and present performance results obtained with video clips from the DARPA VIVID program data collection and from a miniature unmanned aerial vehicle (UAV) flight.
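The motion-based detection step described above (registration followed by change detection) can be illustrated with a minimal sketch, assuming the frames have already been registered; the paper's own change detection algorithm is novel and not reproduced here:

```python
import numpy as np

def detect_moving_pixels(prev_frame, curr_frame, threshold=25):
    """Flag pixels whose inter-frame difference exceeds a threshold.

    Both frames are assumed to be grayscale uint8 arrays of identical
    shape, already registered so camera motion is compensated.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Toy example: a bright 2x2 blob shifts one pixel to the right.
prev = np.zeros((5, 5), dtype=np.uint8)
curr = np.zeros((5, 5), dtype=np.uint8)
prev[1:3, 1:3] = 200   # blob at the old position
curr[1:3, 2:4] = 200   # blob at the new position
mask = detect_moving_pixels(prev, curr)  # True only where the scene changed
```

Contiguous `True` regions in `mask` correspond to the "pixel blobs" that the tracker segments for feature extraction.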

  8. A rational fraction polynomials model to study vertical dynamic wheel-rail interaction

    NASA Astrophysics Data System (ADS)

    Correa, N.; Vadillo, E. G.; Santamaria, J.; Gómez, J.

    2012-04-01

    This paper presents a model designed to study vertical interactions between wheel and rail when the wheel moves over a rail welding. The model focuses on the spatial domain, and is drawn up in a simple fashion from track receptances. The paper obtains the receptances from a full track model in the frequency domain already developed by the authors, which includes deformation of the rail section and propagation of bending, elongation and torsional waves along an infinite track. Transformation between domains was secured by applying a modified rational fraction polynomials method. This obtains a track model with very few degrees of freedom, and thus with minimum time consumption for integration, with a good match to the original model over a sufficiently broad range of frequencies. Wheel-rail interaction is modelled on a non-linear Hertzian spring, and consideration is given to parametric excitation caused by the wheel moving over a sleeper, since this is a moving wheel model and not a moving irregularity model. The model is used to study the dynamic loads and displacements emerging at the wheel-rail contact passing over a welding defect at different speeds.

  9. Contextual effects on motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2008-08-15

    Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.

  10. Vision requirements for Space Station applications

    NASA Technical Reports Server (NTRS)

    Crouse, K. R.

    1985-01-01

    Problems which will be encountered by computer vision systems in Space Station operations are discussed, along with solutions being examined at the Johnson Space Center. Lighting cannot be controlled in space, nor can the random presence of reflective surfaces. Task-oriented capabilities are to include docking to moving objects, identification of unexpected objects during autonomous flights to different orbits, and diagnosis of damage and repair requirements by autonomous Space Station inspection robots. The approaches being examined to provide these and other capabilities include television and IR sensors, advanced pattern recognition programs fed by data from laser probes, laser radar for robot eyesight, and arrays of SMART sensors for automated location and tracking of target objects. Attention is also being given to liquid crystal light valves for optical processing of images for comparison with on-board electronic libraries of images.

  11. Investigation of kinematic features for dismount detection and tracking

    NASA Astrophysics Data System (ADS)

    Narayanaswami, Ranga; Tyurina, Anastasia; Diel, David; Mehra, Raman K.; Chinn, Janice M.

    2012-05-01

    With recent changes in threats and methods of warfighting and the use of unmanned aircraft, ISR (Intelligence, Surveillance and Reconnaissance) activities have become critical to the military's efforts to maintain situational awareness and neutralize the enemy's activities. The identification and tracking of dismounts from surveillance video is an important step in this direction. Our approach combines advanced ultra-fast registration techniques to identify moving objects with a classification algorithm based on both static and kinematic features of the objects. Our objective was to push the acceptable resolution beyond the capability of industry-standard feature extraction methods such as SIFT (Scale Invariant Feature Transform) and, inspired by it, SURF (Speeded-Up Robust Features). Both of these methods utilize single-frame images. We exploited the temporal component of the video signal to develop kinematic features. Of particular interest were the easily distinguishable frequencies characteristic of bipedal human versus quadrupedal animal motion. We examine limits of performance, and the frame rates and resolution required for discriminating humans, animals, and vehicles. A few seconds of video signal at an acceptable frame rate allow us to lower the resolution requirements for individual frames by as much as a factor of five, which translates into a corresponding increase in the acceptable standoff distance between the sensor and the object of interest.
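The kinematic idea, distinguishing motion classes by their characteristic temporal frequency, can be sketched with a simple spectral estimate (illustrative only; the study's actual features and classifier are not reproduced):

```python
import numpy as np

def dominant_frequency(signal, frame_rate):
    """Return the strongest frequency component (Hz) of a motion signature."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / frame_rate)
    return freqs[np.argmax(spectrum)]

# Simulated limb oscillation at 2 Hz, sampled at 30 frames/s for 4 s.
t = np.arange(0, 4, 1.0 / 30)
gait_signal = np.sin(2 * np.pi * 2.0 * t)
f_peak = dominant_frequency(gait_signal, frame_rate=30)
```

A few seconds of video at a modest frame rate is enough to resolve gait frequencies of a few hertz, which is why the temporal signal can compensate for low per-frame resolution.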

  12. Near-Earth Object Orbit Linking with the Large Synoptic Survey Telescope

    NASA Astrophysics Data System (ADS)

    Vereš, Peter; Chesley, Steven R.

    2017-07-01

    We have conducted a detailed simulation of the ability of the Large Synoptic Survey Telescope (LSST) to link near-Earth and main belt asteroid detections into orbits. The key elements of the study were a high-fidelity detection model and the presence of false detections in the form of both statistical noise and difference image artifacts. We employed the Moving Object Processing System (MOPS) to generate tracklets, tracks, and orbits with a realistic detection density for one month of the LSST survey. The main goals of the study were to understand whether (a) the linking of near-Earth objects (NEOs) into orbits can succeed in a realistic survey, (b) the number of false tracks and orbits will be manageable, and (c) the accuracy of linked orbits would be sufficient for automated processing of discoveries and attributions. We found that the overall density of asteroids was more than 5000 per LSST field near opposition on the ecliptic, plus up to 3000 false detections per field in good seeing. We achieved 93.6% NEO linking efficiency for H < 22 on tracks composed of tracklets from at least three distinct nights within a 12 day interval. The derived NEO catalog was composed of 96% correct linkages. Less than 0.1% of orbits included false detections, and the remainder of false linkages stemmed from main belt confusion, which was an artifact of the short time span of the simulation. The MOPS linking efficiency can be improved by refined attribution of detections to known objects and by improved tuning of the internal kd-tree linking algorithms.
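The core of tracklet-to-track linking, testing whether detections from several nights are mutually consistent with smooth sky motion, can be illustrated with a toy residual check against a constant-rate model (MOPS itself uses kd-trees and full orbit fits; this shows only the consistency idea):

```python
import numpy as np

def consistent_track(times, positions, tol=0.01):
    """Check whether detections from several nights fit constant-rate motion.

    times: observation epochs in days; positions: sky coordinates in
    degrees, shape (n, 2). Returns True if the worst residual of a
    linear (constant angular rate) fit stays below `tol` degrees.
    """
    A = np.column_stack([times, np.ones_like(times)])
    coeffs, *_ = np.linalg.lstsq(A, positions, rcond=None)
    residuals = positions - A @ coeffs
    return float(np.abs(residuals).max()) < tol

nights = np.array([0.0, 3.0, 7.0])  # three distinct nights in a 12 day window
good = np.array([[10.0, 5.00], [10.3, 5.06], [10.7, 5.14]])  # steady drift
bad = np.array([[10.0, 5.00], [10.3, 5.06], [12.0, 9.00]])   # wrong object
```

Tracks failing the residual test are the "false tracks" that the study counts; real linking must also survive the orbit-fitting stage.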

  13. Cavitation during wire brushing

    NASA Astrophysics Data System (ADS)

    Li, Bo; Zou, Jun; Ji, Chen

    2016-11-01

    In daily life, brushes are often used to scrub the surfaces of objects: teeth, pots, shoes, pools, etc. Wire brushes are used for cleaning rust and stripping paint, and can also be used to clean the teeth of large animals such as horses and crocodiles. By observing the brushing process in water, we captured the cavitation phenomenon along the track of the moving brush wire, showing that cavitation can also affect the surface. In order to take clear pictures of the entire cavity, a simplified model of a single stainless steel wire brushing a boss is adopted in our experiment. A transparent organic glass tank filled with deionized water is used as a view box, and a high-speed video camera records the sequences. In the experiment, the ambient pressure is atmospheric and the deionized water is kept at room temperature. A distinct fan-shaped cavity zone appears behind the moving steel wire, and the pressure fluctuation near the cavity is recorded by a hydrophone. Movies and pictures show the behavior of cavitation bubbles following a restoring wire, including a striking tracking cavitation bubble cluster.

  14. Tracking Snowballs

    NASA Image and Video Library

    2010-11-18

    Icy particles in the cloud around Hartley 2, as seen by NASA EPOXI mission spacecraft. A star moving through the background is marked with red and moves in a particular direction, with a particular speed; icy particles move in random directions.

  15. Visible and invisible displacement with dynamic visual occlusion in bottlenose dolphins (Tursiops spp).

    PubMed

    Johnson, Christine M; Sullivan, Jess; Buck, Cara L; Trexel, Julie; Scarpuzzi, Mike

    2015-01-01

    Anticipating the location of a temporarily obscured target-what Piaget (the construction of reality in the child. Basic Books, New York, 1954) called "object permanence"-is a critical skill, especially in hunters of mobile prey. Previous research with bottlenose dolphins found they could predict the location of a target that had been visibly displaced into an opaque container, but not one that was first placed in an opaque container and then invisibly displaced to another container. We tested whether, by altering the task to involve occlusion rather than containment, these animals could show more advanced object permanence skills. We projected dynamic visual displays at an underwater-viewing window and videotaped the animals' head moves while observing these displays. In Experiment 1, the animals observed a small black disk moving behind occluders that shifted in size, ultimately forming one large occluder. Nine out of ten subjects "tracked" the presumed movement of the disk behind this occluder on their first trial-and in a statistically significant number of subsequent trials-confirming their visible displacement abilities. In Experiment 2, we tested their invisible displacement abilities. The disk first disappeared behind a pair of moving occluders, which then moved behind a stationary occluder. The moving occluders then reappeared and separated, revealing that the disk was no longer behind them. The subjects subsequently looked to the correct stationary occluder on eight of their ten first trials, and in a statistically significant number of subsequent trials. Thus, by altering the stimuli to be more ecologically valid, we were able to show that the dolphins could indeed succeed at an invisible displacement task.

  16. Sequential Bayesian Filters for Estimating Time Series of Wrapped and Unwrapped Angles with Hyperparameter Estimation

    NASA Astrophysics Data System (ADS)

    Umehara, Hiroaki; Okada, Masato; Naruse, Yasushi

    2018-03-01

    The estimation of angular time series data is a widespread issue relating to various situations involving rotational motion and moving objects. There are two kinds of problem settings: the estimation of wrapped angles, which are principal values in a circular coordinate system (e.g., the direction of an object), and the estimation of unwrapped angles in an unbounded coordinate system, such as for the positioning and tracking of moving objects measured by the signal-wave phase. Wrapped angles have been estimated in previous studies by sequential Bayesian filtering; however, the hyperparameters that control the properties of the estimation model were given a priori. The present study establishes a procedure for estimating the hyperparameters from the observed angle data alone, entirely within the framework of Bayesian inference, as a maximum likelihood estimation. Moreover, the filter model is modified to estimate unwrapped angles. It is proved that, in the absence of noise, our model reduces to the existing algorithm of Itoh's unwrapping transform, and it is numerically confirmed that our model extends unwrapping estimation from Itoh's transform to the noisy case.
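Itoh's unwrapping transform, cited here as the noise-free limit of the proposed filter, cumulatively sums the wrapped first differences of the phase sequence. A minimal sketch:

```python
import numpy as np

def itoh_unwrap(wrapped):
    """Itoh's unwrapping transform: cumulatively sum the wrapped first
    differences of a wrapped phase sequence (radians). Valid when the
    true angle changes by less than pi between successive samples."""
    wrapped = np.asarray(wrapped, dtype=float)
    diffs = np.angle(np.exp(1j * np.diff(wrapped)))  # re-wrap into (-pi, pi]
    return np.concatenate([[wrapped[0]], wrapped[0] + np.cumsum(diffs)])

# A steadily growing angle, wrapped into (-pi, pi], is recovered exactly.
true_phase = np.linspace(0.0, 6 * np.pi, 50)
wrapped = np.angle(np.exp(1j * true_phase))
recovered = itoh_unwrap(wrapped)
```

The sampling condition (step size below pi) is exactly what noise violates, which motivates the Bayesian extension described in the abstract.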

  17. Inductrack magnet configuration

    DOEpatents

    Post, Richard Freeman

    2003-12-16

    A magnet configuration comprising a pair of Halbach arrays magnetically and structurally connected together are positioned with respect to each other so that a first component of their fields substantially cancels at a first plane between them, and a second component of their fields substantially adds at this first plane. A track of windings is located between the pair of Halbach arrays and a propulsion mechanism is provided for moving the pair of Halbach arrays along the track. When the pair of Halbach arrays move along the track and the track is not located at the first plane, a current is induced in the windings and a restoring force is exerted on the pair of Halbach arrays.

  18. Inductrack magnet configuration

    DOEpatents

    Post, Richard Freeman

    2003-10-14

    A magnet configuration comprising a pair of Halbach arrays magnetically and structurally connected together are positioned with respect to each other so that a first component of their fields substantially cancels at a first plane between them, and a second component of their fields substantially adds at this first plane. A track of windings is located between the pair of Halbach arrays and a propulsion mechanism is provided for moving the pair of Halbach arrays along the track. When the pair of Halbach arrays move along the track and the track is not located at the first plane, a current is induced in the windings and a restoring force is exerted on the pair of Halbach arrays.

  19. Development of an optical three-dimensional laser tracker using dual modulated laser diodes and a signal detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hau-Wei; Chen, Chieh-Li; Liu, Chien-Hung

    Laser trackers are widely used in industry for tasks such as the assembly of airplanes and automobiles, contour measurement, and robot calibration. However, laser trackers are expensive, and the corresponding solution procedure is very complex. The influence of measurement uncertainties is also significant. This study proposes a three-dimensional space position measurement system which consists of two tracking modules, a zero tracking angle return subsystem, and a target quadrant photodiode (QPD). The target QPD is placed on the object being tracked. The origin locking method is used to keep the rays on the origin of the target QPD. The position of the target QPD is determined using triangulation since the two laser rays are projected onto one QPD. Modulation and demodulation are utilized to separate the coupled positional values. The experiment results show that measurement errors in the X, Y, and Z directions are less than ±0.05% when the measured object was moved by 300, 300, and 200 mm in the X, Y, and Z axes, respectively. The theoretical measurement error estimated from the measurement model is between ±0.02% and ±0.07% within the defined measurable range. The proposed system can be applied to the measurements of machine tools and robot arms.

  20. Development of an optical three-dimensional laser tracker using dual modulated laser diodes and a signal detector.

    PubMed

    Lee, Hau-Wei; Chen, Chieh-Li; Liu, Chien-Hung

    2011-03-01

    Laser trackers are widely used in industry for tasks such as the assembly of airplanes and automobiles, contour measurement, and robot calibration. However, laser trackers are expensive, and the corresponding solution procedure is very complex. The influence of measurement uncertainties is also significant. This study proposes a three-dimensional space position measurement system which consists of two tracking modules, a zero tracking angle return subsystem, and a target quadrant photodiode (QPD). The target QPD is placed on the object being tracked. The origin locking method is used to keep the rays on the origin of the target QPD. The position of the target QPD is determined using triangulation since the two laser rays are projected onto one QPD. Modulation and demodulation are utilized to separate the coupled positional values. The experiment results show that measurement errors in the X, Y, and Z directions are less than ±0.05% when the measured object was moved by 300, 300, and 200 mm in the X, Y, and Z axes, respectively. The theoretical measurement error estimated from the measurement model is between ±0.02% and ±0.07% within the defined measurable range. The proposed system can be applied to the measurements of machine tools and robot arms.

  1. Development of an optical three-dimensional laser tracker using dual modulated laser diodes and a signal detector

    NASA Astrophysics Data System (ADS)

    Lee, Hau-Wei; Chen, Chieh-Li; Liu, Chien-Hung

    2011-03-01

    Laser trackers are widely used in industry for tasks such as the assembly of airplanes and automobiles, contour measurement, and robot calibration. However, laser trackers are expensive, and the corresponding solution procedure is very complex. The influence of measurement uncertainties is also significant. This study proposes a three-dimensional space position measurement system which consists of two tracking modules, a zero tracking angle return subsystem, and a target quadrant photodiode (QPD). The target QPD is placed on the object being tracked. The origin locking method is used to keep the rays on the origin of the target QPD. The position of the target QPD is determined using triangulation since the two laser rays are projected onto one QPD. Modulation and demodulation are utilized to separate the coupled positional values. The experiment results show that measurement errors in the X, Y, and Z directions are less than ±0.05% when the measured object was moved by 300, 300, and 200 mm in the X, Y, and Z axes, respectively. The theoretical measurement error estimated from the measurement model is between ±0.02% and ±0.07% within the defined measurable range. The proposed system can be applied to the measurements of machine tools and robot arms.
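The triangulation step common to the records above, intersecting two tracker rays to locate the target QPD, can be sketched in a planar simplification (the actual system works in three dimensions with modulated dual laser diodes; the geometry below is only illustrative):

```python
import numpy as np

def triangulate_2d(baseline, angle_a, angle_b):
    """Intersect two rays to locate a target in the plane.

    Tracker A sits at the origin and tracker B at (baseline, 0); each
    reports the angle (radians, measured from the +x axis) of its ray
    toward the target.
    """
    # Ray A: y = tan(angle_a) * x ;  Ray B: y = tan(angle_b) * (x - baseline)
    ta, tb = np.tan(angle_a), np.tan(angle_b)
    x = baseline * tb / (tb - ta)
    return x, ta * x

# Target at (100, 100) mm seen from two trackers 200 mm apart.
a = np.arctan2(100.0, 100.0)          # 45 degrees from tracker A
b = np.arctan2(100.0, 100.0 - 200.0)  # 135 degrees from tracker B
x, y = triangulate_2d(200.0, a, b)
```

Because position error grows as the rays become nearly parallel, the measurable range and the ±0.02% to ±0.07% theoretical error bounds both depend on the baseline between the two tracking modules.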

  2. VO-Compatible Architecture for Managing and Processing Images of Moving Celestial Bodies : Application to the Gaia-GBOT Project

    NASA Astrophysics Data System (ADS)

    Barache, C.; Bouquillon, S.; Carlucci, T.; Taris, F.; Michel, L.; Altmann, M.

    2013-10-01

    The Ground Based Optical Tracking (GBOT) group is a part of the Data Processing and Analysis Consortium, the large consortium of over 400 scientists from many European countries charged by ESA with the scientific conduct of the Gaia mission. The GBOT group is in charge of the optical tracking of the Gaia satellite. This optical tracking is necessary to allow the Gaia mission to fully reach its astrometric precision goals. These observations will be done daily, during the 5 years of the mission, using optical CCD frames taken by a small network of 1-2m class telescopes located all over the world. The required accuracy of the satellite position determination, with respect to the stars in the field of view, is 20 mas. These optical satellite positions will be sent weekly by GBOT to the SOC of ESAC and used, together with other kinds of observations (radio ranging and Doppler), by the MOC of ESOC to improve the Gaia ephemeris. For this purpose, we developed a set of accurate astrometric reduction programs specially adapted for tracking moving objects. The inputs of these programs for each tracked target are an ephemeris and a set of FITS images. The outputs for each image are: a file containing all information about the detected objects, a catalogue file used for calibration, a TIFF file for visual explanation of the reduction result, and an improved FITS image header. The final result is an overview file containing only the data related to the target, extracted from all the images. These programs are written in GNU Fortran 95 and provide results in VOTable format (supported by Virtual Observatory protocols). All these results are sent automatically to the GBOT database, which is built with the SAADA freeware.
The user of this database can archive and query the data but also, thanks to the delegate option provided by SAADA, select a set of images and directly run the GBOT reduction programs through a dedicated Web interface. For more information about SAADA (an Automatic System for Astronomy Data Archive, under GPL license and VO-compatible), see the related paper by Michel et al. (2013).

  3. Robust Fusion of Color and Depth Data for RGB-D Target Tracking Using Adaptive Range-Invariant Depth Models and Spatio-Temporal Consistency Constraints.

    PubMed

    Xiao, Jingjing; Stolkin, Rustam; Gao, Yuqing; Leonardis, Ales

    2017-09-06

    This paper presents a novel robust method for single target tracking in RGB-D images, and also contributes a substantial new benchmark dataset for evaluating RGB-D trackers. While a target object's color distribution is reasonably motion-invariant, this is not true for the target's depth distribution, which continually varies as the target moves relative to the camera. It is therefore nontrivial to design target models which can fully exploit (potentially very rich) depth information for target tracking. For this reason, much of the previous RGB-D literature relies on color information for tracking, while exploiting depth information only for occlusion reasoning. In contrast, we propose an adaptive range-invariant target depth model, and show how both depth and color information can be fully and adaptively fused during the search for the target in each new RGB-D image. We introduce a new, hierarchical, two-layered target model (comprising local and global models) which uses spatio-temporal consistency constraints to achieve stable and robust on-the-fly target relearning. In the global layer, multiple features, derived from both color and depth data, are adaptively fused to find a candidate target region. In ambiguous frames, where one or more features disagree, this global candidate region is further decomposed into smaller local candidate regions for matching to local-layer models of small target parts. We also note that conventional use of depth data, for occlusion reasoning, can easily trigger false occlusion detections when the target moves rapidly toward the camera. To overcome this problem, we show how combining target information with contextual information enables the target's depth constraint to be relaxed. Our adaptively relaxed depth constraints can robustly accommodate large and rapid target motion in the depth direction, while still enabling the use of depth data for highly accurate reasoning about occlusions. 
For evaluation, we introduce a new RGB-D benchmark dataset with per-frame annotated attributes and extensive bias analysis. Our tracker is evaluated using two different state-of-the-art methodologies, VOT and object tracking benchmark, and in both cases it significantly outperforms four other state-of-the-art RGB-D trackers from the literature.

  4. Decoupled tracking and thermal monitoring of non-stationary targets.

    PubMed

    Tan, Kok Kiong; Zhang, Yi; Huang, Sunan; Wong, Yoke San; Lee, Tong Heng

    2009-10-01

    Fault diagnosis and predictive maintenance address pertinent economic issues relating to production systems as an efficient technique can continuously monitor key health parameters and trigger alerts when critical changes in these variables are detected, before they lead to system failures and production shutdowns. In this paper, we present a decoupled tracking and thermal monitoring system which can be used on non-stationary targets of closed systems such as machine tools. There are three main contributions from the paper. First, a vision component is developed to track moving targets under a monitor. Image processing techniques are used to resolve the target location to be tracked. Thus, the system is decoupled and applicable to closed systems without the need for a physical integration. Second, an infrared temperature sensor with a built-in laser for locating the measurement spot is deployed for non-contact temperature measurement of the moving target. Third, a predictive motion control system holds the thermal sensor and follows the moving target efficiently to enable continuous temperature measurement and monitoring.
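The predictive motion control component, keeping the thermal sensor pointed ahead of a moving target rather than lagging behind it, can be sketched with a generic alpha-beta predictor (an assumed stand-in for illustration; the paper does not specify its control law):

```python
def alpha_beta_track(measurements, dt=0.1, alpha=0.5, beta=0.1):
    """Alpha-beta filter: predict where a target will be one step ahead,
    so the sensor platform can lead the target instead of lagging it."""
    x, v = measurements[0], 0.0  # initial position estimate, zero velocity
    predictions = []
    for z in measurements[1:]:
        x_pred = x + v * dt             # predict position
        r = z - x_pred                  # innovation (measurement residual)
        x = x_pred + alpha * r          # correct position estimate
        v = v + (beta / dt) * r         # correct velocity estimate
        predictions.append(x + v * dt)  # where to point the sensor next
    return predictions

# Target moving at a constant 2 units/s, sampled every 0.1 s.
measurements = [0.2 * k for k in range(50)]
preds = alpha_beta_track(measurements)  # converges toward the true motion
```

Here the vision component would supply `measurements` (target locations resolved by image processing), and the prediction drives the motorized mount holding the infrared sensor.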

  5. Keeping on Track: Performance Profiles of Low Performers in Academic Educational Tracks

    ERIC Educational Resources Information Center

    Reed, Helen C.; van Wesel, Floryt; Ouwehand, Carolijn; Jolles, Jelle

    2015-01-01

    In countries with high differentiation between academic and vocational education, an individual's future prospects are strongly determined by the educational track to which he or she is assigned. This large-scale, cross-sectional study focuses on low-performing students in academic tracks who face being moved to a vocational track. If more is…

  6. An Aggregated Method for Determining Railway Defects and Obstacle Parameters

    NASA Astrophysics Data System (ADS)

    Loktev, Daniil; Loktev, Alexey; Stepanov, Roman; Pevzner, Viktor; Alenov, Kanat

    2018-03-01

    A method combining image blur analysis and stereo vision algorithms is proposed to determine the distance to objects (including external defects of railway tracks) and the speed of moving obstacles. To estimate how the distance deviates as a function of blur, a statistical approach and logarithmic, exponential, and linear standard functions are used; the statistical approach includes least-squares and least-modules estimation. The accuracy of determining the distance to the object, its speed, and its direction of movement is obtained. The paper develops a method of determining distances to objects by analyzing a series of images, aggregating a depth-from-defocus assessment with stereoscopic vision. The method is based on the physical dependence of the image blur on the distance to the object, given the focal length and aperture of the lens. In calculating the blur spot diameter, it is assumed that blur occurs at a point equally in all directions. According to the proposed approach, the distance to the studied object and its blur can be determined by analyzing a series of images obtained from the video detector with different settings. The article proposes and scientifically substantiates new and improved methods for detecting the parameters of static and moving objects of control, and compares the results of the various methods and of the experiments. It is shown that the aggregated method gives the best approximation to the real distances.
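The calibration idea, fitting one of the standard functions (here, hypothetically, an exponential) to blur-versus-distance data by least squares, can be sketched as:

```python
import numpy as np

def fit_exponential(blur, distance):
    """Least-squares fit of distance = a * exp(b * blur), linearized as
    log(distance) = log(a) + b * blur."""
    b_coef, log_a = np.polyfit(blur, np.log(distance), 1)
    return np.exp(log_a), b_coef

# Synthetic calibration pairs: distance decays exponentially with blur.
blur = np.array([1.0, 2.0, 3.0, 4.0])  # blur spot diameter, pixels
dist = 10.0 * np.exp(-0.5 * blur)      # known distances, metres
a, b = fit_exponential(blur, dist)     # recovers a = 10, b = -0.5
```

In the aggregated method, a fit like this gives the defocus-based distance estimate, which is then combined with the stereoscopic estimate; the choice among linear, logarithmic, and exponential models is made by comparing fit residuals.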

  7. Vision-based control for flight relative to dynamic environments

    NASA Astrophysics Data System (ADS)

    Causey, Ryan Scott

    The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capabilities through more advanced systems that are useful for missions that require autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation facilitate the issues inherent to vision-based state estimation of moving objects for a monocular camera configuration. The process consists of several stages involving image processing such as detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. An approach to state estimation of moving targets is derived using a homography approach. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform of which the system dynamics are derived to relate vehicle states to image plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation. 
Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature point computations and the effects of uncertainty. The third simulation demonstrates open-loop estimation of a target ground vehicle in pursuit, whereas the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.

  8. Severe Weather Guide - Mediterranean Ports. 7. Marseille

    DTIC Science & Technology

    1988-03-01

    the afternoon. Upper-level westerlies and the associated storm track move northward during summer, so extratropical cyclones and associated... autumn as the extratropical storm track moves southward. Precipitation amount is the highest of the year, with an average of 3 inches (76 mm) for the... Subject terms: storm haven, Mediterranean meteorology, Marseille port

  9. Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration.

    PubMed

    Wahn, Basil; König, Peter

    2015-01-01

    Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modalities. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, the findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.

  10. An interactive VR system based on full-body tracking and gesture recognition

    NASA Astrophysics Data System (ADS)

    Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru

    2016-10-01

    Most current virtual reality (VR) interactions are realized with hand-held input devices, which leads to a low degree of presence. There are other solutions that use sensors such as Leap Motion to recognize users' gestures in order to interact in a more natural way, but navigation in these systems remains a problem: with only part of the user's body represented in the synthetic environment, they fail to map actual walking to virtual walking. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects using natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body that follows the movements of the tracked user. Movements of the feet can be detected to determine whether the user is in a walking state, so that walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the user's hands, in contrast to traditional navigation with a hand-held device. We use point-cloud data obtained from the Kinect depth camera to recognize user gestures, such as swiping, pressing, and manipulating virtual objects. Combining full-body tracking and gesture recognition using the Kinect, we achieve an interactive VR system in the Unity engine with a high degree of presence.
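The walking-state detection from foot movements might be sketched as a simple heuristic on tracked foot heights over a short window. The lift threshold and the window are hypothetical; in the actual system the sequences would come from Kinect joint data and the result would drive the Unity animation state:

```python
def is_walking(left_foot_y, right_foot_y, lift=0.03):
    """Classify walking from two short sequences of foot heights in
    metres. Hypothetical heuristic: the user is walking when either
    foot's vertical range within the window exceeds `lift`."""
    span = lambda ys: max(ys) - min(ys)
    return span(left_foot_y) > lift or span(right_foot_y) > lift

standing = [0.050, 0.051, 0.049, 0.050]   # feet nearly still
stepping = [0.050, 0.120, 0.060, 0.130]   # one foot lifting repeatedly
print(is_walking(standing, standing))     # False
print(is_walking(stepping, standing))     # True
```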

  11. New method for finding multiple meaningful trajectories

    NASA Astrophysics Data System (ADS)

    Bao, Zhonghao; Flachs, Gerald M.; Jordan, Jay B.

    1995-07-01

    Mathematical foundations and algorithms for efficiently finding multiple meaningful trajectories (FMMT) in a sequence of digital images are presented. A meaningful trajectory is motion created by a sentient being or by a device under the control of a sentient being. It is smooth and predictable over short time intervals. A meaningful trajectory can suddenly appear or disappear in the image sequence. The development of the FMMT is based on these assumptions. A finite state machine in the FMMT is used to model the trajectories under the conditions of occlusions and false targets. Each possible trajectory is associated with an initial state of a finite state machine. When two frames of data are available, a linear predictor is used to predict the locations of all possible trajectories. All trajectories within a certain error bound are moved to a monitoring trajectory state. When trajectories attain three consecutive good predictions, they are moved to a valid trajectory state and considered to be locked into a tracking mode. If an object is occluded while in the valid trajectory state, the predicted position is used to continue to track; however, the confidence in the trajectory is lowered. If the trajectory confidence falls below a lower limit, the trajectory is terminated. Results are presented that illustrate the FMMT applied to track multiple munitions fired from a missile in a sequence of images. Accurate trajectories are determined even in poor images where the probabilities of miss and false alarm are very high.
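The state machine described above can be sketched for a single 1-D track: a linear predictor, a gate test, promotion to the valid state after three consecutive good predictions, and confidence-decayed coasting through occlusion. The gate size, confidence steps, and floor are illustrative values, not the paper's:

```python
from dataclasses import dataclass

INITIAL, MONITOR, VALID = "initial", "monitor", "valid"

@dataclass
class Track:
    pos: float
    vel: float = 0.0
    state: str = INITIAL
    good: int = 0
    conf: float = 1.0

def update(track, meas, gate=1.5, conf_floor=0.3):
    """One FMMT-style update: linear prediction, gating, promotion
    after 3 consecutive hits, coasting with decaying confidence
    during occlusion (all thresholds illustrative)."""
    pred = track.pos + track.vel                          # linear predictor
    if meas is not None and abs(meas - pred) <= gate:     # good prediction
        track.vel, track.pos = meas - track.pos, meas
        track.good += 1
        track.conf = min(1.0, track.conf + 0.1)
        if track.state == INITIAL:
            track.state = MONITOR
        elif track.state == MONITOR and track.good >= 3:
            track.state = VALID                           # locked into tracking
    elif track.state == VALID:                            # occluded: coast
        track.pos, track.conf = pred, track.conf - 0.25
        if track.conf < conf_floor:
            track.state = "terminated"
    return track

t = Track(pos=0.0)
for z in [1.0, 2.0, 3.0, None, 5.0]:   # constant-velocity target, one occluded frame
    t = update(t, z)
print(t.state, round(t.pos, 1))         # → valid 5.0
```

The track survives the occluded frame by coasting on the prediction, exactly the behavior the abstract describes for the valid trajectory state.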

  12. Signal and array processing techniques for RFID readers

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Amin, Moeness; Zhang, Yimin

    2006-05-01

    Radio Frequency Identification (RFID) has recently attracted much attention in both the technical and business communities. It has found wide applications in, for example, toll collection, supply-chain management, access control, localization tracking, real-time monitoring, and object identification. Situations may arise where the movement direction of tagged RFID items through a portal is of interest and must be determined. Doppler estimation may prove complicated or impractical to perform with RFID readers. Several alternative approaches, including the use of an array of sensors with arbitrary geometry, can be applied. In this paper, we consider direction-of-arrival (DOA) estimation techniques for application to near-field narrowband RFID problems. In particular, we examine the use of a pair of RFID antennas to track moving RFID-tagged items through a portal. With two antennas, the near-field DOA estimation problem can be simplified to a far-field problem, yielding a simple way of identifying the direction of tag movement, where only one parameter, the angle, needs to be considered. In this case, tracking the moving direction of the tag simply amounts to computing the spatial cross-correlation between the data samples received at the two antennas. It is pointed out that the radiation patterns of the reader and tag antennas, particularly their phase characteristics, have a significant effect on the performance of DOA estimation. Indoor experiments were conducted in the Radar Imaging and RFID Labs at Villanova University to validate the proposed technique for target movement direction estimation.
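Under the far-field model, the two-antenna idea reduces to reading the phase of the spatial cross-correlation and inverting the relation Δφ = 2πd·sin(θ)/λ. A minimal sketch with a simulated narrowband tag signal; the antenna spacing, wavelength, and signal model are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def doa_from_pair(x1, x2, d, lam):
    """Estimate the arrival angle of a narrowband signal from the
    spatial cross-correlation of two antenna outputs (far-field
    model). d = antenna spacing, lam = wavelength, both in metres."""
    r = np.vdot(x1, x2)          # spatial cross-correlation between antennas
    dphi = np.angle(r)           # its phase = inter-antenna phase shift
    return np.degrees(np.arcsin(dphi * lam / (2 * np.pi * d)))

# Simulate a tag at 20 degrees: antenna 2 sees a phase-shifted copy.
lam, d, theta = 0.33, 0.15, np.radians(20.0)   # ~915 MHz UHF RFID (assumed)
n = np.arange(200)
s = np.exp(1j * 0.3 * n)                       # synthetic baseband tag signal
x1 = s
x2 = s * np.exp(1j * 2 * np.pi * d * np.sin(theta) / lam)
print(round(doa_from_pair(x1, x2, d, lam), 1))   # → 20.0
```

Tracking the sign of the estimated angle over time then gives the direction of tag movement through the portal; spacings beyond λ/2 would alias the phase and need care.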

  13. [A tracking function of human eye in microgravity and during readaptation to earth's gravity].

    PubMed

    Kornilova, L N

    2001-01-01

    The paper summarizes results of electro-oculography of all modes of visual tracking: fixative eye movements (saccades); smooth pursuit of linearly, pendulum-like, and circularly moving point stimuli; and pursuit of vertically moving foveoretinal optokinetic stimuli. It also presents values of thresholds and amplification coefficients of the optokinetic nystagmus during tracking of linear movement of foveoretinal optokinetic stimuli. Investigations were performed aboard the Salyut and Mir space stations with the participation of 31 cosmonauts, of whom 27 made long-term (76 to 438 days) and 4 made short-term (7 to 9 days) missions. It was shown that in space flight the saccadic structure within the tracking reaction does not change; yet corrective movements (additional microsaccades to achieve tracking) appeared in 47% of observations at the onset and in 76% of observations in months 3 to 6 of space flight. After landing, the structure of vertical saccades was found to be altered in half the cosmonauts. Both in and after flight, reverse nystagmus was present along with the gaze nystagmus during static saccades in 22% of the observations (7 cosmonauts). The amplitude of tracking of vertically, diagonally, or circularly moving stimuli was significantly reduced as time on mission increased. Early in flight (40% of the cosmonauts) and shortly afterwards (21% of the cosmonauts), the structure of the smooth tracking reaction broke down completely, that is, the eye followed the stimulus with micro- or macrosaccades. The structure of smooth eye tracking recovered on flight days 6-8 and on postflight days 3-4. However, in 46% of the cosmonauts on long-term missions the structure of smooth eye tracking was noted to be disturbed periodically, i.e., smooth tracking was replaced by saccadic tracking.

  14. Robust human detection, tracking, and recognition in crowded urban areas

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2014-06-01

    In this paper, we present algorithms we recently developed to support an automated security surveillance system for very crowded urban areas. In our approach to human detection, color features are obtained by taking the differences of the R, G, B channels and converting R, G, B to HSV (Hue, Saturation, Value) space. Morphological patch filtering and regional minimum and maximum segmentation on the extracted features are applied for target detection. The human tracking approach includes: 1) color and intensity feature matching for track candidate selection; 2) three separate parallel trackers for color, bright (above mean intensity), and dim (below mean intensity) detections, respectively; 3) adaptive track gate size selection for reducing false tracking probability; and 4) forward position prediction based on previous moving speed and direction, for continued tracking even when detections are missed from frame to frame. Human target recognition is improved with a Super-Resolution Image Enhancement (SRIE) process. This process can improve target resolution by 3-5 times and can simultaneously process many tracked targets. Our approach can project tracks from one camera to another camera with a different perspective viewing angle to obtain additional biometric features from different perspective angles, and to continue tracking the same person from the second camera even though the person has moved out of the Field of View (FOV) of the first camera ('Tracking Relay'). Finally, the multiple cameras at different view poses have been geo-rectified to the nadir view plane and geo-registered with Google Earth (or another GIS) to obtain accurate positions (latitude, longitude, and altitude) of tracked humans for pin-point targeting and for a top view of total human motion activity over a large area. Preliminary tests of our algorithms indicate that a high probability of detection can be achieved for both moving and stationary humans.
Our algorithms can simultaneously track more than 100 human targets, with an average tracking period (time length) longer than that of the current state of the art.
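The color-feature step described above, channel differences plus an RGB-to-HSV conversion, can be sketched as a per-pixel test. The saturation and difference thresholds here are hypothetical, not the paper's values:

```python
import colorsys

def color_feature(r, g, b):
    """Per-pixel color feature in the spirit of the paper: HSV values
    plus the largest pairwise channel difference (inputs in [0, 1])."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    max_diff = max(abs(r - g), abs(g - b), abs(r - b))
    return h, s, v, max_diff

def is_colored_target(r, g, b, sat_min=0.4, diff_min=0.2):
    """Flag strongly colored pixels (hypothetical thresholds); gray
    clutter has near-zero saturation and small channel differences."""
    _, s, _, d = color_feature(r, g, b)
    return s >= sat_min and d >= diff_min

print(is_colored_target(0.9, 0.2, 0.2))  # red clothing pixel → True
print(is_colored_target(0.5, 0.5, 0.5))  # gray pavement pixel → False
```

In the full system such a mask would feed the morphological patch filtering and the color tracker; this is only the feature-extraction kernel.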

  15. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos.

    PubMed

    Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and among these methods, the spatiotemporal saliency detection approach has reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions and is extracted using frame differencing with the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent the target appearance details in candidate moving regions; SLIC superpixel segmentation, color, and moment features are used to compute the feature uniqueness and spatial compactness of saliency measurements. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods.
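The temporal-saliency step, frame differencing binarized with the Sauvola local adaptive threshold T = m·(1 + k·(s/R − 1)), can be sketched as follows. The window size and Sauvola parameters are typical defaults, not necessarily the paper's values:

```python
import numpy as np

def sauvola_threshold(img, w=3, k=0.2, R=128.0):
    """Sauvola local adaptive threshold: T = m * (1 + k*(s/R - 1)),
    with m, s the local mean and std in a w x w window. Naive loops,
    fine for a small illustrative image."""
    pad = w // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    T = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + w, j:j + w]
            T[i, j] = win.mean() * (1 + k * (win.std() / R - 1))
    return T

def temporal_saliency(prev, curr):
    """Moving regions = absolute frame difference, binarized with the
    Sauvola threshold computed on the difference image."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    return diff > sauvola_threshold(diff)

prev = np.zeros((5, 5), np.uint8)
curr = prev.copy()
curr[2, 2] = 200                    # one pixel changed brightly between frames
sal = temporal_saliency(prev, curr)
print(bool(sal[2, 2]), bool(sal[0, 0]))   # → True False
```

Production code would use an integral-image Sauvola (e.g. as in scikit-image) rather than explicit loops; the loop version just makes the formula visible.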

  16. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos

    PubMed Central

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and among these methods, the spatiotemporal saliency detection approach has reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions and is extracted using frame differencing with the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent the target appearance details in candidate moving regions; SLIC superpixel segmentation, color, and moment features are used to compute the feature uniqueness and spatial compactness of saliency measurements. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods. PMID:29438421

  17. Landfalling characteristics of the tropical cyclones generated in the South China Sea

    NASA Astrophysics Data System (ADS)

    Yang, L.; Wang, D.

    2012-12-01

    Tracks of tropical cyclones (TCs) in the South China Sea (SCS) during 1970-2010 can mainly be divided into two categories: westward (including west and northwest) and eastward (east and northeast). TCs moving westward tend to make landfall along the South China or Vietnam coast, while those moving eastward tend to dissipate over the ocean or make landfall on Taiwan, the Philippine Islands, or occasionally the South China coast. During spring (April-May), 17 TCs were generated in the SCS, among which 13 moved eastward and only 4 moved westward. A total of 95 TCs formed in the SCS during the TC peak season (June-September), among which 71 moved westward, about three times more than those moving eastward (24). During October-December, 33 TCs moved westward and 12 eastward. The variability of TC track direction is investigated on intraseasonal, seasonal, and interannual circulation scales. It is found that TC landfall activities are related to the Madden-Julian Oscillation (MJO), El Niño-Southern Oscillation (ENSO), monsoon activities, and TC genesis locations.

  18. Tracking integration in concentrating photovoltaics using laterally moving optics.

    PubMed

    Duerr, Fabian; Meuret, Youri; Thienpont, Hugo

    2011-05-09

    In this work the concept of tracking-integrated concentrating photovoltaics is studied and its capabilities are quantitatively analyzed. The design strategy desists from ideal concentration performance to reduce the external mechanical solar tracking effort in favor of a compact installation, possibly resulting in lower overall cost. The proposed optical design is based on an extended Simultaneous Multiple Surface (SMS) algorithm and uses two laterally moving plano-convex lenses to achieve high concentration over a wide angular range of ±24°. It achieves 500× concentration, outperforming its conventional concentrating photovoltaic counterparts on a polar aligned single axis tracker.

  19. SU-E-J-197: Investigation of Microsoft Kinect 2.0 Depth Resolution for Patient Motion Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silverstein, E; Snyder, M

    2015-06-15

    Purpose: Investigate the use of the Kinect 2.0 for patient motion tracking during radiotherapy by studying spatial and depth resolution capabilities. Methods: Using code written in C#, depth map data was abstracted from the Kinect to create an initial depth map template indicative of the initial position of an object, to be compared to the depth map of the object over time. To test this process, a simple setup was created in which two objects were imaged: a 40 cm × 40 cm board covered in non-reflective material and a 15 cm × 26 cm textbook with a slightly reflective, glossy cover. Each object, imaged and measured separately, was placed on a movable platform with the object-to-camera distance measured. The object was then moved a specified amount to ascertain whether the Kinect's depth camera would visualize the difference in position of the object. Results: Initial investigations have shown the Kinect depth resolution is dependent on the object-to-camera distance. Measurements indicate that movements as small as 1 mm can be visualized for objects as close as 50 cm away. This depth resolution decreases linearly with object-to-camera distance. At 4 m, the depth resolution had decreased such that a minimum movement of 1 cm could be observed. Conclusion: The improved resolution and advanced hardware of the Kinect 2.0 allow for increased depth resolution over the Kinect 1.0. Although it is obvious that the depth resolution should decrease with increasing distance from an object, given the decrease in the number of pixels representing said object, the depth resolution at large distances indicates its usefulness in a clinical setting.
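The template-versus-current comparison, gated by the distance-dependent resolution reported above, can be sketched as follows. The linear interpolation between the two reported data points (1 mm at 0.5 m, 1 cm at 4 m) is an assumption for illustration, not a published calibration model:

```python
def min_detectable_motion_mm(distance_m):
    """Linear model of the reported Kinect 2.0 depth resolution:
    ~1 mm at 0.5 m growing linearly to ~10 mm at 4 m. Interpolation
    between the abstract's two data points; an assumption, not a spec."""
    d0, r0, d1, r1 = 0.5, 1.0, 4.0, 10.0
    t = (distance_m - d0) / (d1 - d0)
    return r0 + t * (r1 - r0)

def motion_detected(template_mm, current_mm, distance_m):
    """Flag motion when the depth change relative to the template
    exceeds the resolution limit at that camera distance."""
    return abs(current_mm - template_mm) > min_detectable_motion_mm(distance_m)

print(motion_detected(1000.0, 1002.0, 0.5))   # 2 mm shift at 0.5 m → True
print(motion_detected(4000.0, 4005.0, 4.0))   # 5 mm shift at 4 m → False
```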

  20. Enhanced compressed sensing for visual target tracking in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Qiang, Guo

    2017-11-01

    Moving object tracking in wireless sensor networks (WSNs) has been widely applied in various fields. Designing low-power WSNs within the sensor's limited resources, such as energy and bandwidth constraints, is a high priority. However, most existing works focus on only a single optimization criterion among these conflicting goals. An efficient compressive sensing technique based on a customized memory gradient pursuit algorithm with early termination in WSNs is presented, which strikes compelling trade-offs among energy dissipation for wireless transmission, bandwidth, and storage. The proposed approach then adopts an unscented particle filter to predict the location of the target. The experimental results, with a theoretical analysis, demonstrate the substantially superior effectiveness of the proposed model and framework in regard to energy and speed under the resource limitations of a visual sensor node.

  1. Adaptive and accelerated tracking-learning-detection

    NASA Astrophysics Data System (ADS)

    Guo, Pengyu; Li, Xin; Ding, Shaowen; Tian, Zunhua; Zhang, Xiaohu

    2013-08-01

    An improved online long-term visual tracking algorithm, named adaptive and accelerated TLD (AA-TLD), is introduced in this paper; it is based on Tracking-Learning-Detection (TLD), a novel tracking framework. The improvement focuses on two aspects. One is adaptation: the algorithm does not depend on pre-defined scanning grids, instead generating the scale space online. The other is efficiency: it uses not only algorithm-level acceleration, such as scale prediction employing an auto-regression and moving average (ARMA) model to learn the object motion and lessen the detector's searching range, and a fixed number of positive and negative samples that ensures a constant retrieval time, but also CPU and GPU parallel technology to achieve hardware acceleration. In addition, to obtain a better effect, some of TLD's details are redesigned: a weight including both the normalized correlation coefficient and the scale size is used to integrate results, and distance metric thresholds are adjusted online. A contrastive experiment on success rate, center location error, and execution time is carried out to show a performance and efficiency upgrade over state-of-the-art TLD with partial TLD datasets and Shenzhou IX return capsule image sequences. The algorithm can be used in the field of video surveillance to meet the need for real-time video tracking.

  2. Human tracking in thermal images using adaptive particle filters with online random forest learning

    NASA Astrophysics Data System (ADS)

    Ko, Byoung Chul; Kwak, Joon-Young; Nam, Jae-Yeal

    2013-11-01

    This paper presents a fast and robust human tracking method for use with a moving long-wave infrared thermal camera under poor illumination with shadows and cluttered backgrounds. To improve human tracking performance while minimizing computation time, this study proposes online learning of classifiers based on particle filters and a combination of a local intensity distribution (LID) with oriented center-symmetric local binary patterns (OCS-LBP). Specifically, we design a real-time random forest (RF), an ensemble of decision trees for confidence estimation, and the confidences of the RF are converted into a likelihood function of the target state. First, the target model is selected by the user and particles are sampled. Then, RFs are generated by online learning using positive and negative examples with LID and OCS-LBP features. The learned RF classifiers are used to detect the most likely target position in the subsequent frame. The RFs are then learned again by means of fast retraining with the tracked object and background appearance in the new frame. The proposed algorithm is successfully applied to various thermal videos, and its tracking performance is better than those of other methods.
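Converting classifier confidence into particle likelihoods, as described above, can be sketched in one dimension. The triangular confidence function below stands in for the random-forest vote ratio and is purely illustrative:

```python
import random

def update_particles(particles, confidence):
    """One particle-filter measurement update in the spirit of the
    paper: a classifier confidence score evaluated at each particle's
    state becomes the particle's likelihood, then particles are
    resampled in proportion to it (toy 1-D sketch)."""
    weights = [confidence(p) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
target = 7.0
# Stand-in for the RF vote ratio: 1 at the target, falling to 0 at distance 5.
conf = lambda x: max(1e-6, 1.0 - min(1.0, abs(x - target) / 5.0))
parts = [random.uniform(0, 10) for _ in range(500)]
for _ in range(5):
    parts = [p + random.gauss(0, 0.3) for p in parts]   # motion/diffusion step
    parts = update_particles(parts, conf)
est = sum(parts) / len(parts)                            # state estimate
print(abs(est - target) < 1.0)
```

After a few predict/update cycles the particle cloud concentrates near the location where the classifier is most confident, which is exactly how the RF output drives the tracker.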

  3. Severe Weather Guide - Mediterranean Ports. 4. Augusta Bay

    DTIC Science & Technology

    1988-03-01

    the year. The track of strong extratropical storms has moved northward and poses little threat to Augusta Bay. Sea breezes are daily occurrences...as temperatures begin to moderate. Extratropical systems begin to transit Europe as the storm track moves southward in advance of the winter...Subject terms: storm haven, Mediterranean meteorology, Augusta Bay

  4. Optimal Detection Range of RFID Tag for RFID-based Positioning System Using the k-NN Algorithm.

    PubMed

    Han, Soohee; Kim, Junghwan; Park, Choung-Hwan; Yoon, Hee-Cheon; Heo, Joon

    2009-01-01

    Positioning technology to track a moving object is an important and essential component of ubiquitous computing environments and applications. An RFID-based positioning system using the k-nearest neighbor (k-NN) algorithm can determine the position of a moving reader from observed reference data. In this study, the optimal detection range of an RFID-based positioning system was determined on the principle that tag spacing can be derived from the detection range. It was assumed that reference tags without signal strength information are regularly distributed in 1-, 2- and 3-dimensional spaces. The optimal detection range was determined, through analytical and numerical approaches, to be 125% of the tag-spacing distance in 1-dimensional space. Through numerical approaches, the range was found to be 134% in 2-dimensional space and 143% in 3-dimensional space.
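A 1-D toy version of such a system, using the 125% detection range from the abstract and estimating the reader position as the mean of the detected reference tags, can be sketched as follows (the grid layout and the mean-of-neighbors estimator are illustrative simplifications of k-NN):

```python
def knn_position_1d(reader_x, tag_spacing=1.0, range_factor=1.25):
    """1-D RFID positioning sketch: reference tags sit every
    `tag_spacing` metres along a corridor; the reader's position is
    estimated as the mean of all tags inside its detection range,
    set to 125% of the spacing per the paper's 1-D result."""
    detection_range = range_factor * tag_spacing
    tags = [i * tag_spacing for i in range(21)]          # tags at 0..20 m
    seen = [t for t in tags if abs(t - reader_x) <= detection_range]
    return sum(seen) / len(seen)

print(knn_position_1d(10.0))   # reader exactly on a tag  → 10.0
print(knn_position_1d(10.4))   # reader between two tags  → 10.5
```

With a 125% range the reader always sees a small symmetric neighborhood of tags, which is why that range bounds the positioning error; a much larger range would average in distant tags and blur the estimate.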

  5. Partial camera automation in an unmanned air vehicle.

    PubMed

    Korteling, J E; van der Borg, W

    1997-03-01

    The present study focused on an intelligent, semiautonomous interface for a camera operator of a simulated unmanned air vehicle (UAV). This interface used system "knowledge" concerning UAV motion in order to assist a camera operator in tracking an object moving through the landscape below. The semiautomated system compensated for the translations of the UAV relative to the earth. This compensation was accompanied by the appropriate joystick movements, ensuring tactile (haptic) feedback of these system interventions. The operator had to superimpose self-initiated joystick manipulations over these system-initiated joystick motions in order to track the motion of a target (a driving truck) relative to the terrain. Tracking data showed that subjects performed substantially better with the active system. Apparently, the subjects had no difficulty in maintaining control, i.e., "following" the active stick while superimposing self-initiated control movements over the system interventions. Furthermore, tracking performance with the active interface was clearly superior to that with the passive system. The magnitude of this effect was equal to the effect of the update frequency (2-5 Hz) of the monitor image. The benefits of update-frequency enhancement and semiautomated tracking were greatest under difficult steering conditions. Mental workload scores indicated that, for the difficult tracking-dynamics condition, both semiautomation and an update-frequency increase resulted in less experienced mental effort. For the easier dynamics this effect was only seen for update frequency.

  6. Automatic acquisition of motion trajectories: tracking hockey players

    NASA Astrophysics Data System (ADS)

    Okuma, Kenji; Little, James J.; Lowe, David

    2003-12-01

    Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in such a way that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include games of sports, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as trajectories, query them by analyzing the descriptive information of the data, and predict the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on the visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More specifically, we automatically analyze the hockey scenes by estimating the parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying the trajectories of those players. Many technical problems in vision, such as fast and unpredictable player motions and rapid camera motions, make our challenge worth tackling. To the best of our knowledge, no automatic video annotation systems for hockey have been developed in the past.
Although there are many obstacles to overcome, our efforts and accomplishments would hopefully establish the infrastructure of the automatic hockey annotation system and become a milestone for research in automatic video annotation in this domain.

  7. Certainty grids for mobile robots

    NASA Technical Reports Server (NTRS)

    Moravec, H. P.

    1987-01-01

    A numerical representation of uncertain and incomplete sensor knowledge called Certainty Grids has been used successfully in several mobile robot control programs, and has proven itself to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. Researchers propose to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves. The certainty grid representation will allow this map to be incrementally updated in a uniform way from various sources including sonar, stereo vision, proximity, and contact sensors. The approach can correctly model the fuzziness of each reading while at the same time combining multiple measurements to produce sharper map features, and it can deal correctly with uncertainties in the robot's motion. The map will be used by planning programs to choose clear paths, identify locations (by correlating maps), identify well-known and insufficiently sensed terrain, and perhaps identify objects by shape. The certainty grid representation can be extended in the time dimension and used to detect and track moving objects.
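The incremental, uniform map update described above is commonly written in log-odds form, so that repeated measurements from different sensors sharpen the same cell. A minimal single-cell sketch; the 0.7 sensor model is an illustrative value, not from the report:

```python
import math

def update_cell(log_odds, p_occupied):
    """Fuse one sensor reading into a certainty-grid cell using the
    standard log-odds form; repeated evidence accumulates additively."""
    return log_odds + math.log(p_occupied / (1.0 - p_occupied))

def probability(log_odds):
    """Convert a cell's log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

cell = 0.0                       # prior log-odds 0 = probability 0.5 (unknown)
for reading in [0.7, 0.7, 0.7]:  # three noisy "occupied" sonar returns
    cell = update_cell(cell, reading)
print(round(probability(cell), 3))   # → 0.927
```

Three weak 0.7 readings combine into a confident 0.93 estimate, which is the "multiple measurements produce sharper map features" behavior the abstract describes; a full grid just applies this per cell.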

  8. Development of real-time extensometer based on image processing

    NASA Astrophysics Data System (ADS)

    Adinanta, H.; Puranto, P.; Suryadi

    2017-04-01

    An extensometer system was developed using a high-definition web camera as the main sensor to track object position. The developed system applied digital image processing techniques; the image processing was used to measure the change of object position. The position measurement was done in real time so that the system could directly show the actual position on both the x- and y-axes. In this research, the relation between pixel and object position changes was characterized. The system was tested by moving the target over a range of 20 cm in intervals of 1 mm. To verify the long-run performance, namely the stability and linearity of continuous measurements on both the x- and y-axes, the measurement was conducted for 83 hours. The results show that this image-processing-based extensometer had both good stability and linearity.
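The pixel-to-position characterization described above amounts to a line fit relating tracked pixel displacement to physical displacement. A minimal sketch with synthetic data; the 0.25 mm/pixel scale is made up for illustration:

```python
def calibrate(pixels, positions_mm):
    """Ordinary least-squares line through the characterization data,
    relating pixel displacement of the tracked target to physical
    displacement. Returns (mm per pixel, offset in mm)."""
    n = len(pixels)
    mx, my = sum(pixels) / n, sum(positions_mm) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(pixels, positions_mm)) \
            / sum((x - mx) ** 2 for x in pixels)
    return slope, my - slope * mx

# Target moved in 1 mm steps; centroid tracked in pixels (synthetic data).
px = [0.0, 4.0, 8.0, 12.0, 16.0]
mm = [0.0, 1.0, 2.0, 3.0, 4.0]
scale, offset = calibrate(px, mm)
print(scale, offset)   # → 0.25 0.0
```

Once calibrated, a new centroid reading converts to millimetres as `scale * pixel + offset`, which is the real-time display step of the extensometer.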

  9. Real time automated inspection

    DOEpatents

    Fant, Karl M.; Fundakowski, Richard A.; Levitt, Tod S.; Overland, John E.; Suresh, Bindinganavle R.; Ulrich, Franz W.

    1985-01-01

    A method and apparatus relating to the real time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges and objects formed by the edges are segmented out by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections.
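    The edge-enhancement and interval-segmentation steps can be sketched on a single scan line. The gradient kernel and threshold here are illustrative stand-ins for the patent's actual filters:

```python
import numpy as np

def edge_enhance(scanline):
    """Approximate edge enhancement with a 1-D central-difference filter."""
    return np.abs(np.convolve(scanline, [1, 0, -1], mode="same"))

def segment_intervals(edges, threshold):
    """Threshold the edge response and return (start, end) index intervals.

    Assumes edges do not touch the scan-line boundaries (np.roll wraps).
    """
    mask = edges > threshold
    starts = np.flatnonzero(mask & ~np.roll(mask, 1))
    ends = np.flatnonzero(mask & ~np.roll(mask, -1))
    return list(zip(starts, ends))
```

    Matching such intervals between successive scan lines ("bin tracking") then links them into two-dimensional objects whose features can be classified.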

  10. Dynamic optimization of ISR sensors using a risk-based reward function applied to ground and space surveillance scenarios

    NASA Astrophysics Data System (ADS)

    DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.

    2012-06-01

    As the number and diversity of sensing assets available for intelligence, surveillance and reconnaissance (ISR) operations continues to expand, the limited ability of human operators to effectively manage, control and exploit the ISR ensemble is exceeded, leading to reduced operational effectiveness. Automated support both in the processing of voluminous sensor data and sensor asset control can relieve the burden on human operators to support operation of larger ISR ensembles. In dynamic environments it is essential to react quickly to current information to avoid stale, sub-optimal plans. Our approach is to apply the principles of feedback control to ISR operations, "closing the loop" from the sensor collections through automated processing to ISR asset control. Previous work by the authors demonstrated non-myopic multiple platform trajectory control using a receding horizon controller in a closed feedback loop with a multiple hypothesis tracker applied to multi-target search and track simulation scenarios in the ground and space domains. This paper presents extensions in both size and scope of the previous work, demonstrating closed-loop control, involving both platform routing and sensor pointing, of a multisensor, multi-platform ISR ensemble tasked with providing situational awareness and performing search, track and classification of multiple moving ground targets in irregular warfare scenarios. The closed-loop ISR system is fully realized using distributed, asynchronous components that communicate over a network. The closed-loop ISR system has been exercised via a networked simulation test bed against a scenario in the Afghanistan theater implemented using high-fidelity terrain and imagery data. In addition, the system has been applied to space surveillance scenarios requiring tracking of space objects where current deliberative, manually intensive processes for managing sensor assets are insufficiently responsive. Simulation experiment results are presented. The algorithm to jointly optimize sensor schedules against search, track, and classify is based on recent work by Papageorgiou and Raykin on risk-based sensor management. It uses a risk-based objective function and attempts to minimize and balance the risks of misclassifying and losing track on an object. It supports the requirement to generate tasking for metric and feature data concurrently and synergistically, and account for both tracking accuracy and object characterization, jointly, in computing reward and cost for optimizing tasking decisions.

  11. Imaging artificial satellites: An observational challenge

    NASA Astrophysics Data System (ADS)

    Smith, D. A.; Hill, D. C.

    2016-10-01

    According to the Union of Concerned Scientists, as of the beginning of 2016 there are 1381 active satellites orbiting the Earth, and the United States' Space Surveillance Network tracks about 8000 manmade orbiting objects of baseball-size and larger. NASA estimates debris larger than 1 cm to number more than half a million. The largest ones can be seen by eye—unresolved dots of light that move across the sky in minutes. For most astrophotographers, satellites are annoying streaks that can ruin hours of work. However, capturing a resolved image of an artificial satellite can pose an interesting challenge for a student, and such a project can provide connections between objects in the sky and commercial and political activities here on Earth.

  12. Detection of unknown targets from aerial camera and extraction of simple object fingerprints for the purpose of target reacquisition

    NASA Astrophysics Data System (ADS)

    Mundhenk, T. Nathan; Ni, Kang-Yu; Chen, Yang; Kim, Kyungnam; Owechko, Yuri

    2012-01-01

    An aerial multiple camera tracking paradigm needs to not only spot unknown targets and track them, but also needs to know how to handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system which is designed to spot unknown targets, track them, segment the useful features and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion, allowing it to find targets in motion even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features. These can be placed into a myriad of different fingerprints. To keep data transmission and storage size low for camera handoff of targets, we try several different simple techniques. These include Histogram, Spatiogram and Single Gaussian Model. These are tested by simulating a very large number of target losses in six videos over an interval of 1000 frames each from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints. This is how long a fingerprint is good for when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us if a fingerprint method has better accuracy over longer periods. In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model compared with the null hypothesis of <20%. Additionally, the performance for fingerprints stays well above the null hypothesis for as much as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to viewpoint and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
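    A single Gaussian color model of the kind evaluated here can be sketched as a mean/covariance summary of a segment's pixels, compared by Mahalanobis distance. The feature choice, regularization, and scoring rule below are assumptions for illustration, not the authors' exact pipeline:

```python
import numpy as np

def gaussian_fingerprint(pixels):
    """Summarize a segmented target by the mean and covariance of its colors.

    `pixels` is an (N, 3) array of RGB samples from the tracked segment; the
    fingerprint is deliberately tiny (3 + 9 numbers), so it is cheap to store
    and transmit for camera handoff.
    """
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularized
    return mean, cov

def mahalanobis(fingerprint, pixels):
    """Score a candidate segment against a fingerprint (lower = closer)."""
    mean, cov = fingerprint
    d = pixels.mean(axis=0) - mean
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```

    Reacquisition then amounts to fingerprinting each candidate segment and accepting the one whose distance to the stored model falls below a threshold.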

  13. Teaching Braille Line Tracking Using Stimulus Fading

    ERIC Educational Resources Information Center

    Scheithauer, Mindy C.; Tiger, Jeffrey H.

    2014-01-01

    Line tracking is a prerequisite skill for braille literacy that involves moving one's finger horizontally across a line of braille text and identifying when a line ends so the reader may reset his or her finger on the subsequent line. Current procedures for teaching line tracking are incomplete, because they focus on tracking lines with only…

  14. Development of a two photon microscope for tracking Drosophila larvae

    NASA Astrophysics Data System (ADS)

    Karagyozov, Doycho; Mihovilovic Skanata, Mirna; Gershow, Marc

    Current in vivo methods for measuring neural activity in Drosophila larva require immobilization of the animal. Although we can record neural signals while stimulating the sensory organs, we cannot read the behavioral output because we have prevented the animal from moving. Many research questions cannot be answered without observation of neural activity in behaving (freely-moving) animals. We incorporated a Tunable Acoustic Gradient (TAG) lens into a two-photon microscope to achieve a 70 kHz axial scan rate, enabling volumetric imaging at tens of hertz. We then implemented a tracking algorithm based on a Kalman filter to maintain the neurons of interest in the field of view and in focus during the rapid three-dimensional motion of a free larva. Preliminary results show successful tracking of a neuron moving at speeds reaching 500 μm/s. NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
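    A Kalman filter for this kind of tracking loop can be sketched with a constant-velocity model along one axis: predict where the neuron will be at the next imaging volume, then correct with the measured centroid. The volume rate and noise covariances below are illustrative assumptions, not the authors' values:

```python
import numpy as np

dt = 0.05  # assumed volume period (~20 Hz volumetric imaging)

# Constant-velocity model along one axis: state = [position, velocity].
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # only position is measured
Q = np.diag([1e-2, 1e-1])               # process noise (assumed)
R = np.array([[4.0]])                   # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle; returns the new state estimate and covariance."""
    # Predict the neuron's state at the next volume.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with the measured centroid z.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

    The predicted position, rather than the last measurement, is what would steer the scan region, which is what lets the tracker keep up with fast motion between volumes.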

  15. A Video Game Platform for Exploring Satellite and In-Situ Data Streams

    NASA Astrophysics Data System (ADS)

    Cai, Y.

    2014-12-01

    Exploring spatiotemporal patterns of moving objects is essential to Earth Observation missions, such as tracking, modeling and predicting the movement of clouds, dust, plumes and harmful algal blooms. Those missions involve high-volume, multi-source, and multi-modal imagery data analysis. Analytical models intend to reveal the inner structure, dynamics, and relationships of things. However, they are not necessarily intuitive to humans. Conventional scientific visualization methods are intuitive but limited by manual operations, such as area marking, measurement and alignment of multi-source data, which are expensive and time-consuming. A new video analytics platform has been in development, which integrates a video game engine with satellite and in-situ data streams. The system converts Earth Observation data into articulated objects that are mapped from a high-dimensional space to a 3D space. The object tracking and augmented reality algorithms highlight the objects' features in colors, shapes and trajectories, creating visual cues for observing dynamic patterns. The head and gesture tracker enables users to navigate the data space interactively. To validate our design, we have used NASA SeaWiFS satellite images of oceanographic remote sensing data and NOAA's in-situ cell count data. Our study demonstrates that the video game system can reduce the size and cost of traditional CAVE systems by two to three orders of magnitude. This system can also be used for satellite mission planning and public outreach.

  16. Object Locating System

    NASA Technical Reports Server (NTRS)

    Arndt, G. Dickey (Inventor); Carl, James R. (Inventor)

    2000-01-01

    A portable system is provided that is operational for determining, with three-dimensional resolution, the position of a buried object or an approximately positioned object that may move in space, air or gas. The system has a plurality of receivers for detecting the signal from a target antenna and measuring the phase thereof with respect to a reference signal. The relative permittivity and conductivity of the medium in which the object is located is used along with the measured phase signal to determine a distance between the object and each of the plurality of receivers. Knowing these distances, an iteration technique is provided for solving equations simultaneously to provide position coordinates. The system may also be used for tracking movement of an object within close range of the system by sampling and recording subsequent positions of the object. A dipole target antenna, when positioned adjacent to a buried object, may be energized using a separate transmitter which couples energy to the target antenna through the medium. The target antenna then preferably resonates at a different frequency, such as a second harmonic of the transmitter frequency.
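    The iteration that recovers position coordinates from per-receiver distances can be sketched as a Gauss-Newton refinement on the range residuals. This is a generic trilateration formulation, not the patent's exact procedure:

```python
import numpy as np

def locate(receivers, distances, iterations=20):
    """Estimate a 3-D position from receiver coordinates and measured ranges.

    `receivers` is an (N, 3) array and `distances` the N measured ranges.
    Starts from the receiver centroid and iteratively solves the linearized
    range equations in the least-squares sense.
    """
    p = receivers.mean(axis=0)
    for _ in range(iterations):
        diffs = p - receivers                        # (N, 3)
        ranges = np.linalg.norm(diffs, axis=1)       # predicted distances
        J = diffs / ranges[:, None]                  # Jacobian of the ranges
        residual = ranges - distances
        step, *_ = np.linalg.lstsq(J, residual, rcond=None)
        p = p - step
    return p
```

    With four or more non-coplanar receivers the system is overdetermined, so the same least-squares step also absorbs small phase-measurement errors.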

  17. Integration of World Knowledge and Temporary Information about Changes in an Object's Environmental Location during Different Stages of Sentence Comprehension

    PubMed Central

    Chen, Xuqian; Yang, Wei; Ma, Lijun; Li, Jiaxin

    2018-01-01

    Recent findings have shown that information about changes in an object's environmental location in the context of discourse is stored in working memory during sentence comprehension. However, in these studies, changes in the object's location were always consistent with world knowledge (e.g., in “The writer picked up the pen from the floor and moved it to the desk,” the floor and the desk are both common locations for a pen). How do people accomplish comprehension when the object-location information in working memory is inconsistent with world knowledge (e.g., a pen being moved from the floor to the bathtub)? In two visual world experiments, with a “look-and-listen” task, we used eye-tracking data to investigate comprehension of sentences that described location changes under different conditions of appropriateness (i.e., the object and its location were typically vs. unusually coexistent, based on world knowledge) and antecedent context (i.e., contextual information that did vs. did not temporarily normalize unusual coexistence between object and location). Results showed that listeners' retrieval of the critical location was affected by both world knowledge and working memory, and the effect of world knowledge was reduced when the antecedent context normalized unusual coexistence of object and location. More importantly, activation of world knowledge and working memory seemed to change during the comprehension process. These results are important because they demonstrate that interference between world knowledge and information in working memory appears to be activated dynamically during sentence comprehension. PMID:29520249

  18. Integration of World Knowledge and Temporary Information about Changes in an Object's Environmental Location during Different Stages of Sentence Comprehension.

    PubMed

    Chen, Xuqian; Yang, Wei; Ma, Lijun; Li, Jiaxin

    2018-01-01

    Recent findings have shown that information about changes in an object's environmental location in the context of discourse is stored in working memory during sentence comprehension. However, in these studies, changes in the object's location were always consistent with world knowledge (e.g., in "The writer picked up the pen from the floor and moved it to the desk," the floor and the desk are both common locations for a pen). How do people accomplish comprehension when the object-location information in working memory is inconsistent with world knowledge (e.g., a pen being moved from the floor to the bathtub)? In two visual world experiments, with a "look-and-listen" task, we used eye-tracking data to investigate comprehension of sentences that described location changes under different conditions of appropriateness (i.e., the object and its location were typically vs. unusually coexistent, based on world knowledge) and antecedent context (i.e., contextual information that did vs. did not temporarily normalize unusual coexistence between object and location). Results showed that listeners' retrieval of the critical location was affected by both world knowledge and working memory, and the effect of world knowledge was reduced when the antecedent context normalized unusual coexistence of object and location. More importantly, activation of world knowledge and working memory seemed to change during the comprehension process. These results are important because they demonstrate that interference between world knowledge and information in working memory appears to be activated dynamically during sentence comprehension.

  19. Predicting the sinkage of a moving tracked mining vehicle using a new rheological formulation for soft deep-sea sediment

    NASA Astrophysics Data System (ADS)

    Xu, Feng; Rao, Qiuhua; Ma, Wenbo

    2018-03-01

    The sinkage of a moving tracked mining vehicle is greatly affected by the combined compression-shear rheological properties of soft deep-sea sediments. For test purposes, the best sediment simulant is prepared based on soft deep-sea sediment from a C-C poly-metallic nodule mining area in the Pacific Ocean. Compressive creep tests and shear creep tests are combined to obtain compressive and shear rheological parameters to establish a combined compression-shear rheological constitutive model and a compression-sinkage rheological constitutive model. The combined compression-shear rheological sinkage of the tracked mining vehicle at different speeds is calculated using the RecurDyn software with a self-programmed subroutine to implement the combined compression-shear rheological constitutive model. The model results are compared with shear rheological sinkage and ordinary sinkage (without consideration of rheological properties). These results show that the combined compression-shear rheological constitutive model must be taken into account when calculating the sinkage of a tracked mining vehicle. The combined compression-shear rheological sinkage decreases with vehicle speed and is the largest among the three types of sinkage. The developed subroutine in the RecurDyn software can be used to study the performance and structural optimization of moving tracked mining vehicles.

  20. Moving target tracking through distributed clustering in directional sensor networks.

    PubMed

    Enayet, Asma; Razzaque, Md Abdur; Hassan, Mohammad Mehedi; Almogren, Ahmad; Alamri, Atif

    2014-12-18

    The problem of moving target tracking in directional sensor networks (DSNs) introduces new research challenges, including optimal selection of sensing and communication sectors of the directional sensor nodes, determination of the precise location of the target and an energy-efficient data collection mechanism. Existing solutions allow individual sensor nodes to detect the target's location through collaboration among neighboring nodes, where most of the sensors are activated and communicate with the sink. Therefore, they incur much overhead, loss of energy and reduced target tracking accuracy. In this paper, we have proposed a clustering algorithm, where distributed cluster heads coordinate their member nodes in optimizing the active sensing and communication directions of the nodes, precisely determining the target location by aggregating reported sensing data from multiple nodes and transferring the resultant location information to the sink. Thus, the proposed target tracking mechanism minimizes the sensing redundancy and maximizes the number of sleeping nodes in the network. We have also investigated the dynamic approach of activating sleeping nodes on-demand so that the moving target tracking accuracy can be enhanced while maximizing the network lifetime. We have carried out our extensive simulations in ns-3, and the results show that the proposed mechanism achieves higher performance compared to the state-of-the-art works.

  1. Moving Target Tracking through Distributed Clustering in Directional Sensor Networks

    PubMed Central

    Enayet, Asma; Razzaque, Md. Abdur; Hassan, Mohammad Mehedi; Almogren, Ahmad; Alamri, Atif

    2014-01-01

    The problem of moving target tracking in directional sensor networks (DSNs) introduces new research challenges, including optimal selection of sensing and communication sectors of the directional sensor nodes, determination of the precise location of the target and an energy-efficient data collection mechanism. Existing solutions allow individual sensor nodes to detect the target's location through collaboration among neighboring nodes, where most of the sensors are activated and communicate with the sink. Therefore, they incur much overhead, loss of energy and reduced target tracking accuracy. In this paper, we have proposed a clustering algorithm, where distributed cluster heads coordinate their member nodes in optimizing the active sensing and communication directions of the nodes, precisely determining the target location by aggregating reported sensing data from multiple nodes and transferring the resultant location information to the sink. Thus, the proposed target tracking mechanism minimizes the sensing redundancy and maximizes the number of sleeping nodes in the network. We have also investigated the dynamic approach of activating sleeping nodes on-demand so that the moving target tracking accuracy can be enhanced while maximizing the network lifetime. We have carried out our extensive simulations in ns-3, and the results show that the proposed mechanism achieves higher performance compared to the state-of-the-art works. PMID:25529205

  2. Detection of multiple airborne targets from multisensor data

    NASA Astrophysics Data System (ADS)

    Foltz, Mark A.; Srivastava, Anuj; Miller, Michael I.; Grenander, Ulf

    1995-08-01

    Previously we presented a jump-diffusion based random sampling algorithm for generating conditional mean estimates of scene representations for the tracking and recognition of maneuvering airborne targets. These representations include target positions and orientations along their trajectories and the target type associated with each trajectory. Taking a Bayesian approach, a posterior measure is defined on the parameter space by combining sensor models with a sophisticated prior based on nonlinear airplane dynamics. The jump-diffusion algorithm constructs a Markov process which visits the elements of the parameter space with frequencies proportional to the posterior probability. It constitutes both the infinitesimal, local search via a sample-path continuous diffusion transform and the larger, global steps through discrete jump moves. The jump moves involve the addition and deletion of elements from the scene configuration or changes in the target type associated with each target trajectory. One such move results in target detection by the addition of a track seed to the inference set. This provides initial track data for the tracking/recognition algorithm to estimate linear graph structures representing tracks using the other jump moves and the diffusion process, as described in our earlier work. Target detection ideally involves a continuous search over a continuum of the observation space. In this work we conclude that for practical implementations the search space must be discretized with lattice granularity comparable to sensor resolution, and discuss how fast Fourier transforms are utilized for efficient calculation of sufficient statistics given our array models. Some results are also presented from our implementation on a networked system including a massively parallel machine architecture and a Silicon Graphics Onyx workstation.

  3. Common world model for unmanned systems

    NASA Astrophysics Data System (ADS)

    Dean, Robert Michael S.

    2013-05-01

    The Robotic Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state-of-the-art by representing the world using metric, semantic, and symbolic information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric, symbolic cognitive algorithms and new computational nodes formed by the combination of these disciplines. The Common World Model must understand how these objects relate to each other. Our world model includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and histories we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model includes models of how aspects of the environment behave, which enable prediction of future world states. To manage complexity, we adopted a phased implementation approach to the world model. We discuss the design of "Phase 1" of this world model, and interfaces by tracing perception data through the system from the source to the meta-cognitive layers provided by ACT-R and SS-RICS. We close with lessons learned from implementation and how the design relates to Open Architecture.

  4. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2017-01-01

    Any exploration vehicle assembled or spacecraft placed in LEO or GTO must pass through the orbital debris cloud and survive. Large cross-section, low-thrust vehicles will spend more time spiraling out through the cloud and will suffer more impacts. Better knowledge of small debris will improve survival odds. The estimated density of debris at various orbital altitudes shows spikes resulting from recent collisions. Orbital Debris Tracking and Characterization has now been added to the NASA Office of the Chief Technologist's Technology Development Roadmap in Technology Area 5 (TA5.7, Orbital Debris Tracking and Characterization) and is a technical gap in current National Space Situational Awareness, which is necessary to safeguard orbital assets and crews from the risk of orbital debris damage to ISS and exploration vehicles. The Problem: Traditional orbital trackers looking for small, dim orbital derelicts and debris typically will stare at the stars and let any reflected light off the debris integrate in the imager for seconds, thus creating a streak across the image. The Solution: The Small Tracker will see stars and other celestial objects rise through its field of view (FOV) at the rotational rate of its orbit, but the glint off of orbital objects will move through the FOV at different rates and directions. Debris on a head-on collision course (or close to it) will stay in the FOV, closing at 14 km/s. The Small Tracker can track at 60 frames per second, allowing up to 30 fixes before a near-miss pass. A stereo pair of Small Trackers can provide range data within 5-7 km for better orbit measurements.

  5. Evaluating De-centralised and Distributional Options for the Distributed Electronic Warfare Situation Awareness and Response Test Bed

    DTIC Science & Technology

    2013-12-01

    effectors (deployed on ground based or aerial platforms) to detect, identify, locate, track or suppress stationary or slow moving surface based RF emitting targets. In the... ES Electronic Support; EO Electro-Optic; FPGAs Field Programmable Gate Arrays; IR Infra-red; LADAR Laser Detection and Ranging; OSX Mac OS X; the Apple

  6. Optical Indoor Positioning System Based on TFT Technology.

    PubMed

    Gőzse, István

    2015-12-24

    A novel indoor positioning system is presented in the paper. Similarly to the camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model, where the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole, such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: Apart from its low computational demands, it is insensitive to the disturbing ambient light. Moreover, as every component of the system can be realized by simple and inexpensive elements, the overall cost of the system can be kept low.

  7. Investigating the Mobility of Light Autonomous Tracked Vehicles using a High Performance Computing Simulation Capability

    NASA Technical Reports Server (NTRS)

    Negrut, Dan; Mazhar, Hammad; Melanz, Daniel; Lamb, David; Jayakumar, Paramsothy; Letherwood, Michael; Jain, Abhinandan; Quadrelli, Marco

    2012-01-01

    This paper is concerned with the physics-based simulation of light tracked vehicles operating on rough deformable terrain. The focus is on small autonomous vehicles, which weigh less than 100 lb and move on deformable and rough terrain that is feature rich and no longer representable using a continuum approach. A scenario of interest is, for instance, the simulation of a reconnaissance mission for a high mobility lightweight robot where objects such as a boulder or a ditch that could otherwise be considered small for a truck or tank, become major obstacles that can impede the mobility of the light autonomous vehicle and negatively impact the success of its mission. Analyzing and gauging the mobility and performance of these light vehicles is accomplished through a modeling and simulation capability called Chrono::Engine. Chrono::Engine relies on parallel execution on Graphics Processing Unit (GPU) cards.

  8. ALLFlight: detection of moving objects in IR and ladar images

    NASA Astrophysics Data System (ADS)

    Doehler, H.-U.; Peinecke, Niklas; Lueken, Thomas; Schmerwitz, Sven

    2013-05-01

    Supporting a helicopter pilot during landing and takeoff in a degraded visual environment (DVE) is one of the challenges within DLR's project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). Different types of sensors (TV, infrared, mmW radar and laser radar) are mounted onto DLR's research helicopter FHS (flying helicopter simulator) for gathering different sensor data of the surrounding world. A high performance computer cluster architecture acquires and fuses all the information to get one single comprehensive description of the outside situation. While both TV and IR cameras deliver images with frame rates of 25 Hz or 30 Hz, Ladar and mmW radar provide georeferenced sensor data at only 2 Hz or even less. Therefore, it takes several seconds to detect or even track potential moving obstacle candidates in mmW or Ladar sequences. Especially if the helicopter is flying at higher speed, it is very important to minimize the detection time of obstacles in order to initiate re-planning of the helicopter's mission in a timely manner. Applying feature extraction algorithms to IR images, in combination with data fusion of the extracted features and Ladar data, can decrease the detection time appreciably. Based on real data from flight tests, the paper describes the applied feature extraction methods for moving object detection, as well as data fusion techniques for combining features from TV/IR and Ladar data.

  9. CT brush and CancerZap!: two video games for computed tomography dose minimization.

    PubMed

    Alvare, Graham; Gordon, Richard

    2015-05-12

    X-ray dose from computed tomography (CT) scanners has become a significant public health concern. All CT scanners spray x-ray photons across a patient, including those using compressive sensing algorithms. New technologies make it possible to aim x-ray beams where they are most needed to form a diagnostic or screening image. We have designed a computer game, CT Brush, that takes advantage of this new flexibility. It uses a standard MART algorithm (Multiplicative Algebraic Reconstruction Technique), but with a user defined dynamically selected subset of the rays. The image appears as the player moves the CT brush over an initially blank scene, with dose accumulating with every "mouse down" move. The goal is to find the "tumor" with as few moves (least dose) as possible. We have successfully implemented CT Brush in Java and made it available publicly, requesting crowdsourced feedback on improving the open source code. With this experience, we also outline a "shoot 'em up game" CancerZap! for photon limited CT. We anticipate that human computing games like these, analyzed by methods similar to those used to understand eye tracking, will lead to new object dependent CT algorithms that will require significantly less dose than object independent nonlinear and compressive sensing algorithms that depend on sprayed photons. Preliminary results suggest substantial dose reduction is achievable.
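    The MART update at the heart of CT Brush multiplies each pixel by the ratio of measured to predicted projection along every selected ray, so only rays the player has "painted" contribute dose. A minimal sketch restricted to a user-chosen ray subset; the function signature and parameters are illustrative, not the game's code:

```python
import numpy as np

def mart(A, b, n_pixels, iterations=50, relaxation=1.0):
    """Multiplicative Algebraic Reconstruction Technique on a chosen ray subset.

    A is a (rays x pixels) system matrix holding only the rays selected so
    far; b holds their measured projections. The image is updated ray by ray,
    multiplicatively, which keeps all pixel values positive.
    """
    x = np.ones(n_pixels)  # MART needs a strictly positive starting image
    for _ in range(iterations):
        for i in range(A.shape[0]):
            predicted = A[i] @ x
            if predicted > 0:
                # Scale every pixel the ray touches by (measured / predicted).
                x *= (b[i] / predicted) ** (relaxation * A[i])
    return x
```

    As the player drags the brush, new rows are appended to A and b and the reconstruction is rerun, so the image sharpens only where dose has actually been spent.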

  10. Learning the trajectory of a moving visual target and evolution of its tracking in the monkey

    PubMed Central

    Bourrelly, Clara; Quinet, Julie; Cavanagh, Patrick

    2016-01-01

    An object moving in the visual field triggers a saccade that brings its image onto the fovea. It is followed by a combination of slow eye movements and catch-up saccades that try to keep the target image on the fovea as long as possible. The accuracy of this ability to track the “here-and-now” location of a visual target contrasts with the spatiotemporally distributed nature of its encoding in the brain. We show in six experimentally naive monkeys how this performance is acquired and gradually evolves during successive daily sessions. During the early exposure, the tracking is mostly saltatory, made of relatively large saccades separated by low eye velocity episodes, demonstrating that accurate (here and now) pursuit is not spontaneous and that gaze direction lags behind its location most of the time. Over the sessions, while the pursuit velocity is enhanced, the gaze is more frequently directed toward the current target location as a consequence of a 25% reduction in the number of catch-up saccades and a 37% reduction in size (for the first saccade). This smoothing is observed at several scales: during the course of single trials, across the set of trials within a session, and over successive sessions. We explain the neurophysiological processes responsible for this combined evolution of saccades and pursuit in the absence of stringent training constraints. More generally, our study shows that the oculomotor system can be used to discover the neural mechanisms underlying the ability to synchronize a motor effector with a dynamic external event. PMID:27683886

  11. Massive photometry of low-altitude artificial satellites on Mini-Mega-TORTORA

    NASA Astrophysics Data System (ADS)

    Karpov, S.; Katkova, E.; Beskin, G.; Biryukov, A.; Bondar, S.; Davydov, E.; Ivanov, E.; Perkov, A.; Sasyuk, V.

    2016-12-01

    The nine-channel Mini-Mega-TORTORA (MMT-9) optical wide-field monitoring system with high temporal resolution has been in operation since June 2014. The system has 0.1 s temporal resolution and an effective detection limit of around 10 mag (calibrated to the V filter) for fast-moving objects on this timescale. In addition to its primary scientific operation, the system detects 200-500 satellite tracks every night, in both low-altitude and highly elliptical orbits. Using these data we have created and maintain a public database of photometric characteristics for these satellites, available online.

  12. Microgravity

    NASA Image and Video Library

    2003-01-22

    One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. There are monitors and infrared video cameras to measure eye movements without having to affect the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible (above). Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that: Eye movement can predict human perceptual performance, smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and common factors can make both smooth pursuit and visual perception produce errors in motor responses.

  13. Understanding Visible Perception

    NASA Technical Reports Server (NTRS)

    2003-01-01

    One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. There are monitors and infrared video cameras to measure eye movements without having to affect the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible (above). Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that: Eye movement can predict human perceptual performance, smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and common factors can make both smooth pursuit and visual perception produce errors in motor responses.

  14. Covert enaction at work: Recording the continuous movements of visuospatial attention to visible or imagined targets by means of Steady-State Visual Evoked Potentials (SSVEPs).

    PubMed

    Gregori Grgič, Regina; Calore, Enrico; de'Sperati, Claudio

    2016-01-01

    Whereas overt visuospatial attention is customarily measured with eye tracking, covert attention is assessed by various methods. Here we exploited Steady-State Visual Evoked Potentials (SSVEPs) - the oscillatory responses of the visual cortex to incoming flickering stimuli - to record the movements of covert visuospatial attention in a way operatively similar to eye tracking (attention tracking), which allowed us to compare motion observation and motion extrapolation with and without eye movements. Observers fixated a central dot and covertly tracked a target oscillating horizontally and sinusoidally. In the background, the left and the right halves of the screen flickered at two different frequencies, generating two SSVEPs in occipital regions whose size varied reciprocally as observers attended to the moving target. The two signals were combined into a single quantity that was modulated at the target frequency in a quasi-sinusoidal way, often clearly visible in single trials. The modulation continued almost unchanged when the target was switched off and observers mentally extrapolated its motion in imagery, and also when observers pointed their finger at the moving target during covert tracking, or imagined doing so. The amplitude of modulation during covert tracking was ∼25-30% of that measured when observers followed the target with their eyes. We used 4 electrodes in parieto-occipital areas, but similar results were achieved with a single electrode in Oz. In a second experiment we tested ramp and step motion. During overt tracking, SSVEPs were remarkably accurate, showing both saccadic-like and smooth pursuit-like modulations of cortical responsiveness, although during covert tracking the modulation deteriorated. Covert tracking was better with sinusoidal motion than ramp motion, and better with moving targets than stationary ones. 
The clear modulation of cortical responsiveness recorded during both overt and covert tracking, identical for motion observation and motion extrapolation, suggests including covert attention movements in enactive theories of mental imagery. Copyright © 2015 Elsevier Ltd. All rights reserved.
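    The way two reciprocally varying SSVEP amplitudes can be combined into a single tracking signal can be illustrated as follows (a toy sketch; the authors' exact combination rule and the 0.3 modulation depth are assumptions):

```python
import numpy as np

# Toy readout of covert attention from two hemifield SSVEP amplitudes
# (illustrative only; the paper's actual combination rule is not given here).
# The left- and right-flicker responses are folded into one lateralization
# index that follows the covertly tracked target's sinusoidal position.
def attention_index(amp_left, amp_right):
    return (amp_right - amp_left) / (amp_right + amp_left)

t = np.linspace(0, 4, 400)
pos = np.sin(2 * np.pi * 0.5 * t)     # target position, 0.5 Hz sinusoid
amp_r = 1.0 + 0.3 * pos               # right-hemifield SSVEP grows as attention
amp_l = 1.0 - 0.3 * pos               # moves right, and vice versa (assumed depth)
idx = attention_index(amp_l, amp_r)   # reduces to 0.3 * pos in this toy model
```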

  15. Track Everything: Limiting Prior Knowledge in Online Multi-Object Recognition.

    PubMed

    Wong, Sebastien C; Stamatescu, Victor; Gatt, Adam; Kearney, David; Lee, Ivan; McDonnell, Mark D

    2017-10-01

    This paper addresses the problem of online tracking and classification of multiple objects in an image sequence. Our proposed solution is to first track all objects in the scene without relying on object-specific prior knowledge, which in other systems can take the form of hand-crafted features or user-based track initialization. We then classify the tracked objects with a fast-learning image classifier based on a shallow convolutional neural network architecture, and demonstrate that object recognition improves when this is combined with object state information from the tracking algorithm. We argue that by transferring the use of prior knowledge from the detection and tracking stages to the classification stage, we can design a robust, general-purpose object recognition system with the ability to detect and track a variety of object types. We describe our biologically inspired implementation, which adaptively learns the shape and motion of tracked objects, and apply it to the Neovision2 Tower benchmark data set, which contains multiple object types. An experimental evaluation demonstrates that our approach is competitive with state-of-the-art video object recognition systems that do make use of object-specific prior knowledge in detection and tracking, while providing additional practical advantages by virtue of its generality.

  16. A Motion Tracking and Sensor Fusion Module for Medical Simulation.

    PubMed

    Shen, Yunhe; Wu, Fan; Tseng, Kuo-Shih; Ye, Ding; Raymond, John; Konety, Badrinath; Sweet, Robert

    2016-01-01

    Here we introduce a motion tracking, or navigation, module for medical simulation systems. Our main contribution is a sensor fusion method for proximity or distance sensors integrated with an inertial measurement unit (IMU). Since IMU rotation tracking has been widely studied, we focus on position or trajectory tracking of an instrument moving freely within a given boundary. In our experiments, we have found that this module reliably tracks instrument motion.
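    The kind of distance/IMU fusion the abstract describes can be sketched with a minimal 1-D Kalman filter (an assumption-laden illustration; the paper's actual filter, state model, and noise parameters are not given):

```python
import numpy as np

# Minimal 1-D Kalman filter fusing IMU acceleration (prediction step) with a
# proximity/distance sensor (position update). Hypothetical sketch only.
def fuse(accels, ranges, dt=0.01, q=1e-3, r=1e-2):
    x = np.zeros(2)                        # state: [position, velocity]
    P = np.eye(2)                          # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity dynamics
    B = np.array([0.5 * dt**2, dt])        # acceleration input model
    H = np.array([[1.0, 0.0]])             # distance sensor observes position
    Q = q * np.eye(2)
    R = np.array([[r]])
    estimates = []
    for a, z in zip(accels, ranges):
        x = F @ x + B * a                  # predict with IMU acceleration
        P = F @ P @ F.T + Q
        y = z - H @ x                      # innovation from distance reading
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)

# Instrument moving at a constant 1 m/s, observed by a noiseless range sensor.
dt = 0.01
true_pos = np.arange(1, 501) * dt
est = fuse(np.zeros(500), true_pos, dt=dt)
```

    The IMU drives the high-rate prediction while the slower distance sensor corrects drift, which is the usual division of labor in this kind of fusion.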

  17. Implementation of jump-diffusion algorithms for understanding FLIR scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1995-07-01

    Our pattern theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the location and orientation of objects are estimated via continuous diffusions. A hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as Silicon Graphics hardware. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/RealityEngine.

  18. Real time automated inspection

    DOEpatents

    Fant, K.M.; Fundakowski, R.A.; Levitt, T.S.; Overland, J.E.; Suresh, B.R.; Ulrich, F.W.

    1985-05-21

    A method and apparatus are described relating to the real time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges and objects formed by the edges by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections. 43 figs.
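    The edge-enhance-then-threshold step of the patent can be illustrated on a single scan line (a simplified sketch; the patent's actual filters, interval matching, and bin tracking are not reproduced):

```python
import numpy as np

# Edge-enhance one scan line with a first-difference gradient, then threshold
# to segment out edge positions (simplified sketch of the patent's pipeline).
def edge_positions(scan_line, thresh=50):
    grad = np.abs(np.diff(scan_line.astype(int)))  # edge-enhanced line
    return np.nonzero(grad > thresh)[0]            # indices of strong edges

# A dark scan line crossing one bright object: edges appear at its borders.
line = np.array([10, 10, 10, 200, 200, 200, 10, 10], np.uint8)
edges = edge_positions(line)
```

    In the patented system, consecutive edge pairs on each scan line form intervals, and matching intervals across lines ("bin tracking") assembles them into objects whose features are then classified.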

  19. Optofluidic solar concentrators using electrowetting tracking: Concept, design, and characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, JT; Park, S; Chen, CL

    2013-03-01

    We introduce a novel optofluidic solar concentration system based on electrowetting tracking. With two immiscible fluids in a transparent cell, we can actively control the orientation of the fluid-fluid interface via electrowetting. The naturally formed meniscus between the two liquids can function as a dynamic optical prism for solar tracking and sunlight steering. An integrated optofluidic solar concentrator can be constructed from the liquid prism tracker in combination with a fixed and static optical condenser (Fresnel lens). Therefore, the liquid prisms can adaptively focus sunlight on a concentrating photovoltaic (CPV) cell sitting at the focus of the Fresnel lens as the sun moves. Because of the unique design, electrowetting tracking allows the concentrator to adaptively track both the daily and seasonal changes of the sun's orbit (dual-axis tracking) without bulky, expensive and inefficient mechanical moving parts. This approach can potentially reduce capital costs for CPV and increase operational efficiency by eliminating the power consumption of mechanical tracking. Importantly, the elimination of bulky tracking hardware and quiet operation will allow extensive residential deployment of concentrated solar power. In comparison with traditional silicon-based photovoltaic (PV) solar cells, the electrowetting-based self-tracking technology will generate approximately 70% more green energy with a 50% cost reduction. (C) 2013 Elsevier Ltd. All rights reserved.
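    The sunlight-steering capacity of such a liquid prism can be estimated with the thin-prism approximation (a back-of-the-envelope sketch; the apex angle and the refractive indices of the two liquids are assumed values, not taken from the paper):

```python
# Thin-prism estimate of beam steering by the liquid meniscus (illustrative
# only; apex angle and refractive indices below are assumptions).
def steering_angle_deg(apex_deg, n_prism=1.38, n_ambient=1.33):
    # small-angle deviation of a thin prism: delta ~ (n_rel - 1) * apex
    return (n_prism / n_ambient - 1.0) * apex_deg

delta = steering_angle_deg(30.0)   # a 30 degree meniscus tilt steers about 1 degree
```

    The small relative index between two immiscible liquids is why substantial meniscus tilts are needed for even modest steering angles, and why the prism is paired with a fixed Fresnel condenser rather than doing the concentration itself.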

  20. Symplectic analysis of vertical random vibration for coupled vehicle-track systems

    NASA Astrophysics Data System (ADS)

    Lu, F.; Kennedy, D.; Williams, F. W.; Lin, J. H.

    2008-10-01

    A computational model for random vibration analysis of vehicle-track systems is proposed; solutions use the pseudo excitation method (PEM) and the symplectic method. The vehicle is modelled as a mass-spring-damper system with 10 degrees of freedom (dofs), consisting of vertical and pitching motion for the vehicle body and its two bogies and vertical motion for the four wheelsets. The track is treated as an infinite Bernoulli-Euler beam connected to sleepers and hence to ballast, and is regarded as a periodic structure. Linear springs couple the vehicle and the track. Hence, the coupled vehicle-track system has only 26 dofs. A fixed excitation model is used, i.e. the vehicle does not move along the track but instead the track irregularity profile moves backwards at the vehicle velocity. This irregularity is assumed to be a stationary random process. Random vibration theory is used to obtain the response power spectral densities (PSDs), by using PEM to transform this random multiexcitation problem into a deterministic harmonic excitation one and then applying symplectic solution methodology. Numerical results for an example include verification of the proposed method by comparison with finite element method (FEM) results; comparison between the present model and the traditional rigid track model; and discussion of the influences of track damping and vehicle velocity.
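    The core idea of PEM can be demonstrated on a single-dof oscillator (an illustrative reduction; the paper applies PEM to the full 26-dof vehicle-track model): for a stationary input with PSD S_xx(w), the random excitation is replaced by the deterministic harmonic sqrt(S_xx)*exp(iwt), and the squared magnitude of the harmonic response equals the response PSD.

```python
import numpy as np

# Pseudo-excitation method (PEM) on a single-dof oscillator (illustrative;
# m, c, k values are assumptions). The random input of PSD S_xx is replaced
# by the deterministic harmonic sqrt(S_xx)*exp(i*w*t); the response PSD is
# then |y|^2, identical to the classical result |H(w)|^2 * S_xx(w).
def response_psd_pem(omega, S_xx, m=1.0, c=0.2, k=1.0):
    H = 1.0 / (-m * omega**2 + 1j * c * omega + k)  # receptance FRF
    y = H * np.sqrt(S_xx)                           # pseudo response amplitude
    return np.abs(y) ** 2

omega = np.linspace(0.1, 3.0, 100)
S_xx = np.ones_like(omega)            # white-noise excitation PSD
S_yy = response_psd_pem(omega, S_xx)  # peaks near the resonance at w ~ 1
```

    The benefit, which carries over to the multi-dof coupled system, is that each frequency point becomes an ordinary deterministic harmonic analysis.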

  1. Lateralized electrical brain activity reveals covert attention allocation during speaking.

    PubMed

    Rommers, Joost; Meyer, Antje S; Praamstra, Peter

    2017-01-27

    Speakers usually begin to speak while only part of the utterance has been planned. Earlier work has shown that speech planning processes are reflected in speakers' eye movements as they describe visually presented objects. However, to-be-named objects can be processed to some extent before they have been fixated upon, presumably because attention can be allocated to objects covertly, without moving the eyes. The present study investigated whether EEG could track speakers' covert attention allocation as they produced short utterances to describe pairs of objects (e.g., "dog and chair"). The processing difficulty of each object was varied by presenting it in upright orientation (easy) or in upside down orientation (difficult). Background squares flickered at different frequencies in order to elicit steady-state visual evoked potentials (SSVEPs). The N2pc component, associated with the focusing of attention on an item, was detectable not only prior to speech onset, but also during speaking. The time course of the N2pc showed that attention shifted to each object in the order of mention prior to speech onset. Furthermore, greater processing difficulty increased the time speakers spent attending to each object. This demonstrates that the N2pc can track covert attention allocation in a naming task. In addition, an effect of processing difficulty at around 200-350ms after stimulus onset revealed early attention allocation to the second to-be-named object. The flickering backgrounds elicited SSVEPs, but SSVEP amplitude was not influenced by processing difficulty. These results help complete the picture of the coordination of visual information uptake and motor output during speaking. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Magnetic levitation and its application for education devices based on YBCO bulk superconductors

    NASA Astrophysics Data System (ADS)

    Yang, W. M.; Chao, X. X.; Guo, F. X.; Li, J. W.; Chen, S. L.

    2013-10-01

    A small superconducting maglev propeller system, a small spacecraft model suspended and moving around a terrestrial globe, several small maglev vehicle models, and a magnetic circuit converter have been designed and constructed. The track was paved with NdFeB magnets; the arrangement of the magnets made it easy to obtain a uniform magnetic field distribution along the length of the track and a high magnetic field gradient in the lateral direction. When the YBCO bulks mounted inside the vehicle models or the spacecraft model were field-cooled to LN2 temperature at a certain distance above the track, they floated over and moved along the track automatically, without any obvious friction. The models can be used as experimental or demonstration devices for magnetic levitation applications.

  3. A comparison of moving object detection methods for real-time moving object detection

    NASA Astrophysics Data System (ADS)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

    Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed for moving object detection, but it is very difficult to find one which works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods which can be implemented in software on a desktop or laptop for real-time object detection. There are several moving object detection methods noted in the literature, but few of them are suitable for real-time moving object detection, and most of those are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical flow-based methods. The work is based on evaluation of these four methods using two different cameras and two different scenes. The methods were implemented in MATLAB and the results are compared in terms of completeness of detected objects, noise, sensitivity to lighting changes, processing time, etc. After comparison, it is observed that the optical flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
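    The simplest of the four compared methods, background subtraction with a running-average background model, can be sketched as follows (illustrative only; the paper's parameter choices are not given):

```python
import numpy as np

# Background subtraction with a running-average background model
# (illustrative sketch; learning rate and threshold are assumptions).
def detect_moving(frames, alpha=0.05, thresh=30):
    bg = frames[0].astype(float)               # seed background with frame 0
    masks = []
    for f in frames[1:]:
        diff = np.abs(f.astype(float) - bg)
        masks.append(diff > thresh)            # foreground = large deviation
        bg = (1 - alpha) * bg + alpha * f      # slowly update the background
    return masks

# Synthetic test scene: a bright 2x2 "object" moves across a dark frame.
frames = [np.zeros((10, 10), np.uint8) for _ in range(5)]
for t in range(1, 5):
    frames[t][4:6, t:t + 2] = 255
masks = detect_moving(frames)
```

    The slow background update is what distinguishes this from plain frame differencing: gradual lighting changes are absorbed into the model instead of being flagged as motion.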

  4. 1. SUMMER STREET BRIDGE. DRAW SPAN MOVES TOWARD VIEWER ON ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. SUMMER STREET BRIDGE. DRAW SPAN MOVES TOWARD VIEWER ON TRACKS VISIBLE AT CENTER OF PHOTOGRAPH. - Summer Street Retractile Bridge, Spanning Fort Point Channel at Summer Street, Boston, Suffolk County, MA

  5. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants

    PubMed Central

    Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces and then their face recognition was tested with static face images. Eye tracking methodology was used to record eye movements during familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better was their face recognition, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. PMID:26010387

  6. Optical Tracker For Longwall Coal Shearer

    NASA Technical Reports Server (NTRS)

    Poulsen, Peter D.; Stein, Richard J.; Pease, Robert E.

    1989-01-01

    Photographic record yields information for correction of vehicle path. Tracking system records lateral movements of longwall coal-shearing vehicle. System detects lateral and vertical deviations of path of vehicle moving along coal face, shearing coal as it goes. Rides on rails in mine tunnel, advancing on toothed track in one of rails. As vehicle moves, retroreflective mirror rides up and down on teeth, providing series of pulsed reflections to film recorder. Recorded positions of pulses, having horizontal and vertical orientations, indicate vertical and horizontal deviations, respectively, of vehicle.

  7. Superluminal Motion Found In Milky Way

    NASA Astrophysics Data System (ADS)

    1994-08-01

    Researchers using the Very Large Array (VLA) have discovered that a small, powerful object in our own cosmic neighborhood is shooting out material at nearly the speed of light -- a feat previously known to be performed only by the massive cores of entire galaxies. In fact, because of the direction in which the material is moving, it appears to be traveling faster than the speed of light -- a phenomenon called "superluminal motion." This is the first superluminal motion ever detected within our Galaxy. During March and April of this year, Dr. Felix Mirabel of the Astrophysics Section of the Center for Studies at Saclay, France, and Dr. Luis Rodriguez of the Institute of Astronomy at the National Autonomous University in Mexico City and NRAO, observed "a remarkable ejection event" in which the object shot out material in opposite directions at 92 percent of the speed of light, or more than 171,000 miles per second. This event ejected a mass equal to one-third that of the moon with the power of 100 million suns. Such powerful ejections are well known in distant galaxies and quasars, millions and billions of light-years away, but the object Mirabel and Rodriguez observed is within our own Milky Way Galaxy, only 40,000 light-years away. The object also is much smaller and less massive than the core of a galaxy, so the scientists were quite surprised to find it capable of accelerating material to such speeds. Mirabel and Rodriguez believe that the object is likely a double-star system, with one of the stars either an extremely dense neutron star or a black hole. The neutron star or black hole is the central object of the system, with great mass and strong gravitational pull. It is surrounded by a disk of material orbiting closely and being drawn into it. Such a disk is known as an accretion disk. The central object's powerful gravity, they believe, is pulling material from a more-normal companion star into the accretion disk. 
The central object is emitting jets of subatomic particles from its poles, and it is in these jets that the rapidly moving material was tracked. The object, known as GRS 1915+105, also is a strong emitter of X-rays, sometimes becoming the strongest source of X-rays in the Milky Way. The X-rays, they think, are emitted from the system's accretion disk. The VLA observations, along with other evidence the researchers have uncovered, lead them to believe that, despite being much less massive than galactic cores, other double-star systems may be capable of ejecting material at speeds near that of light. The researchers reported their discovery in the September 1 issue of the journal Nature. "This discovery is one of the most valuable results of more than a decade and a half of observations at the VLA," said Dr. Miller Goss, assistant director of NRAO for VLA/VLBA operations. "We see these fast-moving jets of material throughout the universe, and they represent an important physical process. However, they're usually so far away that it's difficult to study them. This object, relatively nearby, offers the best opportunity yet to build a good understanding of how such jets actually work," Goss added. GRS 1915+105 was discovered in 1992 by an orbiting French-Russian X-ray observatory called SIGMA-GRANAT. It had not been found before because its X-rays are highly energetic "hard" X-rays not regularly observed by satellites before then. Since its discovery, it has repeatedly been seen as a source of "hard" X-rays. Despite searching, the scientists have been unable to observe the object in visible light. Observations with the VLA in 1992 and 1993 showed that the object changed both its radio "brightness" and its apparent position in the sky, but it was then too faint at radio wavelengths for precise measurements.
In March of 1994, the object began an outburst of strong radio emission just as the VLA had entered a configuration capable of its most precise positional measurements. Through March and April of 1994, Mirabel and Rodriguez were able to track the movement of the two condensations in the jets of material moving away from the object's core. They found that the core remained stationary, while the approaching condensation was apparently moving at 125 percent of the speed of light. After correcting for relativistic effects, they conclude that the ejected material actually is moving at 92 percent of light speed. Their calculations indicate that the pair of "blobs" they tracked were ejected from the core on March 19, during a period when the object was emitting more X-rays than usual. GRS 1915+105 somewhat resembles a famous astronomical object that was intensively studied in the late 1970s and early 1980s, called SS433. The VLA was used for many observations of SS433, which, astronomers believe, is also a double-star system with a dense, massive star as its centerpiece. SS433 has jets similar to those of GRS 1915+105, but the fastest motions detected in SS433's jets are only 26 percent the speed of light. Comparing it to quasars, which are believed to be phenomena associated with supermassive black holes at the centers of galaxies -- objects much larger and more massive than stars -- astronomers have called SS433 a "stellar microquasar." With kinetic energies 40 times those of SS433, GRS 1915+105 "appears to be a scaled up version" of the other object, Mirabel and Rodriguez say.
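    The apparent superluminal speed follows from simple light-travel-time geometry: a blob moving at true speed beta (in units of c) at angle theta to the line of sight shows an apparent transverse speed beta_app = beta*sin(theta)/(1 - beta*cos(theta)). A quick check (the ~70 degree viewing angle is an assumption chosen to match the reported numbers):

```python
import math

# Apparent superluminal motion from light-travel-time geometry. With the
# reported true speed of 0.92c and an assumed viewing angle near 70 degrees,
# the apparent speed exceeds c, consistent with the reported ~125% of c.
def beta_apparent(beta, theta_deg):
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1 - beta * math.cos(th))

beta_app = beta_apparent(0.92, 70.0)   # apparently faster than light
```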

  8. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.
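    Motion-centroid tracking, the first technique listed, can be sketched as a frame-difference centroid (a minimal illustration; the actual system operates on live stereo imagery with gaze control):

```python
import numpy as np

# Motion-centroid tracking sketch: the target's image position is estimated
# each frame as the centroid of pixels that changed between consecutive
# frames (illustrative only; threshold is an assumption).
def motion_centroid(prev, curr, thresh=20):
    moved = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    ys, xs = np.nonzero(moved)
    if len(xs) == 0:
        return None                     # no motion detected this frame
    return xs.mean(), ys.mean()         # (x, y) centroid of moving pixels

# Synthetic pair of frames: a bright 2x2 block shifts one pixel to the right.
prev = np.zeros((10, 10), np.uint8); prev[4:6, 2:4] = 255
curr = np.zeros((10, 10), np.uint8); curr[4:6, 3:5] = 255
cx, cy = motion_centroid(prev, curr)
```

    In a stereo setup, running the same centroid estimate in both cameras and triangulating the pair of image positions yields the depth track mentioned above.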

  9. Electromagnetic guided couch and multileaf collimator tracking on a TrueBeam accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Rune; Ravkilde, Thomas; Worm, Esben Schjødt

    2016-05-15

    Purpose: Couch and MLC tracking are two promising methods for real-time motion compensation during radiation therapy. So far, couch and MLC tracking experiments have mainly been performed by different research groups, and no direct comparison of couch and MLC tracking of volumetric modulated arc therapy (VMAT) plans has been published. The Varian TrueBeam 2.0 accelerator includes a prototype tracking system with selectable couch or MLC compensation. This study provides a direct comparison of the two tracking types with an otherwise identical setup. Methods: Several experiments were performed to characterize the geometric and dosimetric performance of electromagnetic guided couch and MLC tracking on a TrueBeam accelerator equipped with a Millennium MLC. The tracking system latency was determined without motion prediction as the time lag between sinusoidal target motion and the compensating motion of the couch or MLC as recorded by continuous MV portal imaging. The geometric and dosimetric tracking accuracies were measured in tracking experiments with motion phantoms that reproduced four prostate and four lung tumor trajectories. The geometric tracking error in beam’s eye view was determined as the distance between an embedded gold marker and a circular MLC aperture in continuous MV images. The dosimetric tracking error was quantified as the measured 2%/2 mm gamma failure rate of a low and a high modulation VMAT plan delivered with the eight motion trajectories using a static dose distribution as reference. Results: The MLC tracking latency was approximately 146 ms for all sinusoidal period lengths while the couch tracking latency increased from 187 to 246 ms with decreasing period length due to limitations in the couch acceleration. 
The mean root-mean-square geometric error was 0.80 mm (couch tracking), 0.52 mm (MLC tracking), and 2.75 mm (no tracking) parallel to the MLC leaves and 0.66 mm (couch), 1.14 mm (MLC), and 2.41 mm (no tracking) perpendicular to the leaves. The mean motion-induced gamma failure rate was 0.1% (couch tracking), 8.1% (MLC tracking), and 30.4% (no tracking) for prostate motion and 2.9% (couch), 2.4% (MLC), and 41.2% (no tracking) for lung tumor motion. The residual tracking errors were mainly caused by inadequate adaptation to fast lung tumor motion for couch tracking and to prostate motion perpendicular to the MLC leaves for MLC tracking. Conclusions: Couch and MLC tracking markedly improved the geometric and dosimetric accuracies of VMAT delivery. However, the two tracking types have different strengths and weaknesses. While couch tracking can correct perfectly for slowly moving targets such as the prostate, MLC tracking may have considerably larger dose errors for persistent target shifts perpendicular to the MLC leaves. Advantages of MLC tracking include faster dynamics with better adaptation to fast moving targets, the avoidance of moving the patient, and the potential to track target rotations and deformations.

  10. Using Educational Technology to Help Students Get Back on Track

    ERIC Educational Resources Information Center

    Bertrand, Clare

    2013-01-01

    Increasingly, school districts, schools, and their partners are incorporating technology into strategies that help young people who have fallen off track for on-time graduation re-engage, get back on track, and move into effective educational pathways. This is especially true in light of the continuing pressure to raise high school graduation rates and…

  11. Video stimuli reduce object-directed imitation accuracy: a novel two-person motion-tracking approach.

    PubMed

    Reader, Arran T; Holmes, Nicholas P

    2015-01-01

    Imitation is an important form of social behavior, and research has aimed to discover and explain the neural and kinematic aspects of imitation. However, much of this research has featured single participants imitating in response to pre-recorded video stimuli. This is in spite of findings that show reduced neural activation to video vs. real life movement stimuli, particularly in the motor cortex. We investigated the degree to which video stimuli may affect the imitation process using a novel motion tracking paradigm with high spatial and temporal resolution. We recorded 14 positions on the hands, arms, and heads of two individuals in an imitation experiment. One individual freely moved within given parameters (moving balls across a series of pegs) and a second participant imitated. This task was performed with either simple (one ball) or complex (three balls) movement difficulty, and either face-to-face or via a live video projection. After an exploratory analysis, three dependent variables were chosen for examination: 3D grip position, joint angles in the arm, and grip aperture. A cross-correlation and multivariate analysis revealed that object-directed imitation task accuracy (as represented by grip position) was reduced in video compared to face-to-face feedback, and in complex compared to simple difficulty. This was most prevalent in the left-right and forward-back motions, relevant to the imitator sitting face-to-face with the actor or with a live projected video of the same actor. The results suggest that for tasks which require object-directed imitation, video stimuli may not be an ecologically valid way to present task materials. However, no similar effects were found in the joint angle and grip aperture variables, suggesting that there are limits to the influence of video stimuli on imitation. The implications of these results are discussed with regards to previous findings, and with suggestions for future experimentation.

  12. A review of vision-based motion analysis in sport.

    PubMed

    Barris, Sian; Button, Chris

    2008-01-01

    Efforts at player motion tracking have traditionally involved a range of data collection techniques from live observation to post-event video analysis where player movement patterns are manually recorded and categorized to determine performance effectiveness. Due to the considerable time required to manually collect and analyse such data, research has tended to focus only on small numbers of players within predefined playing areas. Whilst notational analysis is a convenient, practical and typically inexpensive technique, the validity and reliability of the process can vary depending on a number of factors, including how many observers are used, their experience, and the quality of their viewing perspective. Undoubtedly the application of automated tracking technology to team sports has been hampered because of inadequate video and computational facilities available at sports venues. However, the complex nature of movement inherent to many physical activities also represents a significant hurdle to overcome. Athletes tend to exhibit quick and agile movements, with many unpredictable changes in direction and also frequent collisions with other players. Each of these characteristics of player behaviour violate the assumptions of smooth movement on which computer tracking algorithms are typically based. Systems such as TRAKUS, SoccerMan, TRAKPERFORMANCE, Pfinder and Prozone all provide extrinsic feedback information to coaches and athletes. However, commercial tracking systems still require a fair amount of operator intervention to process the data after capture and are often limited by the restricted capture environments that can be used and the necessity for individuals to wear tracking devices. Whilst some online tracking systems alleviate the requirements of manual tracking, to our knowledge a completely automated system suitable for sports performance is not yet commercially available. 
Automatic motion tracking has been used successfully in other domains outside of elite sport performance, notably for surveillance in the military and security industry where automatic recognition of moving objects is achievable because identification of the objects is not necessary. The current challenge is to obtain appropriate video sequences that can robustly identify and label people over time, in a cluttered environment containing multiple interacting people. This problem is often compounded by the quality of video capture, the relative size and occlusion frequency of people, and also changes in illumination. Potential applications of an automated motion detection system are offered, such as: planning tactics and strategies; measuring team organisation; providing meaningful kinematic feedback; and objective measures of intervention effectiveness in team sports, which could benefit coaches, players, and sports scientists.

  13. Tracking sentence planning and production.

    PubMed

    Kemper, Susan; Bontempo, Daniel; McKedy, Whitney; Schmalzried, RaLynn; Tagliaferri, Bruno; Kieweg, Doug

    2011-03-01

    To assess age differences in the costs of language planning and production. A controlled sentence production task was combined with digital pursuit rotor tracking. Participants were asked to track a moving target while formulating a sentence using specified nouns and verbs and to continue to track the moving target while producing their response. The length of the critical noun phrase (NP) as well as the type of verb provided were manipulated. The analysis indicated that sentence planning was more costly than sentence production, and sentence planning costs increased when participants had to incorporate a long NP into their sentence. The long NPs also tended to be shifted to the end of the sentence, whereas short NPs tended to be positioned after the verb. Planning or producing responses with long NPs was especially difficult for older adults, although verb type and NP shift had similar costs for young and older adults. Pursuit rotor tracking during controlled sentence production reveals the effects of aging on sentence planning and production.

  14. Adaptive object tracking via both positive and negative models matching

    NASA Astrophysics Data System (ADS)

    Li, Shaomei; Gao, Chao; Wang, Yawen

    2015-03-01

    To address the tracking drift that often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, the object is tracked frame by frame via particle filtering. Thirdly, tracking reliability is validated by matching against both positive and negative models. Finally, the object is relocated via SIFT feature matching and voting when drift occurs, and the object appearance model is updated at the same time. The algorithm can not only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences.

  15. Interaction of railway vehicles with track in cross-winds

    NASA Astrophysics Data System (ADS)

    Xu, Y. L.; Ding, Q. S.

    2006-04-01

    This paper presents a framework for simulating railway vehicle and track interaction in cross-wind. Each 4-axle vehicle in a train is modeled by a 27-degree-of-freedom dynamic system. Two parallel rails of a track are modeled as two continuous beams supported by a discrete-elastic foundation of three layers with sleepers and ballasts included. The vehicle subsystem and the track subsystem are coupled through contacts between wheels and rails based on contact theory. Vertical and lateral rail irregularities simulated using an inverse Fourier transform are also taken into consideration. The simulation of steady and unsteady aerodynamic forces on a moving railway vehicle in cross-wind is then discussed in the time domain. The Hilber-Hughes-Taylor α-method is employed to solve the nonlinear equations of motion of coupled vehicle and track systems in cross-wind. The proposed framework is finally applied to a railway vehicle running on a straight track substructure in cross-wind. The safety and comfort performance of the moving vehicle in cross-wind are discussed. The results demonstrate that the proposed framework and the associated computer program can be used to investigate interaction problems of railway vehicles with track in cross-wind.

  16. Tracking cells in Life Cell Imaging videos using topological alignments.

    PubMed

    Mosig, Axel; Jäger, Stefan; Wang, Chaofeng; Nath, Sumit; Ersoy, Ilker; Palaniappan, Kannappan; Chen, Su-Shing

    2009-07-16

    With the increasing availability of live cell imaging technology, tracking cells and other moving objects in live cell videos has become a major challenge for bioimage informatics. An inherent problem for most cell tracking algorithms is over- or under-segmentation of cells - many algorithms tend to recognize one cell as several cells or vice versa. We propose to approach this problem through so-called topological alignments, which we apply to address the problem of linking segmentations of two consecutive frames in the video sequence. Starting from the output of a conventional segmentation procedure, we align pairs of consecutive frames through assigning sets of segments in one frame to sets of segments in the next frame. We achieve this through finding maximum weighted solutions to a generalized "bipartite matching" between two hierarchies of segments, where we derive weights from relative overlap scores of convex hulls of sets of segments. For solving the matching task, we rely on an integer linear program. Practical experiments demonstrate that the matching task can be solved efficiently in practice, and that our method is both effective and useful for tracking cells in data sets derived from a so-called Large Scale Digital Cell Analysis System (LSDCAS). The source code of the implementation is available for download from http://www.picb.ac.cn/patterns/Software/topaln.
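
    In its simplest form, the frame-linking step described above reduces to a maximum-weight bipartite matching between segments of two consecutive frames. The sketch below illustrates only that core idea: it matches individual segments by brute force, rather than matching sets of segments over a hierarchy via the integer linear program the paper actually solves, and the overlap scores are invented for the example.

```python
from itertools import permutations

def link_frames(overlap):
    """Maximum-weight one-to-one assignment between segments of frame t
    (rows) and frame t+1 (columns), found by brute force.
    `overlap[i][j]` is the relative overlap score of segment i with
    segment j; assumes no more rows than columns."""
    n_rows, n_cols = len(overlap), len(overlap[0])
    best_score, best_match = -1.0, None
    # Try every way of assigning each row segment to a distinct column segment.
    for cols in permutations(range(n_cols), n_rows):
        score = sum(overlap[i][j] for i, j in enumerate(cols))
        if score > best_score:
            best_score, best_match = score, list(enumerate(cols))
    return best_match, best_score

# Three cells in frame t, three candidate segments in frame t+1.
overlap = [
    [0.9, 0.1, 0.0],
    [0.2, 0.8, 0.1],
    [0.0, 0.3, 0.7],
]
match, score = link_frames(overlap)   # diagonal assignment wins
```

    Brute force is exponential in the number of segments; the ILP formulation in the paper is what makes the general set-to-set case (handling over- and under-segmentation) tractable.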

  17. Studies of pointing, acquisition, and tracking of agile optical wireless transceivers for free-space optical communication networks

    NASA Astrophysics Data System (ADS)

    Ho, Tzung-Hsien; Trisno, Sugianto; Smolyaninov, Igor I.; Milner, Stuart D.; Davis, Christopher C.

    2004-02-01

    Free space, dynamic, optical wireless communications will require topology control for optimization of network performance. Such networks may need to be configured for bi- or multiple-connectedness, reliability and quality-of-service. Topology control involves the introduction of new links and/or nodes into the network to achieve such performance objectives through autonomous reconfiguration as well as precise pointing, acquisition, tracking, and steering of laser beams. Reconfiguration may be required because of link degradation resulting from obscuration or node loss. As a result, the optical transceivers may need to be re-directed to new or existing nodes within the network and tracked on moving nodes. The redirection of transceivers may require operation over a whole sphere, so that small-angle beam steering techniques cannot be applied. In this context, we are studying the performance of optical wireless links using lightweight, bi-static transceivers mounted on high-performance stepping motor driven stages. These motors provide an angular resolution of 0.00072 degree at up to 80,000 steps per second. This paper focuses on the performance characteristics of these agile transceivers for pointing, acquisition, and tracking (PAT), including the influence of acceleration/deceleration time, motor angular speed, and angular re-adjustment, on latency and packet loss in small free space optical (FSO) wireless test networks.
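
    From the figures quoted above (0.00072 degree per step, up to 80,000 steps per second), a lower bound on repointing time over a large slew angle follows directly. The back-of-the-envelope sketch below deliberately ignores the acceleration/deceleration time that the paper identifies as a latency factor, so it is a best case only; the function name is illustrative.

```python
def min_slew_time(angle_deg, step_deg=0.00072, max_steps_per_s=80000):
    """Lower bound on repointing time for a stepper-driven transceiver,
    ignoring acceleration and deceleration phases."""
    steps = angle_deg / step_deg          # steps needed to cover the angle
    return steps / max_steps_per_s        # seconds at the maximum step rate

# Repointing a transceiver halfway around its sphere of coverage:
t_180 = min_slew_time(180.0)   # 180 / 0.00072 = 250,000 steps -> 3.125 s
```

    Even this best-case figure of a few seconds for large slews shows why pointing latency, not link bandwidth, dominates packet loss during reconfiguration in such networks.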

  18. Increasing the Reliability of Circulation Model Validation: Quantifying Drifter Slip to See how Currents are Actually Moving

    NASA Astrophysics Data System (ADS)

    Anderson, T.

    2016-02-01

    Ocean circulation forecasts can help answer questions regarding larval dispersal, passive movement of injured sea animals, oil spill mitigation, and search and rescue efforts. Circulation forecasts are often validated with GPS-tracked drifter paths, but how accurately do these drifters actually move with ocean currents? Drifters are not only moved by water, but are also forced by wind and waves acting on the exposed buoy and transmitter; this imperfect movement is referred to as drifter slip. The quantification and further understanding of drifter slip will allow scientists to differentiate between drifter imperfections and actual computer model error when comparing trajectory forecasts with actual drifter tracks. This will avoid falsely accrediting all discrepancies between a trajectory forecast and an actual drifter track to computer model error. During multiple deployments of drifters in Nantucket Sound and using observed wind and wave data, we attempt to quantify the slip of drifters developed by the Northeast Fisheries Science Center's (NEFSC) Student Drifters Program. While similar studies have been conducted previously, very few have directly attached current meters to drifters to quantify drifter slip. Furthermore, none have quantified slip of NEFSC drifters relative to the oceanographic-standard "CODE" drifter. The NEFSC drifter archive has over 1000 drifter tracks primarily off the New England coast. With a better understanding of NEFSC drifter slip, modelers can reliably use these tracks for model validation.

  19. Increasing the Reliability of Circulation Model Validation: Quantifying Drifter Slip to See how Currents are Actually Moving

    NASA Astrophysics Data System (ADS)

    Anderson, T.

    2015-12-01

    Ocean circulation forecasts can help answer questions regarding larval dispersal, passive movement of injured sea animals, oil spill mitigation, and search and rescue efforts. Circulation forecasts are often validated with GPS-tracked drifter paths, but how accurately do these drifters actually move with ocean currents? Drifters are not only moved by water, but are also forced by wind and waves acting on the exposed buoy and transmitter; this imperfect movement is referred to as drifter slip. The quantification and further understanding of drifter slip will allow scientists to differentiate between drifter imperfections and actual computer model error when comparing trajectory forecasts with actual drifter tracks. This will avoid falsely accrediting all discrepancies between a trajectory forecast and an actual drifter track to computer model error. During multiple deployments of drifters in Nantucket Sound and using observed wind and wave data, we attempt to quantify the slip of drifters developed by the Northeast Fisheries Science Center's (NEFSC) Student Drifters Program. While similar studies have been conducted previously, very few have directly attached current meters to drifters to quantify drifter slip. Furthermore, none have quantified slip of NEFSC drifters relative to the oceanographic-standard "CODE" drifter. The NEFSC drifter archive has over 1000 drifter tracks primarily off the New England coast. With a better understanding of NEFSC drifter slip, modelers can reliably use these tracks for model validation.

  20. Integrating motion, illumination, and structure in video sequences with applications in illumination-invariant tracking.

    PubMed

    Xu, Yilei; Roy-Chowdhury, Amit K

    2007-05-01

    In this paper, we present a theory for combining the effects of motion, illumination, 3D structure, albedo, and camera parameters in a sequence of images obtained by a perspective camera. We show that the set of all Lambertian reflectance functions of a moving object, at any position, illuminated by arbitrarily distant light sources, lies "close" to a bilinear subspace consisting of nine illumination variables and six motion variables. This result implies that, given an arbitrary video sequence, it is possible to recover the 3D structure, motion, and illumination conditions simultaneously using the bilinear subspace formulation. The derivation builds upon existing work on linear subspace representations of reflectance by generalizing it to moving objects. Lighting can change slowly or suddenly, locally or globally, and can originate from a combination of point and extended sources. We experimentally compare the results of our theory with ground truth data and also provide results on real data by using video sequences of a 3D face and the entire human body with various combinations of motion and illumination directions. We also show results of our theory in estimating 3D motion and illumination model parameters from a video sequence.

  1. Can Building Design Impact Physical Activity? A Natural Experiment.

    PubMed

    Eyler, Amy A; Hipp, Aaron; Valko, Cheryl Ann; Ramadas, Ramya; Zwald, Marissa

    2018-05-01

    Workplace design can impact workday physical activity (PA) and sedentary time. The purpose of this study was to evaluate PA behavior among university employees before and after moving into a new building. A pre-post, experimental versus control group study design was used. PA data were collected using surveys and accelerometers from university faculty and staff. Accelerometry was used to compare those moving into the new building (MOVERS), those remaining in existing buildings (NONMOVERS), and a control group (CONTROLS). Survey results showed increased self-reported PA for MOVERS and NONMOVERS. All 3 groups significantly increased in objectively collected daily energy expenditure and steps per day. The greatest steps-per-day increase was in CONTROLS (29.8%) compared with MOVERS (27.5%) and NONMOVERS (15.9%), but there were no significant differences between groups at pretest or posttest. Self-reported and objectively measured PA increased from pretest to posttest in all groups; thus, the increase cannot be attributed to the new building. Confounding factors may include contamination bias due to the proximity of the control site to the experimental site and the introduction of a university PA tracking contest during posttest data collection. Methodology and results can inform future studies on best design practices for increasing PA.

  2. Closed loop tracked Doppler optical coherence tomography based heart monitor for the Drosophila melanogaster larvae.

    PubMed

    Zurauskas, Mantas; Bradu, Adrian; Ferguson, Daniel R; Hammer, Daniel X; Podoleanu, Adrian

    2016-03-01

    This paper presents a novel instrument for biosciences, useful for studies of moving embryos. A dual sequential imaging/measurement channel is assembled via a closed-loop tracking architecture. The dual channel system can operate in two regimes: (i) single-point Doppler signal monitoring or (ii) fast 3-D swept source OCT imaging. The system is demonstrated for characterizing cardiac dynamics in Drosophila melanogaster larva. Closed loop tracking enables long term in vivo monitoring of the larval heart without anesthetic or physical restraint. Such an instrument can be used to measure subtle variations in cardiac behavior otherwise obscured by the larva's movements. A fruit fly larva (top) was continuously tracked for remote monitoring. A heartbeat trace of a freely moving larva (bottom) was obtained by a low-coherence-interferometry-based Doppler sensing technique. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity

    PubMed Central

    Lomp, Oliver; Faubel, Christian; Schöner, Gregor

    2017-01-01

    Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object’s pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views. PMID:28503145

  4. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    PubMed

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. (c) 2015 APA, all rights reserved.

  5. Bridging Theory and Practice in an Applied Retail Track

    ERIC Educational Resources Information Center

    Lange, Fredrik; Rosengren, Sara; Colliander, Jonas; Hernant, Mikael; Liljedal, Karina T.

    2018-01-01

    In this article, we present an educational approach that bridges theory and practice: an applied retail track. The track has been co-created by faculty and 10 partnering retail companies and runs in parallel with traditional courses during a 3-year bachelor's degree program in retail management. The underlying pedagogical concept is to move retail…

  6. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR TRACKING SYSTEM (UA-D-28.0)

    EPA Science Inventory

    The Arizona Border Study used a system that tracks what occurs to a sample and provides the status of that sample at any given time. In essence, the tracking system provides an electronic chain of custody record for each sample as it moves through the project. This is achieved ...

  7. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR TRACKING SYSTEM (UA-D-28.0)

    EPA Science Inventory

    The NHEXAS Arizona project designed a system that tracks what occurs to a sample and provides the status of that sample at any given time. In essence, the tracking system provides an electronic chain of custody record for each sample as it moves through the project. This is ach...

  8. Experiments on shape perception in stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Leroy, Laure; Fuchs, Philippe; Paljic, Alexis; Moreau, Guillaume

    2009-02-01

    Stereoscopic displays are increasingly used for computer-aided design. The aim is to make virtual prototypes to avoid building real ones, so that time, money and raw materials are saved. But do we really know whether virtual displays render the objects in a realistic way to potential users? In this study, we have performed several experiments in which we compare two virtual shapes to their equivalent in the real world, each addressing a specific issue. First, we performed perception tests to evaluate the importance of head tracking relative to stereoscopic vision; second, we studied the effects of interpupillary distance; third, we studied the effects of the position of the main object relative to the screen. Two different tests are used, the first using a well-known shape (a sphere) and the second an irregular shape of almost the same colour and dimensions. These two tests allow us to determine whether symmetry is important in shape perception. We show that head tracking has a greater effect on shape perception than stereoscopic vision, especially on depth perception, because the subject is able to move around the scene. The study also shows that an object between the subject and the screen is perceived better than an object on the screen, even if the latter is better with respect to eye strain.

  9. Spatial distance effects on incremental semantic interpretation of abstract sentences: evidence from eye tracking.

    PubMed

    Guerra, Ernesto; Knoeferle, Pia

    2014-12-01

    A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiment 2 and 3) on participants' reading times for sentences that convey similarity or difference between two abstract nouns (e.g., 'Peace and war are certainly different...'). Before reading the sentence, participants inspected a visual context with two playing cards that moved either far apart or close together. In Experiment 1, the cards turned and showed the first two nouns of the sentence (e.g., 'peace', 'war'). In Experiments 2 and 3, they turned but remained blank. Participants' reading times at the adjective (Experiment 1: first-pass reading time; Experiment 2: total times) and at the second noun phrase (Experiment 3: first-pass times) were faster for sentences that expressed similarity when the preceding words/objects were close together (vs. far apart) and for sentences that expressed dissimilarity when the preceding words/objects were far apart (vs. close together). Thus, spatial distance between words or entirely unrelated objects can rapidly and incrementally modulate the semantic interpretation of abstract sentences. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Multi-Object Tracking with Correlation Filter for Autonomous Vehicle.

    PubMed

    Zhao, Dawei; Fu, Hao; Xiao, Liang; Wu, Tao; Dai, Bin

    2018-06-22

    Multi-object tracking is a crucial problem for autonomous vehicles. Most state-of-the-art approaches adopt the tracking-by-detection strategy, which is a two-step procedure consisting of the detection module and the tracking module. In this paper, we improve both steps. We improve the detection module by incorporating temporal information, which is beneficial for detecting small objects. For the tracking module, we propose a novel compressed deep Convolutional Neural Network (CNN) feature based Correlation Filter tracker. By carefully integrating these two modules, the proposed multi-object tracking approach has the ability of re-identification (ReID) once the tracked object gets lost. Extensive experiments were performed on the KITTI and MOT2015 tracking benchmarks. Results indicate that our approach outperforms most state-of-the-art tracking approaches.
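
    The core operation of any correlation-filter tracker is a dense correlation between a template and the search window, computed in the Fourier domain. The sketch below shows only that core step, using raw pixels in place of the compressed CNN features the paper proposes and with no filter learning or ReID; the scene and names are synthetic.

```python
import numpy as np

def correlation_peak(frame, template):
    """Locate `template` in `frame` by circular cross-correlation,
    computed via 2-D FFTs (the workhorse of correlation-filter trackers).
    Returns the (row, col) where the template best aligns."""
    F = np.fft.fft2(frame)
    H = np.fft.fft2(template, s=frame.shape)     # zero-pad template to frame size
    response = np.real(np.fft.ifft2(F * np.conj(H)))
    return np.unravel_index(int(np.argmax(response)), response.shape)

# Place a bright 8x8 patch in a noisy synthetic frame and recover its position.
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 0.1, (64, 64))
patch = np.ones((8, 8))
frame[20:28, 40:48] += patch                     # "object" at row 20, col 40
peak = correlation_peak(frame, patch)
```

    Computing the response densely in the Fourier domain is what makes correlation-filter trackers fast enough for real-time use; the learned-filter and feature-compression parts of the paper build on top of this step.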

  11. Tracking and recognition of multiple human targets moving in a wireless pyroelectric infrared sensor network.

    PubMed

    Xiong, Ji; Li, Fangmin; Zhao, Ning; Jiang, Na

    2014-04-22

    With its low cost and easy deployment, the distributed wireless pyroelectric infrared sensor network has attracted extensive interest as an alternative to infrared video sensors in thermal biometric applications for tracking and identifying human targets. In these applications, effectively processing the signals collected from sensors and extracting the features of different human targets has become crucial. This paper proposes the application of empirical mode decomposition and the Hilbert-Huang transform to extract features of moving human targets in both the time domain and the frequency domain. Moreover, the support vector machine is selected as the classifier. The experimental results demonstrate that by using this method the identification rates of multiple moving human targets are around 90%.
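
    The pipeline above, features extracted from a sensor signal followed by a classifier, can be illustrated with a deliberately simplified stand-in. The toy sketch below substitutes mean energy and zero-crossing rate for the paper's EMD/Hilbert-Huang features, and a nearest-centroid rule for the support vector machine; all signals and names are synthetic.

```python
import math

def features(signal):
    """Toy stand-ins for time/frequency features:
    mean energy and zero-crossing rate of one sensor signal."""
    energy = sum(x * x for x in signal) / len(signal)
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return (energy, crossings / (len(signal) - 1))

def nearest_centroid(train, query):
    """Classify `query` by the closest per-class centroid in feature space
    (a stand-in for the SVM used in the paper)."""
    centroids = {}
    for label, sigs in train.items():
        feats = [features(s) for s in sigs]
        centroids[label] = tuple(sum(f[i] for f in feats) / len(feats)
                                 for i in range(2))
    return min(centroids, key=lambda l: math.dist(centroids[l], features(query)))

# Two synthetic "targets": a slow mover (low-frequency signal) and a fast one.
slow = [math.sin(0.1 * i) for i in range(200)]
fast = [math.sin(0.8 * i) for i in range(200)]
train = {"slow": [slow], "fast": [fast]}
label = nearest_centroid(train, [math.sin(0.8 * i + 0.3) for i in range(200)])
```

    The point of the real EMD/HHT features is that they separate targets whose signals overlap in such simple statistics; this sketch only shows the shape of the feature-then-classify pipeline.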

  12. A New Moving Object Detection Method Based on Frame-difference and Background Subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong

    2017-09-01

    Although many methods of moving object detection have been proposed, moving object extraction is still the core of video surveillance. However, in complex real-world scenes, false detections, missed detections, and deficiencies resulting from cavities inside the object body still occur. In order to solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame difference with Gaussian mixture background subtraction is proposed in this paper. To make the detection more complete and accurate, image repair and morphological processing techniques, which provide spatial compensation, are applied in the proposed method. Experimental results show that the method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared with four other moving object detection methods (GMM, ViBe, frame difference, and a method from the literature), the proposed method improves the efficiency and accuracy of detection.
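
    The fusion of the two cues can be sketched minimally: a pixel is flagged as foreground if it changed since the previous frame or deviates from the background model. This is only an illustration of the combination idea; the paper's method uses an improved frame difference, a Gaussian mixture background model, and image repair plus morphology on top, and the thresholds and names here are invented.

```python
import numpy as np

def detect_moving(prev_frame, frame, background, diff_thresh=25, bg_thresh=25):
    """OR-fuse frame differencing with background subtraction.
    Casting to int avoids uint8 wraparound in the differences."""
    frame_diff = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_thresh
    bg_diff = np.abs(frame.astype(int) - background.astype(int)) > bg_thresh
    return frame_diff | bg_diff          # boolean foreground mask

# A static 3x3 "scene" where one pixel brightens between frames.
background = np.full((3, 3), 10, dtype=np.uint8)
prev_frame = background.copy()
frame = background.copy()
frame[1, 1] = 200                        # moving object enters the centre pixel
mask = detect_moving(prev_frame, frame, background)
```

    OR-fusion favours completeness (fewer cavities inside the object) at the price of more false positives; an AND-fusion would make the opposite trade, which is why the paper adds spatial compensation rather than simply tightening the fusion rule.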

  13. Teaching braille line tracking using stimulus fading.

    PubMed

    Scheithauer, Mindy C; Tiger, Jeffrey H

    2014-01-01

    Line tracking is a prerequisite skill for braille literacy that involves moving one's finger horizontally across a line of braille text and identifying when a line ends so the reader may reset his or her finger on the subsequent line. Current procedures for teaching line tracking are incomplete, because they focus on tracking lines with only small gaps between characters. The current study extended previous line-tracking instruction using stimulus fading to teach tracking across larger gaps. After instruction, all participants showed improvement in line tracking, and 2 of 3 participants met mastery criteria for tracking across extended spaces. © Society for the Experimental Analysis of Behavior.

  14. Development of online use of theory of mind during adolescence: An eye-tracking study.

    PubMed

    Symeonidou, Irene; Dumontheil, Iroise; Chow, Wing-Yee; Breheny, Richard

    2016-09-01

    We investigated the development of theory of mind use through eye-tracking in children (9-13 years old, n=14), adolescents (14-17.9 years old, n=28), and adults (19-29 years old, n=23). Participants performed a computerized task in which a director instructed them to move objects placed on a set of shelves. Some of the objects were blocked off from the director's point of view; therefore, participants needed to take into consideration the director's ignorance of these objects when following the director's instructions. In a control condition, participants performed the same task in the absence of the director and were told that the instructions would refer only to items in slots without a back panel, controlling for general cognitive demands of the task. Participants also performed two inhibitory control tasks. We replicated previous findings, namely that in the director-present condition, but not in the control condition, children and adolescents made more errors than adults, suggesting that theory of mind use improves between adolescence and adulthood. Inhibitory control partly accounted for errors on the director task, indicating that it is a factor of developmental change in perspective taking. Eye-tracking data revealed early eye gaze differences between trials where the director's perspective was taken into account and those where it was not. Once differences in accuracy rates were considered, all age groups engaged in the same kind of online processing during perspective taking but differed in how often they engaged in perspective taking. When perspective is correctly taken, all age groups' gaze data point to an early influence of perspective information. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Human-like object tracking and gaze estimation with PKD android

    PubMed Central

Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2018-01-01

As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193
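The conversion of eye tracking data and head orientation into world-frame gaze information can be illustrated with a minimal sketch. The additive yaw/pitch composition below is a small-angle simplification (a full pipeline would compose rotation matrices), and all names are illustrative rather than the authors' code.

```python
import math

def gaze_direction(eye_yaw_deg, eye_pitch_deg, head_yaw_deg, head_pitch_deg):
    """Combine eye-in-head angles with head orientation into a world gaze
    unit vector (x forward, y left, z up). Small-angle simplification:
    yaw and pitch are composed additively."""
    yaw = math.radians(eye_yaw_deg + head_yaw_deg)
    pitch = math.radians(eye_pitch_deg + head_pitch_deg)
    return (math.cos(pitch) * math.cos(yaw),
            math.cos(pitch) * math.sin(yaw),
            math.sin(pitch))
```

With eyes centered and the head turned 90 degrees, the resulting gaze vector points along the lateral axis, as expected.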

  16. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  17. Detection and tracking of human targets in indoor and urban environments using through-the-wall radar sensors

    NASA Astrophysics Data System (ADS)

    Radzicki, Vincent R.; Boutte, David; Taylor, Paul; Lee, Hua

    2017-05-01

Radar based detection of human targets behind walls or in dense urban environments is an important technical challenge with many practical applications in security, defense, and disaster recovery. Radar reflections from a human can be orders of magnitude weaker than those from objects encountered in urban settings such as walls, cars, or possibly rubble after a disaster. Furthermore, these objects can act as secondary reflectors and produce multipath returns from a person. To mitigate these issues, processing of radar return data needs to be optimized for recognizing human motion features such as walking, running, or breathing. This paper presents a theoretical analysis of the modulation effects human motion has on the radar waveform and how high levels of multipath can distort these motion effects. From this analysis, an algorithm is designed and optimized for tracking human motion in heavily cluttered environments. The tracking results will be used as the fundamental detection/classification tool to discriminate human targets from others by identifying human motion traits such as predictable walking patterns and periodicity in breathing rates. The theoretical formulations will be tested against simulation and measured data collected using a low power, portable see-through-the-wall radar system that could be practically deployed in real-world scenarios. Lastly, the performance of the algorithm is evaluated in a series of experiments where both a single person and multiple people are moving in an indoor, cluttered environment.
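One building block behind detecting periodicity in breathing rates from a motion trace is locating the dominant period, for example via the peak of the normalized autocorrelation. The sketch below is a standard technique, not the paper's actual detector, which is more elaborate.

```python
def dominant_period(samples):
    """Estimate the dominant period (in samples) of a motion trace, e.g. a
    breathing waveform recovered from radar returns, as the lag that
    maximizes the normalized autocorrelation of the mean-removed signal."""
    n = len(samples)
    mean = sum(samples) / n
    s = [v - mean for v in samples]
    best_lag, best_val = 0, float("-inf")
    for lag in range(1, n // 2):
        # Normalize by the number of overlapping terms so long lags
        # are not penalized simply for having fewer products.
        val = sum(s[i] * s[i + lag] for i in range(n - lag)) / (n - lag)
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag
```

Dividing the estimated period by the sampling rate would then give a candidate breathing interval in seconds.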

  18. Development of a two photon microscope for tracking Drosophila larvae

    NASA Astrophysics Data System (ADS)

    Karagyozov, Doycho; Mihovilovic Skanata, Mirna; Gershow, Marc

    Current in vivo methods for measuring neural activity in Drosophila larva require immobilization of the animal. Although we can record neural signals while stimulating the sensory organs, we cannot read the behavioral output because we have prevented the animal from moving. Many research questions cannot be answered without observation of neural activity in behaving (freely-moving) animals. Our project aims to develop a tracking microscope that maintains the neurons of interest in the field of view and in focus during the rapid three dimensional motion of a free larva.

  19. A data set for evaluating the performance of multi-class multi-object video tracking

    NASA Astrophysics Data System (ADS)

    Chakraborty, Avishek; Stamatescu, Victor; Wong, Sebastien C.; Wigley, Grant; Kearney, David

    2017-05-01

One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publicly available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow the evaluation of both tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publicly.
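For concreteness, the headline score in the standard Multi Object Tracking (MOT) framework mentioned above, MOTA, combines missed detections, false positives, and identity switches relative to the number of ground-truth objects. This is the textbook formula, not code from the data set's own tooling.

```python
def mota(num_gt, false_negatives, false_positives, id_switches):
    """Multiple Object Tracking Accuracy: 1 - (FN + FP + IDSW) / GT.
    Can be negative when the error count exceeds the number of
    ground-truth objects; 1.0 means perfect tracking."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt
```

With 100 ground-truth objects, 10 misses, 5 false positives, and 2 identity switches, the score is 0.83.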

  20. Self-Motion Impairs Multiple-Object Tracking

    ERIC Educational Resources Information Center

    Thomas, Laura E.; Seiffert, Adriane E.

    2010-01-01

    Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement…

  1. Actin-based motility propelled by molecular motors

    NASA Astrophysics Data System (ADS)

    Upadyayula, Sai Pramod; Rangarajan, Murali

    2012-09-01

Actin-based motility of Listeria monocytogenes propelled by filament end-tracking molecular motors has been simulated. Such systems may act as potential nanoscale actuators and shuttles useful in sorting and sensing biomolecules. Filaments are modeled as three-dimensional elastic springs distributed on one end of the capsule and persistently attached to the motile bacterial surface through an end-tracking motor complex. Filament distribution is random, and monomer concentration decreases linearly as a function of position on the bacterial surface. Filament growth rate increases with monomer concentration but decreases with the extent of compression. The growing filaments exert push-pull forces on the bacterial surface. In addition to forces, torques arise due to two factors, the distribution of motors on the bacterial surface and the coupling of torsion upon growth due to the right-handed helicity of F-actin, causing the motile object to undergo simultaneous translation and rotation. The trajectory of the bacterium is simulated by performing a force and torque balance on the bacterium. All simulations use a fixed value of torsion. Simulations show strong alignment of the filaments and the long axis of the bacterium along the direction of motion. In the absence of torsion, the bacterial surface essentially moves along the direction of the long axis. When a small amount of torsion is applied to the bacterial surface, the bacterium is seen to move in right-handed helical trajectories, consistent with experimental observations.

  2. Using extant taxa to inform studies of fossil footprints

    NASA Astrophysics Data System (ADS)

    Falkingham, Peter; Gatesy, Stephen

    2016-04-01

Attempting to use the fossilized footprints of extinct animals to study their palaeobiology and palaeoecology is notoriously difficult. The inconvenient extinction of the trackmaker makes direct correlation between footprints and foot far from straightforward. However, footprints are the only direct evidence of vertebrate motion recorded in the fossil record, and are potentially a source of data on palaeobiology that cannot be obtained from osteological remains alone. Our interests lie in recovering information about the movements of dinosaurs from their tracks. In particular, the Hitchcock collection of early Jurassic tracks held at the Beneski Museum of Natural History, Amherst, provides a rare look into the 3D form of tracks at and below the surface the animal walked on. Breaking naturally along laminations into 'track books', the specimens present sediment deformation at multiple levels, and in doing so record more of the foot's motion than a single surface might. In order to utilize this rich information source to study the now extinct trackmakers, the process of track formation must be understood at a fundamental level; the interaction of the moving foot and compliant substrate. We used bi-planar X-ray techniques (X-ray Reconstruction of Moving Morphology) to record the limb and foot motions of a Guineafowl traversing both granular and cohesive substrates. These data were supplemented with photogrammetric records of the resultant track surfaces, as well as the motion of metal beads within the sediment, to provide a full experimental dataset of foot and footprint formation. The physical experimental data were used to generate computer simulations of the process using high performance computing and the Discrete Element Method. The resultant simulations showed excellent congruence with reality, and enabled visualization within the sediment volume, and throughout the track-forming process.
This physical and virtual experimental set-up has provided major insight into how to interpret the track-books within the Amherst Collection, and as such begin to understand how these early Jurassic dinosaurs moved. More broadly, this complete view of track formation afforded by experimental techniques will aid in interpretation of fossil vertebrate tracks throughout the fossil record.

  3. Visual object recognition and tracking

    NASA Technical Reports Server (NTRS)

    Chang, Chu-Yin (Inventor); English, James D. (Inventor); Tardella, Neil M. (Inventor)

    2010-01-01

    This invention describes a method for identifying and tracking an object from two-dimensional data pictorially representing said object by an object-tracking system through processing said two-dimensional data using at least one tracker-identifier belonging to the object-tracking system for providing an output signal containing: a) a type of the object, and/or b) a position or an orientation of the object in three-dimensions, and/or c) an articulation or a shape change of said object in said three dimensions.

  4. Structure preserving clustering-object tracking via subgroup motion pattern segmentation

    NASA Astrophysics Data System (ADS)

    Fan, Zheyi; Zhu, Yixuan; Jiang, Jiao; Weng, Shuqin; Liu, Zhiwen

    2018-01-01

    Tracking clustering objects with similar appearances simultaneously in collective scenes is a challenging task in the field of collective motion analysis. Recent work on clustering-object tracking often suffers from poor tracking accuracy and terrible real-time performance due to the neglect or the misjudgment of the motion differences among objects. To address this problem, we propose a subgroup motion pattern segmentation framework based on a multilayer clustering structure and establish spatial constraints only among objects in the same subgroup, which entails having consistent motion direction and close spatial position. In addition, the subgroup segmentation results are updated dynamically because crowd motion patterns are changeable and affected by objects' destinations and scene structures. The spatial structure information combined with the appearance similarity information is used in the structure preserving object tracking framework to track objects. Extensive experiments conducted on several datasets containing multiple real-world crowd scenes validate the accuracy and the robustness of the presented algorithm for tracking objects in collective scenes.

  5. Real-time people counting system using a single video camera

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow for more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end non-calibrated video camera. The two main challenges addressed in this paper are: robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely, e.g. in shopping centers. Several persons may be considered to be a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes and changes in static objects, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed, in the HSV color space, to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab, and gives encouraging results even at high frame rates. Experimental results obtained based on the PETS2006 datasets are presented at the end of the paper.
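The adaptive background model described (updated over time, with foreground pixels excluded from the update) can be sketched per pixel as a running average. The alpha and threshold values below are illustrative placeholders, not the paper's tuned parameters, and the pixel-list representation stands in for a real image array.

```python
def update_background(background, frame, alpha=0.05, threshold=30):
    """One update step of a running-average background model over a
    flat list of pixel intensities. Pixels flagged as foreground
    (large frame/background difference) are left out of the update
    so moving objects are not absorbed into the background."""
    mask = [abs(f - b) > threshold for f, b in zip(frame, background)]
    new_bg = [b if m else (1 - alpha) * b + alpha * f
              for f, b, m in zip(frame, background, mask)]
    return new_bg, mask
```

Pixels that change slowly (illumination drift, newly static objects) are gradually folded into the background, while abrupt changes are reported in the foreground mask.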

  6. Multiple capture locations for 3D ultrasound-guided robotic retrieval of moving bodies from a beating heart

    NASA Astrophysics Data System (ADS)

    Thienphrapa, Paul; Ramachandran, Bharat; Elhawary, Haytham; Taylor, Russell H.; Popovic, Aleksandra

    2012-02-01

    Free moving bodies in the heart pose a serious health risk as they may be released in the arteries causing blood flow disruption. These bodies may be the result of various medical conditions and trauma. The conventional approach to removing these objects involves open surgery with sternotomy, the use of cardiopulmonary bypass, and a wide resection of the heart muscle. We advocate a minimally invasive surgical approach using a flexible robotic end effector guided by 3D transesophageal echocardiography. In a phantom study, we track a moving body in a beating heart using a modified normalized cross-correlation method, with mean RMS errors of 2.3 mm. We previously found the foreign body motion to be fast and abrupt, rendering infeasible a retrieval method based on direct tracking. We proposed a strategy based on guiding a robot to the most spatially probable location of the fragment and securing it upon its reentry to said location. To improve efficacy in the context of a robotic retrieval system, we extend this approach by exploring multiple candidate capture locations. Salient locations are identified based on spatial probability, dwell time, and visit frequency; secondary locations are also examined. Aggregate results indicate that the location of highest spatial probability (50% occupancy) is distinct from the longest-dwelled location (0.84 seconds). Such metrics are vital in informing the design of a retrieval system and capture strategies, and they can be computed intraoperatively to select the best capture location based on constraints such as workspace, time, and device manipulability. Given the complex nature of fragment motion, the ability to analyze multiple capture locations is a desirable capability in an interventional system.
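The candidate-capture-location metrics mentioned above (spatial probability, dwell time, visit frequency) can be computed from a discretized position trace roughly as follows. The cell labels, sampling interval, and function name are assumptions for illustration, not the authors' implementation.

```python
from collections import Counter

def location_metrics(positions, dt=0.02):
    """From a time-ordered list of discretized fragment locations,
    compute per-cell occupancy fraction, longest dwell time (seconds,
    given sample interval dt), and visit count."""
    occupancy = Counter(positions)
    total = len(positions)
    dwell = {}          # longest consecutive run per cell, in samples
    visits = Counter()  # number of distinct entries into each cell
    prev, run = None, 0
    for p in positions:
        if p == prev:
            run += 1
        else:
            visits[p] += 1
            run = 1
        dwell[p] = max(dwell.get(p, 0), run)
        prev = p
    return ({p: c / total for p, c in occupancy.items()},
            {p: r * dt for p, r in dwell.items()},
            dict(visits))
```

Metrics like these could be compared intraoperatively across candidate cells to pick the capture location that best trades off occupancy against reachability.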

  7. Laser Range and Bearing Finder with No Moving Parts

    NASA Technical Reports Server (NTRS)

    Bryan, Thomas C.; Howard, Richard T.; Book, Michael L.

    2007-01-01

A proposed laser-based instrument would quickly measure the approximate distance and approximate direction to the closest target within its field of view. The instrument would not contain any moving parts and its mode of operation would not entail scanning of its field of view. Typically, the instrument would be used to locate a target at a distance on the order of meters to kilometers. The instrument would be best suited for use in an uncluttered setting in which the target is the only or, at worst, the closest object in the vicinity; for example, it could be used aboard an aircraft to detect and track another aircraft flying nearby. The proposed instrument would include a conventional time-of-flight or echo-phase-shift laser range finder, but unlike most other range finders, this one would not generate a narrow cylindrical laser beam; instead, it would generate a conical laser beam spanning the field of view. The instrument would also include a quadrant detector, optics to focus the light returning from the target onto the quadrant detector, and circuitry to synchronize the acquisition of the quadrant-detector output with the arrival of laser light returning from the nearest target. A quadrant detector constantly gathers information from the entire field of view, without scanning; its output is a direct measure of the position of the target-return light spot on the focal plane and is thus a measure of the direction to the target. The instrument should be able to operate at a repetition rate high enough to enable it to track a rapidly moving target. Of course, a target that is not sufficiently reflective could not be located by this instrument. Preferably, retroreflectors should be attached to the target to make it sufficiently reflective.
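The quadrant-detector readout described, where the spot position is a direct measure of target direction, reduces to sum-and-difference arithmetic over the four quadrant intensities. The quadrant labeling convention below is a common one, assumed here since the article does not specify it.

```python
def quadrant_offsets(q1, q2, q3, q4):
    """Normalized x/y position of a light spot on a quadrant detector.
    Assumed convention: q1 = upper-right, q2 = upper-left,
    q3 = lower-left, q4 = lower-right. Offsets range from -1 to 1."""
    total = q1 + q2 + q3 + q4
    x = ((q1 + q4) - (q2 + q3)) / total  # right minus left
    y = ((q1 + q2) - (q3 + q4)) / total  # top minus bottom
    return x, y
```

A centered spot yields (0, 0); more light on the right-hand quadrants shifts x positive, indicating the target's bearing relative to the optical axis.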

  8. Dynamic Object Representations in Infants with and without Fragile X Syndrome

    PubMed Central

    Farzin, Faraz; Rivera, Susan M.

    2009-01-01

    Our visual world is dynamic in nature. The ability to encode, mentally represent, and track an object's identity as it moves across time and space is critical for integrating and maintaining a complete and coherent view of the world. Here we investigated dynamic object processing in typically developing (TD) infants and infants with fragile X syndrome (FXS), a single-gene disorder associated with deficits in dorsal stream functioning. We used the violation of expectation method to assess infants’ visual response to expected versus unexpected outcomes following a brief dynamic (dorsal stream) or static (ventral stream) occlusion event. Consistent with previous reports of deficits in dorsal stream-mediated functioning in individuals with this disorder, these results reveal that, compared to mental age-matched TD infants, infants with FXS could maintain the identity of static, but not dynamic, object information during occlusion. These findings are the first to experimentally evaluate visual object processing skills in infants with FXS, and further support the hypothesis of dorsal stream difficulties in infants with this developmental disorder. PMID:20224809

  9. Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2016-10-01

    Recent technological advancements in hardware systems have made higher quality cameras. State of the art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps).1 Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time.2 In this paper, we focus on the system-level design of multi-camera sensor acquiring near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show the UAV detection from the trial that we conducted during a field trial in August 2015.

  10. Dosimetry of heavy ions by use of CCD detectors

    NASA Technical Reports Server (NTRS)

    Schott, J. U.

    1994-01-01

    The design and the atomic composition of Charge Coupled Devices (CCD's) make them unique for investigations of single energetic particle events. As detector system for ionizing particles they detect single particles with local resolution and near real time particle tracking. In combination with its properties as optical sensor, particle transversals of single particles are to be correlated to any objects attached to the light sensitive surface of the sensor by simple imaging of their shadow and subsequent image analysis of both, optical image and particle effects, observed in affected pixels. With biological objects it is possible for the first time to investigate effects of single heavy ions in tissue or extinguished organs of metabolizing (i.e. moving) systems with a local resolution better than 15 microns. Calibration data for particle detection in CCD's are presented for low energetic protons and heavy ions.

  11. Within-Hemifield Competition in Early Visual Areas Limits the Ability to Track Multiple Objects with Attention

    PubMed Central

    Alvarez, George A.; Cavanagh, Patrick

    2014-01-01

    It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. PMID:25164651
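The steady-state visual evoked potential analysis above rests on reading out response amplitude at each stimulus flicker frequency. A single-bin Fourier amplitude estimate captures the idea; this is a textbook sketch, not the study's analysis pipeline.

```python
import math

def amplitude_at(signal, freq_hz, fs_hz):
    """Single-bin discrete Fourier amplitude of a signal at a tagging
    frequency: project onto cosine and sine at freq_hz and scale so a
    unit-amplitude sinusoid at that frequency returns ~1.0."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / fs_hz)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / fs_hz)
             for i, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n
```

Comparing this amplitude at target versus distractor flicker frequencies is what lets such experiments quantify attentional modulation of early visual responses.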

  12. Supercavitating Projectile Tracking System and Method

    DTIC Science & Technology

    2009-12-30

Distribution is unlimited. This patent describes a supercavitating projectile tracking system and method in which passive acoustic or pressure transducers are used to measure the pressure field produced by a moving supercavitating vehicle, providing a low-cost, reusable means of tracking the underwater path of the vehicle beneath the surface of a body of water.

  13. A study on the theoretical and practical accuracy of conoscopic holography-based surface measurements: toward image registration in minimally invasive surgery†

    PubMed Central

    Burgner, J.; Simpson, A. L.; Fitzpatrick, J. M.; Lathrop, R. A.; Herrell, S. D.; Miga, M. I.; Webster, R. J.

    2013-01-01

Background Registered medical images can assist with surgical navigation and enable image-guided therapy delivery. In soft tissues, surface-based registration is often used and can be facilitated by laser surface scanning. Tracked conoscopic holography (which provides distance measurements) has been recently proposed as a minimally invasive way to obtain surface scans. Moving this technique from concept to clinical use requires a rigorous accuracy evaluation, which is the purpose of our paper. Methods We adapt recent non-homogeneous and anisotropic point-based registration results to provide a theoretical framework for predicting the accuracy of tracked distance measurement systems. Experiments are conducted on complex objects of defined geometry, an anthropomorphic kidney phantom and a human cadaver kidney. Results Experiments agree with model predictions, producing point RMS errors consistently < 1 mm, surface-based registration with mean closest point error < 1 mm in the phantom and an RMS target registration error of 0.8 mm in the human cadaver kidney. Conclusions Tracked conoscopic holography is clinically viable; it enables minimally invasive surface scan accuracy comparable to current clinical methods that require open surgery. PMID:22761086

  14. Contrast, contours and the confusion effect in dazzle camouflage.

    PubMed

    Hogan, Benedict G; Scott-Samuel, Nicholas E; Cuthill, Innes C

    2016-07-01

    'Motion dazzle camouflage' is the name for the putative effects of highly conspicuous, often repetitive or complex, patterns on parameters important in prey capture, such as the perception of speed, direction and identity. Research into motion dazzle camouflage is increasing our understanding of the interactions between visual tracking, the confusion effect and defensive coloration. However, there is a paucity of research into the effects of contrast on motion dazzle camouflage: is maximal contrast a prerequisite for effectiveness? If not, this has important implications for our recognition of the phenotype and understanding of the function and mechanisms of potential motion dazzle camouflage patterns. Here we tested human participants' ability to track one moving target among many identical distractors with surface patterns designed to test the influence of these factors. In line with previous evidence, we found that targets with stripes parallel to the object direction of motion were hardest to track. However, reduction in contrast did not significantly influence this result. This finding may bring into question the utility of current definitions of motion dazzle camouflage, and means that some animal patterns, such as aposematic or mimetic stripes, may have previously unrecognized multiple functions.

  15. Flow detection via sparse frame analysis for suspicious event recognition in infrared imagery

    NASA Astrophysics Data System (ADS)

    Fernandes, Henrique C.; Batista, Marcos A.; Barcelos, Celia A. Z.; Maldague, Xavier P. V.

    2013-05-01

It is becoming increasingly evident that intelligent systems are very beneficial for society and that the further development of such systems is necessary to continue to improve society's quality of life. One area that has drawn the attention of recent research is the development of automatic surveillance systems. In our work we outline a system capable of monitoring an uncontrolled area (an outside parking lot) using infrared imagery and recognizing suspicious events in this area. The first step is to identify moving objects and segment them from the scene's background. Our approach is based on a dynamic background-subtraction technique which robustly adapts detection to illumination changes. To segment moving objects, only regions where movement is occurring are analyzed, ignoring the influence of pixels from regions where there is no movement. Regions where movement is occurring are identified using flow detection via sparse frame analysis. During the tracking process the objects are classified into two categories, Persons and Vehicles, based on features such as size and velocity. The last step is to recognize suspicious events that may occur in the scene. Since the objects are correctly segmented and classified it is possible to identify those events using features such as velocity and time spent motionless in one spot. In this paper we recognize the suspicious event "suspicion of object(s) theft from inside a parked vehicle at spot X by a person", and results show that the use of flow detection increases the recognition of this suspicious event from 78.57% to 92.85%.
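The two-category split into persons and vehicles by size and velocity can be sketched as a simple threshold rule. The threshold values, units, and function name below are made-up placeholders, not the paper's calibrated settings.

```python
def classify_blob(area_px, speed_px_s, area_thresh=1500, speed_thresh=40):
    """Rule-of-thumb classifier for a tracked blob: large or fast blobs
    are labeled 'Vehicle', everything else 'Person'. Thresholds are
    illustrative and would need calibration to the camera geometry."""
    if area_px > area_thresh or speed_px_s > speed_thresh:
        return "Vehicle"
    return "Person"
```

In a real system such a rule would be applied to blobs produced by the background subtraction and tracking stages, with thresholds fitted to the scene.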

  16. B-spline based image tracking by detection

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman

    2016-05-01

Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-spline based image tracking method is implemented. The novel method models the background and foreground using the B-spline method, followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.
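As a flavor of B-spline modeling, a uniform cubic B-spline evaluated at its interior knots reduces to fixed weights (1/6, 2/3, 1/6) over three neighboring control points, acting as a smoothing filter. This generic 1-D sketch only illustrates the kind of smooth representation B-splines provide; it is not the paper's background/foreground formulation.

```python
def cubic_bspline_at_knots(control_points):
    """Evaluate a uniform cubic B-spline at its interior knots, where
    the basis collapses to the weights (1/6, 2/3, 1/6) over three
    neighboring control points. Smooths a 1-D intensity profile."""
    w = (1 / 6, 2 / 3, 1 / 6)
    return [w[0] * control_points[i - 1]
            + w[1] * control_points[i]
            + w[2] * control_points[i + 1]
            for i in range(1, len(control_points) - 1)]
```

A constant profile passes through unchanged, while an isolated spike is attenuated, which is the property that makes spline models robust to pixel-level noise.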

  17. Symmetric caging formation for convex polygonal object transportation by multiple mobile robots based on fuzzy sliding mode control.

    PubMed

    Dai, Yanyan; Kim, YoonGu; Wee, SungGil; Lee, DongHa; Lee, SukGyu

    2016-01-01

    In this paper, the problem of object caging and transporting is considered for multiple mobile robots. With the aims of minimizing the number of robots and reducing rotation of the object, the proper points are calculated and assigned to the mobile robots so that they form a symmetric caging formation. The caging formation guarantees that all of the Euclidean distances between any two adjacent robots are smaller than the minimal width of the polygonal object, so that the object cannot escape. In order to avoid collisions among robots, the robot radius parameter is used in designing the caging formation, and the A* algorithm is used so that the mobile robots can move to their assigned points. In order to avoid obstacles, the robots and the object are treated as a single rigid body to which the artificial potential field method is applied. The fuzzy sliding mode control method is applied for tracking control of the nonholonomic mobile robots. Finally, simulation and experimental results show that multiple mobile robots are able to cage and transport the polygonal object to the goal position while avoiding obstacles. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
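
    The symmetric-formation condition above has a simple geometric reading: for n robots evenly spaced on a circle of radius R around the object, the adjacent-pair distance is the chord 2R sin(pi/n), which must fall below the object's minimal width. A small sketch (hypothetical names, circle-formation assumption only; the paper's polygon-specific point assignment is not reproduced):

```python
import math

def min_robots_for_caging(radius, min_width):
    """Smallest n such that robots evenly spaced on a circle of the
    given radius have adjacent distance (chord) below min_width."""
    n = 3
    while 2 * radius * math.sin(math.pi / n) >= min_width:
        n += 1
    return n

def caging_points(cx, cy, radius, n):
    """Symmetric caging formation: n points evenly spaced on the
    circle of the given radius centred on the object at (cx, cy)."""
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]
```

    For example, an enclosing circle of radius 1 and a minimal object width of 0.5 requires 13 robots under this assumption, since 2 sin(pi/12) ≈ 0.518 still exceeds 0.5.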

  18. A tracked robot with novel bio-inspired passive "legs".

    PubMed

    Sun, Bo; Jing, Xingjian

    2017-01-01

    For track-based robots, an important aspect is the suspension design, which determines the trafficability and riding comfort of the whole system. The trafficability limits the robot's working capability, and the riding comfort limits the robot's working effectiveness, especially when sensitive instruments are mounted or operated on it. To these ends, a track-based robot equipped with a novel passive bio-inspired suspension is designed and studied systematically in this paper. Animals and insects have very special leg or limb structures which are good for motion control and adaptable to different environments. Inspired by this, a new track-based robot is designed with novel "legs" connecting the loading wheels to the robot body. Each leg has a passive structure and achieves very high loading capacity but low dynamic stiffness, so that the robot can move on rough ground much like a multi-legged animal or insect. Therefore, trafficability and riding comfort can be significantly improved without losing loading capacity. The new track-based robot can be well applied to various engineering tasks, providing a stable moving platform with high mobility, better trafficability and excellent loading capacity.

  19. Multiple object tracking with non-unique data-to-object association via generalized hypothesis testing. [tracking several aircraft near each other or ships at sea

    NASA Technical Reports Server (NTRS)

    Porter, D. W.; Lefler, R. M.

    1979-01-01

    A generalized hypothesis testing approach is applied to the problem of tracking several objects where several different associations of data with objects are possible. Such problems occur, for instance, when attempting to distinctly track several aircraft maneuvering near each other or when tracking ships at sea. Conceptually, the problem is solved by first associating data with objects in a statistically reasonable fashion and then tracking with a bank of Kalman filters. The objects are assumed to have motion characterized by a fixed but unknown deterministic portion plus a random process portion modeled by a shaping filter. For example, an object might be assumed to have a mean straight-line path about which it maneuvers in a random manner. Several hypothesized associations of data with objects are possible because of ambiguity as to which object the data comes from, false alarm/detection errors, and possible uncertainty in the number of objects being tracked. The statistical likelihood function is computed for each possible hypothesized association of data with objects. Then the generalized likelihood is computed by maximizing the likelihood over parameters that define the deterministic motion of the object.
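
    The associate-then-filter idea can be sketched in miniature: enumerate the hypothesized measurement-to-track assignments, score each by its joint log-likelihood, keep the best, and run a Kalman measurement update on the winning association. This is a heavily simplified scalar illustration with hypothetical names (no shaping filter, no false alarms, permutation hypotheses only), not the paper's full method.

```python
import math
from itertools import permutations

def gaussian_loglik(z, mean, var):
    """Log-likelihood of measurement z under N(mean, var)."""
    return -0.5 * (math.log(2 * math.pi * var) + (z - mean) ** 2 / var)

def best_association(tracks, measurements, meas_var=1.0):
    """Score every hypothesized data-to-object association
    (permutation of measurement indices) by joint log-likelihood
    and return the most likely one. tracks: (state_mean, state_var)."""
    best, best_ll = None, -math.inf
    for perm in permutations(range(len(measurements))):
        ll = sum(gaussian_loglik(measurements[j],
                                 tracks[i][0],
                                 tracks[i][1] + meas_var)
                 for i, j in enumerate(perm))
        if ll > best_ll:
            best, best_ll = perm, ll
    return best, best_ll

def kalman_update(mean, var, z, meas_var=1.0):
    """Scalar Kalman measurement update for an associated track."""
    k = var / (var + meas_var)
    return mean + k * (z - mean), (1 - k) * var
```

    With two tracks near 0 and 10 and measurements [9.8, 0.2], the likelihood scoring correctly swaps the measurement order before the filter bank updates each track.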

  20. Neural basis for dynamic updating of object representation in visual working memory.

    PubMed

    Takahama, Sachiko; Miyauchi, Satoru; Saiki, Jun

    2010-02-15

    In the real world, objects have multiple features and change dynamically. Thus, object representations must support both dynamic updating and feature binding. Previous studies have investigated the neural activity underlying dynamic updating or feature binding alone, but not both simultaneously. We investigated the neural basis of feature-bound object representation in a dynamically updating situation by conducting a multiple object permanence tracking task, which required observers to simultaneously process both the maintenance and the dynamic updating of feature-bound objects. Using an event-related design, we separated activities during memory maintenance and change detection. In the search for regions showing selective activation during dynamic updating of feature-bound objects, we identified a network during memory maintenance comprising the inferior precentral sulcus, superior parietal lobule, and middle frontal gyrus. In the change detection period, various prefrontal regions, including the anterior prefrontal cortex, were activated. In updating object representations of dynamically moving objects, the inferior precentral sulcus closely cooperates with the so-called "frontoparietal network", and subregions of the frontoparietal network can be decomposed into those sensitive to spatial updating and those sensitive to feature binding. The anterior prefrontal cortex identifies changes in object representation by comparing memory and perceptual representations rather than maintaining object representations per se, as previously suggested. Copyright 2009 Elsevier Inc. All rights reserved.
