Sample records for real-time object tracking

  1. Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach

    PubMed Central

    Tian, Yuan; Guan, Tao; Wang, Cheng

    2010-01-01

    To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278
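
    The occlusion-handling step described above amounts to compositing the tracked real object back over the augmented frame. Below is a minimal sketch of that compositing idea in Python/NumPy, assuming a binary mask of the occluding object is already available from the tracker (the interactive segmentation, contour tracking, and boundary smoothing are not reproduced here).

```python
import numpy as np

def composite_occlusion(real_frame, augmented_frame, object_mask):
    """Redraw the tracked real object's pixels on top of the augmented frame.

    real_frame, augmented_frame: HxWx3 uint8 images of the same view.
    object_mask: HxW boolean array, True where the occluding real object lies.
    """
    out = augmented_frame.copy()
    out[object_mask] = real_frame[object_mask]  # real object now hides the virtual content
    return out

# Toy usage: a virtual green square partially hidden by a tracked "real" object.
real = np.zeros((240, 320, 3), np.uint8)
augmented = real.copy()
augmented[80:160, 120:200] = (0, 255, 0)        # virtual object rendered into the frame
mask = np.zeros((240, 320), bool)
mask[100:200, 60:160] = True                    # region covered by the tracked real object
result = composite_occlusion(real, augmented, mask)
```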

  2. Hardware accelerator design for tracking in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis. For video analysis, a smart camera needs to detect interesting moving objects, track such objects from frame to frame, and perform analysis of the object tracks in real time. Therefore, real-time tracking is prominent in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (such as a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated in VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200 grayscale video.

  3. Compressed multi-block local binary pattern for object tracking

    NASA Astrophysics Data System (ADS)

    Li, Tianwen; Gao, Yun; Zhao, Lei; Zhou, Hao

    2018-04-01

    Both robustness and real-time performance are important for object tracking in real environments. Trackers based on deep learning have difficulty satisfying the real-time requirements of tracking, whereas compressive sensing provides technical support for real-time tracking. In this paper, an object is tracked via a multi-block local binary pattern feature. The feature vector is extracted based on the multi-block local binary pattern and compressed via a sparse random Gaussian matrix used as the measurement matrix. Experiments showed that the proposed tracker runs in real time and outperforms existing compressive trackers based on Haar-like features on many challenging video sequences in terms of accuracy and robustness.
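
    The compression step this abstract describes can be illustrated with a sparse random Gaussian projection of a high-dimensional feature vector, as sketched below in Python/NumPy. The multi-block LBP extraction itself is not reproduced, and the matrix density and dimensions are illustrative assumptions.

```python
import numpy as np

def sparse_gaussian_matrix(n_out, n_in, density=0.1, seed=0):
    """Sparse random Gaussian measurement matrix: most entries are zero,
    the remainder are drawn from N(0, 1)."""
    rng = np.random.default_rng(seed)
    mask = rng.random((n_out, n_in)) < density
    return np.where(mask, rng.standard_normal((n_out, n_in)), 0.0)

# Project a (hypothetical) high-dimensional multi-block LBP feature vector
# down to a short compressed feature used for tracking.
n_in, n_out = 4096, 64
R = sparse_gaussian_matrix(n_out, n_in)
lbp_feature = np.random.rand(n_in)   # stand-in for the extracted MB-LBP vector
compressed = R @ lbp_feature         # 64-dimensional compressed feature
```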

  4. Real-time object detection, tracking and occlusion reasoning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Divakaran, Ajay; Yu, Qian; Tamrakar, Amir

    A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.

  5. Multiple objects tracking with HOGs matching in circular windows

    NASA Astrophysics Data System (ADS)

    Miramontes-Jaramillo, Daniel; Kober, Vitaly; Díaz-Ramírez, Víctor H.

    2014-09-01

    In recent years, with the development of new technologies such as smart TVs, Kinect, Google Glass, and Oculus Rift, tracking applications have become very important. When tracking uses a matching algorithm, a good prediction algorithm is required to reduce the search area for each tracked object as well as the processing time. In this work, we analyze the performance of different tracking algorithms based on prediction and matching for real-time tracking of multiple objects. The matching algorithm utilizes histograms of oriented gradients. It carries out matching in circular windows and possesses rotation invariance and tolerance to viewpoint and scale changes. The proposed algorithm is implemented on a personal computer with a GPU, and its performance is analyzed in terms of processing time in real scenarios. Such an implementation takes advantage of current technologies and helps to process video sequences in real time for tracking several objects at the same time.
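
    A rough illustration of matching by gradient-orientation histograms over circular windows is sketched below (plain NumPy, no GPU). It is not the authors' implementation; the bin count, window size, and similarity measure are arbitrary choices.

```python
import numpy as np

def circular_hog(patch, n_bins=9):
    """Gradient-orientation histogram inside the largest circle fitting the patch."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation in [0, pi)
    h, w = patch.shape
    yy, xx = np.mgrid[:h, :w]
    r = min(h, w) / 2.0
    inside = (yy - h / 2.0) ** 2 + (xx - w / 2.0) ** 2 <= r ** 2
    hist, _ = np.histogram(ang[inside], bins=n_bins, range=(0, np.pi),
                           weights=mag[inside])
    return hist / (hist.sum() + 1e-9)

def histogram_intersection(h1, h2):
    return float(np.minimum(h1, h2).sum())           # 1.0 means identical histograms

# Usage: compare a template window against a candidate window from the next frame.
template = np.random.rand(48, 48)
candidate = np.random.rand(48, 48)
score = histogram_intersection(circular_hog(template), circular_hog(candidate))
```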

  6. Real-time visual tracking of less textured three-dimensional objects on mobile platforms

    NASA Astrophysics Data System (ADS)

    Seo, Byung-Kuk; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2012-12-01

    Natural feature-based approaches are still challenging for mobile applications (e.g., mobile augmented reality), because they are feasible only in limited environments such as highly textured and planar scenes/objects, and they need powerful mobile hardware for fast and reliable tracking. In many cases where conventional approaches are not effective, three-dimensional (3-D) knowledge of target scenes would be beneficial. We present a well-established framework for real-time visual tracking of less textured 3-D objects on mobile platforms. Our framework is based on model-based tracking that efficiently exploits partially known 3-D scene knowledge such as object models and a background's distinctive geometric or photometric knowledge. Moreover, we elaborate on implementation in order to make it suitable for real-time vision processing on mobile hardware. The performance of the framework is tested and evaluated on recent commercially available smartphones, and its feasibility is shown by real-time demonstrations.

  7. Real time eye tracking using Kalman extended spatio-temporal context learning

    NASA Astrophysics Data System (ADS)

    Munir, Farzeen; Minhas, Fayyaz ul Amir Asfar; Jalil, Abdul; Jeon, Moongu

    2017-06-01

    Real time eye tracking has numerous applications in human computer interaction such as a mouse cursor control in a computer system. It is useful for persons with muscular or motion impairments. However, tracking the movement of the eye is complicated by occlusion due to blinking, head movement, screen glare, rapid eye movements, etc. In this work, we present the algorithmic and construction details of a real time eye tracking system. Our proposed system is an extension of Spatio-Temporal context learning through Kalman Filtering. Spatio-Temporal Context Learning offers state of the art accuracy in general object tracking but its performance suffers due to object occlusion. Addition of the Kalman filter allows the proposed method to model the dynamics of the motion of the eye and provide robust eye tracking in cases of occlusion. We demonstrate the effectiveness of this tracking technique by controlling the computer cursor in real time by eye movements.
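
    The Kalman-filtering component can be illustrated with a standard constant-velocity filter over the eye-center position, which keeps predicting through frames where the measurement is missing (e.g. a blink). This is a generic sketch, not the paper's Kalman-extended spatio-temporal context formulation; the noise covariances and frame rate are assumptions.

```python
import numpy as np

dt = 1.0 / 30.0                                   # assumed frame interval (30 fps)
F = np.array([[1, 0, dt, 0],                      # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],                       # only (x, y) is measured
              [0, 1, 0, 0]], float)
Q = np.eye(4) * 1e-2                              # process noise (illustrative)
R = np.eye(2) * 4.0                               # measurement noise in pixels^2

x = np.zeros(4)                                   # state: [x, y, vx, vy]
P = np.eye(4) * 100.0

def kalman_step(x, P, z=None):
    """One predict/update cycle; pass z=None when the eye is occluded (e.g. a blink)."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    if z is not None:                             # update only when a measurement exists
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = kalman_step(x, P, z=np.array([320.0, 240.0]))   # detection available
x, P = kalman_step(x, P, z=None)                        # blink: prediction only
```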

  8. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  9. Real-time optical holographic tracking of multiple objects

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Liu, Hua-Kuang

    1989-01-01

    A coherent optical correlation technique for real-time simultaneous tracking of several different objects making independent movements is described, and experimental results are presented. An evaluation of this system compared with digital computing systems is made. The real-time processing capability is obtained through the use of a liquid crystal television spatial light modulator and a dichromated gelatin multifocus hololens. A coded reference beam is utilized in the separation of the output correlation plane associated with each input target so that independent tracking can be achieved.

  10. Nonstationary EO/IR Clutter Suppression and Dim Object Tracking

    DTIC Science & Technology

    2010-01-01

    Brown, A., and Brown, J., Enhanced Algorithms for EO/IR Electronic Stabilization, Clutter Suppression, and Track-Before-Detect for Multiple Low...estimation-suppression and nonlinear filtering-based multiple-object track-before-detect. These algorithms are suitable for integration into...In such cases, it is imperative to develop efficient real or near-real-time tracking before detection methods. This paper continues the work started

  11. An efficient sequential approach to tracking multiple objects through crowds for real-time intelligent CCTV systems.

    PubMed

    Li, Liyuan; Huang, Weimin; Gu, Irene Yu-Hua; Luo, Ruijiang; Tian, Qi

    2008-10-01

    Efficiency and robustness are the two most important issues for multiobject tracking algorithms in real-time intelligent video surveillance systems. We propose a novel 2.5-D approach to real-time multiobject tracking in crowds, which is formulated as a maximum a posteriori estimation problem and is approximated through an assignment step and a location step. Observing that the occluding object is usually less affected by the occluded objects, sequential solutions for the assignment and the location are derived. A novel dominant color histogram (DCH) is proposed as an efficient object model. The DCH can be regarded as a generalized color histogram, where dominant colors are selected based on a given distance measure. Compared with conventional color histograms, the DCH only requires a few color components (31 on average). Furthermore, our theoretical analysis and evaluation on real data have shown that DCHs are robust to illumination changes. Using the DCH, efficient implementations of sequential solutions for the assignment and location steps are proposed. The assignment step includes the estimation of the depth order for the objects in a dispersing group, one-by-one assignment, and feature exclusion from the group representation. The location step includes the depth-order estimation for the objects in a new group, the two-phase mean-shift location, and the exclusion of tracked objects from the new position in the group. Multiobject tracking results and evaluation from public data sets are presented. Experiments on image sequences captured from crowded public environments have shown good tracking results, where about 90% of the objects have been successfully tracked with the correct identification numbers by the proposed method. Our results and evaluation have indicated that the method is efficient and robust for tracking multiple objects (≥3) in complex occlusion for real-world surveillance scenarios.
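
    The dominant color histogram idea can be approximated as keeping only the most frequent bins of a quantized color histogram and comparing object models by histogram intersection, as sketched below. The paper's distance-based dominant-color selection and the 2.5-D assignment/location machinery are not reproduced; the bin count and mass threshold are illustrative.

```python
import numpy as np

def dominant_color_histogram(patch_rgb, bins_per_channel=8, mass=0.9):
    """Quantized color histogram keeping only the most frequent ('dominant') bins
    that together cover `mass` of the pixels; the remaining bins are zeroed."""
    q = (patch_rgb.reshape(-1, 3) // (256 // bins_per_channel)).astype(int)
    idx = (q[:, 0] * bins_per_channel + q[:, 1]) * bins_per_channel + q[:, 2]
    hist = np.bincount(idx, minlength=bins_per_channel ** 3).astype(float)
    hist /= hist.sum()
    order = np.argsort(hist)[::-1]                       # most frequent bins first
    cut = int(np.searchsorted(np.cumsum(hist[order]), mass)) + 1
    dch = np.zeros_like(hist)
    dch[order[:cut]] = hist[order[:cut]]
    return dch / dch.sum()

def intersection(h1, h2):
    return float(np.minimum(h1, h2).sum())               # 1.0 means identical models

patch_a = np.random.randint(0, 256, (64, 32, 3), np.uint8)
patch_b = np.random.randint(0, 256, (64, 32, 3), np.uint8)
print(intersection(dominant_color_histogram(patch_a), dominant_color_histogram(patch_b)))
```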

  12. Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features

    NASA Astrophysics Data System (ADS)

    Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique

    2011-12-01

    We propose a new multi-target tracking approach, which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model, and visual reliability measures of its attributes. These reliability measures allow the contribution of noisy, erroneous, or false data to be properly weighted in order to better maintain the integrity of the object dynamics model. Then, a new multi-target tracking algorithm uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using publicly accessible video surveillance benchmarks. It runs in real time and its results are competitive with those of other tracking algorithms, with minimal (or no) reconfiguration effort between different videos.

  13. Using LabView for real-time monitoring and tracking of multiple biological objects

    NASA Astrophysics Data System (ADS)

    Nikolskyy, Aleksandr I.; Krasilenko, Vladimir G.; Bilynsky, Yosyp Y.; Starovier, Anzhelika

    2017-04-01

    Real-time study and tracking of the movement dynamics of various biological objects is important and widely researched today. The features of the objects, the conditions of their visualization, and the model parameters strongly influence the choice of optimal methods and algorithms for a specific task. Therefore, to automate the adaptation of recognition and tracking algorithms, several LabVIEW tracker projects are considered in this article. The projects allow templates for training and retraining the system to be changed quickly, and they adapt to the speed of the objects and the statistical characteristics of noise in the images. New functions for comparing images or their features, descriptors, and pre-processing methods are discussed. The experiments carried out to test the trackers on real video files are presented and analyzed.

  14. Real-time Human Activity Recognition

    NASA Astrophysics Data System (ADS)

    Albukhary, N.; Mustafah, Y. M.

    2017-11-01

    The traditional closed-circuit television (CCTV) system requires humans to monitor the CCTV feed 24/7, which is inefficient and costly. Therefore, there is a need for a system which can recognize human activity effectively in real time. This paper concentrates on recognizing simple activities such as walking, running, sitting, standing and landing by using image processing techniques. Firstly, object detection is done by using background subtraction to detect moving objects. Then, object tracking and object classification are constructed so that different persons can be differentiated using feature detection. Geometrical attributes of the tracked object, namely its centroid and aspect ratio, are manipulated so that simple activities can be detected.
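
    The detection-and-attributes pipeline described here (background subtraction, then centroid and aspect ratio of each blob) can be sketched with OpenCV as below. The thresholds and the coarse activity rule are illustrative assumptions, not the paper's trained classifier.

```python
import cv2

# Background subtractor for moving-object detection (MOG2 is one common choice).
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                             detectShadows=False)

def detect_and_label(frame, min_area=500):
    """Return (centroid, aspect_ratio, coarse_label) for each moving blob."""
    fg = backsub.apply(frame)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4 signature
    results = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w / 2.0, y + h / 2.0
        aspect = h / float(w)
        # Illustrative rule only: an upright (tall) blob vs. a low, wide blob.
        label = "standing/walking" if aspect > 1.2 else "sitting/lying"
        results.append(((cx, cy), aspect, label))
    return results
```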

  15. Detection and Tracking of Moving Objects with Real-Time Onboard Vision System

    NASA Astrophysics Data System (ADS)

    Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.

    2017-05-01

    Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is developing a set of algorithms which can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be used to create onboard vision systems for aircraft, including small and unmanned aircraft.
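
    The first stage (estimation and compensation of geometric transformations between frames) is commonly done by matching sparse features, fitting a homography, and then differencing the motion-compensated frames. The sketch below uses ORB features and RANSAC in OpenCV as a stand-in; it is not the authors' onboard implementation, and the thresholds are arbitrary.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def motion_mask(prev_gray, curr_gray, diff_thresh=25):
    """Warp the previous frame onto the current one to cancel camera motion,
    then difference the frames to expose independently moving objects."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.zeros_like(curr_gray)
    matches = matcher.match(des1, des2)
    if len(matches) < 4:                        # too few correspondences for a homography
        return np.zeros_like(curr_gray)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    warped_prev = cv2.warpPerspective(prev_gray, H, curr_gray.shape[::-1])
    diff = cv2.absdiff(curr_gray, warped_prev)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask                                 # nonzero where objects move independently
```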

  16. Real-time optical multiple object recognition and tracking system and method

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)

    1987-01-01

    The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time-, space- and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.

  17. Effective real-time vehicle tracking using discriminative sparse coding on local patches

    NASA Astrophysics Data System (ADS)

    Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei

    2016-01-01

    A visual tracking framework that provides an object detector and tracker, which focuses on effective and efficient visual tracking in surveillance of real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which is different from appearance model-matching approaches. Through a feature representation of discriminative sparse coding on local patches called DSCLP, which trains a dictionary on local clustered patches sampled from both positive and negative datasets, the discriminative power and robustness have been improved remarkably, which makes our method more robust to complex realistic settings with all kinds of degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables our framework to achieve real-time tracking performance even in a high-definition sequence with heavy traffic. Experimental results show that our work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness and exhibits increased robustness in a complex real-world scenario with degraded image quality caused by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.

  18. Real-time Astrometry Using Phase Congruency

    NASA Astrophysics Data System (ADS)

    Lambert, A.; Polo, M.; Tang, Y.

    Phase congruency is a computer vision technique that proves to perform well for determining the tracks of optical objects (Flewelling, AMOS 2014). We report on a real-time implementation of this using an FPGA and CMOS Image Sensor, with on-sky data. The lightweight instrument can provide tracking update signals to the mount of the telescope, as well as determine abnormal objects in the scene.

  19. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  20. MO-FG-BRD-01: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: Introduction and KV Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahimian, B.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques; learn about emerging techniques in the field of real-time tracking; distinguish between the advantages and disadvantages of different tracking modalities; understand the role of real-time tracking techniques within the clinical delivery workflow.

  1. MO-FG-BRD-04: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MR Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Low, D.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques; learn about emerging techniques in the field of real-time tracking; distinguish between the advantages and disadvantages of different tracking modalities; understand the role of real-time tracking techniques within the clinical delivery workflow.

  2. MO-FG-BRD-02: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MV Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berbeco, R.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques; learn about emerging techniques in the field of real-time tracking; distinguish between the advantages and disadvantages of different tracking modalities; understand the role of real-time tracking techniques within the clinical delivery workflow.

  3. MO-FG-BRD-03: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: EM Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keall, P.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques; learn about emerging techniques in the field of real-time tracking; distinguish between the advantages and disadvantages of different tracking modalities; understand the role of real-time tracking techniques within the clinical delivery workflow.

  4. MO-FG-BRD-00: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques; learn about emerging techniques in the field of real-time tracking; distinguish between the advantages and disadvantages of different tracking modalities; understand the role of real-time tracking techniques within the clinical delivery workflow.

  5. Structure preserving clustering-object tracking via subgroup motion pattern segmentation

    NASA Astrophysics Data System (ADS)

    Fan, Zheyi; Zhu, Yixuan; Jiang, Jiao; Weng, Shuqin; Liu, Zhiwen

    2018-01-01

    Tracking clustering objects with similar appearances simultaneously in collective scenes is a challenging task in the field of collective motion analysis. Recent work on clustering-object tracking often suffers from poor tracking accuracy and poor real-time performance due to the neglect or misjudgment of the motion differences among objects. To address this problem, we propose a subgroup motion pattern segmentation framework based on a multilayer clustering structure and establish spatial constraints only among objects in the same subgroup, which entails having a consistent motion direction and close spatial positions. In addition, the subgroup segmentation results are updated dynamically because crowd motion patterns are changeable and affected by objects' destinations and scene structures. The spatial structure information combined with the appearance similarity information is used in the structure preserving object tracking framework to track objects. Extensive experiments conducted on several datasets containing multiple real-world crowd scenes validate the accuracy and the robustness of the presented algorithm for tracking objects in collective scenes.

  6. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) imagery of the human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on the expected capture conditions such as the camera-subject distance, pan-tilt angles of capture, face visibility, and others. Such an objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
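
    The objective-function idea can be illustrated with a toy score combining camera-subject distance, required pan slew, and face visibility, followed by a greedy camera-target assignment. The weights, terms, and field names below are illustrative assumptions rather than the paper's formulation.

```python
import math

def capture_score(cam, target, w_dist=0.4, w_angle=0.3, w_face=0.3):
    """Heuristic score of how good a close-up capture would be (higher is better).
    cam/target are dicts; all weights and terms here are illustrative."""
    dx, dy = target["x"] - cam["x"], target["y"] - cam["y"]
    dist = math.hypot(dx, dy)
    pan_needed = math.degrees(math.atan2(dy, dx))
    pan_cost = abs(pan_needed - cam["pan"]) / 180.0      # large slews cost hand-off time
    dist_cost = min(dist / cam["max_range"], 1.0)
    return (w_dist * (1.0 - dist_cost)
            + w_angle * (1.0 - pan_cost)
            + w_face * target["face_visibility"])        # 0..1, from the tracker

def assign_cameras(cameras, targets):
    """Greedy one-camera-per-target assignment by descending score."""
    pairs = sorted(((capture_score(c, t), ci, ti)
                    for ci, c in enumerate(cameras)
                    for ti, t in enumerate(targets)), reverse=True)
    used_c, used_t, plan = set(), set(), {}
    for _, ci, ti in pairs:
        if ci not in used_c and ti not in used_t:
            plan[ci] = ti
            used_c.add(ci)
            used_t.add(ti)
    return plan

cams = [{"x": 0, "y": 0, "pan": 0.0, "max_range": 30.0},
        {"x": 20, "y": 0, "pan": 90.0, "max_range": 30.0}]
tgts = [{"x": 5, "y": 5, "face_visibility": 0.8},
        {"x": 18, "y": 10, "face_visibility": 0.3}]
print(assign_cameras(cams, tgts))
```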

  7. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    NASA Astrophysics Data System (ADS)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.

  8. Efficient Spatiotemporal Clutter Rejection and Nonlinear Filtering-based Dim Resolved and Unresolved Object Tracking Algorithms

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.

    2013-09-01

    We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.

  9. Real-time model-based vision system for object acquisition and tracking

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd

    1987-01-01

    A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.

  10. Unsupervised markerless 3-DOF motion tracking in real time using a single low-budget camera.

    PubMed

    Quesada, Luis; León, Alejandro J

    2012-10-01

    Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on motion tracking. In this paper, we present a novel three-degrees-of-freedom motion tracking system that needs no knowledge of the target object and that only requires a single low-budget camera of the kind found installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion blurred, given that it is opaque, evenly colored, contrasts sufficiently with the background in each frame, and does not rotate. Our system is also able to determine the most relevant object to track on the screen. Our proposal does not impose additional constraints, therefore it allows a market-wide implementation of applications that require the estimation of the three position degrees of freedom of an object.
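
    For an opaque, evenly colored object, the three position degrees of freedom can be approximated from a single camera by color segmentation (image x, y) plus the apparent blob area as a relative depth cue. The OpenCV sketch below illustrates that idea only; the color range and area threshold are arbitrary, and it is not the authors' estimator.

```python
import cv2
import numpy as np

def track_colored_object(frame_bgr, hsv_low=(35, 80, 80), hsv_high=(85, 255, 255)):
    """Return (x, y, relative_depth) of the largest evenly colored blob.

    The HSV range here is an arbitrary green band; depth is inferred only up to
    a constant from the apparent area, assuming the object's true size is fixed.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low, np.uint8), np.array(hsv_high, np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(blob)
    if area < 50:                                 # ignore tiny noise blobs
        return None
    m = cv2.moments(blob)
    x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]
    relative_depth = 1.0 / np.sqrt(area)          # apparent size shrinks with distance
    return x, y, relative_depth
```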

  11. Real-time acquisition and tracking system with multiple Kalman filters

    NASA Astrophysics Data System (ADS)

    Beard, Gary C.; McCarter, Timothy G.; Spodeck, Walter; Fletcher, James E.

    1994-07-01

    The design of a real-time, ground-based, infrared tracking system with proven field success in tracking boost vehicles through burnout is presented with emphasis on the software design. The system was originally developed to deliver relative angular positions during boost, and thrust termination time to a sensor fusion station in real-time. Autonomous target acquisition and angle-only tracking features were developed to ensure success under stressing conditions. A unique feature of the system is the incorporation of multiple copies of a Kalman filter tracking algorithm running in parallel in order to minimize run-time. The system is capable of updating the state vector for an object at measurement rates approaching 90 Hz. This paper will address the top-level software design, details of the algorithms employed, system performance history in the field, and possible future upgrades.

  12. Feasibility of real-time location systems in monitoring recovery after major abdominal surgery.

    PubMed

    Dorrell, Robert D; Vermillion, Sarah A; Clark, Clancy J

    2017-12-01

    Early mobilization after major abdominal surgery decreases postoperative complications and length of stay, and has become a key component of enhanced recovery pathways. However, objective measures of patient movement after surgery are limited. Real-time location systems (RTLS), typically used for asset tracking, provide a novel approach to monitoring in-hospital patient activity. The current study investigates the feasibility of using RTLS to objectively track postoperative patient mobilization. The real-time location system employs a meshed network of infrared and RFID sensors and detectors that sample device locations every 3 s resulting in over 1 million data points per day. RTLS tracking was evaluated systematically in three phases: (1) sensitivity and specificity of the tracking device using simulated patient scenarios, (2) retrospective passive movement analysis of patient-linked equipment, and (3) prospective observational analysis of a patient-attached tracking device. RTLS tracking detected a simulated movement out of a room with sensitivity of 91% and specificity 100%. Specificity decreased to 75% if time out of room was less than 3 min. All RTLS-tagged patient-linked equipment was identified for 18 patients, but measurable patient movement associated with equipment was detected for only 2 patients (11%) with 1-8 out-of-room walks per day. Ten patients were prospectively monitored using RTLS badges following major abdominal surgery. Patient movement was recorded using patient diaries, direct observation, and an accelerometer. Sensitivity and specificity of RTLS patient tracking were both 100% in detecting out-of-room ambulation and correlated well with direct observation and patient-reported ambulation. Real-time location systems are a novel technology capable of objectively and accurately monitoring patient movement and provide an innovative approach to promoting early mobilization after surgery.

  13. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    PubMed

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented using GPU, exhibiting high computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was in the level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of details in virtual environments, without any hardware for head or eye tracking.

  14. A Real-Time Orbit Determination Method for Smooth Transition from Optical Tracking to Laser Ranging of Debris

    PubMed Central

    Li, Bin; Sang, Jizhang; Zhang, Zhongping

    2016-01-01

    A critical requirement to achieve high efficiency of debris laser tracking is to have sufficiently accurate orbit predictions (OP) in both the pointing direction (better than 20 arc seconds) and distance from the tracking station to the debris objects, with the former more important than the latter because of the narrow laser beam. When the two line element (TLE) is used to provide the orbit predictions, the resultant pointing errors are usually on the order of tens to hundreds of arc seconds. In practice, therefore, angular observations of debris objects are first collected using an optical tracking sensor, and then used to guide the laser beam pointing to the objects. The manual guidance may cause interrupts to the laser tracking, and consequently loss of valuable laser tracking data. This paper presents a real-time orbit determination (OD) and prediction method to realize smooth and efficient debris laser tracking. The method uses TLE-computed positions and angles over a short-arc of less than 2 min as observations in an OD process where simplified force models are considered. After the OD convergence, the OP is performed from the last observation epoch to the end of the tracking pass. Simulation and real tracking data processing results show that the pointing prediction errors are usually less than 10″, and the distance errors less than 100 m, therefore, the prediction accuracy is sufficient for the blind laser tracking. PMID:27347958

  15. Action-Driven Visual Object Tracking With Deep Reinforcement Learning.

    PubMed

    Yun, Sangdoo; Choi, Jongwon; Yoo, Youngjoon; Yun, Kimin; Choi, Jin Young

    2018-06-01

    In this paper, we propose an efficient visual tracker, which directly captures a bounding box containing the target object in a video by means of sequential actions learned using deep neural networks. The proposed deep neural network to control tracking actions is pretrained using various training video sequences and fine-tuned during actual tracking for online adaptation to a change of target and background. The pretraining is done by utilizing deep reinforcement learning (RL) as well as supervised learning. The use of RL enables even partially labeled data to be successfully utilized for semisupervised learning. Through the evaluation of the object tracking benchmark data set, the proposed tracker is validated to achieve a competitive performance at three times the speed of existing deep network-based trackers. The fast version of the proposed method, which operates in real time on graphics processing unit, outperforms the state-of-the-art real-time trackers with an accuracy improvement of more than 8%.

  16. Development of a real time multiple target, multi camera tracker for civil security applications

    NASA Astrophysics Data System (ADS)

    Åkerlund, Hans

    2009-09-01

    A surveillance system has been developed that can use multiple TV-cameras to detect and track personnel and objects in real time in public areas. The document describes the development and the system setup. The system is called NIVS Networked Intelligent Video Surveillance. Persons in the images are tracked and displayed on a 3D map of the surveyed area.

  17. Real-time moving objects detection and tracking from airborne infrared camera

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2017-10-01

    Detecting and tracking moving objects in real-time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene and pursuing objects of interest. To fully exploit such a potential, versatile solutions are needed, but, in the literature, the majority of them works only under specific conditions about the considered scenario, the characteristics of the moving objects or the aircraft movements. In order to overcome these limitations, we propose a novel approach to the problem, based on the use of a cheap inertial navigation system (INS), mounted on the aircraft. To exploit jointly the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame. The detection stage, in which a coarse detection map is computed, using a local statistic both fast to calculate and robust to noise and self-deletion of the targeted objects. The registration stage, in which the position of the detected objects is coherently reported on a common reference frame, by exploiting the INS data. The tracking stage, in which the steady objects are rejected, the moving objects are tracked, and an estimation of their future position is computed, to be used in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences, recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false alarm rate, and in terms of accuracy in the estimation of position and velocity of the objects. In addition, for each frame, the detection and tracking map has been generated by the algorithm, before the acquisition of the subsequent frame, proving its capability to work in real-time.

  18. Adaptive learning compressive tracking based on Markov location prediction

    NASA Astrophysics Data System (ADS)

    Zhou, Xingyu; Fu, Dongmei; Yang, Tao; Shi, Yanan

    2017-03-01

    Object tracking is an interdisciplinary research topic in image processing, pattern recognition, and computer vision which has theoretical and practical application value in video surveillance, virtual reality, and automatic navigation. Compressive tracking (CT) has many advantages, such as efficiency and accuracy. However, in the presence of object occlusion, abrupt motion and blur, similar objects, or scale changes, CT suffers from tracking drift. We propose Markov object location prediction to obtain the initial position of the object. CT is then used to locate the object accurately, and an adaptive classifier-parameter updating strategy is given based on the confidence map. At the same time, scale features are extracted according to the object location, which handles object scale variations effectively. Experimental results show that the proposed algorithm has better tracking accuracy and robustness than current advanced algorithms and achieves real-time performance.
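
    The Markov location-prediction idea can be sketched as a first-order transition model over a small set of discretized displacements, used to propose the initial search position that the compressive appearance search would then refine. The state discretization and step size below are illustrative assumptions.

```python
import numpy as np

# Discretized displacement "states": eight compass directions plus "stay".
MOVES = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]

class MarkovLocationPredictor:
    def __init__(self, step=8):
        self.step = step                                   # pixels per discrete move
        self.counts = np.ones((len(MOVES), len(MOVES)))    # Laplace-smoothed transitions
        self.prev_move = 0

    def _nearest_move(self, dxy):
        d = np.array(dxy, float) / self.step
        return int(np.argmin([np.hypot(d[0] - m[0], d[1] - m[1]) for m in MOVES]))

    def update(self, dxy):
        """Record the displacement observed between the last two confirmed locations."""
        m = self._nearest_move(dxy)
        self.counts[self.prev_move, m] += 1
        self.prev_move = m

    def predict(self, location):
        """Most likely next location, used as the initial search position."""
        m = int(np.argmax(self.counts[self.prev_move]))
        return (location[0] + MOVES[m][0] * self.step,
                location[1] + MOVES[m][1] * self.step)

pred = MarkovLocationPredictor()
pred.update((9, -1))                  # object moved roughly one step to the right
print(pred.predict((120, 80)))        # predicted initial position for the next frame
```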

  19. Object tracking with adaptive HOG detector and adaptive Rao-Blackwellised particle filter

    NASA Astrophysics Data System (ADS)

    Rosa, Stefano; Paleari, Marco; Ariano, Paolo; Bona, Basilio

    2012-01-01

    Scenarios for a manned mission to the Moon or Mars call for astronaut teams to be accompanied by semiautonomous robots. A prerequisite for human-robot interaction is the capability of successfully tracking humans and objects in the environment. In this paper we present a system for real-time visual object tracking in 2D images for mobile robotic systems. The proposed algorithm is able to specialize to individual objects and to adapt to substantial changes in illumination and object appearance during tracking. The algorithm is composed of two main blocks: a detector based on Histogram of Oriented Gradients (HOG) descriptors and linear Support Vector Machines (SVM), and a tracker implemented by an adaptive Rao-Blackwellised particle filter (RBPF). The SVM is re-trained online on new samples taken from previous predicted positions, and we use the effective sample size to decide when the classifier needs to be re-trained. Position hypotheses for the tracked object are the result of a clustering procedure applied to the set of particles. The algorithm has been tested on challenging video sequences presenting strong changes in object appearance, illumination, and occlusion. Experimental tests show that the presented method is able to achieve near real-time performance with a precision of about 7 pixels on standard video sequences of dimensions 320 × 240.
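
    The particle-filtering component can be illustrated with a generic bootstrap filter over the object's 2D position: propagate particles with a motion model, re-weight them by an appearance score (here a placeholder standing in for the HOG/SVM response), and resample when the effective sample size drops. This is not the Rao-Blackwellised formulation of the paper; the noise levels and toy appearance model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, appearance_score, motion_std=5.0):
    """One bootstrap-filter iteration over 2D object positions.

    particles: (N, 2) candidate positions; weights: (N,) normalized weights.
    appearance_score: callable (x, y) -> likelihood, e.g. a detector response.
    """
    n = len(particles)
    # 1. Propagate with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # 2. Re-weight by appearance likelihood.
    weights = weights * np.array([appearance_score(p) for p in particles])
    total = weights.sum()
    weights = weights / total if total > 0 else np.full(n, 1.0 / n)
    # 3. Multinomial resampling when the effective sample size gets small.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate

# Toy usage with a fake appearance model peaked at (100, 60).
score = lambda p: np.exp(-np.sum((p - np.array([100.0, 60.0])) ** 2) / (2 * 15.0 ** 2))
parts = rng.uniform(0.0, 200.0, (200, 2))
wts = np.full(200, 1.0 / 200)
for _ in range(10):
    parts, wts, estimate = particle_filter_step(parts, wts, score)
print(estimate)          # should approach (100, 60)
```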

  20. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.

    PubMed

    Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-08-23

    Object tracking is a crucial research subfield in computer vision with wide applications in navigation, robotics, military systems, and beyond. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. On the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, View Feature Histogram (VFH) feature extraction, object model building, and search-based matching of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results, tested and validated on three kinds of scenes with partial target occlusion and interference, different moving speeds, and different trajectories, show that the Kalman filter has the advantage of high efficiency while the adaptive particle filter has the advantages of high robustness and high precision. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
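
    The Euclidean clustering stage can be sketched as KD-tree region growing over the (ground-removed) point cloud, as below with SciPy. The distance tolerance and minimum cluster size are illustrative; ground segmentation, VFH extraction, and the filtering stages are not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, tolerance=0.3, min_size=10):
    """Group 3D points whose mutual distance chains stay within `tolerance`
    (a simple region-growing version of Euclidean cluster extraction)."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            idx = queue.pop()
            for nb in tree.query_ball_point(points[idx], tolerance):
                if nb in unvisited:
                    unvisited.remove(nb)
                    queue.append(nb)
                    cluster.append(nb)
        if len(cluster) >= min_size:
            clusters.append(np.array(cluster))
    return clusters

# Toy cloud: two separated blobs of points (as if the ground were already removed).
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal((0, 0, 0.5), 0.1, (50, 3)),
                   rng.normal((3, 1, 0.5), 0.1, (40, 3))])
print([len(c) for c in euclidean_clusters(cloud)])   # two clusters of ~50 and ~40 points
```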

  1. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor

    PubMed Central

    Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-01-01

    Object tracking is a crucial research subfield in computer vision with wide applications in navigation, robotics, military systems, and beyond. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. On the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, View Feature Histogram (VFH) feature extraction, object model building, and search-based matching of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results, tested and validated on three kinds of scenes with partial target occlusion and interference, different moving speeds, and different trajectories, show that the Kalman filter has the advantage of high efficiency while the adaptive particle filter has the advantages of high robustness and high precision. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields. PMID:28832520

  2. Laser vision seam tracking system based on image processing and continuous convolution operator tracker

    NASA Astrophysics Data System (ADS)

    Zou, Yanbiao; Chen, Tao

    2018-06-01

    To address the problem of low welding precision caused by the poor real-time tracking performance of common welding robots, a novel seam tracking system with excellent real-time tracking performance and high accuracy is designed based on a morphological image processing method and the continuous convolution operator tracker (CCOT) object tracking algorithm. The system consists of a six-axis welding robot, a line laser sensor, and an industrial computer. This work also studies the measurement principle involved in the designed system. Through the CCOT algorithm, the weld feature points are determined in real time from the noisy image during the welding process, and the 3D coordinate values of these points are obtained according to the measurement principle to control the movement of the robot and the torch in real time. Experimental results show that the sensor has a frequency of 50 Hz, and the welding torch runs smoothly even under strong arc light and splash interference. The tracking error can reach ±0.2 mm, and the minimal distance between the laser stripe and the welding molten pool can reach 15 mm, which can significantly fulfill actual welding requirements.

  3. Target tracking and surveillance by fusing stereo and RFID information

    NASA Astrophysics Data System (ADS)

    Raza, Rana H.; Stockman, George C.

    2012-06-01

    Ensuring security in high risk areas such as an airport is an important but complex problem. Effectively tracking personnel, containers, and machines is a crucial task. Moreover, security and safety require understanding the interaction of persons and objects. Computer vision (CV) has been a classic tool; however, variable lighting, imaging, and random occlusions present difficulties for real-time surveillance, resulting in erroneous object detection and trajectories. Determining object ID via CV at any instant of time in a crowded area is computationally prohibitive, yet the trajectories of personnel and objects should be known in real time. Radio Frequency Identification (RFID) can be used to reliably identify target objects and can even locate targets at coarse spatial resolution, while CV provides fuzzy features for target ID at finer resolution. Our research demonstrates benefits obtained when most objects are "cooperative" by being RFID tagged. Fusion provides a method to simplify the correspondence problem in 3D space. A surveillance system can query for unique object ID as well as tag ID information, such as target height, texture, shape and color, which can greatly enhance scene analysis. We extend geometry-based tracking so that intermittent information on ID and location can be used in determining a set of trajectories of N targets over T time steps. We show that partial target information obtained through RFID can reduce computation time (by 99.9% in some cases) and also increase the likelihood of producing correct trajectories. We conclude that real-time decision-making should be possible if the surveillance system can integrate information effectively between the sensor level and the activity understanding level.

  4. Lagrangian 3D tracking of fluorescent microscopic objects in motion

    NASA Astrophysics Data System (ADS)

    Darnige, T.; Figueroa-Morales, N.; Bohec, P.; Lindner, A.; Clément, E.

    2017-05-01

    We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtaining time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing that determines the displacement of an x, y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on the refocusing of the fluorescent object, determining the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of hertz. This device is particularly well adapted to obtaining trajectories of motile micro-organisms in microfluidic devices with or without flow.
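
    The feedback idea behind such a tracking stage can be sketched as a proportional x, y correction that re-centers the object in the image plus a hill-climbing z correction driven by a focus measure. The calibration constant, gains, and focus heuristic below are hypothetical, not the instrument's actual control law.

```python
import numpy as np

PIXEL_TO_UM = 0.65     # assumed calibration: micrometres of stage travel per image pixel
GAIN_XY = 0.8          # proportional gain for the x, y stage (illustrative value)

def xy_correction(obj_px, frame_shape):
    """Stage displacement (um) that re-centres the tracked object in the image."""
    cy, cx = frame_shape[0] / 2.0, frame_shape[1] / 2.0
    err_px = np.array([obj_px[0] - cx, obj_px[1] - cy])
    return -GAIN_XY * PIXEL_TO_UM * err_px

def focus_measure(gray_patch):
    """Variance of a second-difference (Laplacian-like) response: higher = sharper."""
    g = gray_patch.astype(float)
    lap = np.diff(g, n=2, axis=0)[:, 1:-1] + np.diff(g, n=2, axis=1)[1:-1, :]
    return lap.var()

def z_correction(focus_now, focus_prev, last_dz, step=0.5):
    """Hill-climbing refocus: keep stepping the piezo in the same direction while
    sharpness improves, otherwise reverse direction (um per frame)."""
    if last_dz == 0.0:
        last_dz = step
    return last_dz if focus_now >= focus_prev else -last_dz

# Per-frame loop (sketch): move the stage by xy_correction(...) and the piezo by
# z_correction(...) computed from the focus measure of the object's image patch.
```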

  5. Lagrangian 3D tracking of fluorescent microscopic objects in motion.

    PubMed

    Darnige, T; Figueroa-Morales, N; Bohec, P; Lindner, A; Clément, E

    2017-05-01

    We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtain time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing, determining the displacement of an x, y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on refocusing the fluorescent object, determining the displacement of a piezo mover that keeps the moving object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of hertz. This device is particularly well adapted to obtain trajectories of motile micro-organisms in microfluidic devices with or without flow.

  6. Combined magnetic resonance imaging and ultrasound echography guidance for motion compensated HIFU interventions

    NASA Astrophysics Data System (ADS)

    Ries, Mario; de Senneville, Baudouin Denis; Regard, Yvan; Moonen, Chrit

    2012-11-01

    The objective of this study is to evaluate the feasibility of integrating ultrasound echography as an additional imaging modality for continuous target tracking while simultaneously performing real-time MR-thermometry to guide a High Intensity Focused Ultrasound (HIFU) ablation. Experiments on a moving phantom were performed with MRI-guided HIFU during continuous ultrasound echography. Real-time US echography-based target tracking during MR-guided HIFU heating was performed with heated area dimensions similar to those obtained for a static target. The combination of both imaging modalities shows great potential for real-time beam steering and MR-thermometry.

  7. Multiple object tracking using the shortest path faster association algorithm.

    PubMed

    Xi, Zhenghao; Liu, Heping; Liu, Huaping; Yang, Bin

    2014-01-01

    To solve the persistent problem of multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
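
    The shortest path faster algorithm (SPFA) referred to here is the queue-based variant of Bellman-Ford, which tolerates the negative edge costs that appear in min-cost-flow tracking graphs. A minimal stand-alone sketch follows; the toy graph and function name are illustrative and not taken from the paper.

```python
from collections import deque, defaultdict

def spfa(n, edges, source):
    """Shortest Path Faster Algorithm: queue-based Bellman-Ford.

    n      -- number of nodes (0..n-1)
    edges  -- (u, v, cost) directed edges; costs may be negative,
              but the graph must contain no negative cycle
    source -- start node
    Returns the list of shortest-path costs from the source.
    """
    graph = defaultdict(list)
    for u, v, w in edges:
        graph[u].append((v, w))

    dist = [float("inf")] * n
    dist[source] = 0.0
    in_queue = [False] * n
    queue = deque([source])
    in_queue[source] = True
    while queue:
        u = queue.popleft()
        in_queue[u] = False
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:        # relax edge; re-enqueue improved nodes only
                dist[v] = dist[u] + w
                if not in_queue[v]:
                    queue.append(v)
                    in_queue[v] = True
    return dist

# Toy tracking graph: node 0 = source, 3 = sink; a negative cost rewards a likely match.
print(spfa(4, [(0, 1, 2.0), (0, 2, 5.0), (1, 2, -1.5), (2, 3, 1.0), (1, 3, 4.0)], 0))
```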

  8. Multiple Object Tracking Using the Shortest Path Faster Association Algorithm

    PubMed Central

    Liu, Heping; Liu, Huaping; Yang, Bin

    2014-01-01

    To solve the persistent problem of multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time. PMID:25215322

  9. Image Tracking for the High Similarity Drug Tablets Based on Light Intensity Reflective Energy and Artificial Neural Network

    PubMed Central

    Liang, Zhongwei; Zhou, Liang; Liu, Xiaochu; Wang, Xiaogang

    2014-01-01

    Tablet image tracking has a notable influence on the efficiency and reliability of high-speed drug mass production, and it has become a difficult problem and a focus of production monitoring in recent years because of the highly similar shapes and random position distribution of the objects to be searched for. For the purpose of tracking randomly distributed tablets accurately, a calibrated surface of light intensity reflective energy is established through a surface fitting approach and transitional vector determination, describing the shape topology and topography details of the target tablet. On this basis, the mathematical properties of these established surfaces are derived, and an artificial neural network (ANN) is then employed to classify the moving target tablets by recognizing their different surface properties; the instantaneous coordinate positions of the drug tablets in one image frame can thus be determined. By repeating the same pattern recognition on the next image frame, the real-time movements of the target tablet templates are successfully tracked in sequence. This paper provides reliable references and new research ideas for real-time object tracking in drug production practice. PMID:25143781

  10. Improved semi-supervised online boosting for object tracking

    NASA Astrophysics Data System (ADS)

    Li, Yicui; Qi, Lin; Tan, Shukun

    2016-10-01

    The advantage of an online semi-supervised boosting method, which treats object tracking as a classification problem, is that a binary classifier is trained from labeled and unlabeled examples, with appropriate object features selected according to real-time changes in the object. However, the online semi-supervised boosting method faces one key problem: traditional self-training uses the classification results to update the classifier itself, which often leads to drifting or tracking failure because of the error accumulated during each update of the tracker. To overcome the disadvantages of semi-supervised online boosting based object tracking methods, the contribution of this paper is an improved online semi-supervised boosting method in which the learning process is guided by positive (P) and negative (N) constraints, termed P-N constraints, which restrict the labeling of the unlabeled samples. First, we train the classifier by online semi-supervised boosting. Then, this classifier is used to process the next frame. Finally, the classification is analyzed by the P-N constraints, which verify whether the labels assigned to unlabeled data by the classifier are in line with the assumptions made about positive and negative samples. The proposed algorithm can effectively improve the discriminative ability of the classifier and significantly alleviate the drifting problem in tracking applications. In the experiments, we demonstrate real-time tracking with our tracker on several challenging test sequences, where it outperforms other related online tracking methods and achieves promising tracking performance.
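
    As described, the P-N constraints check the labels the online classifier assigns to unlabeled samples against structural assumptions: samples close to the validated trajectory should be positive (P-constraint), samples far from it should be negative (N-constraint), and violations are relabeled and fed back as training examples. The sketch below illustrates that relabeling step only; the distance-based constraint test and the names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def apply_pn_constraints(samples, labels, track_center, radius):
    """Relabel classifier outputs that violate the P-N constraints.

    samples      -- (N, 2) array of candidate patch centers
    labels       -- (N,) array of classifier labels (+1 / -1)
    track_center -- (2,) current validated object position
    radius       -- distance threshold separating 'near' from 'far' samples
    Returns corrected labels plus the indices that were flipped.
    """
    labels = labels.copy()
    dist = np.linalg.norm(samples - track_center, axis=1)

    # P-constraint: samples near the trajectory must be positive.
    p_violations = np.where((dist <= radius) & (labels == -1))[0]
    labels[p_violations] = 1

    # N-constraint: samples far from the trajectory must be negative.
    n_violations = np.where((dist > radius) & (labels == 1))[0]
    labels[n_violations] = -1

    return labels, np.concatenate([p_violations, n_violations])

samples = np.array([[10.0, 10.0], [12.0, 11.0], [40.0, 40.0]])
labels = np.array([-1, 1, 1])
corrected, flipped = apply_pn_constraints(samples, labels,
                                          track_center=np.array([11.0, 10.0]),
                                          radius=5.0)
print(corrected, flipped)
```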

  11. Remote Safety Monitoring for Elderly Persons Based on Omni-Vision Analysis

    PubMed Central

    Xiang, Yun; Tang, Yi-ping; Ma, Bao-qing; Yan, Hang-chen; Jiang, Jun; Tian, Xu-yuan

    2015-01-01

    Remote monitoring service for elderly persons is important as the aged populations in most developed countries continue growing. To monitor the safety and health of the elderly population, we propose a novel omni-directional vision sensor based system, which can detect and track object motion, recognize human posture, and analyze human behavior automatically. In this work, we have made the following contributions: (1) we develop a remote safety monitoring system which can provide real-time and automatic health care for elderly persons and (2) we design a novel motion history/energy image based algorithm for moving object tracking. Our system can accurately and efficiently collect, analyze, and transfer elderly activity information and provide health care in real time. Experimental results show that our technique can improve the data analysis efficiency by 58.5% for object tracking. Moreover, for the human posture recognition application, the success rate can reach 98.6% on average. PMID:25978761

  12. LivePhantom: Retrieving Virtual World Light Data to Real Environments.

    PubMed

    Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real-time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map of the physical scene, which is mixed into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown, and the findings are assessed drawing upon qualitative and quantitative methods, making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.

  13. LivePhantom: Retrieving Virtual World Light Data to Real Environments

    PubMed Central

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real-time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map of the physical scene, which is mixed into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown, and the findings are assessed drawing upon qualitative and quantitative methods, making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems. PMID:27930663

  14. Autonomous Space Object Catalogue Construction and Upkeep Using Sensor Control Theory

    NASA Astrophysics Data System (ADS)

    Moretti, N.; Rutten, M.; Bessell, T.; Morreale, B.

    The capability to track objects in space is critical to safeguard domestic and international space assets. Infrequent measurement opportunities, complex dynamics, and partial observability of the orbital state make the tracking of resident space objects nontrivial. It is not uncommon for human operators to intervene with space tracking systems, particularly in scheduling sensors. This paper details the development of a system that maintains a catalogue of geostationary objects by dynamically tasking sensors in real time to manage the uncertainty of object states. As the number of objects in space grows, the potential for collision grows exponentially. Being able to provide accurate assessment to operators regarding costly collision avoidance manoeuvres is paramount; the accuracy of this assessment is highly dependent on how object states are estimated. The system represents object state and uncertainty using particles and utilises a particle filter for state estimation. Particle filters capture the model and measurement uncertainty accurately, allowing for a more comprehensive representation of the state's probability density function. Additionally, the number of objects in space is growing disproportionately to the number of sensors used to track them. Maintaining precise positions for all objects places large loads on sensors, limiting the time available to search for new objects or track high priority objects. Rather than precisely tracking all objects, our system manages the uncertainty in orbital state for each object independently. The uncertainty is allowed to grow, and sensor data are only requested when the uncertainty must be reduced, for example when object uncertainties overlap, leading to data association issues, or when the uncertainty grows beyond a field of view. These control laws are formulated into a cost function, which is optimised in real time to task sensors. By controlling an optical telescope, the system has been able to construct and maintain a catalogue of approximately 100 geostationary objects.
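
    The catalogue system, as described, represents each object's state by a particle set and lets the particle spread stand in for the uncertainty that drives sensor tasking. The snippet below sketches the generic predict/weight/resample cycle on a toy one-dimensional state; the dynamics, noise levels, and function names are illustrative assumptions and do not reproduce the deployed system.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, dt, process_sigma):
    """Propagate particles through (toy) dynamics and add process noise."""
    return particles + dt * 0.1 + rng.normal(0.0, process_sigma, size=particles.shape)

def update(particles, weights, measurement, meas_sigma):
    """Re-weight particles by the measurement likelihood and resample."""
    likelihood = np.exp(-0.5 * ((particles - measurement) / meas_sigma) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()
    # Resampling keeps the particle count constant and concentrates on likely states.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.normal(0.0, 1.0, size=500)        # initial state cloud
weights = np.full(500, 1.0 / 500)
for measurement in [0.4, 0.9, 1.3]:               # simulated observations
    particles = predict(particles, dt=1.0, process_sigma=0.05)
    particles, weights = update(particles, weights, measurement, meas_sigma=0.3)
    # The particle spread is the uncertainty a tasking rule would monitor.
    print(particles.mean(), particles.std())
```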

  15. A novel vehicle tracking algorithm based on mean shift and active contour model in complex environment

    NASA Astrophysics Data System (ADS)

    Cai, Lei; Wang, Lin; Li, Bo; Zhang, Libao; Lv, Wen

    2017-06-01

    Vehicle tracking technology is currently one of the most active research topics in machine vision and is an important part of intelligent transportation systems. However, in both theory and technology, it still faces many challenges, including real-time operation and robustness. In video surveillance, the targets need to be detected in real time and their positions calculated accurately in order to judge their motion. The contents of video sequence images and the target motion are complex, so the objects cannot be expressed by a unified mathematical model. Object tracking is defined as locating the moving target of interest in each frame of a video. Current tracking technology can achieve reliable results in simple environments for targets with easily identified characteristics. However, in more complex environments, it is easy to lose the target because of the mismatch between the target appearance and its dynamic model. Moreover, the target usually has a complex shape, but traditional target tracking algorithms usually represent the tracking result by a simple geometric shape such as a rectangle or circle, so they cannot provide accurate information for subsequent higher-level applications. This paper combines a traditional object tracking technique, the mean shift algorithm, with an image segmentation algorithm, the active contour model, to obtain the outlines of objects during the tracking process and automatically handle topology changes. Meanwhile, the outline information is used to aid and improve the tracking algorithm.
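
    Mean shift localizes the target by repeatedly moving a search window to the weighted centroid of a likelihood (back-projection) map derived from the target's color histogram. A minimal NumPy sketch of that iteration follows; the synthetic weight map and window size are placeholders, and the paper's combination with the active contour model is not reproduced.

```python
import numpy as np

def mean_shift(weight_map, window, max_iter=20, eps=0.5):
    """Shift a (x, y, w, h) window to the mode of a per-pixel weight map."""
    x, y, w, h = window
    for _ in range(max_iter):
        patch = weight_map[y:y + h, x:x + w]
        total = patch.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = (xs * patch).sum() / total          # weighted centroid inside the window
        cy = (ys * patch).sum() / total
        dx, dy = cx - (w - 1) / 2.0, cy - (h - 1) / 2.0
        x = int(np.clip(round(x + dx), 0, weight_map.shape[1] - w))
        y = int(np.clip(round(y + dy), 0, weight_map.shape[0] - h))
        if abs(dx) < eps and abs(dy) < eps:      # converged on the local mode
            break
    return x, y, w, h

# Toy likelihood map with a bright blob that the window converges onto.
weights = np.zeros((120, 160))
weights[60:80, 100:120] = 1.0
print(mean_shift(weights, window=(85, 45, 40, 40)))
```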

  16. Real-time classification of vehicles by type within infrared imagery

    NASA Astrophysics Data System (ADS)

    Kundegorski, Mikolaj E.; Akçay, Samet; Payen de La Garanderie, Grégoire; Breckon, Toby P.

    2016-10-01

    Real-time classification of vehicles into sub-category types poses a significant challenge within infra-red imagery due to the high levels of intra-class variation in thermal vehicle signatures caused by aspects of design, current operating duration and ambient thermal conditions. Despite these challenges, infra-red sensing offers significant generalized target object detection advantages in terms of all-weather operation and invariance to visual camouflage techniques. This work investigates the accuracy of a number of real-time object classification approaches for this task within the wider context of an existing initial object detection and tracking framework. Specifically we evaluate the use of traditional feature-driven bag of visual words and histogram of oriented gradient classification approaches against modern convolutional neural network architectures. Furthermore, we use classical photogrammetry, within the context of current target detection and classification techniques, as a means of approximating 3D target position within the scene based on this vehicle type classification. Based on photogrammetric estimation of target position, we then illustrate the use of regular Kalman filter based tracking operating on actual 3D vehicle trajectories. Results are presented using a conventional thermal-band infra-red (IR) sensor arrangement where targets are tracked over a range of evaluation scenarios.

  17. Evaluation of an image-based tracking workflow using a passive marker and resonant micro-coil fiducials for automatic image plane alignment in interventional MRI.

    PubMed

    Neumann, M; Breton, E; Cuvillon, L; Pan, L; Lorenz, C H; de Mathelin, M

    2012-01-01

    In this paper, an original workflow is presented for MR image plane alignment based on tracking in real-time MR images. A test device consisting of two resonant micro-coils and a passive marker is proposed for detection using image-based algorithms. Micro-coils allow for automated initialization of the object detection in dedicated low flip angle projection images; then the passive marker is tracked in clinical real-time MR images, with alternation between two oblique orthogonal image planes along the test device axis; in case the passive marker is lost in real-time images, the workflow is reinitialized. The proposed workflow was designed to minimize dedicated acquisition time to a single dedicated acquisition in the ideal case (no reinitialization required). First experiments have shown promising results for test-device tracking precision, with a mean position error of 0.79 mm and a mean orientation error of 0.24°.

  18. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation, with two examples for medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computation of the integer-pixel search. Experiments were carried out, and the results indicated that the new method improved the computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system was aimed at orthognathic surgery navigation in order to track the maxilla segment after a LeFort I osteotomy. Experiments showed that the noise for a static point was at the level of 10⁻³ mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand for finger stretching exercises, which indicated a great potential for tracking muscle and skin movements.

  19. Multithreaded hybrid feature tracking for markerless augmented reality.

    PubMed

    Lee, Taehee; Höllerer, Tobias

    2009-01-01

    We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
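
    The architectural point of this abstract is that capture, optical-flow tracking, invariant-feature detection, and rendering run as synchronized threads rather than one sequential loop. The sketch below shows one way to express such a producer/consumer pipeline with bounded queues; the stage functions are stubs, and the structure is an illustration of the idea rather than the authors' system.

```python
import queue
import threading

frames = queue.Queue(maxsize=2)     # capture -> tracking
results = queue.Queue(maxsize=2)    # tracking -> rendering
STOP = object()                     # sentinel used to shut the pipeline down

def capture(n_frames=10):
    for i in range(n_frames):
        frames.put({"id": i, "pixels": None})   # stand-in for a camera frame
    frames.put(STOP)

def track():
    while True:
        frame = frames.get()
        if frame is STOP:
            results.put(STOP)
            break
        # Stand-in for optical-flow tracking / feature detection on the frame.
        results.put({"id": frame["id"], "pose": (0.0, 0.0, 0.0)})

def render():
    while True:
        item = results.get()
        if item is STOP:
            break
        print("render frame", item["id"], "with pose", item["pose"])

threads = [threading.Thread(target=f) for f in (capture, track, render)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```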

  20. Real-Time Utilization of STSS for Improved Collision Risk Management

    NASA Astrophysics Data System (ADS)

    Duncan, M.; Fero, R.; Smith, T.; Southworth, J.; Wysack, J.

    2012-09-01

    Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. The increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. Key areas of SSA include improved tracking of smaller objects more frequently, determining the intent of non-cooperative maneuvering spacecraft, identifying all potential high risk conjunction events, and leveraging non-traditional sensors in support of the SSA mission. As the size of the space object catalog grows, the demand for more tracking capacity increases. One solution is to exploit existing sensors that are primarily dedicated to other mission areas. This paper presents details regarding the utilization of the Missile Defense Agency's (MDA) space-based asset Space Tracking Surveillance System (STSS) for operational SSA. Shown are the steps and analysis items that were performed to prepare STSS for real-time utilization during high interest conjunction events. Included in our work are: (1) STSS debris tracking capability; (2) orbit estimation/data fusion between STSS raw observations and JSpOC state data; (3) orbit geometry for MDA assets; and (4) development of the STSS tasking ConOps. Several operational examples are included.

  1. Self-motion impairs multiple-object tracking.

    PubMed

    Thomas, Laura E; Seiffert, Adriane E

    2010-10-01

    Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement impairs the ability to keep track of other moving objects. Participants attempted to track multiple targets while either moving around the tracking area or remaining in a fixed location. Participants' tracking performance was impaired when they moved to a new location during tracking, even when they were passively moved and when they did not see a shift in viewpoint. Self-motion impaired multiple-object tracking in both an immersive virtual environment and a real-world analog, but did not interfere with a difficult non-spatial tracking task. These results suggest that people use a common mechanism both to track changes in the locations of moving objects around them and to keep track of their own location. Copyright 2010 Elsevier B.V. All rights reserved.

  2. The research on the mean shift algorithm for target tracking

    NASA Astrophysics Data System (ADS)

    CAO, Honghong

    2017-06-01

    The traditional mean shift algorithm for target tracking is effective and runs in real time, but it still has some shortcomings. It easily falls into local optima during the tracking process, and its effectiveness is weak when the object moves fast. In addition, the size of the tracking window never changes, so the method fails when the size of the moving object changes; as a result, we propose a new method. We use a particle swarm optimization algorithm to optimize the mean shift algorithm for target tracking, while SIFT (scale-invariant feature transform) and an affine transformation make the size of the tracking window adaptive. Finally, we evaluate the method through comparative experiments. The experimental results indicate that the proposed method can effectively track the object while adapting the size of the tracking window.

  3. Nonlinear Motion Tracking by Deep Learning Architecture

    NASA Astrophysics Data System (ADS)

    Verma, Arnav; Samaiya, Devesh; Gupta, Karunesh K.

    2018-03-01

    In the world of Artificial Intelligence, object motion tracking is one of the major problems, and extensive research is being carried out to track people in crowds. This paper presents a unique technique for nonlinear motion tracking in the absence of prior knowledge of the nature of the nonlinear path that the tracked object may follow. We achieve this by first obtaining the centroid of the object and then using the centroid as the current example for a recurrent neural network trained using real-time recurrent learning. We have tweaked the standard algorithm slightly and have accumulated the gradient over a few previous iterations instead of using just the current iteration, as is the norm. We show that, for a single object, such a recurrent neural network is highly capable of approximating the nonlinearity of its path.

  4. Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions

    NASA Astrophysics Data System (ADS)

    Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.

    2005-03-01

    The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity of focusing its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their positions are indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected on the image plane reference system is translated into coordinates referred to the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The work's novelty and strength reside in the cooperative multi-sensor approach, in the high resolution long distance tracking, and in the automatic collection of biometric data, such as a person's face clip, for recognition purposes.

  5. Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature

    PubMed Central

    Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat

    2014-01-01

    It is a challenge to represent the target appearance model for moving object tracking in a complex environment. This study presents a novel method with an appearance model described by double templates based on the timed motion history image with HSV color histogram feature (tMHI-HSV). The main components include offline and online template initialization, calculation of tMHI-HSV-based candidate patch feature histograms, double templates matching (DTM) for object location, and template updating. First, we initialize the target object region and calculate its HSV color histogram feature as the offline template and the online template. Second, the tMHI-HSV is used to segment the motion region and calculate the candidate object patches' color histograms to represent their appearance models. Finally, we utilize the DTM method to track the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle scale variation and pose change of rigid and nonrigid objects, even under illumination change and occlusion. PMID:24592185
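
    Matching candidate patches against the offline and online templates amounts to comparing color histograms; a common similarity measure is the Bhattacharyya coefficient, which the sketch below uses on hue histograms. The binning and the "best of two templates" rule are assumptions for illustration, not the paper's exact DTM formulation.

```python
import numpy as np

def hue_histogram(hue_values, bins=16):
    """Normalized hue histogram of a candidate patch (hue in [0, 180))."""
    hist, _ = np.histogram(hue_values, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def bhattacharyya(h1, h2):
    """Similarity in [0, 1]; 1 means identical normalized histograms."""
    return float(np.sum(np.sqrt(h1 * h2)))

def match_against_templates(candidate_hues, offline_tpl, online_tpl):
    """Score a candidate patch against both templates and keep the best match."""
    cand = hue_histogram(candidate_hues)
    return max(bhattacharyya(cand, offline_tpl), bhattacharyya(cand, online_tpl))

rng = np.random.default_rng(2)
offline = hue_histogram(rng.uniform(20, 40, 500))   # template built at initialization
online = hue_histogram(rng.uniform(25, 45, 500))    # template updated during tracking
patch = rng.uniform(22, 42, 500)                    # candidate patch hue values
print("best template similarity:", match_against_templates(patch, offline, online))
```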

  6. Person detection and tracking with a 360° lidar system

    NASA Astrophysics Data System (ADS)

    Hammer, Marcus; Hebel, Marcus; Arens, Michael

    2017-10-01

    Today it is easily possible to generate dense point clouds of the sensor environment using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects such as persons or vehicles are of particular interest, for instance in automatic collision avoidance, for mobile sensor platforms, or in surveillance tasks. In the literature there are several approaches for automated person detection in point clouds. While most techniques show acceptable results in object detection, the computation time is often crucial. The runtime can be problematic, especially due to the amount of data in the panoramic 360° point clouds. On the other hand, for most applications object detection and classification in real time are needed. The paper presents a proposal for a fast, real-time capable algorithm for person detection, classification, and tracking in panoramic point clouds.

  7. A digital video tracking system

    NASA Astrophysics Data System (ADS)

    Giles, M. K.

    1980-01-01

    The Real-Time Videotheodolite (RTV) was developed in connection with the requirement to replace film as a recording medium to obtain the real-time location of an object in the field-of-view (FOV) of a long focal length theodolite. Design philosophy called for a system capable of discriminatory judgment in identifying the object to be tracked with 60 independent observations per second, capable of locating the center of mass of the object projection on the image plane within about 2% of the FOV in rapidly changing background/foreground situations, and able to generate a predicted observation angle for the next observation. A description is given of a number of subsystems of the RTV, taking into account the processor configuration, the video processor, the projection processor, the tracker processor, the control processor, and the optics interface and imaging subsystem.

  8. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    PubMed

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.

  9. Real-time edge tracking using a tactile sensor

    NASA Technical Reports Server (NTRS)

    Berger, Alan D.; Volpe, Richard; Khosla, Pradeep K.

    1989-01-01

    Object recognition through the use of input from multiple sensors is an important aspect of an autonomous manipulation system. In tactile object recognition, it is necessary to determine the location and orientation of object edges and surfaces. A controller is proposed that utilizes a tactile sensor in the feedback loop of a manipulator to track along edges. In the control system, the data from the tactile sensor is first processed to find edges. The parameters of these edges are then used to generate a control signal to a hybrid controller. Theory is presented for tactile edge detection and an edge tracking controller. In addition, experimental verification of the edge tracking controller is presented.

  10. Automated multiple target detection and tracking in UAV videos

    NASA Astrophysics Data System (ADS)

    Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie

    2010-04-01

    In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
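
    Per-target tracking of this kind typically assigns each confirmed target a Kalman filter with a constant-velocity model, predicting its position every frame and correcting it with the associated observation. The following is a minimal constant-velocity sketch; the noise covariances are placeholder values rather than the paper's tuning.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],         # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],          # only position is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                 # process noise (placeholder)
R = np.eye(2) * 4.0                  # measurement noise (placeholder)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                    # innovation between observation and prediction
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

x = np.array([100.0, 50.0, 0.0, 0.0])    # state: [px, py, vx, vy]
P = np.eye(4) * 10.0
for z in [np.array([102.0, 51.0]), np.array([104.5, 52.2])]:
    x, P = predict(x, P)
    x, P = update(x, P, z)
print("estimated position:", x[:2], "velocity:", x[2:])
```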

  11. Deterministic object tracking using Gaussian ringlet and directional edge features

    NASA Astrophysics Data System (ADS)

    Krieger, Evan W.; Sidike, Paheding; Aspiras, Theus; Asari, Vijayan K.

    2017-10-01

    Challenges currently existing for intensity-based histogram feature tracking methods in wide area motion imagery (WAMI) data include object structural information distortions, background variations, and object scale change. These issues are caused by different pavement or ground types and by changes in sensor or altitude. All of these challenges need to be overcome in order to have a robust object tracker while attaining a computation time appropriate for real-time processing. To achieve this, we present a novel method, the Directional Ringlet Intensity Feature Transform (DRIFT), which employs Kirsch kernel filtering for edge features and a ringlet feature mapping for rotational invariance. The method also includes an automatic scale change component to obtain accurate object boundaries and improvements for lowering computation times. We evaluated the DRIFT algorithm on two challenging WAMI datasets, namely Columbus Large Image Format (CLIF) and Large Area Image Recorder (LAIR), to evaluate its robustness and efficiency. Additional evaluations on general tracking video sequences are performed using the Visual Tracker Benchmark and Visual Object Tracking 2014 databases to demonstrate the algorithm's ability to cope with additional challenges in long complex sequences, including scale change. Experimental results show that the proposed approach yields competitive results compared to state-of-the-art object tracking methods on the testing datasets.

  12. Improvement in the workflow efficiency of treating non-emergency outpatients by using a WLAN-based real-time location system in a level I trauma center

    PubMed Central

    Stübig, Timo; Suero, Eduardo; Zeckey, Christian; Min, William; Janzen, Laura; Citak, Musa; Krettek, Christian; Hüfner, Tobias; Gaulke, Ralph

    2013-01-01

    Background: Patient localization can improve workflow in outpatient settings, which might lead to lower costs. The existing wireless local area network (WLAN) architecture in many hospitals opens up the possibility of adopting real-time patient tracking systems for capturing and processing position data; once captured, these data can be linked with clinical patient data. Objective: To analyze the effect of a WLAN-based real-time patient localization system for tracking outpatients in our level I trauma center. Methods: Outpatients from April to August 2009 were included in the study, which was performed in two different stages. In phase I, patient tracking was performed with the real-time location system, but the acquired data were not displayed to the personnel. In phase II, the acquired tracking data were automatically collected and displayed. Total treatment time was the primary outcome parameter. Statistical analysis was performed using multiple linear regression, with the significance level set at 0.05. Covariates included sex, age, type of encounter, prioritization, treatment team, number of residents, and radiographic imaging. Results/discussion: 1045 patients were included in our study (540 in phase I and 505 in phase II). An overall improvement in efficiency, as determined by a significantly decreased total treatment time (23.7%) from phase I to phase II, was noted. Additionally, significantly lower treatment times were noted for phase II patients even when other factors were considered (increased numbers of residents, the addition of imaging diagnostics, and comparison among various localization zones). Conclusions: WLAN-based real-time patient localization systems can reduce process inefficiencies associated with manual patient identification and tracking. PMID:23676246

  13. Vision-based overlay of a virtual object into real scene for designing room interior

    NASA Astrophysics Data System (ADS)

    Harasaki, Shunsuke; Saito, Hideo

    2001-10-01

    In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real world space. The interior simulator is developed as an example of an AR application of the proposed method. Using the interior simulator, users can visually simulate the placement of virtual furniture and articles in a living room so that they can easily design the room interior without placing real furniture and articles, by viewing from many different locations and orientations in real time. In our system, two base images of a real world space are captured from two different views to define a projective coordinate frame of the 3D object space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real world space is captured by a hand-held camera while tracking non-metric measured feature points for overlaying the virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room scene at nearly video rate (20 frames per second).

  14. Self-Motion Impairs Multiple-Object Tracking

    ERIC Educational Resources Information Center

    Thomas, Laura E.; Seiffert, Adriane E.

    2010-01-01

    Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement…

  15. Visual tracking using neuromorphic asynchronous event-based cameras.

    PubMed

    Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad

    2015-04-01

    This letter presents a novel, computationally efficient, and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometries, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusion that classical frame-based techniques handle poorly.

  16. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  17. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation.

    PubMed

    Kim, Young-Keun; Kim, Kyung-Soo

    2014-10-01

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  18. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Young-Keun, E-mail: ykkim@handong.edu; Kim, Kyung-Soo

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  19. Real-time object tracking based on scale-invariant features employing bio-inspired hardware.

    PubMed

    Yasukawa, Shinsuke; Okuno, Hirotsugu; Ishii, Kazuo; Yagi, Tetsuya

    2016-09-01

    We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focus on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering and a configurable filter size. The FPGA is used to pipeline process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation

    NASA Astrophysics Data System (ADS)

    Kim, Young-Keun; Kim, Kyung-Soo

    2014-10-01

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  1. Real time magnetic resonance guided endomyocardial local delivery

    PubMed Central

    Corti, R; Badimon, J; Mizsei, G; Macaluso, F; Lee, M; Licato, P; Viles-Gonzalez, J F; Fuster, V; Sherman, W

    2005-01-01

    Objective: To investigate the feasibility of targeting various areas of left ventricle myocardium under real time magnetic resonance (MR) imaging with a customised injection catheter equipped with a miniaturised coil. Design: A needle injection catheter with a mounted resonant solenoid circuit (coil) at its tip was designed and constructed. A 1.5 T MR scanner with customised real time sequence combined with in-room scan running capabilities was used. With this system, various myocardial areas within the left ventricle were targeted and injected with a gadolinium-diethylenetriaminepentaacetic acid (DTPA) and Indian ink mixture. Results: Real time sequencing at 10 frames/s allowed clear visualisation of the moving catheter and its transit through the aorta into the ventricle, as well as targeting of all ventricle wall segments without further image enhancement techniques. All injections were visualised by real time MR imaging and verified by gross pathology. Conclusion: The tracking device allowed real time in vivo visualisation of catheters in the aorta and left ventricle as well as precise targeting of myocardial areas. The use of this real time catheter tracking may enable precise and adequate delivery of agents for tissue regeneration. PMID:15710717

  2. Real-time particle tracking for studying intracellular trafficking of pharmaceutical nanocarriers.

    PubMed

    Huang, Feiran; Watson, Erin; Dempsey, Christopher; Suh, Junghae

    2013-01-01

    Real-time particle tracking is a technique that combines fluorescence microscopy with object tracking and computing and can be used to extract quantitative transport parameters for small particles inside cells. Since the success of a nanocarrier can often be determined by how effectively it delivers cargo to the target organelle, understanding the complex intracellular transport of pharmaceutical nanocarriers is critical. Real-time particle tracking provides insight into the dynamics of the intracellular behavior of nanoparticles, which may lead to significant improvements in the design and development of novel delivery systems. Unfortunately, this technique is not often fully understood, limiting its implementation by researchers in the field of nanomedicine. In this chapter, one of the most complicated aspects of particle tracking, the mean square displacement (MSD) calculation, is explained in a simple manner designed for the novice particle tracker. Pseudo code for performing the MSD calculation in MATLAB is also provided. This chapter contains clear and comprehensive instructions for a series of basic procedures in the technique of particle tracking. Instructions for performing confocal microscopy of nanoparticle samples are provided, and two methods of determining particle trajectories that do not require commercial particle-tracking software are provided. Trajectory analysis and determination of the tracking resolution are also explained. By providing comprehensive instructions needed to perform particle-tracking experiments, this chapter will enable researchers to gain new insight into the intracellular dynamics of nanocarriers, potentially leading to the development of more effective and intelligent therapeutic delivery vectors.
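
    Since the central quantity of the chapter is the mean square displacement, a compact NumPy version of the time-averaged MSD of a single trajectory may be helpful. It follows the standard definition MSD(tau) = <|r(t+tau) - r(t)|^2> and is offered as a generic sketch, not a reproduction of the chapter's MATLAB pseudo code.

```python
import numpy as np

def mean_square_displacement(track):
    """Time-averaged MSD of one trajectory.

    track -- (T, d) array of positions (d = 2 or 3) sampled at equal intervals
    Returns an array msd[tau - 1] for lag times tau = 1 .. T-1.
    """
    track = np.asarray(track, dtype=float)
    n = len(track)
    msd = np.empty(n - 1)
    for tau in range(1, n):
        disp = track[tau:] - track[:-tau]             # displacements at lag tau
        msd[tau - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return msd

# Example: the MSD of a simple 2D random walk grows roughly linearly with lag time.
rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=(1000, 2)), axis=0)
print(mean_square_displacement(walk)[:5])
```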

  3. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    PubMed Central

    Qin, Lei; Snoussi, Hichem; Abdallah, Fahed

    2014-01-01

    We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparing experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences. PMID:24865883
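
    The region covariance descriptor at the core of this tracker is the covariance matrix of per-pixel feature vectors computed over the object region. The sketch below computes it for a common fixed feature set [x, y, I, |Ix|, |Iy|]; the paper's adaptive feature-extraction step, which selects the features before the descriptor is computed, is not reproduced here.

```python
import numpy as np

def region_covariance(region):
    """Region covariance descriptor of an image patch.

    region -- (H, W) grayscale patch with values in [0, 1]
    Each pixel is mapped to a feature vector [x, y, I, |Ix|, |Iy|];
    the descriptor is the covariance matrix of these vectors over the patch.
    """
    h, w = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    iy, ix = np.gradient(region)                     # first-order image derivatives
    features = np.stack([xs.ravel(), ys.ravel(),
                         region.ravel(),
                         np.abs(ix).ravel(),
                         np.abs(iy).ravel()], axis=0)
    return np.cov(features)                          # (5, 5) covariance descriptor

rng = np.random.default_rng(3)
patch = rng.random((32, 32))
print(region_covariance(patch).shape)
```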

  4. New platform for evaluating ultrasound-guided interventional technologies

    NASA Astrophysics Data System (ADS)

    Kim, Younsu; Guo, Xiaoyu; Boctor, Emad M.

    2016-04-01

    Ultrasound-guided needle tracking systems are frequently used in surgical procedures. Various needle tracking technologies have been developed using ultrasound, electromagnetic sensors, and optical sensors. To evaluate these new needle tracking technologies, 3D volume information is often acquired to compute the actual distance from the needle tip to the target object. The image-guidance conditions for comparison are often inconsistent due to the ultrasound beam thickness. Since 3D volumes are necessary, there is often some time delay between the surgical procedure and the evaluation. These evaluation methods will generally only measure the final needle location because they interrupt the surgical procedure. The main contribution of this work is a new platform for evaluating needle tracking systems in real time, resolving the problems stated above. We developed new tools to evaluate the precise distance between the needle tip and the target object. A PZT-element transmitting unit is designed in the shape of a needle introducer so that it can be inserted in the needle. We have collected time-of-flight and amplitude information in real time. We propose two systems to collect ultrasound signals, and we demonstrate this platform on an ultrasound DAQ system and a cost-effective FPGA board. The results of a chicken breast experiment show the feasibility of tracking a time series of needle tip distances. We performed validation experiments with a plastisol phantom and have shown that the preliminary data fit a linear regression model with an RMSE of less than 0.6 mm. Our platform can be applied to more general needle tracking methods using other forms of guidance.

  5. Real-time inspection by submarine images

    NASA Astrophysics Data System (ADS)

    Tascini, Guido; Zingaretti, Primo; Conte, Giuseppe

    1996-10-01

    A real-time application of computer vision concerning the tracking and inspection of a submarine pipeline is described. The objective is to develop automatic procedures for supporting human operators in the real-time analysis of images acquired by cameras mounted on underwater remotely operated vehicles (ROVs). Implementation of such procedures gives rise to a human-machine system for underwater pipeline inspection that can automatically detect and signal the presence of the pipe, of its structural or accessory elements, and of dangerous or alien objects in its neighborhood. The possibility of modifying the image acquisition rate in the simulations performed on video-recorded images is used to prove that the system performs all the necessary processing with acceptable robustness, working in real time up to a speed of about 2.5 kn, well above the speeds that actual ROVs and their safety constraints allow.

  6. Analysis of real-time vibration data

    USGS Publications Warehouse

    Safak, E.

    2005-01-01

    In recent years, a few structures have been instrumented to provide continuous vibration data in real time, recording not only large-amplitude motions generated by extreme loads, but also small-amplitude motions generated by ambient loads. The main objective in continuous recording is to track any changes in structural characteristics, and to detect damage after an extreme event, such as an earthquake or explosion. The Fourier-based spectral analysis methods have been the primary tool to analyze vibration data from structures. In general, such methods do not work well for real-time data, because real-time data are mainly composed of ambient vibrations with very low amplitudes and signal-to-noise ratios. The long duration, linearity, and the stationarity of ambient data, however, allow us to utilize statistical signal processing tools, which can compensate for the adverse effects of low amplitudes and high noise. The analysis of real-time data requires tools and techniques that can be applied in real-time; i.e., data are processed and analyzed while being acquired. This paper presents some of the basic tools and techniques for processing and analyzing real-time vibration data. The topics discussed include utilization of running time windows, tracking mean and mean-square values, filtering, system identification, and damage detection.
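
    Tracking the mean and mean-square value of a continuous data stream is commonly done with running, exponentially weighted estimates that are updated sample by sample, so no buffer of past data is needed. The class below is a generic sketch under that assumption, not the paper's specific implementation.

```python
class RunningStats:
    """Exponentially weighted running mean and mean-square of a sample stream."""

    def __init__(self, alpha=0.01):
        self.alpha = alpha          # forgetting factor: smaller = longer memory
        self.mean = 0.0
        self.mean_sq = 0.0

    def update(self, x):
        self.mean += self.alpha * (x - self.mean)
        self.mean_sq += self.alpha * (x * x - self.mean_sq)
        return self.mean, self.mean_sq

stats = RunningStats(alpha=0.05)
for sample in [0.01, -0.02, 0.015, 0.5, 0.02]:   # a spike at the fourth sample
    mean, mean_sq = stats.update(sample)
# A jump in the mean-square value would flag a change in vibration level.
print(mean, mean_sq)
```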

  7. MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.

    PubMed

    Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram

    2015-11-01

    We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.

  8. Real-time tracking using stereo and motion: Visual perception for space robotics

    NASA Technical Reports Server (NTRS)

    Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann

    1994-01-01

    The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.

  9. Track Detection in Railway Sidings Based on MEMS Gyroscope Sensors

    PubMed Central

    Broquetas, Antoni; Comerón, Adolf; Gelonch, Antoni; Fuertes, Josep M.; Castro, J. Antonio; Felip, Damià; López, Miguel A.; Pulido, José A.

    2012-01-01

    The paper presents a two-step technique for real-time track detection in single-track railway sidings using low-cost MEMS gyroscopes. The objective is to reliably know the path the train has taken in a switch, diverted or main road, immediately after the train head leaves the switch. The signal delivered by the gyroscope is first processed by an adaptive low-pass filter that rejects noise and converts the temporal turn rate data in degree/second units into spatial turn rate data in degree/meter. The conversion is based on the travelled distance taken from odometer data. The filter is implemented to achieve a speed-dependent cut-off frequency to maximize the signal-to-noise ratio. Although direct comparison of the filtered turn rate signal with a predetermined threshold is possible, the paper shows that better detection performance can be achieved by processing the turn rate signal with a filter matched to the rail switch curvature parameters. Implementation aspects of the track detector have been optimized for real-time operation. The detector has been tested with both simulated data and real data acquired in railway campaigns. PMID:23443376
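
    To make the two processing steps above concrete, here is a minimal sketch, under stated assumptions, of converting the gyroscope turn rate from degrees/second to degrees/meter using odometer speed, smoothing it, and comparing the result against a curvature threshold. The moving-average filter stands in for the paper's speed-dependent adaptive filter, and all numerical values are illustrative.

```python
# Minimal sketch (not the authors' implementation) of turn-rate
# normalisation and threshold-based route detection at a switch.
import numpy as np

def spatial_turn_rate(turn_rate_deg_s, speed_m_s, min_speed=0.5):
    """Convert deg/s to deg/m; clamp speed to avoid division by near-zero values."""
    speed = np.maximum(np.asarray(speed_m_s, float), min_speed)
    return np.asarray(turn_rate_deg_s, float) / speed

def moving_average(x, window=25):
    """Simple low-pass stand-in for the speed-dependent adaptive filter."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def diverted_route_detected(turn_rate_deg_s, speed_m_s, threshold_deg_m=0.15):
    smoothed = moving_average(spatial_turn_rate(turn_rate_deg_s, speed_m_s))
    return bool(np.any(np.abs(smoothed) > threshold_deg_m))
```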

  10. Ground-based real-time tracking and traverse recovery of China's first lunar rover

    NASA Astrophysics Data System (ADS)

    Zhou, Huan; Li, Haitao; Xu, Dezhen; Dong, Guangliang

    2016-02-01

    The Chang'E-3 unmanned lunar exploration mission forms an important stage in China's Lunar Exploration Program. China's first lunar rover "Yutu" is a sub-probe of the Chang'E-3 mission. Its main science objectives cover the investigations of the lunar soil and crust structure, explorations of mineral resources, and analyses of matter compositions. Some of these tasks require accurate real-time and continuous position tracking of the rover. To achieve these goals with the scale-limited Chinese observation network, this study proposed a ground-based real-time very long baseline interferometry phase referencing tracking method. We choose the Chang'E-3 lander as the phase reference source, and the accurate location of the rover is updated every 10 s using its radio-image sequences with the help of a priori information. The detailed movements of the Yutu rover have been captured with a sensitivity of several centimeters, and its traverse across the lunar surface during the first few days after its separation from the Chang'E-3 lander has been recovered. Comparisons and analysis show that the position tracking accuracy reaches a 1-m level.

  11. Real-Time 3D Tracking and Reconstruction on Mobile Phones.

    PubMed

    Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D

    2015-05-01

    We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Due to the region based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use a level set based pose estimation but completely avoid the, typically required, explicit computation of a global distance. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor which helps us resolve the tracking ambiguities inherent to region based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variational regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm and we show that, similar to tracking, the integration of per voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.

  12. The Schisto Track: A System for Gathering and Monitoring Epidemiological Surveys by Connecting Geographical Information Systems in Real Time

    PubMed Central

    2014-01-01

    Background Using the Android platform as a notification instrument for diseases and disorders forms a new alternative for computerization of epidemiological studies. Objective The objective of our study was to construct a tool for gathering epidemiological data on schistosomiasis using the Android platform. Methods The developed application (app), named the Schisto Track, is a tool for data capture and analysis that was designed to meet the needs of a traditional epidemiological survey. An initial version of the app was finished and tested in both real situations and simulations for epidemiological surveys. Results The app proved to be a tool capable of automation of activities, with data organization and standardization, easy data recovery (to enable interfacing with other systems), and totally modular architecture. Conclusions The proposed Schisto Track is in line with worldwide trends toward use of smartphones with the Android platform for modeling epidemiological scenarios. PMID:25099881

  13. Design of a 4-DOF MR haptic master for application to robot surgery: virtual environment work

    NASA Astrophysics Data System (ADS)

    Oh, Jong-Seok; Choi, Seung-Hyun; Choi, Seung-Bok

    2014-09-01

    This paper presents the design and control performance of a novel type of 4-degrees-of-freedom (4-DOF) haptic master in cyberspace for a robot-assisted minimally invasive surgery (RMIS) application. By using a controllable magnetorheological (MR) fluid, the proposed haptic master can have a feedback function for a surgical robot. Due to the difficulty in utilizing real human organs in the experiment, the cyberspace that features the virtual object is constructed to evaluate the performance of the haptic master. In order to realize the cyberspace, a volumetric deformable object is represented by a shape-retaining chain-linked (S-chain) model, which is a fast volumetric model and is suitable for real-time applications. In the haptic architecture for an RMIS application, the desired torque and position induced from the virtual object of the cyberspace and the haptic master of real space are transferred to each other. In order to validate the superiority of the proposed master and volumetric model, a tracking control experiment is implemented with a nonhomogenous volumetric cubic object to demonstrate that the proposed model can be utilized in real-time haptic rendering architecture. A proportional-integral-derivative (PID) controller is then designed and empirically implemented to accomplish the desired torque trajectories. It has been verified from the experiment that tracking the control performance for torque trajectories from a virtual slave can be successfully achieved.
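
    The abstract above mentions a PID controller used to follow desired torque trajectories. As a minimal sketch of that control idea only (the gains, time step, and usage values are illustrative assumptions, not the authors' tuned controller):

```python
# Discrete PID loop tracking a desired torque trajectory.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, desired, measured):
        """Return the control command for one cycle."""
        error = desired - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage: call once per control cycle with the reference and measured torque.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.001)
command = pid.step(desired=0.8, measured=0.6)
```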

  14. Real-time detecting and tracking ball with OpenCV and Kinect

    NASA Astrophysics Data System (ADS)

    Osiecki, Tomasz; Jankowski, Stanislaw

    2016-09-01

    This paper presents a way to detect and track a ball using OpenCV and the Kinect. Object and people recognition and tracking are increasingly popular topics nowadays. The described solution makes it possible to detect the ball based on a range set by the user and to capture information about the ball position in three dimensions. This information can be stored on the computer and used, for example, to display the trajectory of the ball.
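
    As an illustrative sketch only (the paper does not publish its code), one plausible OpenCV reading of the user-set range is an HSV color range: threshold the frame, find the largest contour, and report the ball's image position and radius. The HSV bounds and the helper name are assumptions; with the Kinect, the depth value at the detected pixel would supply the third dimension.

```python
import cv2
import numpy as np

def detect_ball(frame_bgr, lower_hsv=(29, 86, 6), upper_hsv=(64, 255, 255)):
    """Return ((x, y), radius) of the largest blob inside the HSV range, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    mask = cv2.erode(mask, None, iterations=2)    # remove speckle noise
    mask = cv2.dilate(mask, None, iterations=2)
    # OpenCV 4.x return signature (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return (int(x), int(y)), int(radius)
```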

  15. Moving object detection and tracking in videos through turbulent medium

    NASA Astrophysics Data System (ADS)

    Halder, Kalyan Kumar; Tahtali, Murat; Anavatti, Sreenatha G.

    2016-06-01

    This paper addresses the problem of identifying and tracking moving objects in a video sequence having a time-varying background. This is a fundamental task in many computer vision applications, though a very challenging one because of turbulence that causes blurring and spatiotemporal movements of the background images. Our proposed approach involves two major steps. First, a moving object detection algorithm that deals with the detection of real motions by separating the turbulence-induced motions using a two-level thresholding technique is used. In the second step, a feature-based generalized regression neural network is applied to track the detected objects throughout the frames in the video sequence. The proposed approach uses the centroid and area features of the moving objects and creates the reference regions instantly by selecting the objects within a circle. Simulation experiments are carried out on several turbulence-degraded video sequences and comparisons with an earlier method confirms that the proposed approach provides a more effective tracking of the targets.

  16. Optical joint correlator for real-time image tracking and retinal surgery

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor)

    1991-01-01

    A method for tracking an object in a sequence of images is described. Such sequence of images may, for example, be a sequence of television frames. The object in the current frame is correlated with the object in the previous frame to obtain the relative location of the object in the two frames. An optical joint transform correlator apparatus is provided to carry out the process. Such joint transform correlator apparatus forms the basis for laser eye surgical apparatus where an image of the fundus of an eyeball is stabilized and forms the basis for the correlator apparatus to track the position of the eyeball caused by involuntary movement. With knowledge of the eyeball position, a surgical laser can be precisely pointed toward a position on the retina.
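
    The optical joint transform correlator itself is hardware, but its core operation, locating the correlation peak between the object in consecutive frames, has a straightforward digital analogue. The sketch below, offered only as an illustration, estimates the inter-frame shift of an image patch from the peak of an FFT-based phase correlation.

```python
import numpy as np

def correlation_shift(prev_patch, curr_patch):
    """Estimate the (dy, dx) translation of the object from one patch to the next."""
    F1 = np.fft.fft2(prev_patch)
    F2 = np.fft.fft2(curr_patch)
    cross_power = F2 * np.conj(F1)
    cross_power /= np.abs(cross_power) + 1e-12   # phase correlation normalisation
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert wrapped peak indices into signed offsets.
    dy = peak[0] - corr.shape[0] if peak[0] > corr.shape[0] // 2 else peak[0]
    dx = peak[1] - corr.shape[1] if peak[1] > corr.shape[1] // 2 else peak[1]
    return dy, dx
```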

  17. Automated object detection and tracking with a flash LiDAR system

    NASA Astrophysics Data System (ADS)

    Hammer, Marcus; Hebel, Marcus; Arens, Michael

    2016-10-01

    The detection of objects, or persons, is a common task in the fields of environment surveillance, object observation or danger defense. There are several approaches for automated detection with conventional imaging sensors as well as with LiDAR sensors, but for the latter the real-time detection is hampered by the scanning character and therefore by the data distortion of most LiDAR systems. The paper presents a solution for real-time data acquisition of a flash LiDAR sensor with synchronous raw data analysis, point cloud calculation, object detection, calculation of the next best view and steering of the pan-tilt head of the sensor. As a result the attention is always focused on the object, independent of the behavior of the object. Even for highly volatile and rapid changes in the direction of motion the object is kept in the field of view. The experimental setup used in this paper is realized with an elementary person detection algorithm in medium distances (20 m to 60 m) to show the efficiency of the system for objects with a high angular speed. It is easy to replace the detection part by any other object detection algorithm and thus it is easy to track nearly any object, for example a car or a boat or a UAV at various distances.

  18. Real-time ultrasound-tagging to track the 2D motion of the common carotid artery wall in vivo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zahnd, Guillaume, E-mail: g.zahnd@erasmusmc.nl; Salles, Sébastien; Liebgott, Hervé

    2015-02-15

    Purpose: Tracking the motion of biological tissues represents an important issue in the field of medical ultrasound imaging. However, the longitudinal component of the motion (i.e., perpendicular to the beam axis) remains more challenging to extract due to the rather coarse resolution cell of ultrasound scanners along this direction. The aim of this study is to introduce a real-time beamforming strategy dedicated to acquiring tagged images featuring a distinct pattern, with the objective of easing the tracking. Methods: Under the conditions of the Fraunhofer approximation, a specific apodization function was applied to the received raw channel data, in real-time during image acquisition, in order to introduce a periodic oscillation pattern along the longitudinal direction of the radio frequency signal. Analytic signals were then extracted from the tagged images, and subpixel motion tracking of the intima–media complex was subsequently performed offline, by means of a previously introduced bidimensional analytic phase-based estimator. Results: The authors’ framework was applied in vivo on the common carotid artery from 20 young healthy volunteers and 6 elderly patients with high atherosclerosis risk. Cine-loops of tagged images were acquired during three cardiac cycles. Evaluated against reference trajectories manually generated by three experienced analysts, the mean absolute tracking error was 98 ± 84 μm and 55 ± 44 μm in the longitudinal and axial directions, respectively. These errors corresponded to 28% ± 23% and 13% ± 9% of the longitudinal and axial amplitude of the assessed motion, respectively. Conclusions: The proposed framework enables tagged ultrasound images of in vivo tissues to be acquired in real-time. Such an unconventional beamforming strategy contributes to improving tracking accuracy and could potentially benefit the interpretation and diagnosis of biomedical images.

  19. Lightning Jump Algorithm Development for the GOES-R Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    Schultz, E.; Schultz, C.; Chronis, T.; Stough, S.; Carey, L.; Calhoun, K.; Ortega, K.; Stano, G.; Cecil, D.; Bateman, M.; et al.

    2014-01-01

    Current work on the lightning jump algorithm to be used in GOES-R Geostationary Lightning Mapper (GLM)'s data stream is multifaceted due to the intricate interplay between the storm tracking, GLM proxy data, and the performance of the lightning jump itself. This work outlines the progress of the last year, where analysis and performance of the lightning jump algorithm with automated storm tracking and GLM proxy data were assessed using over 700 storms from North Alabama. The cases analyzed coincide with previous semi-objective work performed using total lightning mapping array (LMA) measurements in Schultz et al. (2011). Analysis shows that key components of the algorithm (flash rate and sigma thresholds) have the greatest influence on the performance of the algorithm when validating using severe storm reports. Automated objective analysis using the GLM proxy data has shown probability of detection (POD) values around 60% with false alarm rates (FAR) around 73% using similar methodology to Schultz et al. (2011). However, when applying verification methods similar to those employed by the National Weather Service, POD values increase slightly (69%) and FAR values decrease (63%). The relationship between storm tracking and lightning jump has also been tested in a real-time framework at NSSL. This system includes fully automated tracking by radar alone, real-time LMA and radar observations and the lightning jump. Results indicate that the POD is strong at 65%. However, the FAR is significantly higher than in Schultz et al. (2011) (50-80% depending on various tracking/lightning jump parameters) when using storm reports for verification. Given known issues with Storm Data, the performance of the real-time jump algorithm is also being tested with high density radar and surface observations from the NSSL Severe Hazards Analysis & Verification Experiment (SHAVE).
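
    For readers unfamiliar with the verification statistics quoted above, the sketch below shows how probability of detection (POD) and false alarm ratio (FAR) are computed from counts of hits, misses, and false alarms; the example numbers are illustrative and are not data from the study.

```python
def pod(hits, misses):
    """Probability of detection: fraction of observed events that were warned on."""
    return hits / float(hits + misses)

def far(hits, false_alarms):
    """False alarm ratio: fraction of warnings that did not verify."""
    return false_alarms / float(hits + false_alarms)

# Illustrative counts only:
print(pod(hits=65, misses=35))          # 0.65
print(far(hits=65, false_alarms=110))   # ~0.63
```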

  20. A robust approach towards unknown transformation, regional adjacency graphs, multigraph matching, segmentation video frames from unmanned aerial vehicles (UAV)

    NASA Astrophysics Data System (ADS)

    Gohatre, Umakant Bhaskar; Patil, Venkat P.

    2018-04-01

    In computer vision applications, multiple-object detection and tracking in real time is an important research field that has gained much attention in recent years for finding non-stationary entities in image sequences. Detection leads to following the moving object in video, and the representation of the object is the basis for tracking. Recognizing multiple objects reliably is a challenging task when they must be detected from a video sequence. Image registration has long been used as a basis for detecting moving objects: registration finds correspondences between consecutive frame pairs based on image appearance under rigid and affine transformations. However, image registration is not well suited to handling events that can result in potentially missed objects. To address such problems, this paper proposes a novel approach. Video frames are segmented using region adjacency graphs of visual appearance and geometric properties; matching between graph sequences is then performed by multigraph matching, and the matched regions are labeled by a proposed graph coloring algorithm that assigns a foreground label to each respective region. The proposed design is robust to unknown transformations, with significant improvement over existing work on real-time detection of multiple moving objects.

  1. Robust real-time horizon detection in full-motion video

    NASA Astrophysics Data System (ADS)

    Young, Grace B.; Bagnall, Bryan; Lane, Corey; Parameswaran, Shibin

    2014-06-01

    The ability to detect the horizon on a real-time basis in full-motion video is an important capability to aid and facilitate real-time processing of full-motion videos for the purposes such as object detection, recognition and other video/image segmentation applications. In this paper, we propose a method for real-time horizon detection that is designed to be used as a front-end processing unit for a real-time marine object detection system that carries out object detection and tracking on full-motion videos captured by ship/harbor-mounted cameras, Unmanned Aerial Vehicles (UAVs) or any other method of surveillance for Maritime Domain Awareness (MDA). Unlike existing horizon detection work, we cannot assume a priori the angle or nature (for e.g. straight line) of the horizon, due to the nature of the application domain and the data. Therefore, the proposed real-time algorithm is designed to identify the horizon at any angle and irrespective of objects appearing close to and/or occluding the horizon line (for e.g. trees, vehicles at a distance) by accounting for its non-linear nature. We use a simple two-stage hierarchical methodology, leveraging color-based features, to quickly isolate the region of the image containing the horizon and then perform a more fine-grained horizon detection operation. In this paper, we present our real-time horizon detection results using our algorithm on real-world full-motion video data from a variety of surveillance sensors like UAVs and ship mounted cameras confirming the real-time applicability of this method and its ability to detect horizon with no a priori assumptions.

  2. Detection and tracking of drones using advanced acoustic cameras

    NASA Astrophysics Data System (ADS)

    Busset, Joël; Perrodin, Florian; Wellig, Peter; Ott, Beat; Heutschi, Kurt; Rühl, Torben; Nussbaumer, Thomas

    2015-10-01

    Recent events of drones flying over city centers, official buildings and nuclear installations stressed the growing threat of uncontrolled drone proliferation and the lack of real countermeasure. Indeed, detecting and tracking them can be difficult with traditional techniques. A system to acoustically detect and track small moving objects, such as drones or ground robots, using acoustic cameras is presented. The described sensor, is completely passive, and composed of a 120-element microphone array and a video camera. The acoustic imaging algorithm determines in real-time the sound power level coming from all directions, using the phase of the sound signals. A tracking algorithm is then able to follow the sound sources. Additionally, a beamforming algorithm selectively extracts the sound coming from each tracked sound source. This extracted sound signal can be used to identify sound signatures and determine the type of object. The described techniques can detect and track any object that produces noise (engines, propellers, tires, etc). It is a good complementary approach to more traditional techniques such as (i) optical and infrared cameras, for which the object may only represent few pixels and may be hidden by the blooming of a bright background, and (ii) radar or other echo-localization techniques, suffering from the weakness of the echo signal coming back to the sensor. The distance of detection depends on the type (frequency range) and volume of the noise emitted by the object, and on the background noise of the environment. Detection range and resilience to background noise were tested in both, laboratory environments and outdoor conditions. It was determined that drones can be tracked up to 160 to 250 meters, depending on their type. Speech extraction was also experimentally investigated: the speech signal of a person being 80 to 100 meters away can be captured with acceptable speech intelligibility.
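
    The acoustic imaging step described above estimates, in real time, the sound power arriving from each candidate direction using the phases of the microphone signals. As a simplified, time-domain illustration of that idea (not the authors' algorithm), the sketch below performs delay-and-sum beamforming for one steering direction under a far-field, plane-wave assumption; the array geometry and sampling rate are placeholders.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air

def steered_power(signals, mic_positions, direction, fs):
    """Delay-and-sum power for one look direction.

    signals: (n_mics, n_samples) array, mic_positions: (n_mics, 3) in metres,
    direction: unit vector pointing toward the candidate source, fs: sample rate.
    """
    delays = mic_positions @ direction / SPEED_OF_SOUND     # per-microphone delay (s)
    shifts = np.round(delays * fs).astype(int)
    shifts -= shifts.min()                                   # make shifts non-negative
    n = signals.shape[1] - shifts.max()
    aligned = np.stack([sig[s:s + n] for sig, s in zip(signals, shifts)])
    beam = aligned.sum(axis=0)                               # coherent sum of channels
    return float(np.mean(beam ** 2))

# Scanning steered_power over a grid of directions yields an acoustic map;
# its maxima are the candidate sources that the tracker then follows.
```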

  3. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. How to provide suitable assistance for the human operator is therefore an important issue. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as prior work on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues concerning gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  4. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking

    PubMed Central

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-01-01

    This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483

  5. A coarse-to-fine kernel matching approach for mean-shift based visual tracking

    NASA Astrophysics Data System (ADS)

    Liangfu, L.; Zuren, F.; Weidong, C.; Ming, J.

    2009-03-01

    Mean shift is an efficient pattern-matching algorithm. It is widely used in visual tracking because it does not need to perform an exhaustive search of the image space. It employs a gradient optimization method to reduce the time of feature matching and achieve rapid object localization, and uses the Bhattacharyya coefficient as the similarity measure between the object template and the candidate template. This paper presents a mean shift algorithm based on a coarse-to-fine search for the best kernel matching, addressing object tracking with large inter-frame motion. If the object regions in two consecutive frames are far apart and do not overlap in image space, the traditional mean shift method can only reach a local optimum by iterating within the old object window, so the true object position cannot be recovered and tracking fails. The proposed algorithm first uses a similarity measure function to obtain a rough location of the moving object, and then applies mean shift iterations to refine it to an accurate local optimum, which successfully realizes object tracking under large motion. Experimental results show good performance in accuracy and speed when compared with the background-weighted histogram algorithm in the literature.
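
    As a small illustration of the similarity measure named above, the sketch below computes the Bhattacharyya coefficient between two normalised histograms and the per-pixel weights sqrt(model/candidate) that drive a mean-shift update; the function names and binning are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms (higher = more similar)."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

def meanshift_weights(bin_indices, candidate_hist, model_hist):
    """Per-pixel weights sqrt(model_u / candidate_u), where u is each pixel's bin."""
    c = np.maximum(np.asarray(candidate_hist, float), 1e-12)
    m = np.asarray(model_hist, float)
    return np.sqrt(m / c)[np.asarray(bin_indices)]

# The new window centre is the weight-averaged position of the pixels,
# iterated until the shift falls below a small threshold.
```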

  6. Particle tracking and extended object imaging by interferometric super resolution microscopy

    NASA Astrophysics Data System (ADS)

    Gdor, Itay; Yoo, Seunghwan; Wang, Xiaolei; Daddysman, Matthew; Wilton, Rosemarie; Ferrier, Nicola; Hereld, Mark; Cossairt, Oliver (Ollie); Katsaggelos, Aggelos; Scherer, Norbert F.

    2018-02-01

    An interferometric fluorescent microscope and a novel theoretic image reconstruction approach were developed and used to obtain super-resolution images of live biological samples and to enable dynamic real time tracking. The tracking utilizes the information stored in the interference pattern of both the illuminating incoherent light and the emitted light. By periodically shifting the interferometer phase and a phase retrieval algorithm we obtain information that allow localization with sub-2 nm axial resolution at 5 Hz.

  7. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into the current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Beside the system design, a first pilot study is presented, where we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy to use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  8. Detection of dominant flow and abnormal events in surveillance video

    NASA Astrophysics Data System (ADS)

    Kwak, Sooyeong; Byun, Hyeran

    2011-02-01

    We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method, a kind of feature-based approach so that it does not detect the moving object individually. The proposed algorithm identifies dominant flow without individual object tracking using a latent Dirichlet allocation model in crowded environments. It can also automatically detect and localize an abnormally moving object in real-life video. The performance tests are taken with several real-life databases, and their results show that the proposed algorithm can efficiently detect abnormally moving objects in real time. The proposed algorithm can be applied to any situation in which abnormal directions or abnormal speeds are detected regardless of direction.

  9. Real-Time Visual Tracking through Fusion Features

    PubMed Central

    Ruan, Yang; Wei, Zhenzhong

    2016-01-01

    Due to their high-speed, correlation filters for object tracking have begun to receive increasing attention. Traditional object trackers based on correlation filters typically use a single type of feature. In this paper, we attempt to integrate multiple feature types to improve the performance, and we propose a new DD-HOG fusion feature that consists of discriminative descriptors (DDs) and histograms of oriented gradients (HOG). However, fusion features as multi-vector descriptors cannot be directly used in prior correlation filters. To overcome this difficulty, we propose a multi-vector correlation filter (MVCF) that can directly convolve with a multi-vector descriptor to obtain a single-channel response that indicates the location of an object. Experiments on the CVPR2013 tracking benchmark with the evaluation of state-of-the-art trackers show the effectiveness and speed of the proposed method. Moreover, we show that our MVCF tracker, which uses the DD-HOG descriptor, outperforms the structure-preserving object tracker (SPOT) in multi-object tracking because of its high-speed and ability to address heavy occlusion. PMID:27347951
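
    To illustrate the correlation-filter machinery referred to above (the MVCF itself is more involved), the sketch below applies a learned single-channel filter in the Fourier domain and reads off the peak of the response map as the new object location; the filter is assumed to be given, and the function names are placeholders.

```python
import numpy as np

def filter_response(feature_patch, filter_freq):
    """Correlate a 2-D feature patch with a filter stored in the frequency domain."""
    F = np.fft.fft2(feature_patch)
    response = np.fft.ifft2(F * np.conj(filter_freq)).real
    return response

def locate(response):
    """Row/column of the response peak, i.e. the estimated target position."""
    return np.unravel_index(int(np.argmax(response)), response.shape)
```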

  10. A Non-invasive Real-time Localization System for Enhanced Efficacy in Nasogastric Intubation.

    PubMed

    Sun, Zhenglong; Foong, Shaohui; Maréchal, Luc; Tan, U-Xuan; Teo, Tee Hui; Shabbir, Asim

    2015-12-01

    Nasogastric (NG) intubation is one of the most commonly performed clinical procedures. Real-time localization and tracking of the NG tube passage at the larynx region into the esophagus is crucial for safety, but is lacking in current practice. In this paper, we present the design, analysis and evaluation of a non-invasive real-time localization system using passive magnetic tracking techniques to improve efficacy of the clinical NG intubation process. By embedding a small permanent magnet at the insertion tip of the NG tube, a wearable system containing embedded sensors around the neck can determine the absolute position of the NG tube inside the body in real-time to assist in insertion. In order to validate the feasibility of the proposed system in detecting erroneous tube placement, typical reference intubation trajectories are first analyzed using anatomically correct models and localization accuracy of the system are evaluated using a precise robotic platform. It is found that the root-mean-squared tracking accuracy is within 5.3 mm for both the esophagus and trachea intubation pathways. Experiments were also designed and performed to demonstrate that the system is capable of tracking the NG tube accurately in biological environments even in presence of stationary ferromagnetic objects (such as clinical instruments). With minimal physical modification to the NG tube and clinical process, this system allows accurate and efficient localization and confirmation of correct NG tube placement without supplemental radiographic methods which is considered the current clinical standard.

  11. Using dual-energy x-ray imaging to enhance automated lung tumor tracking during real-time adaptive radiotherapy.

    PubMed

    Menten, Martin J; Fast, Martin F; Nill, Simeon; Oelfke, Uwe

    2015-12-01

    Real-time, markerless localization of lung tumors with kV imaging is often inhibited by ribs obscuring the tumor and poor soft-tissue contrast. This study investigates the use of dual-energy imaging, which can generate radiographs with reduced bone visibility, to enhance automated lung tumor tracking for real-time adaptive radiotherapy. kV images of an anthropomorphic breathing chest phantom were experimentally acquired and radiographs of actual lung cancer patients were Monte-Carlo-simulated at three imaging settings: low-energy (70 kVp, 1.5 mAs), high-energy (140 kVp, 2.5 mAs, 1 mm additional tin filtration), and clinical (120 kVp, 0.25 mAs). Regular dual-energy images were calculated by weighted logarithmic subtraction of high- and low-energy images and filter-free dual-energy images were generated from clinical and low-energy radiographs. The weighting factor to calculate the dual-energy images was determined by means of a novel objective score. The usefulness of dual-energy imaging for real-time tracking with an automated template matching algorithm was investigated. Regular dual-energy imaging was able to increase tracking accuracy in left-right images of the anthropomorphic phantom as well as in 7 out of 24 investigated patient cases. Tracking accuracy remained comparable in three cases and decreased in five cases. Filter-free dual-energy imaging was only able to increase accuracy in 2 out of 24 cases. In four cases no change in accuracy was observed and tracking accuracy worsened in nine cases. In 9 out of 24 cases, it was not possible to define a tracking template due to poor soft-tissue contrast regardless of input images. The mean localization errors using clinical, regular dual-energy, and filter-free dual-energy radiographs were 3.85, 3.32, and 5.24 mm, respectively. Tracking success was dependent on tumor position, tumor size, imaging beam angle, and patient size. This study has highlighted the influence of patient anatomy on the success rate of real-time markerless tumor tracking using dual-energy imaging. Additionally, the importance of the spectral separation of the imaging beams used to generate the dual-energy images has been shown.
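
    The core image operation described above, weighted logarithmic subtraction of the high- and low-energy radiographs, can be sketched as follows; the weighting factor w is the quantity the authors choose via their objective score, and here it is simply a user-supplied parameter.

```python
import numpy as np

def dual_energy(high_kv, low_kv, w, eps=1e-6):
    """Bone-suppressing dual-energy image: log(high) - w * log(low), rescaled to [0, 1]."""
    de = np.log(np.asarray(high_kv, float) + eps) - w * np.log(np.asarray(low_kv, float) + eps)
    de -= de.min()
    return de / max(float(de.max()), eps)
```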

  12. Using dual-energy x-ray imaging to enhance automated lung tumor tracking during real-time adaptive radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menten, Martin J., E-mail: martin.menten@icr.ac.uk; Fast, Martin F.; Nill, Simeon

    2015-12-15

    Purpose: Real-time, markerless localization of lung tumors with kV imaging is often inhibited by ribs obscuring the tumor and poor soft-tissue contrast. This study investigates the use of dual-energy imaging, which can generate radiographs with reduced bone visibility, to enhance automated lung tumor tracking for real-time adaptive radiotherapy. Methods: kV images of an anthropomorphic breathing chest phantom were experimentally acquired and radiographs of actual lung cancer patients were Monte-Carlo-simulated at three imaging settings: low-energy (70 kVp, 1.5 mAs), high-energy (140 kVp, 2.5 mAs, 1 mm additional tin filtration), and clinical (120 kVp, 0.25 mAs). Regular dual-energy images were calculated by weighted logarithmic subtraction of high- and low-energy images and filter-free dual-energy images were generated from clinical and low-energy radiographs. The weighting factor to calculate the dual-energy images was determined by means of a novel objective score. The usefulness of dual-energy imaging for real-time tracking with an automated template matching algorithm was investigated. Results: Regular dual-energy imaging was able to increase tracking accuracy in left–right images of the anthropomorphic phantom as well as in 7 out of 24 investigated patient cases. Tracking accuracy remained comparable in three cases and decreased in five cases. Filter-free dual-energy imaging was only able to increase accuracy in 2 out of 24 cases. In four cases no change in accuracy was observed and tracking accuracy worsened in nine cases. In 9 out of 24 cases, it was not possible to define a tracking template due to poor soft-tissue contrast regardless of input images. The mean localization errors using clinical, regular dual-energy, and filter-free dual-energy radiographs were 3.85, 3.32, and 5.24 mm, respectively. Tracking success was dependent on tumor position, tumor size, imaging beam angle, and patient size. Conclusions: This study has highlighted the influence of patient anatomy on the success rate of real-time markerless tumor tracking using dual-energy imaging. Additionally, the importance of the spectral separation of the imaging beams used to generate the dual-energy images has been shown.

  13. [A review of progress of real-time tumor tracking radiotherapy technology based on dynamic multi-leaf collimator].

    PubMed

    Liu, Fubo; Li, Guangjun; Shen, Jiuling; Li, Ligin; Bai, Sen

    2017-02-01

    When delivering radiation treatment to patients with tumors in the thorax and abdomen, further improvement of treatment accuracy is restricted by intra-fractional tumor motion due to respiration. Real-time tumor tracking radiotherapy is an optimal solution to intra-fractional motion. This review summarizes the progress of real-time dynamic multi-leaf collimator (DMLC) tracking, including the DMLC tracking method, the time lag of the DMLC tracking system, and dosimetric verification.

  14. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  15. Real-time Detection of Moving Objects from Moving Vehicles Using Dense Stereo and Optical Flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  16. Robust Arm and Hand Tracking by Unsupervised Context Learning

    PubMed Central

    Spruyt, Vincent; Ledda, Alessandro; Philips, Wilfried

    2014-01-01

    Hand tracking in video is an increasingly popular research field due to the rise of novel human-computer interaction methods. However, robust and real-time hand tracking in unconstrained environments remains a challenging task due to the high number of degrees of freedom and the non-rigid character of the human hand. In this paper, we propose an unsupervised method to automatically learn the context in which a hand is embedded. This context includes the arm and any other object that coherently moves along with the hand. We introduce two novel methods to incorporate this context information into a probabilistic tracking framework, and introduce a simple yet effective solution to estimate the position of the arm. Finally, we show that our method greatly increases robustness against occlusion and cluttered background, without degrading tracking performance if no contextual information is available. The proposed real-time algorithm is shown to outperform the current state-of-the-art by evaluating it on three publicly available video datasets. Furthermore, a novel dataset is created and made publicly available for the research community. PMID:25004155

  17. Robust feedback zoom tracking for digital video surveillance.

    PubMed

    Zou, Tengyue; Tang, Xiaoqi; Song, Bao; Wang, Jin; Chen, Jihong

    2012-01-01

    Zoom tracking is an important function in video surveillance, particularly in traffic management and security monitoring. It involves keeping an object of interest in focus during the zoom operation. Zoom tracking is typically achieved by moving the zoom and focus motors in lenses following the so-called "trace curve", which shows the in-focus motor positions versus the zoom motor positions for a specific object distance. The main task of a zoom tracking approach is to accurately estimate the trace curve for the specified object. Because a proportional integral derivative (PID) controller has historically been considered to be the best controller in the absence of knowledge of the underlying process and its high-quality performance in motor control, in this paper, we propose a novel feedback zoom tracking (FZT) approach based on the geometric trace curve estimation and PID feedback controller. The performance of this approach is compared with existing zoom tracking methods in digital video surveillance. The real-time implementation results obtained on an actual digital video platform indicate that the developed FZT approach not only solves the traditional one-to-many mapping problem without pre-training but also improves the robustness for tracking moving or switching objects which is the key challenge in video surveillance.

  18. Real time tracking by LOPF algorithm with mixture model

    NASA Astrophysics Data System (ADS)

    Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo

    2007-11-01

    A new particle filter, the Local Optimum Particle Filter (LOPF) algorithm, is presented for tracking objects accurately and steadily in visual sequences in real time, which is a challenging task in the computer vision field. In order to use the particles efficiently, we first use the Sobel algorithm to extract the profile of the object. Then, we employ a new Local Optimum algorithm to auto-initialize a certain number of particles from these edge points as particle centers. The main advantage of doing this instead of selecting particles randomly, as in the conventional particle filter, is that we can pay more attention to the more important optimum candidates and reduce unnecessary calculation on negligible ones; in addition, we can partly overcome the conventional degeneracy phenomenon and decrease the computational cost. Moreover, the threshold is a key factor that strongly affects the results, so we adopt an adaptive threshold selection method to get the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. Here, we use both the contour cue to select the particles and the color cue to describe the targets as a mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm when compared with that of the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.

  19. Position estimation and driving of an autonomous vehicle by monocular vision

    NASA Astrophysics Data System (ADS)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.

  20. Compressed normalized block difference for object tracking

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking. Compressive sensing has provided technical support for real-time feature extraction. However, all existing compressive trackers are based on the compressed Haar-like feature, and how to compress other, more powerful high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise more effectively than the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and Precision.
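
    As a rough, hedged sketch of the two ingredients named above, the code below forms a high-dimensional normalized block difference feature (the NPD formula (x - y)/(x + y) applied to block sums) and compresses it by projecting onto a sparse random Gaussian-style matrix; the block size, output dimension, and sparsity pattern are illustrative choices rather than the paper's exact construction.

```python
import numpy as np

def block_sums(image, block=4):
    """Sum pixel intensities over non-overlapping block x block cells."""
    h, w = image.shape
    h, w = h - h % block, w - w % block
    x = np.asarray(image[:h, :w], float)
    return x.reshape(h // block, block, w // block, block).sum(axis=(1, 3))

def normalized_block_difference(image, block=4, eps=1e-6):
    """All pairwise (x - y) / (x + y) values between block sums (high-dimensional)."""
    s = block_sums(image, block).ravel()
    diff = s[:, None] - s[None, :]
    norm = s[:, None] + s[None, :] + eps
    return (diff / norm)[np.triu_indices(len(s), k=1)]

def compress(feature, out_dim=50, density=0.1, seed=0):
    """Project the feature onto a sparse random Gaussian-style measurement matrix."""
    rng = np.random.default_rng(seed)
    support = rng.random((out_dim, feature.size)) < density
    R = rng.standard_normal((out_dim, feature.size)) * support
    return R @ feature
```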

  1. Nonstationary EO/IR Clutter Suppression and Dim Object Tracking

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Brown, A.; Brown, J.

    2010-09-01

    We develop and evaluate the performance of advanced algorithms which provide significantly improved capabilities for automated detection and tracking of ballistic and flying dim objects in the presence of highly structured intense clutter. Applications include ballistic missile early warning, midcourse tracking, trajectory prediction, and resident space object detection and tracking. The set of algorithms include, in particular, adaptive spatiotemporal clutter estimation-suppression and nonlinear filtering-based multiple-object track-before-detect. These algorithms are suitable for integration into geostationary, highly elliptical, or low earth orbit scanning or staring sensor suites, and are based on data-driven processing that adapts to real-world clutter backgrounds, including celestial, earth limb, or terrestrial clutter. In many scenarios of interest, e.g., for highly elliptic and, especially, low earth orbits, the resulting clutter is highly nonstationary, providing a significant challenge for clutter suppression to or below sensor noise levels, which is essential for dim object detection and tracking. We demonstrate the success of the developed algorithms using semi-synthetic and real data. In particular, our algorithms are shown to be capable of detecting and tracking point objects with signal-to-clutter levels down to 1/1000 and signal-to-noise levels down to 1/4.

  2. Real Time Target Tracking in a Phantom Using Ultrasonic Imaging

    NASA Astrophysics Data System (ADS)

    Xiao, X.; Corner, G.; Huang, Z.

    In this paper we present a real-time ultrasound image guidance method suitable for tracking the motion of tumors. A 2D ultrasound-based motion tracking system was evaluated. A robot was used to control the focused ultrasound and position it at a target segmented from real-time ultrasound video. Tracking accuracy and precision were investigated using a lesion-mimicking phantom. Experiments were conducted, and the results show the efficiency of the image guidance algorithm. This work could serve as the foundation for combining real-time ultrasound image tracking with MRI thermometry monitoring in non-invasive surgery.

  3. Visual tracking of da Vinci instruments for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Kuhn, E.; Bodenstedt, S.; Röhl, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2014-03-01

    Intraoperative tracking of laparoscopic instruments is a prerequisite to realize further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present an image-based markerless 3D tracking of different da Vinci instruments in near real-time without an explicit model. The method is based on different visual cues to segment the instrument tip, calculates a tip point and uses a multiple object particle filter for tracking. The accuracy and robustness is evaluated with in vivo data.

  4. An Incentive-based Online Optimization Framework for Distribution Grids

    DOE PAGES

    Zhou, Xinyang; Dall'Anese, Emiliano; Chen, Lijun; ...

    2017-10-09

    This article formulates a time-varying social-welfare maximization problem for distribution grids with distributed energy resources (DERs) and develops online distributed algorithms to identify (and track) its solutions. In the considered setting, network operator and DER-owners pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. The proposed algorithm affords an online implementation to enable tracking of the solutions in the presence of time-varying operational conditions and changing optimization objectives. It involves a strategy where the network operator collects voltage measurements throughout the feeder to build incentive signals for the DER-owners in real time; DERs then adjust the generated/consumed powers in order to avoid the violation of the voltage constraints while maximizing given objectives. Stability of the proposed schemes is analytically established and numerically corroborated.

  5. An Incentive-based Online Optimization Framework for Distribution Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Xinyang; Dall'Anese, Emiliano; Chen, Lijun

    This article formulates a time-varying social-welfare maximization problem for distribution grids with distributed energy resources (DERs) and develops online distributed algorithms to identify (and track) its solutions. In the considered setting, network operator and DER-owners pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. The proposed algorithm affords an online implementation to enable tracking of the solutions in the presence of time-varying operational conditions and changing optimization objectives. It involves a strategy where the network operator collects voltage measurements throughout the feeder to build incentive signals for the DER-owners in real time; DERs then adjust the generated/consumed powers in order to avoid the violation of the voltage constraints while maximizing given objectives. Stability of the proposed schemes is analytically established and numerically corroborated.

  6. RAPTOR-scan: Identifying and Tracking Objects Through Thousands of Sky Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidoff, Sherri; Wozniak, Przemyslaw

    2004-09-28

    The RAPTOR-scan system mines data for optical transients associated with gamma-ray bursts and is used to create a catalog for the RAPTOR telescope system. RAPTOR-scan can detect and track individual astronomical objects across data sets containing millions of observed points. Accurately identifying a real object over many optical images (clustering the individual appearances) is necessary in order to analyze object light curves. To achieve this, RAPTOR telescope observations are sent in real time to a database. Each morning, a program based on the DBSCAN algorithm clusters the observations and labels each one with an object identifier. Once clustering is complete, the analysis program may be used to query the database and produce light curves, maps of the sky field, or other informative displays. Although RAPTOR-scan was designed for the RAPTOR optical telescope system, it is a general tool designed to identify objects in a collection of astronomical data and facilitate quick data analysis. RAPTOR-scan will be released as free software under the GNU General Public License.
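
    As a minimal sketch of the clustering step described above, assuming observations reduced to sky coordinates (the eps and min_samples values and the use of scikit-learn are illustrative choices, and the flat treatment of RA/Dec ignores spherical geometry):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def label_objects(ra_deg, dec_deg, eps_deg=0.002, min_samples=3):
    """Cluster repeated detections into objects; the label is the object identifier."""
    coords = np.column_stack([ra_deg, dec_deg])
    labels = DBSCAN(eps=eps_deg, min_samples=min_samples).fit_predict(coords)
    return labels  # -1 marks detections that joined no cluster

# Grouping the stored observations by label yields per-object light curves.
```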

  7. Multi-Sensor Information Integration and Automatic Understanding

    DTIC Science & Technology

    2008-11-01

    also produced a real-time implementation of the tracking and anomalous behavior detection system that runs on real-world data – either using real-time...surveillance and airborne IED detection. SUBJECT TERMS: multi-hypothesis tracking, particle filters, anomalous behavior detection, Bayesian...analyst to support decision making with large data sets. A key feature of the real-time tracking and behavior detection system developed is that the

  8. Weighted feature selection criteria for visual servoing of a telerobot

    NASA Technical Reports Server (NTRS)

    Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.

    1989-01-01

    Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.

  9. Intraoperative visualization and assessment of electromagnetic tracking error

    NASA Astrophysics Data System (ADS)

    Harish, Vinyas; Ungi, Tamas; Lasso, Andras; MacDonald, Andrew; Nanji, Sulaiman; Fichtinger, Gabor

    2015-03-01

    Electromagnetic tracking allows for increased flexibility in designing image-guided interventions; however, it is well understood that electromagnetic tracking is prone to error. Visualization and assessment of the tracking error should take place in the operating room with minimal interference with the clinical procedure. The goal was to achieve this ideal in an open-source software implementation in a plug-and-play manner, without requiring programming from the user. We use optical tracking as a ground truth. An electromagnetic sensor and optical markers are mounted onto a stylus device, pivot calibrated for both trackers. Electromagnetic tracking error is defined as the difference in tool-tip position between electromagnetic and optical readings. Multiple measurements are interpolated into the thin-plate B-spline transform visualized in real time using 3D Slicer. All tracked devices are used in a plug-and-play manner through the open-source SlicerIGT and PLUS extensions of the 3D Slicer platform. Tracking error was measured multiple times to assess reproducibility of the method, both with and without placing ferromagnetic objects in the workspace. Results from exhaustive grid sampling and freehand sampling were similar, indicating that a quick freehand sampling is sufficient to detect unexpected or excessive field distortion in the operating room. The software is available as a plug-in for the 3D Slicer platform. Results demonstrate the potential for visualizing electromagnetic tracking error in real time for intraoperative environments in feasibility clinical trials in image-guided interventions.
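
    As a rough stand-in for the interpolation step, the sketch below fits a thin-plate-spline field to a handful of EM-minus-optical tip-position differences so the error can be queried anywhere in the workspace. The SciPy interpolator and the sample values are illustrative assumptions, not the SlicerIGT/PLUS implementation.

      # Minimal sketch: interpolate sampled tracking-error vectors with a thin-plate spline.
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # stylus tip positions (mm) where both trackers were read
      positions = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0],
                            [100, 100, 0], [50, 50, 50]], dtype=float)
      # EM reading minus optical ground truth at each sample (mm), invented values
      error_vectors = np.array([[0.2, 0.1, 0.0], [1.5, 0.3, 0.2],
                                [0.4, 1.1, 0.1], [2.0, 1.8, 0.5],
                                [0.9, 0.7, 0.3]])

      # fit one smooth vector field over the workspace
      field = RBFInterpolator(positions, error_vectors, kernel="thin_plate_spline")

      query = np.array([[75.0, 25.0, 10.0]])
      print("estimated EM error at query point (mm):", field(query)[0])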

  10. Recent advances in the development and transfer of machine vision technologies for space

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P.; Pendleton, Thomas

    1991-01-01

    Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.

  11. Robust multiperson detection and tracking for mobile service and social robots.

    PubMed

    Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou

    2012-10-01

    This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
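
    The EM-like mean-shift step can be caricatured with a toy weighted update, shown below. The Gaussian association weights and the bandwidth are assumptions, and the sketch omits the appearance likelihood and the sequential priority ordering described in the abstract.

      # Sketch only, not the authors' exact ML/EM derivation: one mean-shift-style update that
      # pulls a tracked person's position toward nearby detections, weighting each detection by
      # proximity (E-step) and re-estimating the position as the weighted mean (M-step).
      import numpy as np

      def em_mean_shift(position, detections, bandwidth=30.0, iters=10):
          """position: (2,) current track location; detections: (N, 2) detector outputs."""
          for _ in range(iters):
              d2 = np.sum((detections - position) ** 2, axis=1)
              w = np.exp(-0.5 * d2 / bandwidth ** 2)                        # E-step: association weights
              if w.sum() < 1e-12:
                  break                                                     # no nearby evidence
              position = (w[:, None] * detections).sum(axis=0) / w.sum()    # M-step: weighted mean
          return position

      track = np.array([100.0, 120.0])
      dets = np.array([[110.0, 125.0], [108.0, 119.0], [300.0, 40.0]])      # last one is another person
      print("updated track position:", em_mean_shift(track, dets))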

  12. Real-time tracking of objects for a KC-135 microgravity experiment

    NASA Technical Reports Server (NTRS)

    Littlefield, Mark L.

    1994-01-01

    The design of a visual tracking system for use on the Extra-Vehicular Activity Helper/Retriever (EVAHR) is discussed. EVAHR is an autonomous robot designed to perform numerous tasks in an orbital microgravity environment. Since the ability to grasp a freely translating and rotating object is vital to the robot's mission, the EVAHR must analyze range images generated by the primary sensor. This allows EVAHR to locate and focus its sensors so that an accurate set of object poses can be determined and a grasp strategy planned. To test the visual tracking system being developed, a mathematical simulation was used to model the space station environment and maintain dynamics of the EVAHR and any other free-floating objects. A second phase of the investigation consists of a series of experiments carried out aboard a KC-135 aircraft flying a parabolic trajectory to simulate microgravity.

  13. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  14. 3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments.

    PubMed

    Li, Songpo; Zhang, Xiaoli; Webb, Jeremy D

    2017-12-01

    The goal of this paper is to achieve a novel 3-D-gaze-based human-robot-interaction modality, with which a user with motion impairment can intuitively express what tasks he/she wants the robot to do by directly looking at the object of interest in the real world. Toward this goal, we investigate 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. Looking at a specific object reflects what a person is thinking related to that object, and the gaze location contains essential information for object manipulation. A novel gaze vector method is developed to accurately estimate the 3-D coordinates of the object being looked at in real environments, and a novel interpretation framework that mimics human visuomotor functions is designed to increase the control capability of gaze in object grasping tasks. High tracking accuracy was achieved using the gaze vector method. Participants successfully controlled a robotic arm for object grasping by directly looking at the target object. Human 3-D gaze can be effectively employed as an intuitive interaction modality for robotic object manipulation. It is the first time that 3-D gaze is utilized in a real environment to command a robot for a practical application. Three-dimensional gaze tracking is promising as an intuitive alternative for human-robot interaction especially for disabled and elderly people who cannot handle the conventional interaction modalities.

  15. A Framework of Simple Event Detection in Surveillance Video

    NASA Astrophysics Data System (ADS)

    Xu, Weiguang; Zhang, Yafei; Lu, Jianjiang; Tian, Yulong; Wang, Jiabao

    Video surveillance is playing an increasingly important role in people's social life. Real-time alerting of threatening events and searching for interesting content in large-scale stored video footage require a human operator to pay full attention to the monitor for long periods. This labor-intensive mode has limited the effectiveness and efficiency of such systems. A framework for simple event detection is presented to advance the automation of video surveillance. An improved inner key-point matching approach is used to compensate for background motion in real time; frame differencing is used to detect the foreground; HOG-based classifiers are used to classify foreground objects into people and cars; and mean-shift is used to track the recognized objects. Events are detected based on predefined rules. The maturity of the algorithms guarantees the robustness of the framework, and the improved approach and the easily checked rules enable the framework to work in real time. Future work is also discussed.
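
    A minimal sketch of the first two stages (frame differencing plus the stock HOG person detector) is given below, assuming OpenCV; the video path, thresholds, and display loop are placeholders rather than parts of the described framework.

      # Illustrative pipeline fragment: frame differencing for foreground detection and
      # OpenCV's built-in HOG person detector for classifying moving regions.
      import cv2

      hog = cv2.HOGDescriptor()
      hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

      cap = cv2.VideoCapture("surveillance.mp4")          # hypothetical input file
      ok, prev = cap.read()
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          diff = cv2.absdiff(gray, prev_gray)              # frame difference
          _, fg = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
          prev_gray = gray
          if cv2.countNonZero(fg) > 500:                   # enough motion: run the classifier
              people, _ = hog.detectMultiScale(frame, winStride=(8, 8))
              for (x, y, w, h) in people:
                  cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
          cv2.imshow("events", frame)
          if cv2.waitKey(1) == 27:                         # Esc to quit
              break
      cap.release()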

  16. Synthetic Foveal Imaging Technology

    NASA Technical Reports Server (NTRS)

    Hoenk, Michael; Monacos, Steve; Nikzad, Shouleh

    2009-01-01

    Synthetic Foveal imaging Technology (SyFT) is an emerging discipline of image capture and image-data processing that offers the prospect of greatly increased capabilities for real-time processing of large, high-resolution images (including mosaic images) for such purposes as automated recognition and tracking of moving objects of interest. SyFT offers a solution to the image-data processing problem arising from the proposed development of gigapixel mosaic focal-plane image-detector assemblies for very wide field-of-view imaging with high resolution for detecting and tracking sparse objects or events within narrow subfields of view. In order to identify and track the objects or events without the means of dynamic adaptation to be afforded by SyFT, it would be necessary to post-process data from an image-data space consisting of terabytes of data. Such post-processing would be time-consuming and, as a consequence, could result in missing significant events that could not be observed at all due to the time evolution of such events or could not be observed at required levels of fidelity without such real-time adaptations as adjusting focal-plane operating conditions or aiming of the focal plane in different directions to track such events. The basic concept of foveal imaging is straightforward: In imitation of a natural eye, a foveal-vision image sensor is designed to offer higher resolution in a small region of interest (ROI) within its field of view. Foveal vision reduces the amount of unwanted information that must be transferred from the image sensor to external image-data-processing circuitry. The aforementioned basic concept is not new in itself: indeed, image sensors based on these concepts have been described in several previous NASA Tech Briefs articles. Active-pixel integrated-circuit image sensors that can be programmed in real time to effect foveal artificial vision on demand are one such example. What is new in SyFT is a synergistic combination of recent advances in foveal imaging, computing, and related fields, along with a generalization of the basic foveal-vision concept to admit a synthetic fovea that is not restricted to one contiguous region of an image.

  17. Foliage penetration by using 4-D point cloud data

    NASA Astrophysics Data System (ADS)

    Méndez Rodríguez, Javier; Sánchez-Reyes, Pedro J.; Cruz-Rivera, Sol M.

    2012-06-01

    Real-time awareness and rapid target detection are critical for the success of military missions. New technologies capable of detecting targets concealed in forest areas are needed in order to track and identify possible threats. Currently, LAser Detection And Ranging (LADAR) systems are capable of detecting obscured targets; however, tracking capabilities are severely limited. Now, a new LADAR-derived technology is under development to generate 4-D datasets (3-D video in a point cloud format). As such, there is a new need for algorithms that are able to process data in real time. We propose an algorithm capable of removing vegetation and other objects that may obfuscate concealed targets in a real 3-D environment. The algorithm is based on wavelets and can be used as a pre-processing step in a target recognition algorithm. Applications of the algorithm in a real-time 3-D system could help make pilots aware of high-risk hidden targets such as tanks and weapons, among others. We will use 4-D simulated point cloud data to demonstrate the capabilities of our algorithm.

  18. Tracking-Learning-Detection.

    PubMed

    Kalal, Zdenek; Mikolajczyk, Krystian; Matas, Jiri

    2012-07-01

    This paper investigates long-term tracking of unknown objects in a video stream. The object is defined by its location and extent in a single frame. In every frame that follows, the task is to determine the object's location and extent or indicate that the object is not present. We propose a novel tracking framework (TLD) that explicitly decomposes the long-term tracking task into tracking, learning, and detection. The tracker follows the object from frame to frame. The detector localizes all appearances that have been observed so far and corrects the tracker if necessary. The learning estimates the detector's errors and updates it to avoid these errors in the future. We study how to identify the detector's errors and learn from them. We develop a novel learning method (P-N learning) which estimates the errors by a pair of "experts": (1) P-expert estimates missed detections, and (2) N-expert estimates false alarms. The learning process is modeled as a discrete dynamical system and the conditions under which the learning guarantees improvement are found. We describe our real-time implementation of the TLD framework and the P-N learning. We carry out an extensive quantitative evaluation which shows a significant improvement over state-of-the-art approaches.
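
    The P-N idea can be illustrated with a small update routine, shown below; the overlap thresholds and data structures are invented for illustration and do not reproduce the published TLD implementation.

      # Toy sketch of P-N learning: the P-expert adds patches the validated tracker trajectory
      # passed through but the detector missed (false negatives), and the N-expert marks detector
      # responses far from the trajectory as false alarms.
      def iou(a, b):
          ax1, ay1, ax2, ay2 = a
          bx1, by1, bx2, by2 = b
          ix = max(0, min(ax2, bx2) - max(ax1, bx1))
          iy = max(0, min(ay2, by2) - max(ay1, by1))
          inter = ix * iy
          union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
          return inter / union if union else 0.0

      def pn_update(tracker_box, detections, positives, negatives, thr=0.5):
          """Grow the detector's training sets from one validated frame (assumed thresholds)."""
          if not any(iou(tracker_box, d) > thr for d in detections):
              positives.append(tracker_box)            # P-expert: detector missed the object here
          for d in detections:
              if iou(tracker_box, d) < 0.2:
                  negatives.append(d)                  # N-expert: likely false alarm
          return positives, negatives

      pos, neg = pn_update((50, 50, 100, 100), [(200, 200, 250, 260)], [], [])
      print("new positives:", pos, "new negatives:", neg)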

  19. Learned filters for object detection in multi-object visual tracking

    NASA Astrophysics Data System (ADS)

    Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David

    2016-05-01

    We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, where tracking guides the detection process. The object detection provides a probabilistic input image calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.

  20. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    PubMed

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

    In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no-compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no-compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric and dosimetric accuracy of the moving average algorithm was between that of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.
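
    A toy numeric sketch of the moving-average idea follows: the motion component perpendicular to leaf travel is smoothed with a trailing window, and the resulting geometric difference is reported. The breathing trace, window length, and sampling rate are assumptions, not the study's data.

      # Trailing moving average applied to the perpendicular motion component (illustrative values).
      import numpy as np

      t = np.arange(0, 60, 0.2)                                    # 60 s of motion sampled at 5 Hz
      perp = 4.0 * np.sin(2 * np.pi * t / 4.0) + 0.05 * t          # breathing-like motion plus baseline drift (mm)

      window = 20                                                  # 4 s trailing window (assumed)
      smoothed = np.array([perp[max(0, i - window + 1):i + 1].mean() for i in range(len(perp))])

      rms = np.sqrt(np.mean((perp - smoothed) ** 2))
      print(f"RMS difference between actual and averaged perpendicular motion: {rms:.1f} mm")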

  1. SU-G-BRA-17: Tracking Multiple Targets with Independent Motion in Real-Time Using a Multi-Leaf Collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ge, Y; Keall, P; Poulsen, P

    Purpose: Multiple targets with large intrafraction independent motion are often involved in advanced prostate, lung, abdominal, and head and neck cancer radiotherapy. The current standard of care treats these with the originally planned fields, jeopardizing treatment outcomes. A real-time multi-leaf collimator (MLC) tracking method has been developed to address this problem for the first time. This study evaluates the geometric uncertainty of the multi-target tracking method. Methods: Four treatment scenarios are simulated based on a prostate IMAT plan to treat a moving prostate target and a static pelvic node target: 1) real-time multi-target MLC tracking; 2) real-time prostate-only MLC tracking; 3) correcting for prostate interfraction motion at setup only; and 4) no motion correction. The geometric uncertainty of the treatment is assessed by the sum of the erroneously underexposed target area and overexposed healthy tissue areas for each individual target. Two patient-measured prostate trajectories of average 2 and 5 mm motion magnitude are used for simulations. Results: Real-time multi-target tracking accumulates the least uncertainty overall. As expected, it covers the static nodes as well as the no-motion-correction treatment does and covers the moving prostate as well as real-time prostate-only tracking does. Multi-target tracking reduces >90% of uncertainty for the static nodal target compared to real-time prostate-only tracking or interfraction motion correction. For the prostate target, depending on the motion trajectory, which affects the uncertainty due to leaf-fitting, multi-target tracking may or may not perform better than correcting for interfraction prostate motion by shifting the patient at setup, but it reduces ∼50% of uncertainty compared to no motion correction. Conclusion: The developed real-time multi-target MLC tracking can adapt to independently moving targets better than other available treatment adaptations. This will enable PTV margin reduction to minimize healthy tissue toxicity while maintaining tumor coverage when treating advanced disease with independently moving targets. The authors acknowledge funding support from the Australian NHMRC Australia Fellowship and NHMRC Project Grant No. APP1042375.

  2. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    NASA Astrophysics Data System (ADS)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.

  3. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion

    PubMed Central

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-01-01

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145

  4. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    PubMed

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time.
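
    A drastically simplified, single-axis sketch of the visual-inertial idea is given below: fast gyro integration provides low-latency orientation, and a complementary blend with the slower visual estimate corrects drift, with the blend weight adapted to the motion magnitude. All rates, noise levels, and gains are invented; this is not the paper's 6-DoF filter.

      # One-axis complementary filter with a crude adaptive gain (all numbers are assumptions).
      import numpy as np

      rng = np.random.default_rng(1)
      dt = 0.005                                   # 200 Hz IMU
      true_rate = 0.8                              # rad/s constant rotation (toy motion)
      angle_est, angle_true = 0.0, 0.0

      for k in range(2000):
          angle_true += true_rate * dt
          gyro = true_rate + rng.normal(0, 0.02)           # noisy gyro sample
          angle_est += gyro * dt                           # fast inertial propagation (low latency)
          if k % 40 == 0:                                  # 5 Hz visual pose update
              visual = angle_true + rng.normal(0, 0.01)    # noisy but drift-free
              alpha = 0.05 if abs(gyro) > 0.5 else 0.2     # adaptive gain: trust vision more when still
              angle_est = (1 - alpha) * angle_est + alpha * visual

      print(f"true angle {angle_true:.3f} rad, fused estimate {angle_est:.3f} rad")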

  5. Robust Feedback Zoom Tracking for Digital Video Surveillance

    PubMed Central

    Zou, Tengyue; Tang, Xiaoqi; Song, Bao; Wang, Jin; Chen, Jihong

    2012-01-01

    Zoom tracking is an important function in video surveillance, particularly in traffic management and security monitoring. It involves keeping an object of interest in focus during the zoom operation. Zoom tracking is typically achieved by moving the zoom and focus motors in lenses following the so-called “trace curve”, which shows the in-focus motor positions versus the zoom motor positions for a specific object distance. The main task of a zoom tracking approach is to accurately estimate the trace curve for the specified object. Because a proportional integral derivative (PID) controller has historically been considered the best controller in the absence of knowledge of the underlying process, and because of its high-quality performance in motor control, in this paper we propose a novel feedback zoom tracking (FZT) approach based on geometric trace curve estimation and a PID feedback controller. The performance of this approach is compared with existing zoom tracking methods in digital video surveillance. The real-time implementation results obtained on an actual digital video platform indicate that the developed FZT approach not only solves the traditional one-to-many mapping problem without pre-training but also improves the robustness for tracking moving or switching objects, which is the key challenge in video surveillance. PMID:22969388
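
    A schematic PID loop in the spirit of feedback zoom tracking is sketched below; the trace-curve function, gains, and loop timing are assumptions made for illustration, not values from the paper.

      # Schematic sketch: a PID controller drives the focus motor toward the in-focus position
      # predicted by an assumed geometric trace curve as the zoom motor sweeps.
      def trace_curve(zoom_pos):
          """Assumed in-focus focus-motor position for a given zoom-motor position."""
          return 0.8 * zoom_pos + 120.0

      class PID:
          def __init__(self, kp, ki, kd):
              self.kp, self.ki, self.kd = kp, ki, kd
              self.integral, self.prev_err = 0.0, 0.0

          def step(self, error, dt):
              self.integral += error * dt
              deriv = (error - self.prev_err) / dt
              self.prev_err = error
              return self.kp * error + self.ki * self.integral + self.kd * deriv

      pid = PID(kp=0.6, ki=0.05, kd=0.1)
      focus = 0.0
      for zoom in range(0, 1000, 50):              # zoom motor sweeping in
          target = trace_curve(zoom)               # estimated in-focus focus position
          for _ in range(20):                      # inner feedback loop per zoom step
              focus += pid.step(target - focus, dt=0.02)
      print(f"final focus position {focus:.1f}, target {trace_curve(950):.1f}")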

  6. Multi-Complementary Model for Long-Term Tracking

    PubMed Central

    Zhang, Deng; Zhang, Junchang; Xia, Chenyang

    2018-01-01

    In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutters, motion blur, low illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines the correlation filter model and color model to greatly improve the tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared to the traditional target detection algorithm. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmarks 2013 (OTB-13) and Object Tracking Benchmarks 2015 (OTB-15) benchmark datasets. With the OTB-13 benchmark datasets, our algorithm is improved by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE and SRE, respectively, in contrast to another classic LCT (Long-term Correlation Tracking) algorithm. On the OTB-15 benchmark datasets, when compared with the LCT algorithm, our algorithm achieves 10.4%, 12.5%, and 16.1% improvement on the success plots of OPE, TRE, and SRE, respectively. At the same time, it needs to be emphasized that, due to the high computational efficiency of the color model and the object detection model using efficient data structures, and the speed advantage of the correlation filters, our tracking algorithm could still achieve good tracking speed. PMID:29425170

  7. Electrical localization of weakly electric fish using neural networks

    NASA Astrophysics Data System (ADS)

    Kiar, Greg; Mamatjan, Yasin; Jun, James; Maler, Len; Adler, Andy

    2013-04-01

    Weakly Electric Fish (WEF) emit an Electric Organ Discharge (EOD), which travels through the surrounding water and enables WEF to locate nearby objects or to communicate between individuals. Previous tracking of WEF has been conducted using infrared (IR) cameras and subsequent image processing. The limitation of visual tracking is its relatively low frame rate and lack of reliability when the view is obstructed. Thus, there is a need for reliable monitoring of WEF location and behaviour. The objective of this study is to provide an alternative and non-invasive means of tracking WEF in real time using neural networks (NN). The study was carried out in three stages. The first stage was to recreate voltage distributions by simulating the WEF using EIDORS and finite element method (FEM) modelling. The second stage was to validate the model using phantom data acquired from an Electrical Impedance Tomography (EIT) based system, including a phantom fish and tank. In the third stage, measurement data were acquired using a restrained WEF within a tank. We trained the NN based on the voltage distributions for different locations of the WEF. With networks trained on the acquired data, we tracked new locations of the WEF and observed the movement patterns. The results showed a strong correlation between expected and calculated values of WEF position in one dimension, yielding a high spatial resolution within 1 cm and a temporal resolution 10 times higher than that of IR cameras. Thus, the developed approach could be used as a practical method to non-invasively monitor the WEF in real time.
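
    As a toy stand-in for the trained network, the sketch below fits a small MLP regressor mapping a simulated electrode-voltage pattern to a 1-D fish position; the forward model, tank geometry, and network size are assumptions, not the EIDORS/FEM setup used in the study.

      # Toy regression sketch: electrode voltages -> fish position along one tank wall.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      electrodes = np.linspace(0.0, 1.0, 16)       # 16 electrodes along a 1 m tank wall (assumed)

      def voltages(fish_x):
          """Crude stand-in for the EOD field sampled at the electrodes."""
          return 1.0 / (0.05 + (electrodes - fish_x) ** 2)

      positions = rng.uniform(0.1, 0.9, 500)
      X = np.array([voltages(x) for x in positions]) + rng.normal(0, 0.05, (500, 16))

      net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
      net.fit(X, positions)

      test_x = 0.42
      pred = net.predict([voltages(test_x)])[0]
      print(f"true position {test_x:.2f} m, estimated {pred:.2f} m")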

  8. Robotic Observatory System Design-Specification Considerations for Achieving Long-Term Sustainable Precision Performance

    NASA Astrophysics Data System (ADS)

    Wray, J. D.

    2003-05-01

    The robotic observatory telescope must point precisely on the target object, and then track autonomously to a fraction of the FWHM of the system PSF for durations of ten to twenty minutes or more. It must retain this precision while continuing to function at rates approaching thousands of observations per night for all its years of useful life. These stringent requirements raise new challenges unique to robotic telescope systems design. Critical design considerations are driven by the applicability of the above requirements to all systems of the robotic observatory, including telescope and instrument systems, telescope-dome enclosure systems, combined electrical and electronics systems, environmental (e.g. seeing) control systems and integrated computer control software systems. Traditional telescope design considerations include the effects of differential thermal strain, elastic flexure, plastic flexure and slack or backlash with respect to focal stability, optical alignment and angular pointing and tracking precision. Robotic observatory design must holistically encapsulate these traditional considerations within the overall objective of maximized long-term sustainable precision performance. This overall objective is accomplished through combining appropriate mechanical and dynamical system characteristics with a full-time real-time telescope mount model feedback computer control system. Important design considerations include: identifying and reducing quasi-zero-backlash; increasing size to increase precision; directly encoding axis shaft rotation; pointing and tracking operation via real-time feedback between precision mount model and axis mounted encoders; use of monolithic construction whenever appropriate for sustainable mechanical integrity; accelerating dome motion to eliminate repetitive shock; ducting internal telescope air to outside dome; and the principal design criteria: maximizing elastic repeatability while minimizing slack, plastic deformation and hysteresis to facilitate long-term repeatably precise pointing and tracking performance.

  9. Real-time video analysis for retail stores

    NASA Astrophysics Data System (ADS)

    Hassan, Ehtesham; Maurya, Avinash K.

    2015-03-01

    With the advancement of video processing technologies, we can capture subtle human responses in a retail store environment which play a decisive role in store management. In this paper, we present a novel surveillance-video-based analytic system for retail stores targeting localized and global traffic estimates. Developing an intelligent system for human traffic estimation in real life poses a challenging problem because of the variation and noise involved. In this direction, we begin with a novel human tracking system built on an intelligent combination of motion-based and image-level object detection. We demonstrate the initial evaluation of this approach on an available standard dataset, yielding promising results. An exact traffic estimate in a retail store requires correct separation of customers from service providers. We present a role-based human classification framework using a Gaussian mixture model for this task. A novel feature descriptor named the graded colour histogram is defined for object representation. Using our role-based human classification and tracking system, we define a novel, computationally efficient framework for two types of analytics, i.e., region-specific people counts and dwell-time estimation. This system has been extensively evaluated and tested on four hours of real-life video captured from a retail store.

  10. Research on target tracking algorithm based on spatio-temporal context

    NASA Astrophysics Data System (ADS)

    Li, Baiping; Xu, Sanmei; Kang, Hongjuan

    2017-07-01

    In this paper, a novel target tracking algorithm based on spatio-temporal context is proposed. During tracking, camera shake or occlusion may cause the tracker to fail; the proposed algorithm addresses this problem effectively. The method uses the spatio-temporal context algorithm as its core. The target region in the first frame is selected manually with the mouse, and the spatio-temporal context algorithm then tracks the target through the subsequent frames. During this process, a similarity measure based on a perceptual hash algorithm is used to judge the tracking results; if tracking fails, the initial value of the Mean Shift algorithm is reset for subsequent target tracking. Experimental results show that the proposed algorithm achieves real-time and stable tracking under camera shake or target occlusion.
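
    The perceptual-hash check can be illustrated with a simple average hash, as sketched below; the abstract does not specify the hash variant or threshold, so both are assumptions here.

      # Minimal average-hash sketch: hash the currently tracked patch, compare with the template
      # hash, and treat a large Hamming distance as a suspected tracking failure.
      import numpy as np
      import cv2

      def average_hash(patch, size=8):
          small = cv2.resize(cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY), (size, size))
          return (small > small.mean()).flatten()

      def hamming(h1, h2):
          return int(np.count_nonzero(h1 != h2))

      # synthetic patches standing in for the template and the currently tracked region
      template = np.zeros((64, 64, 3), np.uint8)
      cv2.circle(template, (32, 32), 20, (255, 255, 255), -1)
      good_patch = cv2.GaussianBlur(template, (5, 5), 0)                            # similar appearance
      bad_patch = np.random.default_rng(0).integers(0, 255, (64, 64, 3), dtype=np.uint8)

      for name, patch in [("good", good_patch), ("bad", bad_patch)]:
          dist = hamming(average_hash(template), average_hash(patch))
          status = "accept" if dist <= 12 else "declare failure, re-seed Mean Shift"   # assumed threshold
          print(f"{name} patch: Hamming distance {dist} -> {status}")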

  11. Multi-resolution model-based traffic sign detection and tracking

    NASA Astrophysics Data System (ADS)

    Marinas, Javier; Salgado, Luis; Camplani, Massimo

    2012-06-01

    In this paper we propose an innovative approach to tackle the problem of traffic sign detection using a computer vision algorithm and taking into account real-time operation constraints, establishing intelligent strategies to simplify the algorithm's complexity as much as possible and to speed up the process. First, a set of candidates is generated according to a color segmentation stage, followed by a region analysis strategy, where spatial characteristics of previously detected objects are taken into account. Finally, temporal coherence is introduced by means of a tracking scheme, performed using a Kalman filter for each potential candidate. Taking time constraints into consideration, efficiency is achieved two-fold: on the one side, a multi-resolution strategy is adopted for segmentation, where global operations are applied only to low-resolution images, increasing the resolution to the maximum only when a potential road sign is being tracked. On the other side, we take advantage of the expected spacing between traffic signs. Namely, the tracking of objects of interest allows us to generate inhibition areas, which are regions where no new traffic signs are expected to appear due to the existence of a traffic sign in the neighborhood. The proposed solution has been tested with real sequences in both urban areas and highways, and proved to achieve higher computational efficiency, especially as a result of the multi-resolution approach.

  12. Real-time target tracking and locating system for UAV

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Tang, Linbo; Fu, Huiquan; Li, Maowen

    2017-07-01

    In order to achieve real-time target tracking and locating for a UAV, a reliable processing system is built on an embedded platform. First, video imagery is acquired in real time by the electro-optical system on the UAV. When the target information is known, the KCF tracking algorithm is adopted to track the target. The servo is then controlled to rotate with the target; when the target is in the center of the image, the laser ranging module is activated to obtain the distance between the UAV and the target. Finally, combining this distance with the UAV flight parameters obtained from the BeiDou navigation system, the target location algorithm calculates the geodetic coordinates of the target. The results show that the system is stable for real-time target tracking and positioning.
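
    The tracking stage alone can be sketched with OpenCV's built-in KCF tracker, as below; the video source and display loop are placeholders, the tracker factory name depends on the OpenCV build, and the servo control, laser ranging, and BeiDou geolocation steps are omitted.

      # Minimal KCF tracking loop (opencv-contrib builds; newer versions may expose the tracker
      # as cv2.legacy.TrackerKCF_create instead).
      import cv2

      cap = cv2.VideoCapture("uav_feed.mp4")             # hypothetical video stream
      ok, frame = cap.read()
      bbox = cv2.selectROI("select target", frame)       # operator marks the known target

      tracker = cv2.TrackerKCF_create()
      tracker.init(frame, bbox)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          ok, bbox = tracker.update(frame)
          if ok:
              x, y, w, h = map(int, bbox)
              cx, cy = x + w // 2, y + h // 2             # point to steer the gimbal toward
              cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
          cv2.imshow("KCF tracking", frame)
          if cv2.waitKey(1) == 27:
              break
      cap.release()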

  13. Real-time networked control of an industrial robot manipulator via discrete-time second-order sliding modes

    NASA Astrophysics Data System (ADS)

    Massimiliano Capisani, Luca; Facchinetti, Tullio; Ferrara, Antonella

    2010-08-01

    This article presents the networked control of a robotic anthropomorphic manipulator based on a second-order sliding mode technique, where the control objective is to track a desired trajectory for the manipulator. The adopted control scheme allows an easy and effective distribution of the control algorithm over two networked machines. While the predictability of real-time tasks execution is achieved by the Soft Hard Real-Time Kernel (S.Ha.R.K.) real-time operating system, the communication is established via a standard Ethernet network. The performances of the control system are evaluated under different experimental system configurations using, to perform the experiments, a COMAU SMART3-S2 industrial robot, and the results are analysed to put into evidence the robustness of the proposed approach against possible network delays, packet losses and unmodelled effects.

  14. A low-cost mobile adaptive tracking system for chronic pulmonary patients in home environment.

    PubMed

    Işik, Ali Hakan; Güler, Inan; Sener, Melahat Uzel

    2013-01-01

    The main objective of this study is to present a real-time mobile adaptive tracking system for patients diagnosed with diseases such as asthma or chronic obstructive pulmonary disease, together with the results of its application at home. The main role of the system is to support and track, in real time, chronic pulmonary patients who are comfortable in their home environment. It is not intended to replace the doctor, regular treatment, or diagnosis. In this study, the Java 2 Micro Edition-based system is integrated with portable spirometry, a smartphone, extensible markup language-based Web services, a Web server, and Web pages for visualizing pulmonary function test results. The Bluetooth(®) (Bluetooth SIG, Kirkland, WA) virtual serial port protocol is used to obtain the test results from the spirometer. General packet radio service, wireless local area network, or third-generation wireless networks are used to send the test results from a smartphone to the remote database. The system provides real-time classification of test results with the back-propagation artificial neural network algorithm on a mobile smartphone. It also provides the generation of appropriate short message service-based notifications and the sending of all data to the Web server. In this study, the test results of 486 patients, obtained from Atatürk Chest Diseases and Thoracic Surgery Training and Research Hospital in Ankara, Turkey, are used as the training and test set for the algorithm. The algorithm has 98.7% accuracy, 97.83% specificity, 97.63% sensitivity, and 0.946 correlation values. The results show that the system is cheap (900 Euros) and reliable. The developed real-time system provides improvement in classification accuracy and facilitates tracking of chronic pulmonary patients.

  15. Probing the benefits of real-time tracking during cancer care

    PubMed Central

    Patel, Rupa A.; Klasnja, Predrag; Hartzler, Andrea; Unruh, Kenton T.; Pratt, Wanda

    2012-01-01

    People with cancer experience many unanticipated symptoms and struggle to communicate them to clinicians. Although researchers have developed patient-reported outcome (PRO) tools to address this problem, such tools capture retrospective data intended for clinicians to review. In contrast, real-time tracking tools with visible results for patients could improve health outcomes and communication with clinicians, while also enhancing patients’ symptom management. To understand potential benefits of such tools, we studied the tracking behaviors of 25 women with breast cancer. We provided 10 of these participants with a real-time tracking tool that served as a “technology probe” to uncover behaviors and benefits from voluntary use. Our findings showed that while patients’ tracking behaviors without a tool were fragmented and sporadic, these behaviors with a tool were more consistent. Participants also used tracked data to see patterns among symptoms, feel psychosocial comfort, and improve symptom communication with clinicians. We conclude with design implications for future real-time tracking tools. PMID:23304413

  16. Configuration management issues and objectives for a real-time research flight test support facility

    NASA Technical Reports Server (NTRS)

    Yergensen, Stephen; Rhea, Donald C.

    1988-01-01

    An account is given of configuration management activities for the Western Aeronautical Test Range (WATR) at NASA-Ames, whose primary function is the conduct of aeronautical research flight testing through real-time processing and display, tracking, and communications systems. The processing of WATR configuration change requests for specific research flight test projects must be conducted in such a way as to refrain from compromising the reliability of WATR support to all project users. Configuration management's scope ranges from mission planning to operations monitoring and performance trend analysis.

  17. Real-time tracking of respiratory-induced tumor motion by dose-rate regulation

    NASA Astrophysics Data System (ADS)

    Han-Oh, Yeonju Sarah

    We have developed a novel real-time tumor-tracking technology, called Dose-Rate-Regulated Tracking (DRRT), to compensate for tumor motion caused by breathing. Unlike other previously proposed tumor-tracking methods, this new method uses a preprogrammed dynamic multileaf collimator (MLC) sequence in combination with real-time dose-rate control. This new scheme circumvents the technical challenge in MLC-based tumor tracking, that is, to control the MLC motion in real time based on real-time detected tumor motion. The preprogrammed MLC sequence describes the movement of the tumor as a function of breathing phase, amplitude, or tidal volume. The irregularity of tumor motion during treatment is handled by real-time regulation of the dose rate, which effectively speeds up or slows down the delivery of radiation as needed. This method is based on the fact that all of the parameters in dynamic radiation delivery, including MLC motion, are enslaved to the cumulative dose, which, in turn, can be accelerated or decelerated by varying the dose rate. Because commercially available MLC systems do not allow the MLC delivery sequence to be modified in real time based on the patient's breathing signal, previously proposed tumor-tracking techniques using an MLC cannot be readily implemented in the clinic today. By using a preprogrammed MLC sequence to handle the required motion, the task for real-time control is greatly simplified. We have developed and tested the preprogrammed MLC sequence and the dose-rate regulation algorithm using lung-cancer patients' breathing signals. It has been shown that DRRT can track the tumor with an accuracy of better than 2 mm for a DRRT system latency of less than 0.35 s. We also have evaluated the usefulness of guided breathing for DRRT. Since DRRT by its very nature can compensate for breathing-period changes, guided breathing was shown to be unnecessary for real-time tracking when using DRRT. Finally, DRRT uses the existing dose-rate control system that is provided for current linear accelerators. Therefore, DRRT can be achieved with minimal modification of existing technology, and this can shorten substantially the time necessary to establish DRRT in clinical practice.

  18. Development and Evaluation of Real-Time Volumetric Compton Gamma-Ray Imaging

    NASA Astrophysics Data System (ADS)

    Barnowski, Ross Wegner

    An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments thus improving the utility of gamma-ray imaging for source-search and radiation mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real time tracking and scene reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. The real-time tracking allows the imager to be moved throughout the environment or around a particular object of interest, obtaining the multiple perspectives necessary for standoff 3D imaging. A 3D model of the scene, provided in real-time by a simultaneous localization and mapping (SLAM) algorithm, can be incorporated into the image reconstruction reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and two different mobile gamma-ray imaging platforms. The first is a cart-based imaging platform known as the Volumetric Compton Imager (VCI), comprising two 3D position-sensitive high purity germanium (HPGe) detectors, exhibiting excellent gamma-ray imaging characteristics, but with limited mobility due to the size and weight of the cart. The second system is the High Efficiency Multimodal Imager (HEMI) a hand-portable gamma-ray imager comprising 96 individual cm3 CdZnTe crystals arranged in a two-plane, active-mask configuration. The HEMI instrument has poorer energy and angular resolution than the VCI, but is truly hand-portable, allowing the SDF concept to be tested in multiple environments and for more challenging imaging scenarios. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. Each of the two mobile imaging systems are used to demonstrate SDF for a variety of scenarios, including general search and mapping scenarios with several point gamma-ray sources over the range of energies relevant for Compton imaging. More specific imaging scenarios are also addressed, including directed search and object interrogation scenarios. Finally, the volumetric image quality is quantitatively investigated with respect to the number of Compton events acquired during a measurement, the list-mode uncertainty of the Compton cone data, and the uncertainty in the pose estimate from the real-time tracking algorithm. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of the gamma-ray image reconstruction and providing context for the 3D localization of gamma-ray sources within the environment in real-time.

  19. Annotation of UAV surveillance video

    NASA Astrophysics Data System (ADS)

    Howlett, Todd; Robertson, Mark A.; Manthey, Dan; Krol, John

    2004-08-01

    Significant progress toward the development of a video annotation capability is presented in this paper. Research and development of an object tracking algorithm applicable for UAV video is described. Object tracking is necessary for attaching the annotations to the objects of interest. A methodology and format is defined for encoding video annotations using the SMPTE Key-Length-Value encoding standard. This provides the following benefits: a non-destructive annotation, compliance with existing standards, video playback in systems that are not annotation enabled and support for a real-time implementation. A model real-time video annotation system is also presented, at a high level, using the MPEG-2 Transport Stream as the transmission medium. This work was accomplished to meet the Department of Defense's (DoD's) need for a video annotation capability. Current practices for creating annotated products are to capture a still image frame, annotate it using an Electric Light Table application, and then pass the annotated image on as a product. That is not adequate for reporting or downstream cueing. It is too slow and there is a severe loss of information. This paper describes a capability for annotating directly on the video.
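
    A bare-bones illustration of Key-Length-Value packing for an annotation payload is given below; the 16-byte key is a made-up placeholder rather than a registered SMPTE universal label, and the length encoding is a simplified form of the BER lengths used in practice.

      # Sketch of KLV packing: 16-byte key, BER-style length, then the value bytes.
      import struct

      def klv_encode(key: bytes, value: bytes) -> bytes:
          assert len(key) == 16, "SMPTE universal labels are 16-byte keys"
          if len(value) < 128:
              length = bytes([len(value)])                       # BER short form
          else:
              length = bytes([0x82]) + struct.pack(">H", len(value))   # simplified long form (<= 65535 bytes)
          return key + length + value

      placeholder_key = bytes(range(16))                          # hypothetical annotation key
      annotation = b"track_id=7;label=vehicle;x=0.41;y=0.63"
      packet = klv_encode(placeholder_key, annotation)
      print(packet.hex(" "))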

  20. Large holographic displays for real-time applications

    NASA Astrophysics Data System (ADS)

    Schwerdtner, A.; Häussler, R.; Leister, N.

    2008-02-01

    Holography is generally accepted as the ultimate approach to display three-dimensional scenes or objects. In principle, the reconstruction of an object from a perfect hologram would appear indistinguishable from viewing the corresponding real-world object. Up to now, two main obstacles have prevented large-screen Computer-Generated Holograms (CGH) from achieving a satisfactory laboratory prototype, not to mention a marketable one. The reason is the small cell pitch such a CGH requires, resulting in a huge number of hologram cells and a very high computational load for encoding the CGH. These seemingly inevitable technological hurdles have long remained uncleared, limiting the use of holography to special applications such as optical filtering, interference, beam forming, digital holography for capturing the 3-D shape of objects, and others. SeeReal Technologies has developed a new approach for real-time capable CGH using the so-called Tracked Viewing Windows technology to overcome these problems. The paper will show that today's state-of-the-art reconfigurable Spatial Light Modulators (SLM), especially today's feasible LCD panels, are suited for reconstructing large 3-D scenes which can be observed from large viewing angles. To achieve this, the original holographic concept of encoding information about the entire scene in every part of the CGH has been abandoned. This substantially reduces the hologram resolution and thus the computational load by several orders of magnitude, making real-time computation possible. A monochrome real-time prototype measuring 20 inches has been built and demonstrated at the SID 2007 conference and exhibition and at several other events.

  1. Object tracking on mobile devices using binary descriptors

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas; Quraishi, Mohammad Faiz; Minnehan, Breton

    2015-03-01

    With the growing ubiquity of mobile devices, advanced applications are relying on computer vision techniques to provide novel experiences for users. Currently, few tracking approaches take into consideration the resource constraints on mobile devices. Designing efficient tracking algorithms and optimizing performance for mobile devices can result in better and more efficient tracking for applications, such as augmented reality. In this paper, we use binary descriptors, including Fast Retina Keypoint (FREAK), Oriented FAST and Rotated BRIEF (ORB), Binary Robust Independent Elementary Features (BRIEF), and Binary Robust Invariant Scalable Keypoints (BRISK), to obtain real-time tracking performance on mobile devices. We consider both Google's Android and Apple's iOS operating systems to implement our tracking approach. The Android implementation is done using Android's Native Development Kit (NDK), which gives the performance benefits of using native code as well as access to legacy libraries. The iOS implementation was created using both the native Objective-C and the C++ programming languages. We also introduce simplified versions of the BRIEF and BRISK descriptors that improve processing speed without compromising tracking accuracy.
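
    A desktop-Python sketch of binary-descriptor matching with ORB follows; it stands in for the paper's Android NDK and Objective-C/C++ implementations, and the synthetic template, frame, and match count are assumptions.

      # ORB keypoints from a template matched against the current frame with a Hamming-distance
      # brute-force matcher, the core operation behind binary-descriptor tracking.
      import numpy as np
      import cv2

      rng = np.random.default_rng(0)
      template = rng.integers(0, 255, (120, 120), dtype=np.uint8)   # stand-in for the target patch
      frame = np.zeros((240, 320), np.uint8)
      frame[40:160, 60:180] = template                              # target embedded in the scene

      orb = cv2.ORB_create(nfeatures=500)
      kp1, des1 = orb.detectAndCompute(template, None)
      kp2, des2 = orb.detectAndCompute(frame, None)

      if des1 is None or des2 is None:
          print("no descriptors found")
      else:
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
          # rough localization from the strongest matches
          pts = np.float32([kp2[m.trainIdx].pt for m in matches[:20]])
          print(f"{len(matches)} matches; estimated target centre in frame: {pts.mean(axis=0)}")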

  2. Management of three-dimensional intrafraction motion through real-time DMLC tracking.

    PubMed

    Sawant, Amit; Venkat, Raghu; Srivastava, Vikram; Carlson, David; Povzner, Sergey; Cattell, Herb; Keall, Paul

    2008-05-01

    Tumor tracking using a dynamic multileaf collimator (DMLC) represents a promising approach for intrafraction motion management in thoracic and abdominal cancer radiotherapy. In this work, we develop, empirically demonstrate, and characterize a novel 3D tracking algorithm for real-time, conformal, intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT)-based radiation delivery to targets moving in three dimensions. The algorithm obtains real-time information of target location from an independent position monitoring system and dynamically calculates MLC leaf positions to account for changes in target position. Initial studies were performed to evaluate the geometric accuracy of DMLC tracking of 3D target motion. In addition, dosimetric studies were performed on a clinical linac to evaluate the impact of real-time DMLC tracking for conformal, step-and-shoot (S-IMRT), dynamic (D-IMRT), and VMAT deliveries to a moving target. The efficiency of conformal and IMRT delivery in the presence of tracking was determined. Results show that submillimeter geometric accuracy in all three dimensions is achievable with DMLC tracking. Significant dosimetric improvements were observed in the presence of tracking for conformal and IMRT deliveries to moving targets. A gamma index evaluation with a 3%-3 mm criterion showed that deliveries without DMLC tracking exhibit between 1.7 (S-IMRT) and 4.8 (D-IMRT) times more dose points that fail the evaluation compared to corresponding deliveries with tracking. The efficiency of IMRT delivery, as measured in the lab, was observed to be significantly lower in case of tracking target motion perpendicular to MLC leaf travel compared to motion parallel to leaf travel. Nevertheless, these early results indicate that accurate, real-time DMLC tracking of 3D tumor motion is feasible and can potentially result in significant geometric and dosimetric advantages leading to more effective management of intrafraction motion.

  3. Configuration management issues and objectives for a real-time research flight test support facility

    NASA Technical Reports Server (NTRS)

    Yergensen, Stephen; Rhea, Donald C.

    1988-01-01

    Presented are some of the critical issues and objectives pertaining to configuration management for the NASA Western Aeronautical Test Range (WATR) of Ames Research Center. The primary mission of the WATR is to provide a capability for the conduct of aeronautical research flight test through real-time processing and display, tracking, and communications systems. In providing this capability, the WATR must maintain and enforce a configuration management plan which is independent of, but complimentary to, various research flight test project configuration management systems. A primary WATR objective is the continued development of generic research flight test project support capability, wherein the reliability of WATR support provided to all project users is a constant priority. Therefore, the processing of configuration change requests for specific research flight test project requirements must be evaluated within a perspective that maintains this primary objective.

  4. Gaze-contingent displays: a review.

    PubMed

    Duchowski, Andrew T; Cournia, Nathan; Murphy, Hunter

    2004-12-01

    Gaze-contingent displays (GCDs) attempt to balance the amount of information displayed against the visual information processing capacity of the observer through real-time eye movement sensing. Based on the assumed knowledge of the instantaneous location of the observer's focus of attention, GCD content can be "tuned" through several display processing means. Screen-based displays alter pixel-level information, generally matching the resolvability of the human retina in an effort to maximize bandwidth. Model-based displays alter geometric-level primitives along similar goals. Attentive user interfaces (AUIs) manage object-level entities (e.g., windows, applications) depending on the assumed attentive state of the observer. Such real-time display manipulation is generally achieved through non-contact, unobtrusive tracking of the observer's eye movements. This paper briefly reviews past and present display techniques as well as emerging graphics and eye tracking technology for GCD development.

  5. Wireless sensor networks for heritage object deformation detection and tracking algorithm.

    PubMed

    Xie, Zhijun; Huang, Guangyan; Zarei, Roozbeh; He, Jing; Zhang, Yanchun; Ye, Hongwu

    2014-10-31

    Deformation is the direct cause of heritage object collapse. It is important to monitor and provide early warnings of the deformation of heritage objects. However, traditional heritage object monitoring methods only roughly monitor a simple-shaped heritage object as a whole, but cannot monitor complicated heritage objects, which may have a large number of surfaces inside and outside. Wireless sensor networks, comprising many small-sized, low-cost, low-power intelligent sensor nodes, are more useful for detecting the deformation of every small part of a heritage object. Wireless sensor networks need an effective mechanism to reduce both the communication costs and energy consumption in order to monitor heritage objects in real time. In this paper, we provide an effective heritage object deformation detection and tracking method using wireless sensor networks (EffeHDDT). In EffeHDDT, we discover a connected core set of sensor nodes to reduce the communication cost for transmitting and collecting the data of the sensor networks. In particular, we propose a heritage object boundary detecting and tracking mechanism. Both theoretical analysis and experimental results demonstrate that our EffeHDDT method outperforms existing methods in terms of network traffic and the precision of deformation detection.

  6. Wireless Sensor Networks for Heritage Object Deformation Detection and Tracking Algorithm

    PubMed Central

    Xie, Zhijun; Huang, Guangyan; Zarei, Roozbeh; He, Jing; Zhang, Yanchun; Ye, Hongwu

    2014-01-01

    Deformation is the direct cause of heritage object collapse. It is important to monitor and provide early warnings of the deformation of heritage objects. However, traditional heritage object monitoring methods only roughly monitor a simple-shaped heritage object as a whole, but cannot monitor complicated heritage objects, which may have a large number of surfaces inside and outside. Wireless sensor networks, comprising many small-sized, low-cost, low-power intelligent sensor nodes, are more useful for detecting the deformation of every small part of a heritage object. Wireless sensor networks need an effective mechanism to reduce both the communication costs and energy consumption in order to monitor heritage objects in real time. In this paper, we provide an effective heritage object deformation detection and tracking method using wireless sensor networks (EffeHDDT). In EffeHDDT, we discover a connected core set of sensor nodes to reduce the communication cost for transmitting and collecting the data of the sensor networks. In particular, we propose a heritage object boundary detecting and tracking mechanism. Both theoretical analysis and experimental results demonstrate that our EffeHDDT method outperforms existing methods in terms of network traffic and the precision of deformation detection. PMID:25365458

  7. Development of real-time extensometer based on image processing

    NASA Astrophysics Data System (ADS)

    Adinanta, H.; Puranto, P.; Suryadi

    2017-04-01

    An extensometer system was developed using a high-definition web camera as the main sensor to track object position. The developed system applied digital image processing techniques to measure the change of object position. The position measurement was done in real time so that the system could directly show the actual position along both the x- and y-axes. In this research, the relation between pixel and object position changes was characterized. The system was tested by moving the target over a range of 20 cm in intervals of 1 mm. To verify long-run performance, stability, and linearity of continuous measurements on both axes, the measurement was conducted for 83 hours. The results show that this image-processing-based extensometer has both good stability and linearity.
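
    Below is a minimal sketch of one way such a camera-based extensometer measurement could be implemented with OpenCV template matching; the calibration factors and function names are assumptions, not values from the paper.

    ```python
    # Sketch of an image-based extensometer step: locate a target patch in each frame with
    # normalized cross-correlation and convert its pixel displacement to millimetres.
    # The calibration factors below are placeholders for the pixel-to-position characterization.
    import cv2

    MM_PER_PIXEL_X = 0.05   # assumed values obtained from a calibration stage
    MM_PER_PIXEL_Y = 0.05

    def measure_displacement(frame_bgr, template_gray, ref_xy):
        """Return (dx_mm, dy_mm) of the tracked target relative to its reference pixel position."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        result = cv2.matchTemplate(gray, template_gray, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(result)           # best-match top-left corner
        dx_px = max_loc[0] - ref_xy[0]
        dy_px = max_loc[1] - ref_xy[1]
        return dx_px * MM_PER_PIXEL_X, dy_px * MM_PER_PIXEL_Y
    ```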

  8. Real time automated inspection

    DOEpatents

    Fant, Karl M.; Fundakowski, Richard A.; Levitt, Tod S.; Overland, John E.; Suresh, Bindinganavle R.; Ulrich, Franz W.

    1985-01-01

    A method and apparatus relating to the real time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges and objects formed by the edges are segmented out by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections.
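
    A rough software analogue of the described pipeline (edge enhancement, thresholding, and segmentation of edge-formed objects with simple features for classification) is sketched below with OpenCV; the patent describes a line-scan hardware implementation, so this is only illustrative.

    ```python
    # Rough analogue of the edge-enhance / threshold / segment pipeline using OpenCV.
    # The patented system processes individual scan lines in hardware; here whole frames are used.
    import cv2
    import numpy as np

    def segment_surface_defects(gray):
        """Return bounding boxes and areas of candidate surface imperfections in a grayscale image."""
        # Edge enhancement (gradient magnitude), analogous to the edge-enhanced image.
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
        # Threshold to segment out the strong edges.
        _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Group edge pixels into objects and derive simple features for later classification.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
        return [(stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP],
                 stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT],
                 stats[i, cv2.CC_STAT_AREA]) for i in range(1, n)]
    ```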

  9. Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.

    2018-04-01

    At present, intelligent video analysis technology has been widely used in various fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of the image still has some unavoidable problems. Tracking based on pixel coordinates cannot reflect the real position of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking in the target's space coordinate system after converting the target's 2-D image coordinates into 3-D coordinates. The experimental results show that our method restores the real position changes of targets well and can also accurately recover the trajectory of the target in space.
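
    Assuming the intrinsic matrix K and the extrinsics (R, t) have been obtained with Zhang-style calibration (e.g. cv2.calibrateCamera) and that tracked targets lie on a known ground plane Z = 0, one common way to realize this kind of 2-D to 3-D conversion is the homography back-projection sketched below; it is not necessarily the authors' exact formulation.

    ```python
    # Sketch of mapping a tracked pixel to world coordinates on the ground plane Z = 0.
    import numpy as np

    def pixel_to_ground_plane(u, v, K, R, t):
        """Back-project pixel (u, v) onto the world plane Z = 0 and return (X, Y, 0)."""
        # For points with Z = 0, the projection reduces to a homography H = K [r1 r2 t].
        H = K @ np.column_stack((R[:, 0], R[:, 1], t.reshape(3)))
        world = np.linalg.inv(H) @ np.array([u, v, 1.0])
        world /= world[2]                      # normalize homogeneous coordinates
        return np.array([world[0], world[1], 0.0])
    ```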

  10. Siamese convolutional networks for tracking the spine motion

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Sui, Xiubao; Sun, Yicheng; Liu, Chengwei; Hu, Yong

    2017-09-01

    Deep learning models have demonstrated great success in various computer vision tasks such as image classification and object tracking. However, tracking the lumbar spine in digitized video fluoroscopic imaging (DVFI), which can quantitatively analyze the motion of the spine to diagnose lumbar instability, has not yet been well developed due to the lack of a steady and robust tracking method. In this paper, we propose a novel visual tracking algorithm for lumbar vertebra motion based on a Siamese convolutional neural network (CNN) model. We train a fully convolutional neural network offline to learn generic image features. The network is trained to learn a similarity function that compares the labeled target in the first frame with candidate patches in the current frame. The similarity function returns a high score if the two images depict the same object. Once learned, the similarity function is used to track a previously unseen object without any online adaptation. In the current frame, the tracker evaluates candidate rotated patches sampled around the previous target position and outputs a rotated bounding box that locates the predicted target precisely. Results indicate that the proposed tracking method can detect the lumbar vertebra steadily and robustly. Even for images with low contrast and cluttered backgrounds, the presented tracker still achieves good tracking performance. Further, the proposed algorithm operates at high speed for real-time tracking.
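
    A minimal SiamFC-style sketch of the similarity-function idea is given below in PyTorch; the tiny backbone and input shapes are placeholders, not the trained network or the rotated-patch sampling used in the paper.

    ```python
    # Minimal Siamese similarity sketch: a shared embedding network and a cross-correlation
    # between the template embedding and the search-region embedding.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SiameseTracker(nn.Module):
        def __init__(self):
            super().__init__()
            # Placeholder fully convolutional backbone shared by both branches (grayscale input).
            self.backbone = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            )

        def forward(self, template, search):
            z = self.backbone(template)            # (1, C, h, w) embedding of the labeled target
            x = self.backbone(search)              # (1, C, H, W) embedding of the current search crop
            # Cross-correlate: high responses mark candidate target locations.
            return F.conv2d(x, z)                  # (1, 1, H - h + 1, W - w + 1) score map

    # Usage sketch (search crop must be larger than the template crop):
    # score = SiameseTracker()(template_patch, search_patch); loc = score.flatten().argmax()
    ```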

  11. Vision-based object detection and recognition system for intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Ran, Bin; Liu, Henry X.; Martono, Wilfung

    1999-01-01

    Recently, a proactive crash mitigation system has been proposed to enhance the crash avoidance and survivability of Intelligent Vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as system component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects by using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems: the vehicle detection and recognition sub-system and the traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames. The analysis of the single image frame is performed every ten full-size images. The information model obtains information related to the object, such as time to collision for the object vehicle and relative distance from the traffic signs. Experimental results demonstrated a robust and accurate system for real-time object detection and recognition over thousands of image frames.

  12. Assessing the performance of a motion tracking system based on optical joint transform correlation

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Ben Haj Yahia, N.; Alam, M. S.

    2015-08-01

    We present an optimized system specially designed for the tracking and recognition of moving subjects in a confined environment (such as an elderly person remaining at home). In the first step of our study, we use a VanderLugt correlator (VLC) with an adapted pre-processing treatment of the input plane and a post-processing of the correlation plane via a nonlinear function, allowing us to make a robust decision. The second step is based on an optical joint transform correlation (JTC)-based system (NZ-NL-correlation JTC) for achieving improved detection and tracking of moving persons in a confined space. The proposed system has been found to have significantly superior discrimination and robustness capabilities, allowing it to detect an unknown target in an input scene and to determine the target's trajectory when the target is in motion. The system offers robust tracking performance of a moving target in several scenarios, such as rotational variation of input faces. Test results obtained using various real-life video sequences show that the proposed system is particularly suitable for real-time detection and tracking of moving objects.

  13. Tracking dynamic team activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tambe, M.

    1996-12-31

    AI researchers are striving to build complex multi-agent worlds with intended applications ranging from the RoboCup robotic soccer tournaments, to interactive virtual theatre, to large-scale real-world battlefield simulations. Agent tracking - monitoring other agents' actions and inferring their higher-level goals and intentions - is a central requirement in such worlds. While previous work has mostly focused on tracking individual agents, this paper goes beyond by focusing on agent teams. Team tracking poses the challenge of tracking a team's joint goals and plans. Dynamic, real-time environments add to the challenge, as ambiguities have to be resolved in real-time. The central hypothesis underlying the present work is that an explicit team-oriented perspective enables effective team tracking. This hypothesis is instantiated using the model tracing technology employed in tracking individual agents. Thus, to track team activities, team models are put to service. Team models are a concrete application of the joint intentions framework and enable an agent to track team activities, regardless of the agent's being a collaborative participant or a non-participant in the team. To facilitate real-time ambiguity resolution with team models: (i) aspects of tracking are cast as constraint satisfaction problems to exploit constraint propagation techniques; and (ii) a cost minimality criterion is applied to constrain tracking search. Empirical results from two separate tasks in real-world, dynamic environments - one collaborative and one competitive - are provided.

  14. Feasibility of using global system for mobile communication (GSM)-based tracking for vaccinators to improve oral poliomyelitis vaccine campaign coverage in rural Pakistan.

    PubMed

    Chandir, Subhash; Dharma, Vijay Kumar; Siddiqi, Danya Arif; Khan, Aamir Javed

    2017-09-05

    Despite multiple rounds of immunization campaigns, it has not been possible to achieve optimum immunization coverage for poliovirus in Pakistan. Supplementary activities to improve coverage of immunization, such as door-to-door campaigns, are constrained by several factors including inaccurate hand-drawn maps and a lack of means to objectively monitor field teams in real time, resulting in suboptimal vaccine coverage during campaigns. Global System for Mobile Communications (GSM)-based tracking of mobile subscriber identity modules (SIMs) of vaccinators provides a low-cost solution to identify missed areas and ensure effective immunization coverage. We conducted a pilot study to investigate the feasibility of using GSM technology to track vaccinators through observing indicators including acceptability, ease of implementation, costs and scalability as well as the likelihood of ownership by District Health Officials. The real-time location of the field teams was displayed on a GSM tracking web dashboard accessible by supervisors and managers for effective monitoring of workforce attendance including 'time in-time out', and discerning if all target areas - specifically remote and high-risk locations - had been reached. Direct access to this information by supervisors eliminated the possibility of data fudging and inaccurate reporting by workers regarding their mobility. The tracking cost per vaccinator was USD 0.26/month. Our study shows that GSM-based tracking is potentially a cost-efficient approach, results in better monitoring and accountability, is scalable and provides the potential for improved geographic coverage of health services. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Clutter in electronic medical records: examining its performance and attentional costs using eye tracking.

    PubMed

    Moacdieh, Nadine; Sarter, Nadine

    2015-06-01

    The objective was to use eye tracking to trace the underlying changes in attention allocation associated with the performance effects of clutter, stress, and task difficulty in visual search and noticing tasks. Clutter can degrade performance in complex domains, yet more needs to be known about the associated changes in attention allocation, particularly in the presence of stress and for different tasks. Frequently used and relatively simple eye tracking metrics do not effectively capture the various effects of clutter, which is critical for comprehensively analyzing clutter and developing targeted, real-time countermeasures. Electronic medical records (EMRs) were chosen as the application domain for this research. Clutter, stress, and task difficulty were manipulated, and physicians' performance on search and noticing tasks was recorded. Several eye tracking metrics were used to trace attention allocation throughout those tasks, and subjective data were gathered via a debriefing questionnaire. Clutter degraded performance in terms of response time and noticing accuracy. These decrements were largely accentuated by high stress and task difficulty. Eye tracking revealed the underlying attentional mechanisms, and several display-independent metrics were shown to be significant indicators of the effects of clutter. Eye tracking provides a promising means to understand in detail (offline) and prevent (in real time) major performance breakdowns due to clutter. Display designers need to be aware of the risks of clutter in EMRs and other complex displays and can use the identified eye tracking metrics to evaluate and/or adjust their display. © 2015, Human Factors and Ergonomics Society.

  16. SU-G-JeP3-08: Robotic System for Ultrasound Tracking in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhlemann, I; Graduate School for Computing in Medicine and Life Sciences, University of Luebeck; Jauer, P

    Purpose: For safe and accurate real-time tracking of tumors for IGRT using 4D ultrasound, it is necessary to make use of novel, high-end force-sensitive lightweight robots designed for human-machine interaction. Such a robot will be integrated into an existing robotized ultrasound system for non-invasive 4D live tracking, using a newly developed real-time control and communication framework. Methods: The new KUKA LBR iiwa robot is used for robotized ultrasound real-time tumor tracking. Besides more precise probe contact pressure detection, this robot provides an additional 7th link, enhancing the dexterity of the kinematics and the mounted transducer. Several integrated, certified safety features create a safe environment for the patients during treatment. However, to remotely control the robot for the ultrasound application, a real-time control and communication framework has to be developed. Based on a client/server concept, client-side control commands are received and processed by a central server unit and are implemented by a client module running directly on the robot's controller. Several special functionalities for robotized ultrasound applications are integrated, and the robot can now be used for real-time control of the image quality by adjusting the transducer position and contact pressure. The framework was evaluated with respect to overall real-time capability for communication and processing of three different standard commands. Results: Due to inherent, certified safety modules, the new robot ensures a safe environment for patients during tumor tracking. Furthermore, the developed framework shows overall real-time capability with a maximum average latency of 3.6 ms (minimum 2.5 ms; 5000 trials). Conclusion: The novel KUKA LBR iiwa robot will advance the current robotized ultrasound tracking system with important features. With the developed framework, it is now possible to remotely control this robot and use it for robotized ultrasound tracking applications, including image quality control and target tracking.

  17. MRI-based dynamic tracking of an untethered ferromagnetic microcapsule navigating in liquid

    NASA Astrophysics Data System (ADS)

    Dahmen, Christian; Belharet, Karim; Folio, David; Ferreira, Antoine; Fatikow, Sergej

    2016-04-01

    The propulsion of ferromagnetic objects by means of MRI gradients is a promising approach to enable new forms of therapy. In this work, necessary techniques are presented to make this approach work. This includes path planning algorithms working on MRI data, ferromagnetic artifact imaging and a tracking algorithm which delivers position feedback for the ferromagnetic objects, and a propulsion sequence to enable interleaved magnetic propulsion and imaging. Using a dedicated software environment, integrating path-planning methods and real-time tracking, a clinical MRI system is adapted to provide this new functionality for controlled interventional targeted therapeutic applications. Through MRI-based sensing analysis, this article aims to propose a framework to plan a robust pathway to enhance the navigation ability to reach deep locations in the human body. The proposed approaches are validated with different experiments.

  18. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    NASA Astrophysics Data System (ADS)

    Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting

    2002-12-01

    This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
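
    A hedged sketch of the chroma-key extraction step is shown below using OpenCV; the HSV key-color range is an assumed green-screen setting, not a value from the paper.

    ```python
    # Sketch of chroma-key foreground extraction: pixels close to the key color are treated
    # as background and removed before the object is composited into the virtual scene.
    import cv2
    import numpy as np

    def chroma_key_extract(frame_bgr, lower_hsv=(35, 80, 80), upper_hsv=(85, 255, 255)):
        """Return the foreground object with the keyed (assumed green) background set to zero."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        background = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
        foreground_mask = cv2.bitwise_not(background)
        # Clean up small speckles before compositing.
        foreground_mask = cv2.morphologyEx(foreground_mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        return cv2.bitwise_and(frame_bgr, frame_bgr, mask=foreground_mask)
    ```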

  19. Model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald

    1992-01-01

    This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.

  20. Tracking and nowcasting convective precipitation cells at European scale for transregional warnings

    NASA Astrophysics Data System (ADS)

    Meyer, Vera; Tüchler, Lukas

    2013-04-01

    A transregional overview of the current weather situation is considered as highly valuable information to assist forecasters as well as official authorities for disaster management in their decision making processes. The development of the European-wide radar composite OPERA enables for the first time a coherent object-oriented tracking and nowcasting of intense precipitation cells in real time at continental scale and at a resolution of 2 x 2 km² and 15 minutes. Recently, the object-oriented cell-tracking tool A-TNT (Austrian Thunderstorm Nowcasting Tool) has been developed at ZAMG. A-TNT utilizes the method of ec-TRAM [1]. It consists of two autonomously operating routines, which identify, track and nowcast radar- and lightning-cells separately. The two independent outputs are combined to a coherent storm monitoring and nowcasting in a final step. Within the framework of HAREN (Hazard Assessment based on Rainfall European Nowcasts), which is a project funded by the EC Directorate General for Humanitarian Aid and Civil Protection, A-TNT has been adapted to OPERA radar data. The objective of HAREN is the support of forecasters and official authorities in their decision-making processes concerning precipitation induced hazards with pan-European information. This study will present (1) the general performance of the object-oriented approach for thunderstorm tracking and nowcasting on continental scale giving insight into its current capabilities and limitations and (2) the utilization of object-oriented cell information for automated precipitation warnings carried out within the framework of HAREN. Data collected from April to October 2012 are used to assess the performance of cell-tracking based on radar data. Furthermore, the benefit of additional lightning information provided by the European Cooperation for Lightning Detection (EUCLID) for thunderstorm tracking and nowcasting will be summarized in selected analyses. REFERENCES: [1] Meyer, V. K., H. Höller, and H. D. Betz 2012: Automated thunderstorm tracking and nowcasting: utilization of three-dimensional lightning and radar data. Manuscript accepted for publication in ACPD.

  1. Real-Time Optical Surveillance of LEO/MEO with Small Telescopes

    NASA Astrophysics Data System (ADS)

    Zimmer, P.; McGraw, J.; Ackermann, M.

    J.T. McGraw and Associates, LLC operates two proof-of-concept wide-field imaging systems to test novel techniques for uncued surveillance of LEO/MEO/GEO and, in collaboration with the University of New Mexico (UNM), uses a third small telescope for rapidly queued same-orbit follow-up observations. Using our GPU-accelerated detection scheme, the proof-of-concept systems operating at sites near and within Albuquerque, NM, have detected objects fainter than V=13 at greater than 6 sigma significance. This detection approximately corresponds to a 16 cm object with albedo of 0.12 at 1000 km altitude. Dozens of objects are measured during each operational twilight period, many of which have no corresponding catalog object. The two proof-of-concept systems, separated by ~30km, work together by taking simultaneous images of the same orbital volume to constrain the orbits of detected objects using parallax measurements. These detections are followed-up by imaging photometric observations taken at UNM to confirm and further constrain the initial orbit determination and independently assess the objects and verify the quality of the derived orbits. This work continues to demonstrate that scalable optical systems designed for real-time detection of fast moving objects, which can be then handed off to other instruments capable of tracking and characterizing them, can provide valuable real-time surveillance data at LEO and beyond, which substantively informs the SSA process.

  2. A real-time sub-μrad laser beam tracking system

    NASA Astrophysics Data System (ADS)

    Buske, Ivo; Schragner, Ralph; Riede, Wolfgang

    2007-10-01

    We present a rugged and reliable real-time laser beam tracking system operating with a high-speed, high-resolution piezo-electric tip/tilt mirror. Characteristics of the piezo mirror and position sensor are investigated. An industrial programmable automation controller is used to develop a real-time digital PID controller. The controller provides a one-million-gate field programmable gate array (FPGA) used to realize a high closed-loop frequency of 50 kHz. Beam tracking with a root-mean-square accuracy better than 0.15 μrad has been confirmed in the laboratory. The system is intended as an add-on module for established mechanical mrad tracking systems.
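
    For illustration, a discrete PID update of the kind such a controller computes each cycle is sketched below; the gains and sample period are placeholders, and the actual controller runs on an FPGA rather than in Python.

    ```python
    # Minimal discrete PID controller sketch for a closed-loop beam-steering application.
    class DiscretePID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            """Return the actuator command (e.g. tip/tilt mirror drive) for one control cycle."""
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Usage sketch with placeholder gains and a 50 kHz loop period:
    # controller = DiscretePID(kp=0.8, ki=120.0, kd=1e-4, dt=1.0 / 50_000)
    # command = controller.update(setpoint=0.0, measurement=position_sensor_reading)
    ```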

  3. Optimization of RFID network planning using Zigbee and WSN

    NASA Astrophysics Data System (ADS)

    Hasnan, Khalid; Ahmed, Aftab; Badrul-aisham, Bakhsh, Qadir

    2015-05-01

    Everyone wants their life to be easier. Radio frequency identification (RFID) wireless technology is used to make our lives easier. RFID technology increases productivity, accuracy and convenience in the delivery of services in the supply chain. It is used for various applications such as preventing theft of automobiles, collecting tolls without stopping, eliminating checkout lines at grocery stores, managing traffic, hospital management, corporate campuses and airports, mobile asset tracking, warehousing, tracking library books, and tracking a wealth of assets in supply chain management. The efficiency of RFID can be enhanced by integrating it with wireless sensor networks (WSN), Zigbee mesh networks and the Internet of Things (IoT). The proposed system is used for identification, sensing and real-time locating system (RTLS) tracking of items in an indoor heterogeneous region. The system gives richer real-time information on objects' characteristics, location and environmental parameters such as temperature, noise and humidity. RTLS reduces human error, optimizes inventory management, and increases productivity and information accuracy in an indoor heterogeneous network. The power consumption and the data transmission rate of the system can be minimized by using a low-power hardware design.

  4. "SMALLab": Virtual Geology Studies Using Embodied Learning with Motion, Sound, and Graphics

    ERIC Educational Resources Information Center

    Johnson-Glenberg, Mina C.; Birchfield, David; Usyal, Sibel

    2009-01-01

    We present a new and innovative interface that allows the learner's body to move freely in a multimodal learning environment. The Situated Multimedia Arts Learning Laboratory ("SMALLab") uses 3D object tracking, real time graphics, and surround-sound to enhance embodied learning. Our hypothesis is that optimal learning and retention occur when…

  5. Real time automated inspection

    DOEpatents

    Fant, K.M.; Fundakowski, R.A.; Levitt, T.S.; Overland, J.E.; Suresh, B.R.; Ulrich, F.W.

    1985-05-21

    A method and apparatus are described relating to the real-time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges, and objects formed by the edges are segmented out by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections. 43 figs.

  6. The first patient treatment of electromagnetic-guided real time adaptive radiotherapy using MLC tracking for lung SABR.

    PubMed

    Booth, Jeremy T; Caillet, Vincent; Hardcastle, Nicholas; O'Brien, Ricky; Szymura, Kathryn; Crasta, Charlene; Harris, Benjamin; Haddad, Carol; Eade, Thomas; Keall, Paul J

    2016-10-01

    Real-time adaptive radiotherapy that enables smaller irradiated volumes may reduce pulmonary toxicity. We report on the first patient treatment of electromagnetic-guided real-time adaptive radiotherapy delivered with MLC tracking for lung stereotactic ablative body radiotherapy. A clinical trial was developed to investigate the safety and feasibility of MLC tracking in lung. The first patient was an 80-year-old man with a single left lower lobe lung metastasis to be treated with SABR to 48 Gy in 4 fractions. In-house software was integrated with a standard linear accelerator to adapt the treatment beam shape and position based on electromagnetic transponders implanted in the lung. MLC tracking plans were compared against standard ITV-based treatment planning. MLC tracking plan delivery was reconstructed in the patient to confirm safe delivery. Real-time adaptive radiotherapy delivered with MLC tracking, compared to standard ITV-based planning, reduced the PTV by 41% (18.7 to 11 cm³), the mean lung dose by 30% (202 to 140 cGy), V20 by 35% (2.6% to 1.5%) and V5 by 9% (8.9% to 8%). An emerging technology, MLC tracking, has been translated into the clinic and used to treat lung SABR patients for the first time. This milestone represents an important first step for clinical real-time adaptive radiotherapy that could reduce pulmonary toxicity in lung radiotherapy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Tracking multiple particles in fluorescence time-lapse microscopy images via probabilistic data association.

    PubMed

    Godinez, William J; Rohr, Karl

    2015-02-01

    Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to recalculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2-D and 3-D images as well as to real 2-D and 3-D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2-D and 3-D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012.
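
    The combined-innovation update at the heart of probabilistic data association can be sketched as follows; the externally supplied measurement model and association weights are simplifying assumptions, and the full method also inflates the covariance to account for association ambiguity and uses interacting multiple models.

    ```python
    # Sketch of a probabilistic-data-association (PDA) update for one tracked particle:
    # several candidate measurements are fused into a single combined innovation,
    # weighted by association probabilities (here assumed to come from an image likelihood).
    import numpy as np

    def pda_update(x, P, measurements, weights, H, R):
        """Kalman update with a combined innovation (x: state, P: covariance)."""
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        innovations = [z - H @ x for z in measurements]
        combined = sum(w * nu for w, nu in zip(weights, innovations))
        x_new = x + K @ combined
        P_new = P - K @ S @ K.T                  # simplified; full PDA also inflates P
        return x_new, P_new
    ```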

  8. 3D Tracking Based Augmented Reality for Cultural Heritage Data Management

    NASA Astrophysics Data System (ADS)

    Battini, C.; Landi, G.

    2015-02-01

    The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time but with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations; augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about cultural heritage status is crucial for its interpretation, its conservation and during the restoration processes. The application of digital-imaging solutions for various feature extraction, image data-analysis techniques, and three-dimensional reconstruction of ancient artworks, allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware is rapidly evolving and complex three-dimensional models can be interactively visualised and explored on applications developed for mobile devices. This paper will show how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that will allow interaction with a physical object for its study and analysis, using 3D Tracking based Augmented Reality techniques.

  9. The Waypoint Planning Tool: Real Time Flight Planning for Airborne Science

    NASA Astrophysics Data System (ADS)

    He, M.; Goodman, H. M.; Blakeslee, R.; Hall, J. M.

    2010-12-01

    NASA Earth science research utilizes both spaceborne and airborne real time observations in the planning and operations of its field campaigns. The coordination of air and space components is critical to achieve the goals and objectives and ensure the success of an experiment. Spaceborne imagery provides regular and continual coverage of the Earth and it is a significant component in all NASA field experiments. Real time visible and infrared geostationary images from GOES satellites and multi-spectral data from the many elements of the NASA suite of instruments aboard the TRMM, Terra, Aqua, Aura, and other NASA satellites have become the norm. Similarly, the NASA Airborne Science Program draws upon a rich pool of instrumented aircraft. The NASA McDonnell Douglas DC-8, Lockheed P3 Orion, DeHavilland Twin Otter, King Air B200, and Gulfstream-III are all staples of NASA's well-stocked, versatile hangar. A key component in many field campaigns is coordinating the aircraft with satellite overpasses, other airplanes and the constantly evolving, dynamic weather conditions. Given the variables involved, developing a good flight plan that meets the objectives of the field experiment can be a challenging and time consuming task. Planning a research aircraft mission within the context of meeting the science objectives is a complex task because it is much more than flying from point A to B. Flight plans typically consist of flying a series of transects or involve dynamic path changes when “chasing” a hurricane or forest fire. These aircraft flight plans are typically designed by the mission scientists then verified and implemented by the navigator or pilot. Flight planning can be an arduous task requiring frequent sanity checks by the flight crew. This requires real time situational awareness of the weather conditions that affect the aircraft track. Scientists at the University of Alabama-Huntsville and the NASA Marshall Space Flight Center developed the Waypoint Planning Tool, an interactive software tool that enables scientists to develop their own flight plans (also known as waypoints) with point-and-click mouse capabilities on a digital map draped with real time satellite imagery. The Waypoint Planning Tool has further advanced to include satellite orbit predictions and seamlessly interfaces with the Real Time Mission Monitor which tracks the aircraft’s position when the planes are flying. This presentation will describe the capabilities and features of the Waypoint Planning Tool highlighting the real time aspect, interactive nature and the resultant benefits to the airborne science community.

  10. The Waypoint Planning Tool: Real Time Flight Planning for Airborne Science

    NASA Technical Reports Server (NTRS)

    He, Yubin; Blakeslee, Richard; Goodman, Michael; Hall, John

    2010-01-01

    NASA Earth science research utilizes both spaceborne and airborne real time observations in the planning and operations of its field campaigns. The coordination of air and space components is critical to achieve the goals and objectives and ensure the success of an experiment. Spaceborne imagery provides regular and continual coverage of the Earth and it is a significant component in all NASA field experiments. Real time visible and infrared geostationary images from GOES satellites and multi-spectral data from the many elements of the NASA suite of instruments aboard the TRMM, Terra, Aqua, Aura, and other NASA satellites have become the norm. Similarly, the NASA Airborne Science Program draws upon a rich pool of instrumented aircraft. The NASA McDonnell Douglas DC-8, Lockheed P3 Orion, DeHavilland Twin Otter, King Air B200, and Gulfstream-III are all staples of NASA's well-stocked, versatile hangar. A key component in many field campaigns is coordinating the aircraft with satellite overpasses, other airplanes and the constantly evolving, dynamic weather conditions. Given the variables involved, developing a good flight plan that meets the objectives of the field experiment can be a challenging and time consuming task. Planning a research aircraft mission within the context of meeting the science objectives is a complex task because it is much more than flying from point A to B. Flight plans typically consist of flying a series of transects or involve dynamic path changes when "chasing" a hurricane or forest fire. These aircraft flight plans are typically designed by the mission scientists then verified and implemented by the navigator or pilot. Flight planning can be an arduous task requiring frequent sanity checks by the flight crew. This requires real time situational awareness of the weather conditions that affect the aircraft track. Scientists at the University of Alabama-Huntsville and the NASA Marshall Space Flight Center developed the Waypoint Planning Tool, an interactive software tool that enables scientists to develop their own flight plans (also known as waypoints) with point-and-click mouse capabilities on a digital map draped with real time satellite imagery. The Waypoint Planning Tool has further advanced to include satellite orbit predictions and seamlessly interfaces with the Real Time Mission Monitor which tracks the aircraft's position when the planes are flying. This presentation will describe the capabilities and features of the Waypoint Planning Tool highlighting the real time aspect, interactive nature and the resultant benefits to the airborne science community.

  11. Real-time depth camera tracking with geometrically stable weight algorithm

    NASA Astrophysics Data System (ADS)

    Fu, Xingyin; Zhu, Feng; Qi, Feng; Wang, Mingming

    2017-03-01

    We present an approach for real-time camera tracking with a depth stream. Existing methods are prone to drift in scenes without sufficient geometric information. First, we propose a new weighting method for the iterative closest point algorithm commonly used in real-time dense mapping and tracking systems. By detecting uncertainty in pose and increasing the weight of points that constrain unstable transformations, our system achieves accurate and robust trajectory estimation results. Our pipeline can be fully parallelized on the GPU and incorporated into current real-time depth camera tracking systems seamlessly. Second, we compare state-of-the-art weighting algorithms and propose a weight degradation algorithm according to the measurement characteristics of a consumer depth camera. Third, we use Nvidia Kepler shuffle instructions during warp and block reduction to improve the efficiency of our system. Results on the public TUM RGB-D benchmark demonstrate that our camera tracking system achieves state-of-the-art results both in accuracy and efficiency.

  12. Event Management of RFID Data Streams: Fast Moving Consumer Goods Supply Chains

    NASA Astrophysics Data System (ADS)

    Mo, John P. T.; Li, Xue

    Radio Frequency Identification (RFID) is a wireless communication technology that uses radio-frequency waves to transfer information between tagged objects and readers without line of sight. This creates tremendous opportunities for linking real world objects into a world of "Internet of things". Application of RFID to the Fast Moving Consumer Goods sector will introduce billions of RFID tags in the world. Almost everything is tagged for tracking and identification purposes. This phenomenon will impose a new challenge not only to the network capacity but also to the scalability of processing of RFID events and data. This chapter uses two national demonstrator projects in Australia as case studies to introduce an event management framework to process high volume RFID data streams in real time and automatically transform physical RFID observations into business-level events. The model handles various temporal event patterns, both simple and complex, with temporal constraints. The model can be implemented in a data management architecture that allows global RFID item tracking and enables fast, large-scale RFID deployment.

  13. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
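
    The aggregation idea can be sketched as a name-keyed registry that any tracking source updates and any client queries as if it were a single tracker; the class, field names and example values below are illustrative, not MetaTracker's actual API.

    ```python
    # Sketch of merging named tracked objects from multiple tracking systems into one view.
    import time

    class TrackerRegistry:
        def __init__(self):
            self._objects = {}    # name -> (position, source, timestamp)

        def update(self, name, position, source):
            """Accept an update from any tracking system; the newest report wins."""
            self._objects[name] = (position, source, time.time())

        def get(self, name):
            """Return the latest known position of a named object, regardless of source."""
            position, _source, _stamp = self._objects[name]
            return position

    # Usage sketch: an object transitions seamlessly between two tracking systems.
    # registry = TrackerRegistry()
    # registry.update("trainee_helmet", (1.2, 0.4, 1.8), source="optical_room_A")
    # registry.update("trainee_helmet", (5.8, 2.1, 1.8), source="inertial_room_B")
    ```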

  14. Improved segmentation of occluded and adjoining vehicles in traffic surveillance videos

    NASA Astrophysics Data System (ADS)

    Juneja, Medha; Grover, Priyanka

    2013-12-01

    Occlusion in image processing refers to the concealment of part or all of an object from the view of an observer. Videos captured in real time by static cameras on roads often contain overlapping vehicles and, hence, occlusion. Occlusion in traffic surveillance videos usually occurs when an object being tracked is hidden by another object. This makes it difficult for object detection algorithms to distinguish all the vehicles efficiently. Morphological operations also tend to join vehicles in close proximity, resulting in the formation of a single bounding box around more than one vehicle. Such problems lead to errors in further video processing, such as counting the vehicles in a video. The proposed system brings forward an efficient moving object detection and tracking approach to reduce such errors. The paper uses a successive frame subtraction technique for the detection of moving objects. Further, this paper implements the watershed algorithm to segment overlapped and adjoining vehicles. The segmentation results have been improved by the use of noise and morphological operations.
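
    A sketch of the two stages described above (successive frame subtraction followed by marker-based watershed splitting of merged vehicles) is given below with OpenCV; the thresholds and structuring-element sizes are assumptions, not the paper's settings.

    ```python
    # Frame subtraction for the moving-object mask, then watershed to split adjoining vehicles.
    import cv2
    import numpy as np

    def segment_vehicles(prev_gray, curr_gray, frame_bgr):
        """Return a label image with one label per separated vehicle (frame_bgr: 8-bit, 3-channel)."""
        # 1. Moving-object mask from successive frame subtraction.
        diff = cv2.absdiff(curr_gray, prev_gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

        # 2. Watershed markers: sure foreground from the distance transform, sure background
        #    from dilation; the unknown band between them is what watershed resolves.
        dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
        _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
        sure_fg = sure_fg.astype(np.uint8)
        sure_bg = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=3)
        unknown = cv2.subtract(sure_bg, sure_fg)

        n, markers = cv2.connectedComponents(sure_fg)
        markers = markers + 1                 # background becomes 1, objects 2..n
        markers[unknown == 255] = 0           # unknown region to be flooded by watershed
        return cv2.watershed(frame_bgr, markers)
    ```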

  15. Real-Time Tracking of Knee Adduction Moment in Patients with Knee Osteoarthritis

    PubMed Central

    Kang, Sang Hoon; Lee, Song Joo; Zhang, Li-Qun

    2014-01-01

    Background: The external knee adduction moment (EKAM) is closely associated with the presence, progression, and severity of knee osteoarthritis (OA). However, there is a lack of a convenient and practical method to estimate and track in real-time the EKAM of patients with knee OA for clinical evaluation and gait training, especially outside of gait laboratories. New Method: A real-time EKAM estimation method was developed and applied to track and investigate the EKAM and other knee moments during stepping on an elliptical trainer in both healthy subjects and a patient with knee OA. Results: Substantial changes were observed in the EKAM and other knee moments during stepping in the patient with knee OA. Comparison with Existing Method(s): This is the first study to develop and test the feasibility of a real-time tracking method of the EKAM in patients with knee OA using 3-D inverse dynamics. Conclusions: The study provides an accurate and practical method to evaluate in real-time the critical EKAM associated with knee OA, which is expected to help us diagnose and evaluate patients with knee OA and provide the patients with real-time EKAM feedback for rehabilitation training. PMID:24361759

  16. Handheld portable real-time tracking and communications device

    DOEpatents

    Wiseman, James M [Albuquerque, NM; Riblett, Jr., Loren E.; Green, Karl L [Albuquerque, NM; Hunter, John A [Albuquerque, NM; Cook, III, Robert N.; Stevens, James R [Arlington, VA

    2012-05-22

    Portable handheld real-time tracking and communications devices include: a controller module, a communications module including a global positioning and mesh network radio module, a data transfer and storage module, and a user interface module enclosed in a water-resistant enclosure. Real-time tracking and communications devices can be used by protective force, security and first responder personnel to provide situational awareness, allowing for enhanced coordination and effectiveness in rapid response situations. Such devices communicate with other authorized devices via mobile ad-hoc wireless networks, and do not require fixed infrastructure for their operation.

  17. SU-G-JeP1-11: Feasibility Study of Markerless Tracking Using Dual Energy Fluoroscopic Images for Real-Time Tumor-Tracking Radiotherapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiinoki, T; Shibuya, K; Sawada, A

    Purpose: The new real-time tumor-tracking radiotherapy (RTRT) system was installed in our institution. This system consists of two x-ray tubes and color image intensifiers (I.I.s). The fiducial marker which was implanted near the tumor was tracked using color fluoroscopic images. However, the implantation of the fiducial marker is very invasive. Color fluoroscopic images enable to increase the recognition of the tumor. However, these images were not suitable to track the tumor without fiducial marker. The purpose of this study was to investigate the feasibility of markerless tracking using dual energy colored fluoroscopic images for real-time tumor-tracking radiotherapy system. Methods: Themore » colored fluoroscopic images of static and moving phantom that had the simulated tumor (30 mm diameter sphere) were experimentally acquired using the RTRT system. The programmable respiratory motion phantom was driven using the sinusoidal pattern in cranio-caudal direction (Amplitude: 20 mm, Time: 4 s). The x-ray condition was set to 55 kV, 50 mA and 105 kV, 50 mA for low energy and high energy, respectively. Dual energy images were calculated based on the weighted logarithmic subtraction of high and low energy images of RGB images. The usefulness of dual energy imaging for real-time tracking with an automated template image matching algorithm was investigated. Results: Our proposed dual energy subtraction improve the contrast between tumor and background to suppress the bone structure. For static phantom, our results showed that high tracking accuracy using dual energy subtraction images. For moving phantom, our results showed that good tracking accuracy using dual energy subtraction images. However, tracking accuracy was dependent on tumor position, tumor size and x-ray conditions. Conclusion: We indicated that feasibility of markerless tracking using dual energy fluoroscopic images for real-time tumor-tracking radiotherapy system. Furthermore, it is needed to investigate the tracking accuracy using proposed dual energy subtraction images for clinical cases.« less

  18. Graphical user interface concepts for tactical augmented reality

    NASA Astrophysics Data System (ADS)

    Argenta, Chris; Murphy, Anne; Hinton, Jeremy; Cook, James; Sherrill, Todd; Snarski, Steve

    2010-04-01

    Applied Research Associates and BAE Systems are working together to develop a wearable augmented reality system under the DARPA ULTRA-Vis program. Our approach to achieving the objectives of ULTRA-Vis, called iLeader, incorporates a full color 40° field of view (FOV) see-thru holographic waveguide integrated with sensors for full position and head tracking to provide an unobtrusive information system for operational maneuvers. iLeader will enable warfighters to mark up the 3D battle-space with symbologic identification of graphical control measures, friendly force positions and enemy/target locations. Our augmented reality display provides dynamic real-time painting of symbols on real objects, a pose-sensitive 360° representation of relevant object positions, and visual feedback for a variety of system activities. The iLeader user interface and situational awareness graphical representations are highly intuitive, non-disruptive, and always tactically relevant. We used best human-factors practices, system engineering expertise, and cognitive task analysis to design effective strategies for presenting real-time situational awareness to the military user without distorting their natural senses and perception. We present requirements identified for presenting information within a see-through display in combat environments, challenges in designing suitable visualization capabilities, and solutions that enable us to bring real-time iconic command and control to the tactical user community.

  19. U27 : real-time commercial vehicle safety & security monitoring final report.

    DOT National Transportation Integrated Search

    2012-12-01

    Accurate real-time vehicle tracking has a wide range of applications including fleet management, drug/speed/law enforcement, transportation planning, traffic safety, air quality, electronic tolling, and national security. While many alternative track...

  20. Security Applications Of Computer Motion Detection

    NASA Astrophysics Data System (ADS)

    Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry

    1987-05-01

    An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, mean and median tests and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.

  1. TH-AB-202-05: BEST IN PHYSICS (JOINT IMAGING-THERAPY): First Online Ultrasound-Guided MLC Tracking for Real-Time Motion Compensation in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ipsen, S; Bruder, R; Schweikard, A

    Purpose: While MLC tracking has been successfully used for motion compensation of moving targets, current real-time target localization methods rely on correlation models with x-ray imaging or implanted electromagnetic transponders rather than direct target visualization. In contrast, ultrasound imaging yields volumetric data in real-time (4D) without ionizing radiation. We report the first results of online 4D ultrasound-guided MLC tracking in a phantom. Methods: A real-time tracking framework was installed on a 4D ultrasound station (Vivid7 dimension, GE) and used to detect a 2mm spherical lead marker inside a water tank. The volumetric frame rate was 21.3Hz (47ms). The marker was rigidly attached to a motion stage programmed to reproduce nine tumor trajectories (five prostate, four lung). The 3D marker position from ultrasound was used for real-time MLC aperture adaption. The tracking system latency was measured and compensated by prediction for lung trajectories. To measure geometric accuracy, anterior and lateral conformal fields with 10cm circular aperture were delivered for each trajectory. The tracking error was measured as the difference between marker position and MLC aperture in continuous portal imaging. For dosimetric evaluation, 358° VMAT fields were delivered to a biplanar diode array dosimeter using the same trajectories. Dose measurements with and without MLC tracking were compared to a static reference dose using a 3%/3 mm γ-test. Results: The tracking system latency was 170ms. The mean root-mean-square tracking error was 1.01mm (0.75mm prostate, 1.33mm lung). Tracking reduced the mean γ-failure rate from 13.9% to 4.6% for prostate and from 21.8% to 0.6% for lung with high-modulation VMAT plans and from 5% (prostate) and 18% (lung) to 0% with low modulation. Conclusion: Real-time ultrasound tracking was successfully integrated with MLC tracking for the first time and showed similar accuracy and latency as other methods while holding the potential to measure target motion non-invasively. SI was supported by the Graduate School for Computing in Medicine and Life Science, German Excellence Initiative [grant DFG GSC 235/1].
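
    The dosimetric comparison above relies on a 3%/3 mm γ-test. The sketch below is a minimal brute-force 2D global gamma pass-rate calculation in Python, assuming both dose distributions share one regular grid; the function name, normalisation choice and absence of a low-dose threshold are simplifications, not details from this study.

    ```python
    import numpy as np

    def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
        """Brute-force 2D global gamma (3%/3 mm by default); illustrative only."""
        ny, nx = ref.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        coords = np.stack([yy.ravel() * spacing_mm, xx.ravel() * spacing_mm], axis=1)
        ref_flat = ref.ravel()
        norm = dose_tol * ref.max()              # global dose normalisation

        gammas = np.empty(meas.size)
        for i, (p, dose) in enumerate(zip(coords, meas.ravel())):
            dist_term = ((coords - p) ** 2).sum(axis=1) / dist_tol_mm ** 2
            dose_term = ((ref_flat - dose) / norm) ** 2
            gammas[i] = np.sqrt((dist_term + dose_term).min())
        return float((gammas <= 1.0).mean())     # fraction of points passing
    ```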

  2. Near-real-time biplanar fluoroscopic tracking system for the video tumor fighter

    NASA Astrophysics Data System (ADS)

    Lawson, Michael A.; Wika, Kevin G.; Gilles, George T.; Ritter, Rogers C.

    1991-06-01

    We have developed software capable of the three-dimensional tracking of objects in the brain volume, and the subsequent overlaying of an image of the object onto previously obtained MR or CT scans. This software has been developed for use with the Magnetic Stereotaxis System (MSS), also called the 'Video Tumor Fighter' (VTF). The software was written for a Sun 4/110 SPARC workstation with an ANDROX ICS-400 image processing card installed to manage this task. At present, the system uses input from two orthogonally-oriented, visible-light cameras and a simulated scene to determine the three-dimensional position of the object of interest. The coordinates are then transformed into MR or CT coordinates and an image of the object is displayed in the appropriate intersecting MR slice on a computer screen. This paper describes the tracking algorithm and discusses how it was implemented in software. The system's hardware is also described. The limitations of the present system are discussed and plans for incorporating bi-planar x-ray fluoroscopy are presented.

  3. Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model

    PubMed Central

    Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal

    2016-01-01

    In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomous tracking and chasing a moving target. The first main approach in this novel algorithm is the use of a global matching and local tracking approach. In other words, the algorithm initially finds feature correspondences in a way that an improved binary descriptor is developed for global feature matching and an iterative Lucas–Kanade optical flow algorithm is employed for local feature tracking. The second main module is the use of an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, a hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, it is utilized for the first time to deal with outlier features in a visual tracking application), which further maximizes the representation of the target object in which we formulate outlier feature detection as a binary classification problem with the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor with 640 × 512 image resolution and outperforms the most popular state-of-the-art trackers favorably in terms of robustness, efficiency and accuracy. PMID:27589769

  4. EpxMedTracking: Feasibility Evaluation of an SMS-Based Medication Adherence Tracking System in Community Practice

    PubMed Central

    Tricarico, Christopher; Peters, Robert; Som, Avik; Javaherian, Kavon

    2017-01-01

    Background Medication adherence remains a difficult problem to both assess and improve in patients. It is a multifactorial problem that goes beyond the commonly cited reason of forgetfulness. To date, eHealth (also known as mHealth and telehealth) interventions to improve medication adherence have largely been successful in improving adherence. However, interventions to date have used time- and cost-intensive strategies or focused solely on medication reminding, leaving much room for improvement in using a modality as flexible as eHealth. Objective Our objective was to develop and implement a fully automated short message service (SMS)-based medication adherence system, EpxMedTracking, that reminds patients to take their medications, explores reasons for missed doses, and alerts providers to help address problems of medication adherence in real time. Methods EpxMedTracking is a fully automated bidirectional SMS-based messaging system with provider involvement that was developed and implemented through Epharmix, Inc. Researchers analyzed 11 weeks of de-identified data from patients cared for by multiple provider groups in routine community practice for feasibility and functionality. Patients included were those in the care of a provider purchasing the EpxMedTracking tool from Epharmix and were enrolled from a clinic by their providers. The primary outcomes assessed were the rate of engagement with the system, reasons for missing doses, and self-reported medication adherence. Results Of the 25 patients studied over the 11 weeks, 3 never responded and subsequently opted out or were deleted by their provider. No other patients opted out or were deleted during the study period. Across the 11 weeks of the study period, the overall weekly engagement rate was 85.9%. There were 109 total reported missed doses including “I forgot” at 33 events (30.3%), “I felt better” at 29 events (26.6%), “out of meds” at 20 events (18.4%), “I felt sick” at 19 events (17.4%), and “other” at 3 events (2.8%). We also noted an increase in self-reported medication adherence in patients using the EpxMedTracking system. Conclusions EpxMedTracking is an effective tool for tracking self-reported medication adherence over time. It uniquely identifies actionable reasons for missing doses for subsequent provider intervention in real time based on patient feedback. Patients enrolled on EpxMedTracking also self-report higher rates of medication adherence over time while on the system. PMID:28506954

  5. Results of analysis of archive MSG data in the context of MCS prediction system development for economic decisions assistance - case studies

    NASA Astrophysics Data System (ADS)

    Szafranek, K.; Jakubiak, B.; Lech, R.; Tomczuk, M.

    2012-04-01

    PROZA (Operational decision-making based on atmospheric conditions) is a project co-financed by the European Union through the European Regional Development Fund. One of its tasks is to develop an operational forecast system intended to support different branches of the economy, such as forestry and fruit farming, by reducing the risk of economic decisions while taking weather conditions into consideration. Within the framework of these studies, a prediction system for sudden convective phenomena (storms and tornadoes) is being built. The authors' main purpose is to predict MCSs (Mesoscale Convective Systems) based on MSG (Meteosat Second Generation) real-time data. Several tests have been performed to date. Meteosat satellite images in selected spectral channels, collected over the Central Europe region for May and August 2010, were used to detect and track cloud systems related to MCSs. In the proposed tracking method, cloud objects are first defined using a temperature threshold, and the selected cells are then tracked using the principle of overlapping positions on consecutive images. The main benefit of using temperature thresholding to define cells is its simplicity. During the tracking process, the algorithm links the cells of the image at time t to those of the following image at time t+dt that correspond to the same cloud system (Morel-Senesi algorithm). Automated detection and elimination of some instabilities present in the tracking algorithm was developed. The poster presents an analysis of exemplary MCSs in the context of near real-time prediction system development.
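
    The threshold-and-overlap tracking principle summarised above can be sketched in a few lines; the brightness-temperature threshold and the array layout are assumptions for illustration, not values from the PROZA system.

    ```python
    import numpy as np
    from scipy import ndimage

    def detect_cells(tb, threshold_k=235.0):
        """Label contiguous cold-cloud cells where brightness temperature < threshold."""
        labels, n_cells = ndimage.label(tb < threshold_k)
        return labels, n_cells

    def link_cells(labels_t, labels_t1):
        """Link cells at time t to cells at t+dt by overlapping pixel position."""
        links = {}
        for cell_id in range(1, labels_t.max() + 1):
            overlap = labels_t1[labels_t == cell_id]
            overlap = overlap[overlap > 0]
            if overlap.size:
                # The successor is the t+dt cell sharing the most pixels.
                links[cell_id] = int(np.bincount(overlap).argmax())
        return links
    ```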

  6. Real-time automatic fiducial marker tracking in low contrast cine-MV images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang

    2013-01-15

    Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images, therefore a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching needs to match an object shape that changes significantly with implantation and projection angle. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computation load because their methods all require exhaustive search in the region of interest. The authors solve this problem by synergetic use of modern but well-tested computer vision and artificial intelligence techniques; specifically, the authors detect implanted markers utilizing discriminant analysis for initialization and use mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve the accuracy, followed by ultrafast sequential tracking after the initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered as the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The standard deviations of the results from the 6 researchers are 2.3 and 2.6 pixels. The proposed framework takes about 128 ms to detect four markers in the first MV image and about 23 ms to track these markers in each of the subsequent images. Conclusions: The unified framework for tracking of multiple markers presented here can achieve marker detection accuracy similar to manual detection even in low-contrast cine-MV images. It can cope with shape deformations of fiducial markers at different gantry angles. The fast processing speed reduces the image processing portion of the system latency, and therefore can improve the performance of real-time motion compensation.
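
    The sequential-tracking stage uses mean-shift analysis once a marker window has been initialised. The sketch below uses OpenCV's meanShift on an intensity-histogram back-projection; the histogram settings and window handling are generic defaults, not the authors' discriminant-analysis pipeline.

    ```python
    import cv2

    def track_marker(frames, init_box):
        """Track one fiducial through grayscale frames with mean shift (illustrative)."""
        x, y, w, h = init_box
        roi = frames[0][y:y + h, x:x + w]
        # Intensity histogram of the marker region serves as the tracked feature.
        hist = cv2.calcHist([roi], [0], None, [32], [0, 256])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

        window, track = init_box, []
        for frame in frames[1:]:
            back_proj = cv2.calcBackProject([frame], [0], hist, [0, 256], 1)
            _, window = cv2.meanShift(back_proj, window, term)
            track.append(window)
        return track
    ```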

  7. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    PubMed Central

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-01-01

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684

  8. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units.

    PubMed

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-02-12

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions.

  9. Tracking a Non-Cooperative Target Using Real-Time Stereovision-Based Control: An Experimental Study.

    PubMed

    Shtark, Tomer; Gurfil, Pini

    2017-03-31

    Tracking a non-cooperative target is a challenge, because in unfamiliar environments most targets are unknown and unspecified. Stereovision is suited to deal with this issue, because it allows passive scanning of large areas and estimation of the relative position, velocity and shape of objects. This research is an experimental effort aimed at developing, implementing and evaluating real-time non-cooperative target tracking methods using stereovision measurements only. A computer-vision feature detection and matching algorithm was developed in order to identify and locate the target in the captured images. Three different filters were designed for estimating the relative position and velocity, and their performance was compared. A line-of-sight control algorithm was used for the purpose of keeping the target within the field-of-view. Extensive analytical and numerical investigations were conducted on the multi-view stereo projection equations and their solutions, which were used to initialize the different filters. This research shows, using an experimental and numerical evaluation, the benefits of using the unscented Kalman filter and the total least squares technique in the stereovision-based tracking problem. These findings offer a general and more accurate method for solving the static and dynamic stereovision triangulation problems and the concomitant line-of-sight control.
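
    The triangulation at the heart of the stereovision measurement reduces to a small homogeneous least-squares problem per point. A minimal linear (DLT-style) sketch follows; the unscented Kalman filtering and line-of-sight control stages are omitted, and the projection matrices are assumed to come from prior calibration.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear triangulation of one point seen by two calibrated cameras.

        P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
        Solves A X = 0 in a least-squares sense via SVD and dehomogenises.
        """
        A = np.stack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]
    ```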

  10. Tracking a Non-Cooperative Target Using Real-Time Stereovision-Based Control: An Experimental Study

    PubMed Central

    Shtark, Tomer; Gurfil, Pini

    2017-01-01

    Tracking a non-cooperative target is a challenge, because in unfamiliar environments most targets are unknown and unspecified. Stereovision is suited to deal with this issue, because it allows passive scanning of large areas and estimation of the relative position, velocity and shape of objects. This research is an experimental effort aimed at developing, implementing and evaluating real-time non-cooperative target tracking methods using stereovision measurements only. A computer-vision feature detection and matching algorithm was developed in order to identify and locate the target in the captured images. Three different filters were designed for estimating the relative position and velocity, and their performance was compared. A line-of-sight control algorithm was used for the purpose of keeping the target within the field-of-view. Extensive analytical and numerical investigations were conducted on the multi-view stereo projection equations and their solutions, which were used to initialize the different filters. This research shows, using an experimental and numerical evaluation, the benefits of using the unscented Kalman filter and the total least squares technique in the stereovision-based tracking problem. These findings offer a general and more accurate method for solving the static and dynamic stereovision triangulation problems and the concomitant line-of-sight control. PMID:28362338

  11. Tracking Multiple People Online and in Real Time

    DTIC Science & Technology

    2015-12-21

    Duke University, Durham, NC. Approved for public release; distribution is unlimited. We cast the problem of tracking several people online and in real time as a graph partitioning problem that takes the form of an NP-hard binary...

  12. A low cost real-time motion tracking approach using webcam technology.

    PubMed

    Krishnan, Chandramouli; Washabaugh, Edward P; Seetharaman, Yogesh

    2015-02-05

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject's limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. Copyright © 2014 Elsevier Ltd. All rights reserved.
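
    The published implementation uses a webcam with LabVIEW Vision Assistant; the sketch below is an analogous, not identical, low-cost 2D marker tracker in Python/OpenCV, where the HSV colour bounds for the limb marker are assumed values.

    ```python
    import cv2
    import numpy as np

    LOWER = np.array([40, 80, 80])        # HSV bounds for a green marker (assumed)
    UPPER = np.array([80, 255, 255])

    cap = cv2.VideoCapture(0)             # any webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        m = cv2.moments(mask)
        if m["m00"] > 0:
            # Marker centroid in pixels; stream this as real-time kinematic feedback.
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    cap.release()
    ```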

  13. A low cost real-time motion tracking approach using webcam technology

    PubMed Central

    Krishnan, Chandramouli; Washabaugh, Edward P.; Seetharaman, Yogesh

    2014-01-01

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject’s limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. PMID:25555306

  14. Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2016-10-01

    Recent technological advancements in hardware systems have made higher-quality cameras possible. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps) [1]. Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time [2]. In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show the UAV detection performance from a field trial that we conducted in August 2015.

  15. Particle Tracking Facilitates Real Time Capable Motion Correction in 2D or 3D Two-Photon Imaging of Neuronal Activity.

    PubMed

    Aghayee, Samira; Winkowski, Daniel E; Bowen, Zachary; Marshall, Erin E; Harrington, Matt J; Kanold, Patrick O; Losert, Wolfgang

    2017-01-01

    The application of 2-photon laser scanning microscopy (TPLSM) techniques to measure the dynamics of cellular calcium signals in populations of neurons is an extremely powerful technique for characterizing neural activity within the central nervous system. The use of TPLSM on awake and behaving subjects promises new insights into how neural circuit elements cooperatively interact to form sensory perceptions and generate behavior. A major challenge in imaging such preparations is unavoidable animal and tissue movement, which leads to shifts in the imaging location (jitter). The presence of image motion can lead to artifacts, especially since quantification of TPLSM images involves analysis of fluctuations in fluorescence intensities for each neuron, determined from small regions of interest (ROIs). Here, we validate a new motion correction approach to compensate for motion of TPLSM images in the superficial layers of auditory cortex of awake mice. We use a nominally uniform fluorescent signal as a secondary signal to complement the dynamic signals from genetically encoded calcium indicators. We tested motion correction for single plane time lapse imaging as well as multiplane (i.e., volume) time lapse imaging of cortical tissue. Our procedure of motion correction relies on locating the brightest neurons and tracking their positions over time using established techniques of particle finding and tracking. We show that our tracking based approach provides subpixel resolution without compromising speed. Unlike most established methods, our algorithm also captures deformations of the field of view and thus can compensate e.g., for rotations. Object tracking based motion correction thus offers an alternative approach for motion correction, one that is well suited for real time spike inference analysis and feedback control, and for correcting for tissue distortions.
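
    The correction principle of locating the brightest neurons and tracking them across frames can be sketched roughly as below; the peak detection and nearest-neighbour linking are simplifications of the particle-tracking approach, and all parameter values are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def bright_peaks(frame, n_peaks=20, min_distance=5):
        """Return (row, col) positions of the brightest local maxima in one frame."""
        smoothed = ndimage.gaussian_filter(frame.astype(float), sigma=1.5)
        is_peak = smoothed == ndimage.maximum_filter(smoothed, size=min_distance)
        coords = np.argwhere(is_peak)
        order = np.argsort(smoothed[is_peak])[::-1]
        return coords[order][:n_peaks]

    def frame_shift(peaks_ref, peaks_cur):
        """Estimate rigid jitter as the median displacement of matched peaks."""
        shifts = []
        for p in peaks_cur:
            d = np.linalg.norm(peaks_ref - p, axis=1)
            shifts.append(p - peaks_ref[d.argmin()])   # nearest reference peak
        return np.median(np.array(shifts), axis=0)
    ```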

  16. Particle Tracking Facilitates Real Time Capable Motion Correction in 2D or 3D Two-Photon Imaging of Neuronal Activity

    PubMed Central

    Aghayee, Samira; Winkowski, Daniel E.; Bowen, Zachary; Marshall, Erin E.; Harrington, Matt J.; Kanold, Patrick O.; Losert, Wolfgang

    2017-01-01

    The application of 2-photon laser scanning microscopy (TPLSM) techniques to measure the dynamics of cellular calcium signals in populations of neurons is an extremely powerful technique for characterizing neural activity within the central nervous system. The use of TPLSM on awake and behaving subjects promises new insights into how neural circuit elements cooperatively interact to form sensory perceptions and generate behavior. A major challenge in imaging such preparations is unavoidable animal and tissue movement, which leads to shifts in the imaging location (jitter). The presence of image motion can lead to artifacts, especially since quantification of TPLSM images involves analysis of fluctuations in fluorescence intensities for each neuron, determined from small regions of interest (ROIs). Here, we validate a new motion correction approach to compensate for motion of TPLSM images in the superficial layers of auditory cortex of awake mice. We use a nominally uniform fluorescent signal as a secondary signal to complement the dynamic signals from genetically encoded calcium indicators. We tested motion correction for single plane time lapse imaging as well as multiplane (i.e., volume) time lapse imaging of cortical tissue. Our procedure of motion correction relies on locating the brightest neurons and tracking their positions over time using established techniques of particle finding and tracking. We show that our tracking based approach provides subpixel resolution without compromising speed. Unlike most established methods, our algorithm also captures deformations of the field of view and thus can compensate e.g., for rotations. Object tracking based motion correction thus offers an alternative approach for motion correction, one that is well suited for real time spike inference analysis and feedback control, and for correcting for tissue distortions. PMID:28860973

  17. Tracking accuracy of a real-time fiducial tracking system for patient positioning and monitoring in radiation therapy.

    PubMed

    Shchory, Tal; Schifter, Dan; Lichtman, Rinat; Neustadter, David; Corn, Benjamin W

    2010-11-15

    In radiation therapy there is a need to accurately know the location of the target in real time. A novel radioactive tracking technology has been developed to answer this need. The technology consists of a radioactive implanted fiducial marker designed to minimize migration and a linac mounted tracking device. This study measured the static and dynamic accuracy of the new tracking technology in a clinical radiation therapy environment. The tracking device was installed on the linac gantry. The radioactive marker was located in a tissue equivalent phantom. Marker location was measured simultaneously by the radioactive tracking system and by a Microscribe G2 coordinate measuring machine (certified spatial accuracy of 0.38 mm). Localization consistency throughout a volume and absolute accuracy in the Fixed coordinate system were measured at multiple gantry angles over volumes of at least 10 cm in diameter centered at isocenter. Dynamic accuracy was measured with the marker located inside a breathing phantom. The mean consistency for the static source was 0.58 mm throughout the tested region at all measured gantry angles. The mean absolute position error in the Fixed coordinate system for all gantry angles was 0.97 mm. The mean real-time tracking error for the dynamic source within the breathing phantom was less than 1 mm. This novel radioactive tracking technology has the potential to be useful in accurate target localization and real-time monitoring for radiation therapy. Copyright © 2010 Elsevier Inc. All rights reserved.

  18. Fluctuations in Conjunction Miss Distance Projections as Time Approaches Time of Closest Approach

    NASA Technical Reports Server (NTRS)

    Christian, John A., III

    2005-01-01

    A responsibility of the Trajectory Operations Officer is to ensure that the International Space Station (ISS) avoids colliding with debris. United States Space Command (USSPACECOM) tracks and catalogs a portion of the debris in Earth orbit, but only objects with a perigee less than 600 km and a radar cross section (RCS) greater than 10 cm; these objects, in fact, represent only a small fraction of the objects in Earth orbit. To accommodate for this, the ISS uses shielding to protect against collisions with smaller objects. This study provides a better understanding of how quickly, and to what degree, USSPACECOM projections tend to converge to the final, true miss distance. The information included is formulated to better predict the behavior of miss distance data during real-time operations. It was determined that the driving components, in order of impact on miss distance fluctuations, are energy dissipation rate (EDR), RCS, and inclination. Data used in this analysis, calculations made, and conclusions drawn are stored in Microsoft Excel log sheets. A separate log sheet, created for each conjunction, contains information such as predicted miss distances, apogee and perigee of the debris orbit, EDR, RCS, inclination, tracks and observations, statistical data, and other evaluation/orbital parameters.

  19. Determination of feature generation methods for PTZ camera object tracking

    NASA Astrophysics Data System (ADS)

    Doyle, Daniel D.; Black, Jonathan T.

    2012-06-01

    Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The select feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000) and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images such that the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.
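
    As a concrete illustration of the trained-feature path compared in the paper (SIFT descriptors matched with FLANN via OpenCV), the sketch below detects, matches and ratio-filters keypoints; the ratio threshold and FLANN parameters are common defaults rather than the study's values.

    ```python
    import cv2

    def sift_flann_matches(img_query, img_scene, ratio=0.7):
        """Match SIFT keypoints between a template and a scene image with FLANN."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img_query, None)
        kp2, des2 = sift.detectAndCompute(img_scene, None)

        flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
        matches = flann.knnMatch(des1, des2, k=2)

        # Lowe's ratio test rejects ambiguous correspondences.
        good = [pair[0] for pair in matches
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
        return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
    ```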

  20. Tracking multiple objects is limited only by object spacing, not by speed, time, or capacity.

    PubMed

    Franconeri, S L; Jonathan, S V; Scimeca, J M

    2010-07-01

    In dealing with a dynamic world, people have the ability to maintain selective attention on a subset of moving objects in the environment. Performance in such multiple-object tracking is limited by three primary factors: the number of objects that one can track, the speed at which one can track them, and how close together they can be. We argue that this last limit, of object spacing, is the root cause of all performance constraints in multiple-object tracking. In two experiments, we found that as long as the distribution of object spacing is held constant, tracking performance is unaffected by large changes in object speed and tracking time. These results suggest that barring object-spacing constraints, people could reliably track an unlimited number of objects as fast as they could track a single object.

  1. Object classification for obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Regensburger, Uwe; Graefe, Volker

    1991-03-01

    Object recognition is necessary for any mobile robot operating autonomously in the real world. This paper discusses an object classifier based on a 2-D object model. Obstacle candidates are tracked and analyzed, and false alarms generated by the object detector are recognized and rejected. The methods have been implemented on a multi-processor system and tested in real-world experiments. They work reliably under favorable conditions, but problems sometimes occur, e.g., when objects contain many features (edges) or move in front of a structured background.

  2. An anti-disturbing real time pose estimation method and system

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Zhang, Xiao-hu

    2011-08-01

    Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object needs some known features to track. In practice, many algorithms perform this task with high accuracy, but all of them suffer when features are lost. This paper investigates pose estimation when some, or even all, of the known features are invisible. First, the known features are tracked to calculate the pose in the current and the next image. Second, some unknown but good-to-track features are automatically detected in the current and the next image. Third, those unknown features that lie on the rigid object and can be matched between the two images are retained. Because of the motion characteristics of a rigid object, the 3D information of those unknown features can be solved from the object's pose at the two moments and their 2D positions in the two images, except in two cases: the first is when the camera and object have no relative motion and camera parameters such as focal length and principal point do not change between the two moments; the second is when there is no shared scene or no matched feature in the two images. Finally, because the previously unknown features are now known, pose estimation can continue in the following images despite the loss of the original known features, by repeating the process described above. The robustness of pose estimation with different feature detection algorithms, such as Kanade-Lucas-Tomasi (KLT) features, the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), is compared, and the impact of different relative motions between the camera and the rigid object is discussed. Graphics Processing Unit (GPU) parallel computing is also used to extract and match hundreds of features for real-time pose estimation, which is difficult to achieve on a Central Processing Unit (CPU). Compared with other pose estimation methods, this new method can estimate the pose between camera and object when some or even all known features are lost, and has a quick response time thanks to GPU parallel computing. The method presented here can be widely used in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in autonomous navigation and positioning and in robotics in unknown environments. The results of simulations and experiments demonstrate that the proposed method suppresses noise effectively, extracts features robustly, and meets real-time requirements. Theoretical analysis and experiments show that the method is reasonable and efficient.
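
    The pose computation itself, relating image features whose 3D positions on the rigid object are known, is a standard Perspective-n-Point problem. A minimal OpenCV sketch is given below; the feature bootstrapping and GPU-parallel matching described in the abstract are omitted, and the intrinsic matrix is assumed to be known from calibration.

    ```python
    import cv2
    import numpy as np

    def estimate_pose(object_pts, image_pts, K, dist_coeffs=None):
        """Recover rotation and translation of a rigid object from 2D-3D matches.

        object_pts: Nx3 model points, image_pts: Nx2 pixel points, K: 3x3 intrinsics.
        """
        if dist_coeffs is None:
            dist_coeffs = np.zeros(5)
        ok, rvec, tvec = cv2.solvePnP(
            object_pts.astype(np.float64), image_pts.astype(np.float64),
            K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
        R, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 matrix
        return ok, R, tvec
    ```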

  3. Transforming War Fighting through the Use of Service Based Architecture (SBA) Technology

    DTIC Science & Technology

    2006-05-04

    near-real-time video & telemetry to users on network using standard web-based protocols – Provides web-based access to archived video files… MTI/Target Tracks Service Capabilities – Disseminates near-real-time MTI and Target Tracks to users on network based on consumer-specified geographic… filter. IBS SIGINT Service Capabilities – Disseminates near-real-time IBS SIGINT data to users on network based on consumer-specified geographic filter

  4. Analysis of Plasma Bubble Signatures in the Ionosphere

    DTIC Science & Technology

    2011-03-01

    …the equinoctial months resulted in greater slant TEC differences and, hence, greater communication problems. The results of this study not only… resulting in miscalculated enemy positions and misidentified space objects and orbit tracks. Errors in orbital positions could result in disastrous… uses a time-dependent physics-based model of the global ionosphere-plasmasphere and a Kalman filter as a basis for assimilating a diverse set of real…

  5. A multispectral automatic target recognition application for maritime surveillance, search, and rescue

    NASA Astrophysics Data System (ADS)

    Schoonmaker, Jon; Reed, Scott; Podobna, Yuliya; Vazquez, Jose; Boucher, Cynthia

    2010-04-01

    Due to increased security concerns, the commitment to monitor and maintain security in the maritime environment is increasingly a priority. A country's coast is the most vulnerable area for the incursion of illegal immigrants, terrorists and contraband. This work illustrates the ability of a low-cost, light-weight, multi-spectral, multi-channel imaging system to handle the environment and see under difficult marine conditions. The system and its detection and tracking technologies should be organic to the maritime homeland security community for search and rescue, fisheries, defense, and law enforcement. It is tailored for airborne and ship-based platforms to detect, track and monitor suspected objects (such as semi-submerged targets like marine mammals, vessels in distress, and drug smugglers). In this system, automated detection and tracking technology is used to detect, classify and localize potential threats or objects of interest within the imagery provided by the multi-spectral system. These algorithms process the sensor data in real time, thereby providing immediate feedback when features of interest have been detected. A supervised detection system based on Haar features and Cascade Classifiers is presented and results are provided on real data. The system is shown to be extendable and reusable for a variety of different applications.
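
    The supervised detection stage is built on Haar features and cascade classifiers. The sketch below shows how such a trained cascade is typically applied with OpenCV; the cascade file name and the tuning parameters are placeholders, since the authors' maritime model is not part of the record.

    ```python
    import cv2

    # The XML path is a placeholder for a cascade trained on maritime targets.
    cascade = cv2.CascadeClassifier("maritime_target_cascade.xml")

    def detect_targets(frame_gray):
        """Return bounding boxes of candidate targets; tuning values are illustrative."""
        return cascade.detectMultiScale(
            frame_gray, scaleFactor=1.1, minNeighbors=4, minSize=(24, 24))
    ```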

  6. Real-time implementation of logo detection on open source BeagleBoard

    NASA Astrophysics Data System (ADS)

    George, M.; Kehtarnavaz, N.; Estevez, L.

    2011-03-01

    This paper presents the real-time implementation of our previously developed logo detection and tracking algorithm on the open source BeagleBoard mobile platform. This platform has an OMAP processor that incorporates an ARM Cortex processor. The algorithm combines Scale Invariant Feature Transform (SIFT) with k-means clustering, online color calibration and moment invariants to robustly detect and track logos in video. Various optimization steps that are carried out to allow the real-time execution of the algorithm on BeagleBoard are discussed. The results obtained are compared to the PC real-time implementation results.

  7. Real-time seam tracking control system based on line laser visions

    NASA Astrophysics Data System (ADS)

    Zou, Yanbiao; Wang, Yanbo; Zhou, Weilin; Chen, Xiangzhi

    2018-07-01

    A six-degree-of-freedom robotic welding automatic tracking platform was designed in this study to realize real-time tracking of weld seams. Moreover, the feature point tracking method and the adaptive fuzzy control algorithm used in the welding process were studied and analyzed, and a laser vision sensor and its measuring principle were designed and studied. Before welding, the initial coordinates of the feature points were obtained using morphological methods. After welding started, a target tracking method based on a Gaussian kernel was used to extract the feature points of the weld in real time. An adaptive fuzzy controller was designed that takes the deviation of the feature points and the rate of change of that deviation as inputs. The quantization factors, scale factor, and weight function were adjusted in real time, and the input and output domains, fuzzy rules, and membership functions were constantly updated to generate a smooth sequence of bias voltages for the robot. Three groups of experiments were conducted on different types of curved welds in an environment with strong arc and spatter noise, using 120 A short-circuit Metal Active Gas (MAG) arc welding. The tracking error was less than 0.32 mm and the sensor's measurement frequency can reach 20 Hz. The end of the torch ran smoothly during welding, and the weld trajectory can be tracked accurately, thereby satisfying the requirements of welding applications.

  8. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography

    PubMed Central

    Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.

    2016-01-01

    Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 x 15 mm achieved diffraction-limited imaging over a lateral tracking range of +/- 2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800
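
    A basic version of the pupil-centroid detection that drives the tracking loop is sketched below; the dark-pupil threshold and largest-contour selection are generic assumptions rather than the oculography processing actually used in this system.

    ```python
    import cv2

    def pupil_centroid(ir_frame, dark_threshold=40):
        """Estimate the pupil centre (pixels) in an infrared eye image; illustrative."""
        _, mask = cv2.threshold(ir_frame, dark_threshold, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        pupil = max(contours, key=cv2.contourArea)   # darkest large blob
        m = cv2.moments(pupil)
        if m["m00"] == 0:
            return None
        return m["m10"] / m["m00"], m["m01"] / m["m00"]
    ```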

  9. Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors

    PubMed Central

    Everding, Lukas; Conradt, Jörg

    2018-01-01

    In order to safely navigate and orient in their local surroundings autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling those tasks for traditional frame-based cameras, these have to deal with the fact that conventional cameras sample their environment with a fixed frequency. Most prominently, the same features have to be found in consecutive frames and corresponding features then need to be matched using elaborate techniques as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasicontinuous stream of vision information analogous to the information stream created by the ganglion cells in mammal retinae. All pixels of DVS operate asynchronously without a periodic sampling rate and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment which is observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges which in man-made environments often form lines they were chosen as feature to track. Our method is based on detecting planes of DVS address events in x-y-t-space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer, hence it is suitable for low latency robotics. The efficacy and performance are evaluated on real-world data sets which show artificial structures in an office-building using event data for tracking and frame data for ground-truth estimation from a DAVIS240C sensor. PMID:29515386
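
    The core geometric idea, detecting planes of DVS address events in x-y-t space, can be illustrated with a small least-squares fit; the clustering, persistence and update logic of the actual tracker are omitted and the tolerance value is an assumption.

    ```python
    import numpy as np

    def fit_event_plane(x, y, t):
        """Least-squares fit of a plane t = a*x + b*y + c to a cluster of DVS events.

        A moving edge produces events close to such a plane in x-y-t space;
        (a, b) encodes its apparent image-plane motion.
        """
        A = np.column_stack([x, y, np.ones_like(x)])
        (a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)
        return a, b, c

    def plane_inliers(x, y, t, a, b, c, tol_s=2e-3):
        """Events within a temporal tolerance (here 2 ms) of the fitted plane."""
        return np.abs(a * x + b * y + c - t) < tol_s
    ```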

  10. A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors

    PubMed Central

    Mishra, Abhishek; Ghosh, Rohan; Principe, Jose C.; Thakor, Nitish V.; Kukreja, Sunil L.

    2017-01-01

    Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event based sensors are low power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details at high temporal resolution and, hence, require significantly less calculations. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches for object tracking using neuromorphic sensors perform well while the sensor is static or a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates computation of scene statistics and characterize objects in it. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with maximum accuracy of 92%. PMID:28316563

  11. TH-AB-202-02: Real-Time Verification and Error Detection for MLC Tracking Deliveries Using An Electronic Portal Imaging Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J Zwan, B; Central Coast Cancer Centre, Gosford, NSW; Colvill, E

    2016-06-15

    Purpose: The added complexity of the real-time adaptive multi-leaf collimator (MLC) tracking increases the likelihood of undetected MLC delivery errors. In this work we develop and test a system for real-time delivery verification and error detection for MLC tracking radiotherapy using an electronic portal imaging device (EPID). Methods: The delivery verification system relies on acquisition and real-time analysis of transit EPID image frames acquired at 8.41 fps. In-house software was developed to extract the MLC positions from each image frame. Three comparison metrics were used to verify the MLC positions in real-time: (1) field size, (2) field location, and (3) field shape. The delivery verification system was tested for 8 VMAT MLC tracking deliveries (4 prostate and 4 lung) where real patient target motion was reproduced using a Hexamotion motion stage and a Calypso system. Sensitivity and detection delay was quantified for various types of MLC and system errors. Results: For both the prostate and lung test deliveries the MLC-defined field size was measured with an accuracy of 1.25 cm² (1 SD). The field location was measured with an accuracy of 0.6 mm and 0.8 mm (1 SD) for lung and prostate respectively. Field location errors (i.e. tracking in wrong direction) with a magnitude of 3 mm were detected within 0.4 s of occurrence in the X direction and 0.8 s in the Y direction. Systematic MLC gap errors were detected as small as 3 mm. The method was not found to be sensitive to random MLC errors and individual MLC calibration errors up to 5 mm. Conclusion: EPID imaging may be used for independent real-time verification of MLC trajectories during MLC tracking deliveries. Thresholds have been determined for error detection and the system has been shown to be sensitive to a range of delivery errors.
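
    The three comparison metrics (field size, field location, field shape) can be computed from binary aperture masks roughly as below; representing the shape comparison with a Dice overlap score is an assumption for illustration, not necessarily the measure used in this work.

    ```python
    import numpy as np

    def field_metrics(measured_mask, planned_mask, pixel_area_cm2):
        """Compare a measured MLC aperture mask against the planned one; illustrative."""
        size_diff = (measured_mask.sum() - planned_mask.sum()) * pixel_area_cm2

        def centroid(mask):
            ys, xs = np.nonzero(mask)
            return np.array([ys.mean(), xs.mean()])

        location_offset = centroid(measured_mask) - centroid(planned_mask)
        dice = 2.0 * np.logical_and(measured_mask, planned_mask).sum() / (
            measured_mask.sum() + planned_mask.sum())
        return size_diff, location_offset, dice
    ```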

  12. Tracking Accuracy of a Real-Time Fiducial Tracking System for Patient Positioning and Monitoring in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shchory, Tal; Schifter, Dan; Lichtman, Rinat

    Purpose: In radiation therapy there is a need to accurately know the location of the target in real time. A novel radioactive tracking technology has been developed to answer this need. The technology consists of a radioactive implanted fiducial marker designed to minimize migration and a linac mounted tracking device. This study measured the static and dynamic accuracy of the new tracking technology in a clinical radiation therapy environment. Methods and Materials: The tracking device was installed on the linac gantry. The radioactive marker was located in a tissue equivalent phantom. Marker location was measured simultaneously by the radioactive tracking system and by a Microscribe G2 coordinate measuring machine (certified spatial accuracy of 0.38 mm). Localization consistency throughout a volume and absolute accuracy in the Fixed coordinate system were measured at multiple gantry angles over volumes of at least 10 cm in diameter centered at isocenter. Dynamic accuracy was measured with the marker located inside a breathing phantom. Results: The mean consistency for the static source was 0.58 mm throughout the tested region at all measured gantry angles. The mean absolute position error in the Fixed coordinate system for all gantry angles was 0.97 mm. The mean real-time tracking error for the dynamic source within the breathing phantom was less than 1 mm. Conclusions: This novel radioactive tracking technology has the potential to be useful in accurate target localization and real-time monitoring for radiation therapy.

  13. Studying visual attention using the multiple object tracking paradigm: A tutorial review.

    PubMed

    Meyerhoff, Hauke S; Papenmeier, Frank; Huff, Markus

    2017-07-01

    Human observers are capable of tracking multiple objects among identical distractors based only on their spatiotemporal information. Since the first report of this ability in the seminal work of Pylyshyn and Storm (1988, Spatial Vision, 3, 179-197), multiple object tracking has attracted many researchers. A reason for this is that it is commonly argued that the attentional processes studied with the multiple object tracking paradigm apparently match the attentional processing during real-world tasks such as driving or team sports. We argue that multiple object tracking provides a good means to study the broader topic of continuous and dynamic visual attention. Indeed, several (partially contradicting) theories of attentive tracking have been proposed within the almost 30 years since its first report, and a large body of research has been conducted to test these theories. With regard to the richness and diversity of this literature, the aim of this tutorial review is to provide researchers who are new to the field of multiple object tracking with an overview of the multiple object tracking paradigm, its basic manipulations, as well as links to other paradigms investigating visual attention and working memory. Further, we aim to review current theories of tracking as well as their empirical evidence. Finally, we review the state of the art in the most prominent research fields of multiple object tracking and how this research has helped to understand visual attention in dynamic settings.

  14. Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies.

    PubMed

    Peikon, Ian D; Fitzsimmons, Nathan A; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2009-06-15

    Collection and analysis of limb kinematic data are essential components of the study of biological motion, including research into biomechanics, kinesiology, neurophysiology and brain-machine interfaces (BMIs). In particular, BMI research requires advanced, real-time systems capable of sampling limb kinematics with minimal contact to the subject's body. To answer this demand, we have developed an automated video tracking system for real-time tracking of multiple body parts in freely behaving primates. The system employs high-contrast markers painted on the animal's joints to continuously track the three-dimensional positions of their limbs during activity. Two-dimensional coordinates captured by each video camera are combined and converted to three-dimensional coordinates using a quadratic fitting algorithm. Real-time operation of the system is accomplished using direct memory access (DMA). The system tracks the markers at a rate of 52 frames per second (fps) in real-time and up to 100fps if video recordings are captured to be later analyzed off-line. The system has been tested in several BMI primate experiments, in which limb position was sampled simultaneously with chronic recordings of the extracellular activity of hundreds of cortical cells. During these recordings, multiple computational models were employed to extract a series of kinematic parameters from neuronal ensemble activity in real-time. The system operated reliably under these experimental conditions and was able to compensate for marker occlusions that occurred during natural movements. We propose that this system could also be extended to applications that include other classes of biological motion.

  15. SU-D-201-05: On the Automatic Recognition of Patient Safety Hazards in a Radiotherapy Setup Using a Novel 3D Camera System and a Deep Learning Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhanam, A; Min, Y; Beron, P

    Purpose: Patient safety hazards such as a wrong patient/site getting treated can lead to catastrophic results. The purpose of this project is to automatically detect potential patient safety hazards during the radiotherapy setup and alert the therapist before the treatment is initiated. Methods: We employed a set of co-located and co-registered 3D cameras placed inside the treatment room. Each camera provided a point-cloud of fraxels (fragment pixels with 3D depth information). Each of the cameras was calibrated using a custom-built calibration target to provide 3D information with less than 2 mm error in the 500 mm neighborhood around the isocenter. To identify potential patient safety hazards, the treatment room components and the patient’s body needed to be identified and tracked in real-time. For feature recognition purposes, we used graph-cut based feature recognition with principal component analysis (PCA) based feature-to-object correlation to segment the objects in real-time. Changes in each object’s position were tracked using the CamShift algorithm. The 3D object information was then stored for each classified object (e.g. gantry, couch). A deep learning framework was then used to analyze all the classified objects in both 2D and 3D and to fine-tune a convolutional network for object recognition. The number of network layers was optimized to identify the tracked objects with >95% accuracy. Results: Our systematic analyses showed that the system was able to recognize wrong patient setups and wrong patient accessories effectively. The combined usage of 2D camera information (color + depth) enabled a topology-preserving approach to verify patient safety hazards in an automatic manner, even in scenarios where the depth information is only partially available. Conclusion: By utilizing the 3D cameras inside the treatment room and deep learning based image classification, potential patient safety hazards can be effectively avoided.
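
    The abstract names the CamShift algorithm for tracking position changes of the segmented objects. The snippet below is a generic OpenCV CamShift sketch on an ordinary color video, not the paper's 3D fraxel pipeline; the file name and initial window are placeholders.

```python
# Minimal OpenCV CamShift sketch (illustrative only; the paper applies CamShift
# to classified 3-D objects, which is not reproduced here).
import cv2
import numpy as np

cap = cv2.VideoCapture("room.mp4")          # hypothetical video file
ok, frame = cap.read()
x, y, w, h = 300, 200, 80, 80               # hypothetical initial window around the object
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
# Hue histogram of the object, ignoring dark / unsaturated pixels
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift adapts the window size and orientation as the object moves
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    frame = cv2.polylines(frame, [np.int32(cv2.boxPoints(rot_rect))], True, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:
        break
cap.release()
```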

  16. Tracks detection from high-orbit space objects

    NASA Astrophysics Data System (ADS)

    Shumilov, Yu. P.; Vygon, V. G.; Grishin, E. A.; Konoplev, A. O.; Semichev, O. P.; Shargorodskii, V. D.

    2017-05-01

    The paper presents the results of studies of a composite algorithm for the detection of high-orbit space objects. Before the algorithm is applied, a series of frames containing weak tracks of space objects, which can be discrete, is recorded. The algorithm includes pre-processing that is classical for astronomy, matched filtering of each frame followed by thresholding, a shear transformation, median filtering of the transformed series of frames, repeated thresholding, and the final detection decision. Weak tracks of space objects were simulated on real frames of the night sky obtained with the telescope operating in a stationary (non-tracking) regime. It is shown that the penetrating power (limiting magnitude) of the optoelectronic device is increased by almost 2 magnitudes.
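
    Reading the algorithm as a shift-and-stack scheme, the sketch below illustrates one possible interpretation (an assumption, not the authors' implementation): frames are sheared along a hypothesized per-frame object motion and median-combined, so a faint track adds up coherently while background noise is suppressed before the final thresholding.

```python
# Hedged sketch of a shear-and-stack (shift-and-median) detector, assuming a
# hypothesized per-frame object motion (vx, vy) in pixels; not the authors' code.
import numpy as np

def shift_frame(frame, dx, dy):
    """Integer-pixel shift with zero padding (sub-pixel shifts are ignored here)."""
    shifted = np.zeros_like(frame)
    h, w = frame.shape
    xs, ys = slice(max(dx, 0), min(w + dx, w)), slice(max(dy, 0), min(h + dy, h))
    xd, yd = slice(max(-dx, 0), min(w - dx, w)), slice(max(-dy, 0), min(h - dy, h))
    shifted[ys, xs] = frame[yd, xd]
    return shifted

def shear_stack_detect(frames, vx, vy, threshold):
    """Align frames along the hypothesized motion, median-combine, and threshold.
    A faint object moving at (vx, vy) adds up coherently; noise does not."""
    aligned = [shift_frame(f, -int(round(vx * k)), -int(round(vy * k)))
               for k, f in enumerate(frames)]
    stacked = np.median(aligned, axis=0)
    return stacked, stacked > threshold

# Synthetic demo: a faint moving spot in noisy frames (hypothetical numbers).
rng = np.random.default_rng(1)
frames = []
for k in range(20):
    f = rng.normal(0.0, 1.0, size=(128, 128))
    f[40 + 2 * k, 30 + 3 * k] += 3.0          # object drifting by (3, 2) px/frame
    frames.append(f)
stacked, detections = shear_stack_detect(frames, vx=3, vy=2, threshold=2.0)
print("detected pixels:", np.argwhere(detections)[:5])
```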

  17. Post-Newtonian equations of motion for LEO debris objects and space-based acquisition, pointing and tracking laser systems

    NASA Astrophysics Data System (ADS)

    Gambi, J. M.; García del Pino, M. L.; Gschwindl, J.; Weinmüller, E. B.

    2017-12-01

    This paper deals with the problem of throwing middle-sized low Earth orbit debris objects into the atmosphere via laser ablation. The post-Newtonian equations provided here allow (hypothetical) space-based acquisition, pointing and tracking systems endowed with very narrow laser beams to reach the presently prescribed pointing accuracy. In fact, whatever the orbital elements of these objects may be, these equations allow the operators to account for the corrections needed to balance the deviations of the line-of-sight directions due to the curvature of the paths the laser beams travel along. To minimize the respective corrections, the systems will have to perform initial positioning manoeuvres, and the shooting point-ahead angles will have to be adapted in real time. The enclosed numerical experiments suggest that neglecting these measures will cause fatal errors, due to differences in the actual locations of the objects comparable to their size.

  18. Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.

    PubMed

    Souto, David; Kerzel, Dirk

    2013-02-06

    Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular combination of rotational and translational motion. We measured tracking eye movements towards rolling objects and found that objects whose rotational and translational motion was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. Relative to a condition without a rotational component, we essentially obtained benefits of congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired through a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.

  19. Automated real time peg and tool detection for the FLS trainer box.

    PubMed

    Nemani, Arun; Sankaranarayanan, Ganesh

    2012-01-01

    This study proposes a method that effectively tracks trocar tool and peg positions in real time to allow real-time assessment of the peg transfer task of the Fundamentals of Laparoscopic Surgery (FLS). By utilizing custom code along with the OpenCV libraries, tool and peg positions can be accurately tracked without altering the original setup conditions of the FLS trainer box. This is achieved via a series of image filtration sequences, thresholding functions, and Haar training methods.
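
    As a rough illustration of the thresholding stage (the Haar-trained tool detector is omitted), the OpenCV sketch below segments peg-colored blobs and reports their centroids; the image file and HSV range are hypothetical placeholders to be tuned for an actual trainer box.

```python
# Hedged OpenCV sketch: colour thresholding plus contour centroids as a stand-in
# for the paper's peg-detection pipeline (the Haar-trained tool detector is omitted).
import cv2

frame = cv2.imread("fls_trainer.png")                    # hypothetical trainer-box image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# Hypothetical HSV range for orange rubber pegs; tune for the actual setup.
mask = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

peg_centers = []
for c in contours:
    if cv2.contourArea(c) < 50:                          # drop small noise blobs
        continue
    m = cv2.moments(c)
    peg_centers.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
print("peg centers (px):", peg_centers)
```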

  20. Software for real-time localization of baleen whale calls using directional sonobuoys: A case study on Antarctic blue whales.

    PubMed

    Miller, Brian S; Calderan, Susannah; Gillespie, Douglas; Weatherup, Graham; Leaper, Russell; Collins, Kym; Double, Michael C

    2016-03-01

    Directional frequency analysis and recording (DIFAR) sonobuoys can allow real-time acoustic localization of baleen whales for underwater tracking and remote sensing, but limited availability of hardware and software has prevented wider usage. These software limitations were addressed by developing a module in the open-source software PAMGuard. A case study is presented demonstrating that this software provides greater efficiency and accessibility than previous methods for detecting, localizing, and tracking Antarctic blue whales in real time. Additionally, this software can easily be extended to track other low and mid frequency sounds including those from other cetaceans, pinnipeds, icebergs, shipping, and seismic airguns.

  1. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  2. Close to real-time robust pedestrian detection and tracking

    NASA Astrophysics Data System (ADS)

    Lipetski, Y.; Loibner, G.; Sidla, O.

    2015-03-01

    Fully automated video-based pedestrian detection and tracking is a challenging task with many practical and important applications. We present work aimed at robust and simultaneously close to real-time tracking of pedestrians. The presented approach is robust to occlusions and changing lighting conditions, and it is general enough to be applied to arbitrary video data. The core tracking approach is built upon the tracking-by-detection principle. We describe our cascaded HOG detector with successive CNN verification in detail. For the tracking and re-identification task, we performed an extensive analysis of appearance-based features as well as their combinations. The tracker was tested on many hours of video data from different scenarios; the results are presented and discussed.
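
    The cascaded HOG detector and CNN verifier described above are not public, but the detection front end of a tracking-by-detection pipeline can be sketched with OpenCV's stock HOG people detector; the video name and confidence gate below are hypothetical.

```python
# Hedged sketch of a tracking-by-detection front end using OpenCV's stock HOG
# people detector; the paper's custom cascaded HOG + CNN verifier is not reproduced.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("street.mp4")                 # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Multi-scale sliding-window detection; weights are SVM confidence scores
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h), score in zip(rects, weights):
        if float(score) < 0.5:                       # hypothetical confidence gate
            continue                                 # (a CNN verifier would go here)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break
cap.release()
```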

  3. Fast Deep Tracking via Semi-Online Domain Adaptation

    NASA Astrophysics Data System (ADS)

    Li, Xiaoping; Luo, Wenbing; Zhu, Yi; Li, Hanxi; Wang, Mingwen

    2018-04-01

    Deep trackers have demonstrated overwhelming superiority over shallow methods. Unfortunately, they also suffer from low frame rates. To alleviate this problem, a number of real-time deep trackers have been proposed that remove the online updating procedure from the CNN model. However, the absence of online updating leads to a significant drop in tracking accuracy. In this work, we propose to perform domain adaptation for visual tracking in two stages, transferring information from the visual tracking domain and the instance domain respectively. In this way, the proposed visual tracker achieves tracking accuracy comparable to state-of-the-art trackers and runs at real-time speed on an average consumer GPU.

  4. Event detection for car park entries by video-surveillance

    NASA Astrophysics Data System (ADS)

    Coquin, Didier; Tailland, Johan; Cintract, Michel

    2007-10-01

    Intelligent surveillance has become an important research issue due to the high cost and low efficiency of human supervisors, and machine intelligence is required to provide a solution for automated event detection. In this paper we describe a real-time system that has been used for detecting car park entries, using an adaptive background learning algorithm and two indicators representing activity and identity to overcome the difficulty of tracking objects.
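
    The abstract mentions an adaptive background learning algorithm. One common realisation (an assumption here, not necessarily the authors' method) is a Gaussian-mixture background model; the OpenCV sketch below flags large foreground blobs at a hypothetical entry camera as candidate entry events.

```python
# Hedged sketch: adaptive background subtraction (OpenCV's MOG2) as one possible
# realisation of the adaptive background learning stage described above.
import cv2

cap = cv2.VideoCapture("car_park_entry.mp4")       # hypothetical camera stream
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                        detectShadows=True)
MIN_AREA = 2000                                    # hypothetical blob size for a car

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)                           # the model updates itself every frame
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels (value 127)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > MIN_AREA for c in contours):
        print("possible entry event")              # a real system would debounce this
cap.release()
```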

  5. Shape and texture fused recognition of flying targets

    NASA Astrophysics Data System (ADS)

    Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás

    2011-06-01

    This paper presents visual detection and recognition of flying targets (e.g. planes, missiles) based on automatically extracted shape and object texture information, for application areas such as alerting, recognition and tracking. Targets are extracted using robust background modeling and a novel contour extraction approach, and object recognition is performed by comparison to shape- and texture-based query results on a previously gathered real-life object dataset. Application areas involve passive defense scenarios, including automatic object detection and tracking with cheap commodity hardware components (CPU, camera and GPS).

  6. Utilization and viability of biologically-inspired algorithms in a dynamic multiagent camera surveillance system

    NASA Astrophysics Data System (ADS)

    Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent

    2003-10-01

    In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system where 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to perform a real-time selection of the few most conspicuous locations in the visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to excitation and suppression that have been documented in electrophysiology, psychophysics and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighted based upon the history of each camera. A camera agent that has a history of seeing more salient targets is more likely to obtain computational resources. The system demonstrates the viability of biologically inspired systems in real-time tracking. In future work we plan to implement additional biological mechanisms for cooperative management of both the sensor and processing resources in this system, including top-down biasing for target specificity and novelty, and the activity of the tracked object in relation to sensitive features of the environment.

  7. Evaluation of Dose Uncertainty to the Target Associated With Real-Time Tracking Intensity-Modulated Radiation Therapy Using the CyberKnife Synchrony System.

    PubMed

    Iwata, Hiromitsu; Inoue, Mitsuhiro; Shiomi, Hiroya; Murai, Taro; Tatewaki, Koshi; Ohta, Seiji; Okawa, Kohei; Yokota, Naoki; Shibamoto, Yuta

    2016-02-01

    We investigated the dose uncertainty caused by errors in real-time tracking intensity-modulated radiation therapy (IMRT) using the CyberKnife Synchrony Respiratory Tracking System (SRTS). Twenty lung tumors that had been treated with non-IMRT real-time tracking using CyberKnife SRTS were used for this study. After validating the tracking error in each case, we created 40 IMRT plans using 8 different collimator sizes for the 20 patients. The collimator size was determined for each planning target volume (PTV); the smaller was one-half, and the larger three-quarters, of the PTV diameter. The planned dose was 45 Gy in 4 fractions prescribed at the 95% volume border of the PTV. Thereafter, the tracking error in each case was substituted into calculation software developed in-house and randomly added to the setting of each beam. Each IMRT plan incorporating tracking errors was simulated 1000 times, and various dose data on the clinical target volume (CTV) were compared with the original data. The same simulation was carried out with the fraction number changed from 1 to 6 in each IMRT plan. Finally, a total of 240,000 plans were analyzed. With 4 fractions, the change in the CTV maximum and minimum doses was within 3.0% (median) for each collimator. The change in D99 and D95 was within 2.0%. With decreases in the fraction number, the CTV coverage rate and the minimum dose decreased and varied greatly. The accuracy of real-time tracking IMRT delivered in 4 fractions using CyberKnife SRTS was considered to be clinically acceptable. © The Author(s) 2014.

  8. A low-cost test-bed for real-time landmark tracking

    NASA Astrophysics Data System (ADS)

    Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher

    2007-04-01

    A low-cost vehicle test-bed system was developed to iteratively test, refine and demonstrate navigation algorithms before attempting to transfer the algorithms to more advanced rover prototypes. The platform used here was a modified radio-controlled (RC) car. A microcontroller board and onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars yard. The algorithms tracked rocks as waypoints, generating coordinates for calculating relative motion and visually servoing to science targets. A limitation of the current system is serial computing: each additional landmark is tracked in sequence. However, because each landmark is tracked independently, adding targets would not significantly diminish system speed if the processing were transferred to appropriate parallel hardware.

  9. Object Tracking Vision System for Mapping the UCN τ Apparatus Volume

    NASA Astrophysics Data System (ADS)

    Lumb, Rowan; UCNtau Collaboration

    2016-09-01

    The UCN τ collaboration has an immediate goal to measure the lifetime of the free neutron to within 0.1%, i.e. about 1 s. The UCN τ apparatus is a magneto-gravitational "bottle" system. This system holds low-energy, or ultracold, neutrons in the apparatus using the constraint of gravity, and keeps these low-energy neutrons from interacting with the bottle via a strong 1 T surface magnetic field created by a bowl-shaped array of permanent magnets. The apparatus is wrapped with energized coils to supply a magnetic field throughout the "bottle" volume to prevent depolarization of the neutrons. An object-tracking stereo-vision system will be presented that precisely tracks a Hall probe and allows a mapping of the magnetic field throughout the volume of the UCN τ bottle. The stereo-vision system utilizes two cameras and open-source OpenCV software to track an object's 3D position in space in real time. The desired resolution is +/-1 mm along each axis. The vision system is being used as part of an even larger system to map the magnetic field of the UCN τ apparatus and expose any possible systematic effects due to field cancellation or low-field points which could allow neutrons to depolarize and possibly escape from the apparatus undetected. Tennessee Technological University.
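
    The abstract states that two cameras and OpenCV are used to recover the Hall probe's 3D position. A minimal sketch of the underlying two-view triangulation is given below; the projection matrices, pixel coordinates, and baseline are hypothetical placeholders standing in for a real stereo calibration.

```python
# Hedged sketch of two-camera 3-D triangulation with OpenCV, assuming the
# projection matrices P1 and P2 come from a prior stereo calibration.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                             # hypothetical shared intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])           # camera 1 at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])  # 100 mm baseline

# Matched 2-D detections of the Hall-probe marker in each camera (pixels, shape 2xN).
pts1 = np.array([[400.0], [240.0]])
pts2 = np.array([[320.0], [240.0]])

# Linear triangulation returns homogeneous 4-D points; divide by w to get X, Y, Z.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).ravel()
print("marker position (mm, same frame as the calibration):", X)   # about (100, 0, 1000)
```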

  10. SU-D-207-05: Real-Time Intrafractional Motion Tracking During VMAT Delivery Using a Conventional Elekta CBCT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Yang-Kyun; Sharp, Gregory C.; Gierga, David P.

    2015-06-15

    Purpose: Real-time kV projection streaming capability has recently become available for Elekta XVI version 5.0. This study aims to investigate the feasibility and accuracy of real-time fiducial marker tracking during CBCT acquisition with or without simultaneous VMAT delivery using a conventional Elekta linear accelerator. Methods: A client computer was connected to an on-board kV imaging system computer, and receives and processes projection images immediately after image acquisition. In-house marker tracking software based on FFT normalized cross-correlation was developed and installed on the client computer. Three gold fiducial markers of 3 mm length were implanted in a pelvis-shaped phantom of 36 cm width. The phantom was placed on a programmable motion platform oscillating in the anterior-posterior and superior-inferior directions simultaneously. The marker motion was tracked in real-time for (1) a kV-only CBCT scan with the treatment beam off and (2) a kV CBCT scan during a 6-MV VMAT delivery. The exposure parameters per projection were 120 kVp and 1.6 mAs. Tracking accuracy was assessed by comparing superior-inferior positions between the programmed and tracked trajectories. Results: The projection images were successfully transferred to the client computer at a frequency of about 5 Hz. In the kV-only scan, highly accurate marker tracking was achieved over the entire range of cone-beam projection angles (detection rate / tracking error were 100.0% / 0.6±0.5 mm). In the kV-VMAT scan, MV scatter degraded image quality, particularly for lateral projections passing through the thickest part of the phantom (kV source angle ranging 70°-110° and 250°-290°), resulting in a reduced detection rate (90.5%). If the lateral projections are excluded, tracking performance was comparable to the kV-only case (detection rate / tracking error were 100.0% / 0.8±0.5 mm). Conclusion: Our phantom study demonstrated a promising result for real-time motion tracking using a conventional Elekta linear accelerator. MV-scatter suppression is needed to improve tracking accuracy during MV delivery. This research is funded by a Motion Management Research Grant from Elekta.
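
    The in-house software is based on FFT normalized cross-correlation. The sketch below shows the same idea with OpenCV's matchTemplate (which is not the authors' implementation); the file names and the detection threshold are hypothetical.

```python
# Hedged sketch of fiducial-marker localisation by normalised cross-correlation;
# cv2.matchTemplate stands in for the in-house FFT implementation.
import cv2

projection = cv2.imread("kv_projection.png", cv2.IMREAD_GRAYSCALE)   # hypothetical kV frame
template = cv2.imread("marker_template.png", cv2.IMREAD_GRAYSCALE)   # small marker patch

# Correlation map over all template positions; values lie in [-1, 1].
response = cv2.matchTemplate(projection, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(response)

DETECTION_THRESHOLD = 0.6          # hypothetical; would be tuned against MV-scatter noise
if max_val >= DETECTION_THRESHOLD:
    h, w = template.shape
    center = (max_loc[0] + w // 2, max_loc[1] + h // 2)
    print("marker detected at (px):", center, "score:", round(float(max_val), 3))
else:
    print("marker not found in this projection")
```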

  11. Multidimensional evaluation of a radio frequency identification wi-fi location tracking system in an acute-care hospital setting

    PubMed Central

    Okoniewska, Barbara; Graham, Alecia; Gavrilova, Marina; Wah, Dannel; Gilgen, Jonathan; Coke, Jason; Burden, Jack; Nayyar, Shikha; Kaunda, Joseph; Yergens, Dean; Baylis, Barry

    2012-01-01

    Real-time locating systems (RTLS) have the potential to enhance healthcare systems through the live tracking of assets, patients and staff. This study evaluated a commercially available RTLS system deployed in a clinical setting, with three objectives: (1) assessment of the location accuracy of the technology in a clinical setting; (2) assessment of the value of asset tracking to staff; and (3) assessment of threshold monitoring applications developed for patient tracking and inventory control. Simulated daily activities were monitored by RTLS and compared with direct research team observations. Staff surveys and interviews concerning the system's effectiveness and accuracy were also conducted and analyzed. The study showed only modest location accuracy, and mixed reactions in staff interviews. These findings reveal that the technology needs to be refined further for better specific location accuracy before full-scale implementation can be recommended. PMID:22298566

  12. Dual Use of Image Based Tracking Techniques: Laser Eye Surgery and Low Vision Prosthesis

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.; Barton, R. Shane

    1994-01-01

    With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.

  13. Dual use of image based tracking techniques: Laser eye surgery and low vision prosthesis

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1994-01-01

    With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.

  14. Multidimensional evaluation of a radio frequency identification wi-fi location tracking system in an acute-care hospital setting.

    PubMed

    Okoniewska, Barbara; Graham, Alecia; Gavrilova, Marina; Wah, Dannel; Gilgen, Jonathan; Coke, Jason; Burden, Jack; Nayyar, Shikha; Kaunda, Joseph; Yergens, Dean; Baylis, Barry; Ghali, William A

    2012-01-01

    Real-time locating systems (RTLS) have the potential to enhance healthcare systems through the live tracking of assets, patients and staff. This study evaluated a commercially available RTLS system deployed in a clinical setting, with three objectives: (1) assessment of the location accuracy of the technology in a clinical setting; (2) assessment of the value of asset tracking to staff; and (3) assessment of threshold monitoring applications developed for patient tracking and inventory control. Simulated daily activities were monitored by RTLS and compared with direct research team observations. Staff surveys and interviews concerning the system's effectiveness and accuracy were also conducted and analyzed. The study showed only modest location accuracy, and mixed reactions in staff interviews. These findings reveal that the technology needs to be refined further for better specific location accuracy before full-scale implementation can be recommended.

  15. Vehicle Detection with Occlusion Handling, Tracking, and OC-SVM Classification: A High Performance Vision-Based System

    PubMed Central

    Velazquez-Pupo, Roxana; Sierra-Romero, Alberto; Torres-Roman, Deni; Shkvarko, Yuriy V.; Romero-Delgado, Misael

    2018-01-01

    This paper presents a high-performance vision-based system with a single static camera for traffic surveillance, covering moving vehicle detection with occlusion handling, tracking, counting, and One Class Support Vector Machine (OC-SVM) classification. In this approach, moving objects are first segmented from the background using the adaptive Gaussian Mixture Model (GMM). After that, several geometric features are extracted, such as vehicle area, height, width, centroid, and bounding box. When occlusion is present, an algorithm is applied to reduce it. The tracking is performed with an adaptive Kalman filter. Finally, the selected geometric features (estimated area, height, and width) are used by different classifiers in order to sort vehicles into three classes: small, midsize, and large. Extensive experimental results on eight real traffic videos with more than 4000 ground-truth vehicles have shown that the improved system can run in real time under an occlusion index of 0.312 and classify vehicles with a global detection rate (recall), precision, and F-measure of up to 98.190%, and an F-measure of up to 99.051% for midsize vehicles. PMID:29382078
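
    The tracking stage uses an adaptive Kalman filter. As a simplified illustration (constant-velocity model with fixed noise covariances rather than the adaptive scheme), the OpenCV sketch below filters a short, hypothetical sequence of vehicle centroid measurements.

```python
# Hedged sketch of a constant-velocity Kalman filter for one vehicle centroid,
# in the spirit of the adaptive Kalman tracking stage described above.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                      # state [x, y, vx, vy], measurement [x, y]
dt = 1.0                                         # one frame between updates
kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                [0, 1, 0, dt],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1.0
kf.errorCovPost = np.eye(4, dtype=np.float32)
kf.statePost = np.array([[100], [200], [0], [0]], np.float32)

# Hypothetical centroid measurements from the GMM/occlusion-handling stage (px).
measurements = [(100, 200), (104, 198), (109, 197), (113, 195)]
for mx, my in measurements:
    prediction = kf.predict()                    # where the vehicle should be now
    estimate = kf.correct(np.array([[mx], [my]], np.float32))
    print("predicted:", prediction[:2].ravel(), "filtered:", estimate[:2].ravel())
```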

  16. An automated data exploitation system for airborne sensors

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2014-06-01

    Advanced wide area persistent surveillance (WAPS) sensor systems on manned or unmanned airborne vehicles are essential for wide-area urban security monitoring in order to protect our people and our warfighters from terrorist attacks. Currently, human (imagery) analysts process huge data collections from full motion video (FMV) for data exploitation and analysis (real-time and forensic), providing slow and inaccurate results. An Automated Data Exploitation System (ADES) is urgently needed. In this paper, we present a recently developed ADES for airborne vehicles under heavy urban background clutter conditions. This system includes four processes: (1) fast image registration, stabilization, and mosaicking; (2) advanced non-linear morphological moving target detection; (3) robust multiple target (vehicle, dismount, and human) tracking (up to 100 target tracks); and (4) moving or static target/object recognition (super-resolution). Test results with real FMV data indicate that our ADES can reliably detect, track, and recognize multiple vehicles under heavy urban background clutter. Furthermore, our example shows that ADES as a baseline platform can provide the capability for vehicle abnormal-behavior detection to help imagery analysts quickly trace down potential threats and crimes.

  17. A feasibility study of stationary and dual-axis tracking grid-connected photovoltaic systems in the Upper Midwest

    NASA Astrophysics Data System (ADS)

    Warren, Ryan Duwain

    Three primary objectives were defined for this work. The first objective was to determine, assess, and compare the performance, heat transfer characteristics, economics, and feasibility of real-world stationary and dual-axis tracking grid-connected photovoltaic (PV) systems in the Upper Midwest. This objective was achieved by installing two grid-connected PV systems with different mounting schemes in central Iowa, implementing extensive data acquisition systems, monitoring operation of the PV systems for one full year, and performing detailed experimental performance and economic studies. The two PV systems that were installed, monitored, and analyzed included a 4.59 kWp roof-mounted stationary system oriented for maximum annual energy production, and a 1.02 kWp pole-mounted actively controlled dual-axis tracking system. The second objective was to demonstrate the actual use and performance of real-world stationary and dual-axis tracking grid-connected PV systems used for building energy generation applications. This objective was achieved by offering the installed PV systems to the public for demonstration purposes and through the development of three computer-based tools: a software interface that has the ability to display real-time and historical performance and meteorological data of both systems side-by-side, a software interface that shows real-time and historical video and photographs of each system, and a calculator that can predict performance and economics of stationary and dual-axis tracking grid-connected PV systems at various locations in the United States. The final objective was to disseminate this work to social, professional, scientific, and academic communities in a way that is applicable, objective, accurate, accessible, and comprehensible. This final objective will be addressed by publishing the results of this work and making the computer-based tools available on a public website (www.energy.iastate.edu/Renewable/solar). Detailed experimental performance analyses were performed for both systems; results were quantified and compared between systems, focusing on measures of solar resource, energy generation, power production, and efficiency. This work also presents heat transfer characteristics of both arrays and quantifies the effects of operating temperature on PV system performance in terms of overall heat transfer coefficients and temperature coefficients for power. To assess potential performance of PV in the Upper Midwest, models were built to predict performance of the PV systems operating at lower temperatures. Economic analyses were performed for both systems focusing on measures of life-cycle cost, payback period, internal rate of return, and average incremental cost of solar energy. The potential economic feasibility of grid-connected stationary PV systems used for building energy generation in the Upper Midwest was assessed under assumptions of higher utility energy costs, lower initial installed costs, and different metering agreements. The annual average daily solar insolation seen by the stationary and dual-axis tracking systems was found to be 4.37 and 5.95 kWh/m2, respectively. In terms of energy generation, the tracking system outperformed the stationary system on annual, monthly, and often daily bases; normalized annual energy generation for the tracking and stationary systems was found to be 1,779 and 1,264 kWh/kWp, respectively.
The annual average conversion efficiencies of the tracking and stationary systems were found to be approximately 11 and 10.7 percent, respectively. Annual performance ratio values of the tracking and stationary systems were found to be 0.819 and 0.792, respectively. The net present values of both systems under all assumed discount rates were determined to be negative. Further, neither system was found to have a payback period less than the assumed system life of 25 years. The rates of return of the stationary and tracking systems were found to be -3.3 and -4.9 percent, respectively. Furthermore, the average incremental cost of energy provided by the stationary and dual-axis tracking systems over their assumed useful life is projected to be 0.31 and 0.37 dollars per kWh, respectively. Results of this study suggest that grid-connected PV systems used for building energy generation in the Upper Midwest are not yet economically feasible when compared to a range of alternative investments; however, PV systems could show feasibility under more favorable economic scenarios. Throughout the year of monitoring, array operating temperatures ranged from -24.7°C (-12.4°F) to 61.7°C (143.1°F) for the stationary system and -23.9°C (-11°F) to 52.7°C (126.9°F) for the dual-axis tracking system during periods of system operation. The hourly average overall heat transfer coefficients for solar irradiance levels greater than 200 W/m2 for the stationary and dual-axis tracking systems were found to be 20.8 and 29.4 W/m2°C, respectively. The experimental temperature coefficients for power for the stationary and dual-axis tracking systems at a solar irradiance level of 1,000 W/m2 were -0.30 and -0.38 %/°C, respectively. Simulations of the stationary and dual-axis tracking systems operating at lower temperatures suggest that annual conversion efficiencies could potentially be increased by up to 4.3 and 4.6 percent, respectively.

  18. Image sequence analysis workstation for multipoint motion analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automating and increasing the throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for locating and tracking more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  19. Real Time Optima Tracking Using Harvesting Models of the Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Baskaran, Subbiah; Noever, D.

    1999-01-01

    Tracking optima in real-time propulsion control, particularly for non-stationary optimization problems, is a challenging task. Several approaches have been put forward for such a study, including the numerical method called the genetic algorithm. In brief, this approach is built upon Darwinian-style competition between numerical alternatives displayed in the form of binary strings, or, by analogy, 'pseudogenes'. Breeding of improved solutions is an often-cited parallel to natural selection in evolutionary or soft computing. In this report we present our results of applying a novel model of a genetic algorithm for tracking optima in propulsion engineering and in real-time control. We specialize the algorithm to mission profiling and planning optimizations, both to select reduced propulsion needs through trajectory planning and to explore time or fuel conservation strategies.
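
    As a toy illustration of tracking a non-stationary optimum with a binary-string genetic algorithm (not the report's harvesting model), the sketch below re-ranks and re-breeds a small population at every time step while the target drifts; all parameters are hypothetical.

```python
# Hedged sketch: a minimal binary-string genetic algorithm re-run each time step
# to follow a drifting optimum; illustrative only, not the report's harvesting model.
import random

BITS = 16
SCALE = 100.0 / (2 ** BITS - 1)          # decode a 16-bit string to [0, 100]

def decode(bits):
    return int("".join(map(str, bits)), 2) * SCALE

def fitness(bits, target):
    return -abs(decode(bits) - target)   # closer to the moving optimum is better

def evolve(pop, target, n_keep=10, mut_rate=0.05):
    """One generation: rank, keep the best, refill by crossover and mutation."""
    pop.sort(key=lambda b: fitness(b, target), reverse=True)
    survivors = pop[:n_keep]
    children = []
    while len(survivors) + len(children) < len(pop):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, BITS)
        child = a[:cut] + b[cut:]                                   # one-point crossover
        child = [g ^ (random.random() < mut_rate) for g in child]   # bit-flip mutation
        children.append(child)
    return survivors + children

population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(40)]
for t in range(50):
    target = 50.0 + 30.0 * (t / 50.0)    # non-stationary optimum drifting upward
    population = evolve(population, target)
    best = max(population, key=lambda b: fitness(b, target))
    if t % 10 == 0:
        print(f"t={t:2d}  target={target:5.1f}  best={decode(best):5.1f}")
```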

  20. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brix, Lau, E-mail: lau.brix@stab.rm.dk; Ringgaard, Steffen; Sørensen, Thomas Sangild

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross-correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translate into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing-induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. Results: Axial, sagittal, and coronal 2D MRI series yielded 3D respiratory motion curves for all volunteers. The motion directionality and amplitude were very similar when measured directly as in-plane motion or estimated indirectly as through-plane motion. The mean peak-to-peak breathing amplitude was 1.6 mm (left-right), 11.0 mm (craniocaudal), and 2.5 mm (anterior-posterior). The position of the watermelon structure was estimated in 2D MRI images with a root-mean-square error of 0.52 mm (in-plane) and 0.87 mm (through-plane). Conclusions: A method for 3D tracking in 2D MRI series was developed and demonstrated for liver tracking in volunteers. The method would allow real-time 3D localization with integrated MR-Linac systems.
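
    Step (3) above selects, for every real-time 2D frame, the library template and in-plane position with the highest normalized cross-correlation. A simplified sketch of that selection loop is given below (using OpenCV's matchTemplate rather than the authors' implementation); the library, offsets, and demo frame are synthetic placeholders.

```python
# Hedged sketch of step (3): scan a library of resliced templates over each
# real-time 2-D frame and keep the best normalised cross-correlation match.
# The winning template index gives the through-plane offset; the peak gives the in-plane shift.
import cv2
import numpy as np

def track_structure(frame, template_library, template_offsets_mm):
    """frame: 2-D image; template_library: list of 2-D templates resliced from the
    3-D scan; template_offsets_mm: known through-plane offset of each template."""
    best = (-1.0, None, None)
    for offset, template in zip(template_offsets_mm, template_library):
        response = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(response)
        if score > best[0]:
            best = (score, offset, loc)
    score, through_plane_mm, in_plane_px = best
    return score, through_plane_mm, in_plane_px

# Hypothetical demo data: a noisy frame containing the 40th library template.
library = [np.random.rand(32, 32).astype(np.float32) for _ in range(80)]
offsets = [-20.0 + 0.5 * i for i in range(80)]              # mm, 0.5 mm spacing
frame = np.random.rand(128, 128).astype(np.float32)
frame[50:82, 60:92] = library[40]                           # embed template 40
print(track_structure(frame, library, offsets))             # should pick offset 0.0 at (60, 50)
```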

  1. Improvement in the workflow efficiency of treating non-emergency outpatients by using a WLAN-based real-time location system in a level I trauma center.

    PubMed

    Stübig, Timo; Suero, Eduardo; Zeckey, Christian; Min, William; Janzen, Laura; Citak, Musa; Krettek, Christian; Hüfner, Tobias; Gaulke, Ralph

    2013-01-01

    Patient localization can improve workflow in outpatient settings, which might lead to lower costs. The existing wireless local area network (WLAN) architecture in many hospitals opens up the possibility of adopting real-time patient tracking systems for capturing and processing position data; once captured, these data can be linked with clinical patient data. The objective was to analyze the effect of a WLAN-based real-time patient localization system for tracking outpatients in our level I trauma center. Outpatients from April to August 2009 were included in the study, which was performed in two stages. In phase I, patient tracking was performed with the real-time location system, but the acquired data were not displayed to the personnel. In phase II, the acquired tracking data were automatically collected and displayed. Total treatment time was the primary outcome parameter. Statistical analysis was performed using multiple linear regression, with the significance level set at 0.05. Covariates included sex, age, type of encounter, prioritization, treatment team, number of residents, and radiographic imaging. 1045 patients were included in our study (540 in phase I and 505 in phase II). An overall improvement in efficiency, as determined by a significantly decreased total treatment time (23.7%) from phase I to phase II, was noted. Additionally, significantly lower treatment times were noted for phase II patients even when other factors were considered (increased numbers of residents, the addition of imaging diagnostics, and comparison among various localization zones). WLAN-based real-time patient localization systems can reduce process inefficiencies associated with manual patient identification and tracking.

  2. Real time markerless motion tracking using linked kinematic chains

    DOEpatents

    Luck, Jason P [Arvada, CO; Small, Daniel E [Albuquerque, NM

    2007-08-14

    A markerless method is described for tracking the motion of subjects in a three dimensional environment using a model based on linked kinematic chains. The invention is suitable for tracking robotic, animal or human subjects in real-time using a single computer with inexpensive video equipment, and does not require the use of markers or specialized clothing. A simple model of rigid linked segments is constructed of the subject and tracked using three dimensional volumetric data collected by a multiple camera video imaging system. A physics based method is then used to compute forces to align the model with subsequent volumetric data sets in real-time. The method is able to handle occlusion of segments and accommodates joint limits, velocity constraints, and collision constraints and provides for error recovery. The method further provides for elimination of singularities in Jacobian based calculations, which has been problematic in alternative methods.

  3. A mobile asset sharing policy for hospitals with real time locating systems.

    PubMed

    Demircan-Yıldız, Ece Arzu; Fescioglu-Unver, Nilgun

    2016-01-01

    Each year, hospitals lose a considerable amount of time and money due to misplaced mobile assets. In addition, the assets that remain in departments that frequently use them depreciate early, while other assets of the same type in different departments are rarely used. A real-time locating system can prevent these losses when used with appropriate asset sharing policies. This research quantifies the amount of time a medium-sized hospital saves by using a real-time locating system and proposes an asset selection rule to eliminate the asset usage imbalance problem. The proposed asset selection rule is based on multi-objective optimization techniques. The effectiveness of this rule on asset-to-patient time and asset utilization rate variance performance measures was tested using the discrete event simulation method. Results show that the proposed asset selection rule improved the usage balance significantly. Sensitivity analysis showed that the proposed rule is robust to changes in demand rates and user preferences. Real-time locating systems enable saving a considerable amount of time in hospitals, and they can be improved further by integrating decision support mechanisms. Combining tracking technology and asset selection rules helps improve healthcare services.

  4. Observing Ocean Ecosystems with Sonar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzner, Shari; Maxwell, Adam R.; Ham, Kenneth D.

    2016-12-01

    We present a real-time processing system for sonar to detect and track animals, and to extract water column biomass statistics in order to facilitate continuous monitoring of an underwater environment. The Nekton Interaction Monitoring System (NIMS) is built to connect to an instrumentation network, where it consumes a real-time stream of sonar data and archives tracking and biomass data.

  5. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework can track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  6. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework can track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  7. Nekton Interaction Monitoring System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-03-15

    The software provides a real-time processing system for sonar to detect and track animals, and to extract water column biomass statistics in order to facilitate continuous monitoring of an underwater environment. The Nekton Interaction Monitoring System (NIMS) extracts and archives tracking and backscatter statistics data from a real-time stream of data from a sonar device. NIMS also sends real-time tracking messages over the network that can be used by other systems to generate other metrics or to trigger instruments such as an optical video camera. A web-based user interface provides remote monitoring and control. NIMS currently supports three popular sonar devices: M3 multi-beam sonar (Kongsberg), EK60 split-beam echo-sounder (Simrad) and BlueView acoustic camera (Teledyne).

  8. Real-time optical multiple object recognition and tracking system and method

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor); Liu, Hua-Kuang (Inventor)

    1990-01-01

    System for optically recognizing and tracking a plurality of objects within a field of vision. Laser (46) produces a coherent beam (48). Beam splitter (24) splits the beam into object (26) and reference (28) beams. Beam expanders (50) and collimators (52) transform the beams (26, 28) into coherent collimated light beams (26', 28'). A two-dimensional SLM (54), disposed in the object beam (26'), modulates the object beam with optical information as a function of signals from a first camera (16) which develops X and Y signals reflecting the contents of its field of vision. A hololens (38), positioned in the object beam (26') subsequent to the modulator (54), focuses the object beam at a plurality of focal points (42). A planar transparency-forming film (32), disposed with the focal points on an exposable surface, forms a multiple position interference filter (62) upon exposure of the surface and development processing of the film (32). A reflector (53) directing the reference beam (28') onto the film (32), exposes the surface, with images focused by the hololens (38), to form interference patterns on the surface. There is apparatus (16', 64) for sensing and indicating light passage through respective ones of the positions of the filter (62), whereby recognition of objects corresponding to respective ones of the positions of the filter (62) is effected. For tracking, apparatus (64) focuses light passing through the filter (62) onto a matrix of CCD's in a second camera (16') to form a two-dimensional display of the recognized objects.

  9. Simultaneous localization and calibration for electromagnetic tracking systems.

    PubMed

    Sadjadi, Hossein; Hashtrudi-Zaad, Keyvan; Fichtinger, Gabor

    2016-06-01

    In clinical environments, field distortion can cause significant electromagnetic tracking errors. Therefore, dynamic calibration of electromagnetic tracking systems is essential to compensate for measurement errors. It is proposed to integrate the motion model of the tracked instrument with redundant EM sensor observations and to apply a simultaneous localization and mapping algorithm in order to accurately estimate the pose of the instrument and create a map of the field distortion in real time. Experiments were conducted in the presence of ferromagnetic and electrically conductive field-distorting objects, and the results were compared with those of a conventional sensor fusion approach. The proposed method reduced the tracking error from 3.94±1.61 mm to 1.82±0.62 mm in the presence of steel, and from 0.31±0.22 mm to 0.11±0.14 mm in the presence of aluminum. With reduced tracking error and independence from external tracking devices or pre-operative calibrations, the approach is promising for reliable EM navigation in various clinical procedures. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Automated Cloud Observation for Ground Telescope Optimization

    NASA Astrophysics Data System (ADS)

    Lane, B.; Jeffries, M. W., Jr.; Therien, W.; Nguyen, H.

    As the number of man-made objects placed in space each year increases with advancements in the commercial, academic and industry sectors, the number of objects that must be detected, tracked, and characterized continues to grow at an exponential rate. Commercial companies, such as ExoAnalytic Solutions, have deployed ground-based sensors to maintain track custody of these objects. For the ExoAnalytic Global Telescope Network (EGTN), observations of such objects are collected at a rate of over 10 million unique observations per month (as of September 2017). Currently, the EGTN does not optimally collect data on nights with significant cloud levels. However, a majority of these nights prove to be partially cloudy, providing clear portions of the sky for EGTN sensors to observe. It is therefore useful for a telescope to exploit these clear areas to continue resident space object (RSO) observation. By dynamically updating the tasking with the varying cloud positions, the number of observations could potentially increase dramatically due to increased persistence, cadence, and revisit. This paper discusses the recent algorithms being implemented within the EGTN, including their motivation, need, and general design. The use of automated image processing and various edge detection methods (including Canny, Sobel, and marching squares) on real-time large-FOV images of the sky to enhance the tasking and scheduling of a ground-based telescope is discussed in Section 2. Implementations of these algorithms on a single telescope, and their extension to multiple telescopes, are explored. Results of applying these algorithms to the EGTN in real time, and a comparison to non-optimized EGTN tasking, are presented in Section 3. Finally, in Section 4 we explore future work in applying these methods throughout the EGTN as well as to other optical telescopes.
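
    As an illustration of the edge-detection stage named above (Canny is used here; Sobel or marching squares would be drop-in alternatives), the OpenCV sketch below outlines cloud boundaries in an all-sky frame and derives a rough clear-sky fraction that a tasking scheduler could consume; the file name and thresholds are hypothetical.

```python
# Hedged sketch: Canny edge detection on an all-sky image to outline cloud
# boundaries; not the EGTN implementation, parameters are placeholders.
import cv2
import numpy as np

sky = cv2.imread("allsky_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical wide-FOV image
blurred = cv2.GaussianBlur(sky, (5, 5), 0)                   # suppress star/pixel noise
edges = cv2.Canny(blurred, threshold1=30, threshold2=90)     # hypothetical thresholds

# Close the edge contours into filled cloud regions and build a boolean mask
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cloud_mask = np.zeros_like(sky)
cv2.drawContours(cloud_mask, contours, -1, 255, thickness=cv2.FILLED)

clear_fraction = 1.0 - cv2.countNonZero(cloud_mask) / cloud_mask.size
print(f"estimated clear-sky fraction: {clear_fraction:.2f}")  # would feed the tasking logic
```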

  11. Vision-based augmented reality system

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Wang, Yongtian; Shi, Qi; Yan, Dayuan

    2003-04-01

    The most promising aspect of augmented reality lies in its ability to integrate the virtual world of the computer with the real world of the user; namely, users can interact with real-world subjects and objects directly. This paper presents an experimental augmented reality system with a video see-through head-mounted device that displays virtual objects as if they were lying on the table together with real objects. In order to overlay virtual objects on the real world at the right position and orientation, accurate calibration and registration are most important. A vision-based method is used to estimate the CCD camera's external parameters by tracking 4 known points with different colors. It achieves sufficient accuracy for non-critical applications such as gaming, annotation and so on.
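
    Estimating the camera's external parameters from four known reference points is a standard perspective-n-point problem. The sketch below uses OpenCV's solvePnP as one way to do this (the paper does not specify its solver); the 3D marker layout, detected pixel positions, and intrinsics are hypothetical.

```python
# Hedged sketch of estimating the camera's external parameters from four known
# reference points (e.g. the coloured markers on the table) with cv2.solvePnP.
import cv2
import numpy as np

# Known 3-D positions of the four markers in the world frame (hypothetical, mm).
object_points = np.array([[0.0, 0.0, 0.0],
                          [200.0, 0.0, 0.0],
                          [200.0, 150.0, 0.0],
                          [0.0, 150.0, 0.0]], dtype=np.float64)
# Their detected 2-D image positions after colour segmentation (hypothetical, px).
image_points = np.array([[310.0, 260.0],
                         [420.0, 255.0],
                         [430.0, 340.0],
                         [305.0, 345.0]], dtype=np.float64)
# Intrinsics from a prior camera calibration (hypothetical values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                # assume negligible lens distortion for the sketch

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)        # rotation of the world frame into the camera frame
print("camera rotation:\n", R, "\ncamera translation (mm):", tvec.ravel())
```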

  12. Real-time eye motion correction in phase-resolved OCT angiography with tracking SLO

    PubMed Central

    Braaf, Boy; Vienola, Kari V.; Sheehy, Christy K.; Yang, Qiang; Vermeer, Koenraad A.; Tiruveedhula, Pavan; Arathorn, David W.; Roorda, Austin; de Boer, Johannes F.

    2012-01-01

    In phase-resolved OCT angiography blood flow is detected from phase changes in between A-scans that are obtained from the same location. In ophthalmology, this technique is vulnerable to eye motion. We address this problem by combining inter-B-scan phase-resolved OCT angiography with real-time eye tracking. A tracking scanning laser ophthalmoscope (TSLO) at 840 nm provided eye tracking functionality and was combined with a phase-stabilized optical frequency domain imaging (OFDI) system at 1040 nm. Real-time eye tracking corrected eye drift and prevented discontinuity artifacts from (micro)saccadic eye motion in OCT angiograms. This improved the OCT spot stability on the retina and consequently reduced the phase-noise, thereby enabling the detection of slower blood flows by extending the inter-B-scan time interval. In addition, eye tracking enabled the easy compounding of multiple data sets from the fovea of a healthy volunteer to create high-quality eye motion artifact-free angiograms. High-quality images are presented of two distinct layers of vasculature in the retina and the dense vasculature of the choroid. Additionally we present, for the first time, a phase-resolved OCT angiogram of the mesh-like network of the choriocapillaris containing typical pore openings. PMID:23304647

  13. Resonator memories and optical novelty filters

    NASA Astrophysics Data System (ADS)

    Anderson, Dana Z.; Erle, Marie C.

    Optical resonators having holographic elements are potential candidates for storing information that can be accessed through content addressable or associative recall. Closely related to the resonator memory is the optical novelty filter, which can detect the differences between a test object and a set of reference objects. We discuss implementations of these devices using continuous optical media such as photorefractive materials. The discussion is framed in the context of neural network models. There are both formal and qualitative similarities between the resonator memory and optical novelty filter and network models. Mode competition arises in the theory of the resonator memory, much as it does in some network models. We show that the role of the phenomenon of "daydreaming" in the real-time programmable optical resonator is very much akin to the role of "unlearning" in neural network memories. The theory of programming the real-time memory for a single mode is given in detail. This leads to a discussion of the optical novelty filter. Experimental results for the resonator memory, the real-time programmable memory, and the optical tracking novelty filter are reviewed. We also point to several issues that need to be addressed in order to implement more formal models of neural networks.

  14. Resonator Memories And Optical Novelty Filters

    NASA Astrophysics Data System (ADS)

    Anderson, Dana Z.; Erie, Marie C.

    1987-05-01

    Optical resonators having holographic elements are potential candidates for storing information that can be accessed through content-addressable or associative recall. Closely related to the resonator memory is the optical novelty filter, which can detect the differences between a test object and a set of reference objects. We discuss implementations of these devices using continuous optical media such as photorefractive materials. The discussion is framed in the context of neural network models. There are both formal and qualitative similarities between the resonator memory and optical novelty filter and network models. Mode competition arises in the theory of the resonator memory, much as it does in some network models. We show that the role of the phenomenon of "daydreaming" in the real-time programmable optical resonator is very much akin to the role of "unlearning" in neural network memories. The theory of programming the real-time memory for a single mode is given in detail. This leads to a discussion of the optical novelty filter. Experimental results for the resonator memory, the real-time programmable memory, and the optical tracking novelty filter are reviewed. We also point to several issues that need to be addressed in order to implement more formal models of neural networks.

  15. Handheld real-time volumetric 3-D gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Haefner, Andrew; Barnowski, Ross; Luke, Paul; Amman, Mark; Vetter, Kai

    2017-06-01

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but it also provides important contextual information about the scene which, once acquired, can be reviewed and further analyzed. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.

  16. A ground moving target emergency tracking method for catastrophe rescue

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Li, D.; Li, G.

    2014-11-01

    In recent years, great disasters have occurred repeatedly, and disaster management tests the emergency operation capabilities of governments and societies all over the world. Immediately after the occurrence of a great disaster (e.g., an earthquake), a massive nationwide rescue and relief operation needs to be kicked off instantly. In order to improve the efficiency of the emergency rescue organization, the organizers need to manage information about the rescue teams, including their real-time location, the equipment carried by each team, the technical skills of the rescuers, and so on. One of the key factors for the success of emergency operations is knowing the real-time location of the rescuers dynamically. Real-time tracking methods are already used to track professional rescue teams, and volunteers play an increasingly important role in great disasters. However, continuous real-time tracking of volunteers raises problems such as privacy leakage and expensive data consumption, which may reduce volunteers' enthusiasm for participating in catastrophe rescue. Since a great disaster is a low-probability event, it is not necessary to track the volunteers (or even the rescue teams) all the time. To solve this problem, a ground moving target emergency tracking method for catastrophe rescue is presented in this paper. In this method, handheld devices that provide the user's location via GPS (e.g., smartphones) are used as the positioning equipment. An emergency tracking information database is built in advance by registration; it contains the ID of each ground moving target (rescue teams and volunteers), the communication number of the target's handheld device, the usual living region, and so on. When a catastrophe happens, the ground moving targets living close to the disaster area are filtered by their usual living region, and an activation short message is sent to the selected targets through the communication numbers of their handheld devices. The handheld devices receive and identify the activation short message and send their current location information to the server, triggering the emergency tracking mode. The real-time locations of the filtered targets can then be shown on the organizer's screen, and the organizer can assign rescue tasks to the rescue teams and volunteers based on their real-time locations. The ground moving target emergency tracking prototype system is implemented using Oracle 11g, Visual Studio 2010 C#, Android, an SMS modem, and the Google Maps API.
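
    The filter-and-activate step can be pictured with a minimal sketch (not the authors' system): pre-registered targets whose usual living region lies near the disaster receive the activation short message and then begin reporting positions. The registry structure and the send_sms() callback are assumptions made for illustration.

```python
# Sketch: select registered targets near the disaster area and trigger tracking.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def activate_nearby(registry, disaster_lat, disaster_lon, radius_km, send_sms):
    """Send the activation short message to targets living near the disaster."""
    activated = []
    for target in registry:                       # each: {'id', 'phone', 'home_lat', 'home_lon'}
        d = haversine_km(target["home_lat"], target["home_lon"], disaster_lat, disaster_lon)
        if d <= radius_km:
            send_sms(target["phone"], "EMERGENCY_TRACKING_ON")  # device replies with GPS fixes
            activated.append(target["id"])
    return activated
```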

  17. Error analysis of real time and post processed orbit determination of GFO using GPS tracking

    NASA Technical Reports Server (NTRS)

    Schreiner, William S.

    1991-01-01

    The goal of the Navy's GEOSAT Follow-On (GFO) mission is to map the topography of the world's oceans in both real time (operational) and post processed modes. Currently, the best candidate for supplying the required orbit accuracy is the Global Positioning System (GPS). The purpose of this fellowship was to determine the expected orbit accuracy for GFO in both the real time and post-processed modes when using GPS tracking. This report presents the work completed through the ending date of the fellowship.

  18. Sequence design and software environment for real-time navigation of a wireless ferromagnetic device using MRI system and single echo 3D tracking.

    PubMed

    Chanu, A; Aboussouan, E; Tamaz, S; Martel, S

    2006-01-01

    Software architecture for the navigation of a ferromagnetic untethered device in a 1D and 2D phantom environment is briefly described. Navigation is achieved using the real-time capabilities of a Siemens 1.5 T Avanto MRI system coupled with a dedicated software environment and a specially developed 3D tracking pulse sequence. Real-time control of the magnetic core is executed through the implementation of a simple PID controller. 1D and 2D experimental results are presented.
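
    Since the abstract mentions real-time control through a simple PID controller, a minimal discrete PID sketch is shown below, assuming position feedback from the 3D tracking sequence; the gains and time step are illustrative, not the authors' values.

```python
# Sketch: discrete PID controller driven by tracked device position.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        """Return the control output (e.g., a gradient coil command) for one step."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage: one controller per axis, updated each time a new 3D position arrives.
# pid_x = PID(kp=1.2, ki=0.1, kd=0.05, dt=0.024)
# command_x = pid_x.update(setpoint=target_x, measured=tracked_x)
```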

  19. Source Tracking of Nitrous Oxide using A Quantum Cascade ...

    EPA Pesticide Factsheets

    Nitrous oxide is an important greenhouse gas and ozone-depleting substance. Nitrification and denitrification are two major biological pathways that are responsible for soil emissions of N2O. However, source tracking of in-situ or laboratory N2O production is still challenging for soil scientists. The objective of this study was to introduce the use of a new technology, quantum cascade laser (QCL) spectroscopy, which allows for significantly improved accuracy and precision to continuously measure real-time N2O for source tracking. These data provide important emission inventory information to air quality and atmospheric chemistry models. The task demonstrated that QCL spectroscopy can measure the flux of nitrous oxide at ambient as well as elevated concentrations in real time. The fractionation of the nitrous oxide produced by microbial processing of nitrate can be measured and characterized as isotopic signatures related to the nitrifying or denitrifying state of the microbial communities. This has important implications for monitoring trace gases in the atmosphere. The data produced by this system will provide clients, including the air quality and climate change communities, with needed information on the sources and strengths of N2O emissions for modeling and research into mitigation strategies to reduce overall GHG emissions in agricultural systems.

  20. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    NASA Astrophysics Data System (ADS)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For the problem of real-time precise localization in the urban street, a monocular visual odometry based on Extended Kalman fusion of optical-flow tracking and the trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which together with the state transition equation forms the Kalman filter. An Extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu's 2-step EKF method, the algorithm is more accurate and meets the needs of real-time, accurate localization in cities.
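
    For orientation, a generic Extended Kalman filter skeleton is sketched below; it only illustrates the predict/update cycle the paper builds on, with f, h and their Jacobians as placeholders for the vehicle motion model and the trifocal-tensor-based observation model, which are not reproduced here.

```python
# Sketch: one EKF iteration (predict with f(x, u), correct with measurement z).
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```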

  1. Operational tracking of lava lake surface motion at Kīlauea Volcano, Hawai‘i

    USGS Publications Warehouse

    Patrick, Matthew R.; Orr, Tim R.

    2018-03-08

    Surface motion is an important component of lava lake behavior, but previous studies of lake motion have been focused on short time intervals. In this study, we implement the first continuous, real-time operational routine for tracking lava lake surface motion, applying the technique to the persistent lava lake in Halema‘uma‘u Crater at the summit of Kīlauea Volcano, Hawai‘i. We measure lake motion by using images from a fixed thermal camera positioned on the crater rim, transmitting images to the Hawaiian Volcano Observatory (HVO) in real time. We use an existing optical flow toolbox in Matlab to calculate motion vectors, and we track the position of lava upwelling in the lake, as well as the intensity of spattering on the lake surface. Over the past 2 years, real-time tracking of lava lake surface motion at Halema‘uma‘u has been an important part of monitoring the lake’s activity, serving as another valuable tool in the volcano monitoring suite at HVO.
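
    As a rough analogue of the optical-flow step (written in Python/OpenCV rather than the Matlab toolbox the authors use), dense flow between consecutive thermal frames can summarize lake surface motion; the Farneback parameters below are illustrative.

```python
# Sketch: dense optical flow between two thermal frames of the lava lake.
import cv2
import numpy as np

def lake_motion(prev_gray, curr_gray):
    """Return the mean flow vector and mean speed (pixels/frame) over the image."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    vx, vy = flow[..., 0], flow[..., 1]
    speed = np.hypot(vx, vy)
    return (vx.mean(), vy.mean()), speed.mean()
```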

  2. A real-time dynamic-MLC control algorithm for delivering IMRT to targets undergoing 2D rigid motion in the beam's eye view.

    PubMed

    McMahon, Ryan; Berbeco, Ross; Nishioka, Seiko; Ishikawa, Masayori; Papiez, Lech

    2008-09-01

    An MLC control algorithm for delivering intensity modulated radiation therapy (IMRT) to targets that are undergoing two-dimensional (2D) rigid motion in the beam's eye view (BEV) is presented. The goal of this method is to deliver 3D-derived fluence maps over a moving patient anatomy. Target motion measured prior to delivery is first used to design a set of planned dynamic-MLC (DMLC) sliding-window leaf trajectories. During actual delivery, the algorithm relies on real-time feedback to compensate for target motion that does not agree with the motion measured during planning. The methodology is based on an existing one-dimensional (1D) algorithm that uses on-the-fly intensity calculations to appropriately adjust the DMLC leaf trajectories in real-time during exposure delivery [McMahon et al., Med. Phys. 34, 3211-3223 (2007)]. To extend the 1D algorithm's application to 2D target motion, a real-time leaf-pair shifting mechanism has been developed. Target motion that is orthogonal to leaf travel is tracked by appropriately shifting the positions of all MLC leaves. The performance of the tracking algorithm was tested for a single beam of a fractionated IMRT treatment, using a clinically derived intensity profile and a 2D target trajectory based on measured patient data. Comparisons were made between 2D tracking, 1D tracking, and no tracking. The impact of the tracking lag time and the frequency of real-time imaging were investigated. A study of the dependence of the algorithm's performance on the level of agreement between the motion measured during planning and delivery was also included. Results demonstrated that tracking both components of the 2D motion (i.e., parallel and orthogonal to leaf travel) results in delivered fluence profiles that are superior to those that track the component of motion that is parallel to leaf travel alone. Tracking lag time effects may lead to relatively large intensity delivery errors compared to the other sources of error investigated. However, the algorithm presented is robust in the sense that it does not rely on a high level of agreement between the target motion measured during treatment planning and delivery.

  3. The Role of Visual Working Memory in Attentive Tracking of Unique Objects

    ERIC Educational Resources Information Center

    Makovski, Tal; Jiang, Yuhong V.

    2009-01-01

    When tracking moving objects in space humans usually attend to the objects' spatial locations and update this information over time. To what extent do surface features assist attentive tracking? In this study we asked participants to track identical or uniquely colored objects. Tracking was enhanced when objects were unique in color. The benefit…

  4. Design and Development of a High Speed Sorting System Based on Machine Vision Guiding

    NASA Astrophysics Data System (ADS)

    Zhang, Wenchang; Mei, Jiangping; Ding, Yabin

    In this paper, a vision-based control strategy to perform high-speed pick-and-place tasks on an automated production line is proposed, and the relevant control software is developed. A Delta robot controls a suction gripper to grasp disordered objects from one moving conveyor and then place them on another in order. A CCD camera captures one image every time the conveyor moves a distance of ds. Object positions and shapes are obtained after image processing. A target tracking method based on "servo motor + synchronous conveyor" is used to perform the high-speed sorting operation in real time. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-control strategy.

  5. Analysis of the Performance of a Laser Scanner for Predictive Automotive Applications

    NASA Astrophysics Data System (ADS)

    Zeisler, J.; Maas, H.-G.

    2015-08-01

    In this paper we evaluate the use of a laser scanner for future advanced driver assistance systems. We focus on the important task of predicting the target vehicle for longitudinal ego-vehicle control. Our motivation is to decrease the reaction time of existing systems during cut-in maneuvers of other traffic participants. A state-of-the-art laser scanner, the Ibeo Scala B2, is presented, along with its sensing characteristics and the subsequent high-level object data output. We evaluate the performance of the scanner for object tracking with the help of a GPS real-time kinematic system on a test track. Two designed scenarios include phases with constant distance and velocity as well as dynamic motion of the vehicles. We provide the results for the position and velocity errors of the scanner and, furthermore, review our algorithm for target vehicle prediction. Finally, we show the potential of the laser scanner given the estimated error, which leads to a decrease of up to 40% in reaction time under the best conditions.

  6. Development of Waypoint Planning Tool in Response to NASA Field Campaign Challenges

    NASA Technical Reports Server (NTRS)

    He, Matt; Hardin, Danny; Mayer, Paul; Blakeslee, Richard; Goodman, Michael

    2012-01-01

    Airborne real-time observations are a major component of NASA's Earth Science research and satellite ground validation studies. Multiple aircraft are involved in most NASA field campaigns. The coordination of the aircraft with satellite overpasses, other airplanes, and the constantly evolving, dynamic weather conditions often determines the success of the campaign. Planning a research aircraft mission within the context of meeting the science objectives is a complex task because it requires real-time situational awareness of the weather conditions that affect the aircraft track. A flight planning tool is needed to provide situational awareness information to the mission scientists and to help them plan and modify the flight tracks. Scientists at the University of Alabama in Huntsville and the NASA Marshall Space Flight Center developed the Waypoint Planning Tool, an interactive software tool that enables scientists to develop their own flight plans (also known as waypoints) with point-and-click mouse capabilities on a digital map filled with real-time raster and vector data. The development of this Waypoint Planning Tool demonstrates the significance of mission support in responding to the challenges presented during NASA field campaigns. Analysis during and after each campaign helped identify both issues and new requirements, and initiated the next wave of development. Currently the Waypoint Planning Tool has gone through three rounds of development and analysis. The development of this waypoint tool is directly affected by advances in GIS and mapping technologies. From the standalone Google Earth application and simple KML functionality, to the Google Earth Plugin on a web platform, and on to the rising open-source GIS tools with new JavaScript frameworks, the Waypoint Planning Tool has entered its third phase of technology advancement. Adapting new technologies for the Waypoint Planning Tool ensures its success in helping scientists reach their mission objectives.

  7. Computer tablet-based health technology for strengthening maternal and child tracking in Bihar.

    PubMed

    Negandhi, Preeti; Chauhan, Monika; Das, Ankan Mukherjee; Sharma, Jyoti; Neogi, Sutapa; Sethy, Ghanashyam

    2016-01-01

    UNICEF, along with the State Government of Bihar, launched a computer tablet-based Mother and Child Tracking System (MCTS) in 2014 to capture real-time data online and to minimize the challenges faced with the conventional MCTS. The article reports the process of implementation of tablet-based MCTS in Bihar. In-depth interviews with medical officers, program managers, data managers, auxiliary nurse midwives (ANMs), and a monitoring and evaluation specialist were conducted in October 2015 to understand the process of implementation, the challenges, and the possibility for sustainability and scale-up of the innovation. The MCTS innovation was introduced initially in one Primary Health Centre each in Gaya and Purnia districts. The device, supported with Android MCTS software and connected to a dummy server, was given to ANMs, who were trained in its application. The innovation allows real-time data entry, instant uploading, and generation of day-to-day work plans for easy tracking of beneficiaries so that health-care services can be provided on time. Because the dummy server is not linked to the national MCTS portal, the burden on data entry operators has not lessened; they continue to enter data into the national portal as before. The innovation has been successfully implemented to meet its objective of tracking the beneficiaries. The national database should be linked to the dummy server for visible impact. The model is sustainable if the challenges can be met. Mobile technology offers a tremendous opportunity to strengthen the capacity of frontline workers and clinicians and to increase the quality, completeness, and timeliness of delivery of critical health services.

  8. Measuring vigilance decrement using computer vision assisted eye tracking in dynamic naturalistic environments.

    PubMed

    Bodala, Indu P; Abbasi, Nida I; Yu Sun; Bezerianos, Anastasios; Al-Nashash, Hasan; Thakor, Nitish V

    2017-07-01

    Eye tracking offers a practical solution for monitoring cognitive performance in real-world tasks. However, eye tracking in dynamic environments is difficult due to the high spatial and temporal variation of stimuli, and needs further and thorough investigation. In this paper, we study the possibility of developing a novel computer vision assisted eye tracking analysis by using fixations. Eye movement data were obtained from a long-duration naturalistic driving experiment. The scale-invariant feature transform (SIFT) algorithm was implemented using the VLFeat toolbox to identify multiple areas of interest (AOIs). A new measure called 'fixation score' was defined to understand the dynamics of fixation position between the target AOI and the non-target AOIs. The fixation score is maximal when the subjects focus on the target AOI and diminishes when they gaze at the non-target AOIs. A statistically significant negative correlation was found between fixation score and reaction time data (r = -0.2253, p < 0.05). This implies that with vigilance decrement, the fixation score decreases as visual attention shifts away from the target objects, resulting in an increase in reaction time.

  9. Adaptive and accelerated tracking-learning-detection

    NASA Astrophysics Data System (ADS)

    Guo, Pengyu; Li, Xin; Ding, Shaowen; Tian, Zunhua; Zhang, Xiaohu

    2013-08-01

    An improved online long-term visual tracking algorithm, named adaptive and accelerated TLD (AA-TLD) and based on the novel Tracking-Learning-Detection (TLD) framework, is introduced in this paper. The improvement focuses on two aspects. The first is adaptation: the algorithm no longer depends on pre-defined scanning grids, because it generates the scale space online. The second is efficiency: it uses algorithm-level acceleration, such as scale prediction with an auto-regression and moving average (ARMA) model that learns the object motion to narrow the detector's search range and a fixed number of positive and negative samples that ensures a constant retrieval time, as well as CPU and GPU parallel technology to achieve hardware acceleration. In addition, to obtain better results, some TLD details are redesigned: results are integrated with a weight that includes both the normalized correlation coefficient and the scale size, and distance-metric thresholds are adjusted online. A contrastive experiment on success rate, center location error, and execution time shows a performance and efficiency upgrade over the state-of-the-art TLD on partial TLD datasets and Shenzhou IX return capsule image sequences. The algorithm can be used in the field of video surveillance to meet the need for real-time video tracking.

  10. Adaptive optics with pupil tracking for high resolution retinal imaging

    PubMed Central

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-01-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577

  11. Adaptive optics with pupil tracking for high resolution retinal imaging.

    PubMed

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-02-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics.

  12. Intelligent system of coordination and control for manufacturing

    NASA Astrophysics Data System (ADS)

    Ciortea, E. M.

    2016-08-01

    This paper aims to shape an intelligent monitoring and control system that optimizes the material and information flows of the company. The paper presents a model for a tracking and control system using intelligent real-time methods. The production system proposed for simulation analysis provides the ability to track and control the process in real time. Using simulation models, the following can be understood: the influence of changes in the system structure, the influence of commands on the general condition of the manufacturing process, and the influence of conditions on the behavior of some system parameters. The practical character consists of tracking and real-time control of the technological process. It is based on modular systems analyzed using mathematical models, graphic-analytical sizing, configuration, optimization, and simulation.

  13. Real-time motion compensation for EM bronchoscope tracking with smooth output - ex-vivo validation

    NASA Astrophysics Data System (ADS)

    Reichl, Tobias; Gergel, Ingmar; Menzel, Manuela; Hautmann, Hubert; Wegner, Ingmar; Meinzer, Hans-Peter; Navab, Nassir

    2012-02-01

    Navigated bronchoscopy provides benefits for endoscopists and patients, but accurate tracking information is needed. We present a novel real-time approach for bronchoscope tracking combining electromagnetic (EM) tracking, airway segmentation, and a continuous model of output. We augment a previously published approach by including segmentation information in the tracking optimization instead of image similarity. Thus, the new approach is feasible in real-time. Since the true bronchoscope trajectory is continuous, the output is modeled using splines and the control points are optimized with respect to displacement from EM tracking measurements and spatial relation to segmented airways. Accuracy of the proposed method and its components is evaluated on a ventilated porcine ex-vivo lung with respect to ground truth data acquired from a human expert. We demonstrate the robustness of the output of the proposed method against added artificial noise in the input data. Smoothness in terms of inter-frame distance is shown to remain below 2 mm, even when up to 5 mm of Gaussian noise are added to the input. The approach is shown to be easily extensible to include other measures like image similarity.
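
    To picture the "continuous model of output" idea, the following is a minimal sketch of fitting a smoothing spline to noisy EM position samples so the reported trajectory stays smooth; it uses SciPy, the smoothing factor is illustrative, and the airway-segmentation term of the paper's optimization is omitted.

```python
# Sketch: spline-smoothed trajectory from noisy EM tracking samples.
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_trajectory(positions, timestamps, smoothing=5.0, n_out=200):
    """positions: (N, 3) EM samples; returns a densely sampled smooth path."""
    x, y, z = positions.T
    tck, _ = splprep([x, y, z], u=timestamps, s=smoothing)
    u_new = np.linspace(timestamps[0], timestamps[-1], n_out)
    return np.column_stack(splev(u_new, tck))
```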

  14. Real-time people counting system using a single video camera

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow for more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end non-calibrated video camera. The two main challenges addressed in this paper are robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely, e.g. in shopping centers. Several persons may be considered to be a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes, and changes in static objects, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed in the HSV color space to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab and gives encouraging results even at high frame rates. Experimental results obtained on the PETS2006 datasets are presented at the end of the paper.
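
    A hedged sketch of this kind of pipeline, built from standard OpenCV blocks rather than the authors' Matlab implementation, is shown below (adaptive background model, shadow suppression, blob extraction); the thresholds are illustrative.

```python
# Sketch: adaptive background subtraction with shadow removal for people blobs.
import cv2

bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                        detectShadows=True)  # adaptive background model

def detect_people(frame_bgr, min_area=500):
    """Return bounding boxes of foreground blobs in one video frame."""
    fg = bg.apply(frame_bgr)                 # 255 = foreground, 127 = shadow
    fg[fg == 127] = 0                        # drop detected shadows
    fg = cv2.medianBlur(fg, 5)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```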

  15. LabVIEW application for motion tracking using USB camera

    NASA Astrophysics Data System (ADS)

    Rob, R.; Tirian, G. O.; Panoiu, M.

    2017-05-01

    The technical state of the contact line, as well as of the additional equipment in electric rail transport, is very important for the repair and maintenance of the contact line. During operation, the pantograph motion must stay within standard limits. This paper proposes a LabVIEW application that is able to track the motion of a laboratory pantograph in real time and also to acquire the tracking images. A USB webcam connected to a computer acquires the desired images. The laboratory pantograph contains an automatic system which simulates the real motion. The tracking parameters are the horizontal motion (zigzag) and the vertical motion, which can be studied in separate diagrams. The LabVIEW application requires appropriate toolkits for vision development; therefore, the paper describes the subroutines that are specially programmed for real-time image acquisition and for data processing.

  16. Kinect based real-time position calibration for nasal endoscopic surgical navigation system

    NASA Astrophysics Data System (ADS)

    Fan, Jingfan; Yang, Jian; Chu, Yakui; Ma, Shaodong; Wang, Yongtian

    2016-03-01

    Unanticipated, reactive motion of the patient during skull-base tumor resection surgery forces the nasal endoscopic tracking system to be recalibrated. To accommodate the calibration process to the patient's movement, this paper develops a Kinect-based real-time positional calibration method for a nasal endoscopic surgical navigation system. In this method, a Kinect scanner is employed to acquire a point-cloud volumetric reconstruction of the patient's head during surgery. A convex-hull-based registration algorithm then aligns the real-time image of the patient's head with a model built from the CT scans performed in the preoperative preparation, dynamically calibrating the tracking system whenever a movement is detected. Experimental results confirmed the robustness of the proposed method, with a total tracking error within 1 mm even under relatively violent motions. These results show that the tracking accuracy can be retained stably and indicate the potential to expedite the calibration of the tracking system against strong interfering conditions, demonstrating high suitability for a wide range of surgical applications.

  17. WE-G-213CD-06: Implementation of Real-Time Tumor Tracking Using Robotic Couch.

    PubMed

    Buzurovic, I; Yu, Y; Podder, T

    2012-06-01

    The purpose of this study was to present a novel method for real-time tumor tracking using a commercially available robotic treatment couch, and to evaluate tumor tracking accuracy. Commercially available robotic couches are capable of positioning patients with a high level of accuracy; however, currently there is no provision for compensating tumor motion using these systems. Elekta's existing commercial couch (Precise™ Table) was used without changing its design. To establish the real-time couch motion for tracking, a novel control system was developed and implemented. The tabletop could be moved in the horizontal plane (laterally and longitudinally) using two Maxon 24V motors with a gearbox combination. Vertical motion was obtained using a robust 70V Rockwell Automation motor. For vertical motor position sensing, we used a Model 755A Accu-Coder encoder. Two Baumer-ITD_01_4mm shaft encoders were used for the lateral and longitudinal motions of the couch. Motors were connected to Advanced Motion Controls (AMC) amplifiers: for the vertical motion, an AMC-20A20-INV amplifier was used, and two AMC-Z6A8 amplifiers were applied for the lateral and longitudinal couch motions. The Galil DMC-4133 controller was connected to a standard PC via USB. The system had two independent power supplies: a Galil PSR-12-24-12A 24 VDC power supply with diodes for the controller and the 24 VDC motors and amplifiers, and a Galil PS300W72 72 VDC power supply for the vertical motion. Control algorithms were developed for position and velocity adjustment. The system was tested for real-time tracking in the range of 50 mm in all 3 directions (superior-inferior, lateral, anterior-posterior). Accuracies were 0.15, 0.20, and 0.18 mm, respectively. Repeatability of the desired motion was within ±0.2 mm. Experimental results of couch tracking show the feasibility of real-time tumor tracking with a high level of accuracy (within the sub-millimeter range). This tracking technique potentially offers a simple and effective method to minimize irradiation of healthy tissues. Acknowledgement: Study supported by Elekta, Ltd. © 2012 American Association of Physicists in Medicine.

  18. Accuracy assessment of the Precise Point Positioning method applied for surveys and tracking moving objects in GIS environment

    NASA Astrophysics Data System (ADS)

    Ilieva, Tamara; Gekov, Svetoslav

    2017-04-01

    The Precise Point Positioning (PPP) method gives users the opportunity to determine point locations using a single GNSS receiver. The accuracy of point locations determined by PPP is better than that of standard point positioning, due to the precise satellite orbit and clock corrections that are developed and maintained by the International GNSS Service (IGS). The aim of our current research is the accuracy assessment of the PPP method applied to surveys and tracking moving objects in a GIS environment. The PPP data are collected using a software application we developed beforehand that allows different sets of attribute data for the measurements and their accuracy to be used. The results of the PPP measurements are directly compared within the geospatial database to several other sets of terrestrial data: measurements obtained by total stations, real-time kinematic GNSS, and static GNSS.

  19. Real-Time Gaze Tracking for Public Displays

    NASA Astrophysics Data System (ADS)

    Sippl, Andreas; Holzmann, Clemens; Zachhuber, Doris; Ferscha, Alois

    In this paper, we explore the real-time tracking of human gazes in front of large public displays. The aim of our work is to estimate which area of a display one or more people are looking at, at a given time, independently of the distance and angle to the display as well as the height of the tracked people. Gaze tracking is relevant for a variety of purposes, including the automatic recognition of the user's focus of attention, or the control of interactive applications with gaze gestures. The scope of the present paper is on the former, and we show how gaze tracking can be used for implicit interaction in the pervasive advertising domain. We have developed a prototype for this purpose, which (i) uses an overhead mounted camera to distinguish four gaze areas on a large display, (ii) works for a wide range of positions in front of the display, and (iii) provides an estimation of the currently gazed quarters in real time. A detailed description of the prototype as well as the results of a user study with 12 participants, which show the recognition accuracy for different positions in front of the display, are presented.

  20. Fast Markerless Tracking for Augmented Reality in Planar Environment

    NASA Astrophysics Data System (ADS)

    Basori, Ahmad Hoirul; Afif, Fadhil Noer; Almazyad, Abdulaziz S.; AbuJabal, Hamza Ali S.; Rehman, Amjad; Alkawaz, Mohammed Hazim

    2015-12-01

    Markerless tracking for augmented reality should not only be accurate but also fast enough to provide seamless synchronization between real and virtual beings. Currently reported methods show that vision-based tracking is accurate but requires high computational power. This paper proposes a real-time hybrid method for tracking unknown environments in markerless augmented reality. The proposed method combines a vision-based approach with accelerometer and gyroscope sensors as a camera pose predictor. To align the augmentation relative to camera motion, the tracking method substitutes feature-based camera estimation with a combination of inertial sensors and a complementary filter to provide a more dynamic response. The proposed method managed to track an unknown environment with faster processing time compared to available feature-based approaches. Moreover, the proposed method can sustain its estimation in situations where feature-based tracking loses track. The collaborative sensor tracking performed the task at about 22.97 FPS, up to five times faster than the feature-based tracking method used for comparison. Therefore, the proposed method can be used to track unknown environments without depending on the amount of features in the scene, while requiring lower computational cost.
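
    As a minimal sketch of the sensor-fusion side only (not the authors' full tracker), a complementary filter for one rotation axis fuses a fast but drifting gyroscope rate with a slow but drift-free accelerometer angle; the blending factor alpha is illustrative.

```python
# Sketch: complementary filter fusing gyroscope and accelerometer for roll.
import math

def complementary_update(angle_prev, gyro_rate, ax, ay, az, dt, alpha=0.98):
    """Return the fused roll angle (radians) after one sensor sample."""
    gyro_angle = angle_prev + gyro_rate * dt          # integrate angular rate
    accel_angle = math.atan2(ay, az)                  # gravity-referenced roll
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle
```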

  1. Developing Articulated Human Models from Laser Scan Data for Use as Avatars in Real-Time Networked Virtual Environments

    DTIC Science & Technology

    2001-09-01

    structure model, motion model, physical model, and possibly many other characteristics depending on the application [Ref. 4]. While the film industry has...applications. The film industry relies on this technology almost exclusively, as it is highly reliable under controlled conditions. Since optical tracking...Wavefront. Maya has been used extensively in the film industry to provide lifelike animation, and is adept at handling 3D objects [Ref. 27]. Maya can

  2. Colour-based Object Detection and Tracking for Autonomous Quadrotor UAV

    NASA Astrophysics Data System (ADS)

    Kadouf, Hani Hunud A.; Mohd Mustafah, Yasir

    2013-12-01

    With robotics becoming a fundamental aspect of modern society, further research and consequent application is ever increasing. Aerial robotics, in particular, covers applications such as surveillance in hostile military zones or search and rescue operations in disaster-stricken areas, where ground navigation is impossible. The increased visual capacity of UAVs (Unmanned Air Vehicles) is also applicable in the support of ground vehicles, to provide supplies for emergency assistance, for scouting purposes, or to extend communication beyond insurmountable land or water barriers. The Quadrotor, which is a small UAV, has its lift generated by four rotors and can be controlled by altering the speeds of its motors relative to each other. The four rotors allow for a higher payload than single or dual rotor UAVs, which makes it safer and more suitable to carry camera and transmitter equipment. An onboard camera captures images of the Quadrotor's First Person View (FPV) while in flight and transmits them wirelessly to a base station in real time. The aim of this research is to develop an autonomous quadrotor platform capable of transmitting real-time video signals to a base station for processing. The result of the image analysis is used as feedback in the quadrotor positioning control. To validate the system, the algorithm should have the capacity to make the quadrotor identify, track, or hover above stationary or moving objects.
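
    A hedged sketch of colour-based target detection for such an FPV feed is given below: threshold in HSV and return the centroid of the largest blob, which a position controller could use as feedback. The HSV bounds are placeholders for the target colour, not values from the paper.

```python
# Sketch: HSV colour thresholding and centroid extraction of the largest blob.
import cv2
import numpy as np

def find_target(frame_bgr, lower=(100, 120, 70), upper=(130, 255, 255)):
    """Return (cx, cy) of the largest coloured blob, or None if nothing is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower, dtype=np.uint8), np.array(upper, dtype=np.uint8))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```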

  3. Accessible laparoscopic instrument tracking ("InsTrac"): construct validity in a take-home box simulator.

    PubMed

    Partridge, Roland W; Hughes, Mark A; Brennan, Paul M; Hennessey, Iain A M

    2014-08-01

    Objective performance feedback has the potential to maximize the training benefit of laparoscopic simulators. Instrument movement metrics are, however, currently the preserve of complex and expensive systems. We aimed to develop and validate affordable, user-ready software that provides objective feedback by tracking instrument movement in a "take-home" laparoscopic simulator. Computer-vision processing tracks the movement of colored bands placed around the distal instrument shafts. The position of each instrument is logged from the simulator camera feed and movement metrics are calculated in real time. Ten novices (junior doctors) and 13 general surgery trainees (StR) (training years 3-7) performed a standardized task (threading string through hoops) on the eoSim (eoSurgical™ Ltd., Edinburgh, Scotland, United Kingdom) take-home laparoscopic simulator. Statistical analysis was performed using unpaired t tests with Welch's correction. The software was able to track the instrument tips reliably and effectively. Significant differences between the two groups were observed in time to complete the task (StR versus novice, 2 minutes 33 seconds versus 9 minutes 53 seconds; P=.01), total distance traveled by instruments (3.29 m versus 11.38 m, respectively; P=.01), average instrument motion smoothness (0.15 mm/s³ versus 0.06 mm/s³, respectively; P<.01), and handedness (mean difference between dominant and nondominant hand) (0.55 m versus 2.43 m, respectively; P=.03). There was no significant difference seen in the distance between instrument tips, acceleration, speed of instruments, or time off-screen. We have developed software that brings objective performance feedback to the portable laparoscopic box simulator. Construct validity has been demonstrated. Removing the need for additional motion-tracking hardware makes it affordable and accessible. It is user-ready and has the potential to enhance the training benefit of portable simulators both in the workplace and at home.
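
    For illustration, a minimal sketch of how such movement metrics can be derived from tracked tip positions is shown below; the formulas are standard choices and not necessarily identical to the InsTrac definitions.

```python
# Sketch: path length, duration, and jerk-based smoothness from a tip track.
import numpy as np

def movement_metrics(positions, dt):
    """positions: (N, 2) instrument-tip track in metres; dt: frame interval (s)."""
    steps = np.diff(positions, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()        # total distance travelled
    velocity = steps / dt
    accel = np.diff(velocity, axis=0) / dt
    jerk = np.diff(accel, axis=0) / dt
    smoothness = np.linalg.norm(jerk, axis=1).mean()         # lower = smoother motion
    duration = (len(positions) - 1) * dt
    return {"time_s": duration, "path_m": path_length, "smoothness": smoothness}
```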

  4. Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises.

    PubMed

    Cannavò, Flavio; Camacho, Antonio G; González, Pablo J; Mattia, Mario; Puglisi, Giuseppe; Fernández, José

    2015-06-09

    Volcano observatories provide near real-time information and, ultimately, forecasts about volcano activity. For this reason, multiple physical and chemical parameters are continuously monitored. Here, we present a new method to efficiently estimate the location and evolution of magmatic sources based on a stream of real-time surface deformation data, such as High-Rate GPS, and a free-geometry magmatic source model. The tool allows tracking inflation and deflation sources in time, providing estimates of where a volcano might erupt, which is important in understanding an on-going crisis. We show a successful simulated application to the pre-eruptive period of May 2008, at Mount Etna (Italy). The proposed methodology is able to track the fast dynamics of the magma migration by inverting the real-time data within seconds. This general method is suitable for integration in any volcano observatory. The method provides first order unsupervised and realistic estimates of the locations of magmatic sources and of potential eruption sites, information that is especially important for civil protection purposes.

  5. Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises

    PubMed Central

    Cannavò, Flavio; Camacho, Antonio G.; González, Pablo J.; Mattia, Mario; Puglisi, Giuseppe; Fernández, José

    2015-01-01

    Volcano observatories provide near real-time information and, ultimately, forecasts about volcano activity. For this reason, multiple physical and chemical parameters are continuously monitored. Here, we present a new method to efficiently estimate the location and evolution of magmatic sources based on a stream of real-time surface deformation data, such as High-Rate GPS, and a free-geometry magmatic source model. The tool allows tracking inflation and deflation sources in time, providing estimates of where a volcano might erupt, which is important in understanding an on-going crisis. We show a successful simulated application to the pre-eruptive period of May 2008, at Mount Etna (Italy). The proposed methodology is able to track the fast dynamics of the magma migration by inverting the real-time data within seconds. This general method is suitable for integration in any volcano observatory. The method provides first order unsupervised and realistic estimates of the locations of magmatic sources and of potential eruption sites, information that is especially important for civil protection purposes. PMID:26055494

  6. Real-Time Detection and Tracking of Multiple People in Laser Scan Frames

    NASA Astrophysics Data System (ADS)

    Cui, J.; Song, X.; Zhao, H.; Zha, H.; Shibasaki, R.

    This chapter presents an approach to detect and track multiple people robustly in real time using laser scan frames. The detection and tracking of people in real time is a problem that arises in a variety of different contexts. Examples include intelligent surveillance for security purposes, scene analysis for service robots, and crowd behavior analysis for human behavior study. Over the last several years, an increasing number of laser-based people-tracking systems have been developed on both mobile robotics platforms and fixed platforms using one or multiple laser scanners. It has been proved that processing laser scanner data makes the tracker much faster and more robust than a vision-only based one in complex situations. In this chapter, we present a novel robust tracker to detect and track multiple people in a crowded and open area in real time. First, raw data that measure the two legs of each person are obtained at a height of 16 cm from the horizontal ground with multiple registered laser scanners. A stable feature is extracted using the accumulated distribution of successive laser frames. In this way, the noise that generates split and merged measurements is smoothed well, and the pattern of rhythmically swinging legs is utilized to extract each leg. Second, a probabilistic tracking model is presented, and then a sequential inference process using a Bayesian rule is described. The sequential inference process is difficult to compute analytically, so two strategies are presented to simplify the computation. In the case of independent tracking, the Kalman filter is used with a more efficient measurement likelihood model based on a region coherency property. Finally, to deal with trajectory fragments, we present a concise approach to fuse a small amount of visual information from a synchronized video camera with the laser data. Evaluation with real data shows that the proposed method is robust and effective. It achieves a significant improvement compared with existing laser-based trackers.

  7. A complete system for head tracking using motion-based particle filter and randomly perturbed active contour

    NASA Astrophysics Data System (ADS)

    Bouaynaya, N.; Schonfeld, Dan

    2005-03-01

    Many real-world applications in computer vision and multimedia, such as augmented reality and environmental imaging, require an elastic, accurate contour around a tracked object. In the first part of the paper we introduce a novel tracking algorithm that combines a motion estimation technique with the Bayesian importance sampling framework. We use Adaptive Block Matching (ABM) as the motion estimation technique and construct the proposal density from the estimated motion vector. The resulting algorithm requires a small number of particles for efficient tracking. The tracking is adaptive to different categories of motion even with poor a priori knowledge of the system dynamics; in particular, off-line learning is not needed. A parametric representation of the object is used for tracking purposes. In the second part of the paper, we refine the tracking output from a parametric sample to an elastic contour around the object. We use a 1D active contour model based on a dynamic programming scheme to refine the output of the tracker. To improve the convergence of the active contour, we perform the optimization over a set of randomly perturbed initial conditions. Our experiments are applied to head tracking. We report promising tracking results in complex environments.
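
    A compact particle-filter sketch with a motion-informed proposal is shown below to illustrate the idea of centring the proposal on a block-matching motion estimate; the likelihood() function and the state layout are placeholders, and this is not the authors' exact formulation.

```python
# Sketch: one particle-filter step with a motion-estimate-centred proposal.
import numpy as np

def particle_filter_step(particles, weights, motion_estimate, likelihood,
                         proposal_std=5.0, rng=np.random.default_rng()):
    """particles: (N, d) states; motion_estimate: (d,) shift from block matching."""
    n, d = particles.shape
    # Propose new particles around the ABM-predicted position (narrow spread,
    # so few particles are needed).
    particles = particles + motion_estimate + rng.normal(0.0, proposal_std, size=(n, d))
    # Re-weight by how well each hypothesis matches the observation.
    weights = weights * np.array([likelihood(p) for p in particles])
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```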

  8. Fast Compressive Tracking.

    PubMed

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter the drift problem: as a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
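
    A small sketch of the compressive feature step is shown below: projecting high-dimensional image features through a very sparse random matrix. The naive Bayes classifier and the multiscale filter bank are omitted, and the matrix density follows the common s = 3 construction rather than necessarily the paper's exact setup.

```python
# Sketch: very sparse random measurement matrix for compressive features.
import numpy as np

def sparse_measurement_matrix(n_compressed, n_features, s=3, rng=np.random.default_rng(0)):
    """Entries are +sqrt(s), 0, -sqrt(s) with probabilities 1/(2s), 1-1/s, 1/(2s)."""
    probs = [1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)]
    return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                      size=(n_compressed, n_features), p=probs)

R = sparse_measurement_matrix(50, 10000)   # 50 compressed features from a 10^4-dim space
# v = R @ x projects a high-dimensional feature vector x of an image patch
# into the compressed domain where the classifier is trained and evaluated.
```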

  9. Towards real-time detection and tracking of spatio-temporal features: Blob-filaments in fusion plasma

    DOE PAGES

    Wu, Lingfei; Wu, Kesheng; Sim, Alex; ...

    2016-06-01

    A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through overlapping in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. On a set of 30 GB fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
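
    A serial sketch of the three steps described above (threshold cells, group them into blobs, track blobs by spatial overlap between frames) is given below using SciPy labelling; the real implementation is parallel and far more elaborate.

```python
# Sketch: blob grouping via connected components and tracking by overlap.
import numpy as np
from scipy import ndimage

def detect_blobs(field, threshold):
    """Label connected regions of cells exceeding the threshold."""
    labels, n = ndimage.label(field > threshold)
    return labels, n

def match_by_overlap(labels_prev, labels_curr):
    """Map each current blob to the previous blob it overlaps most."""
    matches = {}
    for blob in range(1, labels_curr.max() + 1):
        overlap = labels_prev[labels_curr == blob]
        overlap = overlap[overlap > 0]
        matches[blob] = np.bincount(overlap).argmax() if overlap.size else None
    return matches
```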

  10. Suppression of fixed pattern noise for infrared image system

    NASA Astrophysics Data System (ADS)

    Park, Changhan; Han, Jungsoo; Bae, Kyung-Hoon

    2008-04-01

    In this paper, we propose suppression of fixed pattern noise (FPN) and compensation of soft defects to improve object tracking in a cooled staring infrared focal plane array (IRFPA) imaging system. FPN appears in the observed image when non-uniformity compensation (NUC) is applied across temperature. Soft defects appear as flickering black and white points caused by the time-varying non-uniformity characteristics of the IR detector. This problem is very important because it causes serious problems for object tracking as well as degradation of image quality. The signal processing architecture in the cooled staring IRFPA imaging system consists of three tables of reference gain and offset values: low, normal, and high temperature. The proposed method operates two offset tables for each table, so that six temperature ranges are covered in total. The proposed soft defect compensation consists of three stages: (1) separate an image into sub-images, (2) determine the motion distribution of objects between sub-images, and (3) analyze the statistical characteristics of each stationary fixed pixel. Based on experimental results, the proposed method shows an improved image in which FPN caused by changes of the temperature distribution in the observed image is suppressed in real time.

  11. Accurate estimation of influenza epidemics using Google search data via ARGO.

    PubMed

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.

  12. A parallelization scheme of the periodic signals tracking algorithm for isochronous mass spectrometry on GPUs

    NASA Astrophysics Data System (ADS)

    Chen, R. J.; Wang, M.; Yan, X. L.; Yang, Q.; Lam, Y. H.; Yang, L.; Zhang, Y. H.

    2017-12-01

    The periodic signals tracking algorithm has been used to determine the revolution times of ions stored in storage rings in isochronous mass spectrometry (IMS) experiments. It has been a challenge to perform real-time data analysis using the periodic signals tracking algorithm in IMS experiments. In this paper, a parallelization scheme of the periodic signals tracking algorithm is introduced and a new program is developed. The computing time of the data analysis is reduced by a factor of ∼71 and ∼346 by using the new program on a Tesla C1060 GPU and a Tesla K20c GPU, respectively, compared to the old program on a Xeon E5540 CPU. We succeed in performing real-time data analysis for the IMS experiments by using the new program on the Tesla K20c GPU.

  13. ACE: Automatic Centroid Extractor for real time target tracking

    NASA Technical Reports Server (NTRS)

    Cameron, K.; Whitaker, S.; Canaris, J.

    1990-01-01

    A high performance video image processor has been implemented which is capable of grouping contiguous pixels from a raster scan image into groups and then calculating centroid information for each object in a frame. The algorithm employed to group pixels is very efficient and is guaranteed to work properly for all convex shapes as well as most concave shapes. Processing speeds are adequate for real time processing of video images having a pixel rate of up to 20 million pixels per second. Pixels may be up to 8 bits wide. The processor is designed to interface directly to a transputer serial link communications channel with no additional hardware. The full custom VLSI processor was implemented in a 1.6 μm CMOS process and measures 7200 μm on a side.
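
    A software analogue of the pixel-grouping and centroid calculation performed by the ACE chip can be sketched in a few lines of Python; the threshold and frame contents below are illustrative assumptions, not the chip's fixed-point pipeline.

      import numpy as np
      from scipy import ndimage

      def extract_centroids(frame, threshold=128):
          # group contiguous above-threshold pixels and return one centroid per object
          mask = frame > threshold
          labels, n = ndimage.label(mask)
          centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
          areas = ndimage.sum(mask, labels, range(1, n + 1))   # pixel count per object
          return list(zip(centroids, areas))

      frame = np.zeros((240, 320), dtype=np.uint8)
      frame[50:60, 100:115] = 200                              # one bright target
      frame[120:130, 30:40] = 220                              # another
      for (cy, cx), area in extract_centroids(frame):
          print(f"centroid=({cx:.1f}, {cy:.1f}) pixels={int(area)}")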

  14. Recent experiences with implementing a video based six degree of freedom measurement system for airplane models in a 20 foot diameter vertical spin tunnel

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Jones, Stephen B.; Fremaux, Charles M.

    1993-01-01

    A model space positioning system (MSPS), a state-of-the-art, real-time tracking system that provides the test engineer with on-line model pitch and spin rate information, is described. It is noted that the six-degree-of-freedom post-processor program will require additional programming effort, both in the automated tracking mode for high spin rates and in accuracy, to meet the measurement objectives. An independent multicamera system intended to augment the MSPS is studied using laboratory calibration methods based on photogrammetry to characterize the losses in various recording options. Data acquired to Super VHS tape encoded with Vertical Interval Time Code and transcribed to video disk are considered a reasonably priced choice for post-editing and processing video data.

  15. Method and apparatus for adaptive force and position control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1989-01-01

    The present invention discloses systematic methods and apparatus for the design of real-time controllers. Real-time control employs adaptive force/position control by use of feedforward and feedback controllers, where the feedforward controller is the inverse of the linearized model of robot dynamics and contains only proportional-double-derivative terms. The feedback controller, of the proportional-integral-derivative type, ensures that manipulator joints follow reference trajectories and achieves robust tracking of step-plus-exponential trajectories, all in real time. The adaptive controller includes adaptive force and position control within a hybrid control architecture. The adaptive force controller achieves tracking of desired force setpoints, and the adaptive position controller accomplishes tracking of desired position trajectories. Circuits in the adaptive feedback and feedforward controllers are varied by adaptation laws.
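
    The feedforward-plus-feedback structure described above can be illustrated on a toy one-joint plant; the inertia, damping and gain values below are invented for the example and are not taken from the patent.

      import numpy as np

      # toy 1-DOF joint: I * qdd + b * qd = torque
      I, b, dt = 0.5, 0.1, 0.001

      def feedforward(qdd_ref, qd_ref):
          # inverse of the (linearized) plant model: proportional-double-derivative style term
          return I * qdd_ref + b * qd_ref

      class PID:
          def __init__(self, kp, ki, kd):
              self.kp, self.ki, self.kd = kp, ki, kd
              self.integral, self.prev_err = 0.0, 0.0
          def step(self, err):
              self.integral += err * dt
              deriv = (err - self.prev_err) / dt
              self.prev_err = err
              return self.kp * err + self.ki * self.integral + self.kd * deriv

      pid = PID(kp=200.0, ki=50.0, kd=20.0)
      q, qd = 0.0, 0.0
      for k in range(2000):
          t = k * dt
          q_ref, qd_ref, qdd_ref = np.sin(t), np.cos(t), -np.sin(t)   # reference trajectory
          tau = feedforward(qdd_ref, qd_ref) + pid.step(q_ref - q)    # feedforward + feedback torque
          qdd = (tau - b * qd) / I                                     # integrate the plant
          qd += qdd * dt
          q += qd * dt
      print(f"final tracking error: {abs(np.sin(2.0) - q):.4f} rad")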

  16. Experiments on shape perception in stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Leroy, Laure; Fuchs, Philippe; Paljic, Alexis; Moreau, Guillaume

    2009-02-01

    Stereoscopic displays are increasingly used for computer-aided design. The aim is to make virtual prototypes so as to avoid building real ones, saving time, money and raw materials. But do we really know whether virtual displays render the objects in a realistic way to potential users? In this study, we performed several experiments in which we compare two virtual shapes to their equivalents in the real world, each comparison aiming at a specific issue: first, we performed perception tests to evaluate the importance of head tracking and whether it is better to concentrate our efforts on it rather than on stereoscopic vision; second, we studied the effects of interpupillary distance; third, we studied the effects of the position of the main object with respect to the screen. Two different tests are used, the first one using a well-known shape (a sphere) and the second one using an irregular shape with almost the same colour and dimensions. These two tests allow us to determine whether symmetry is important in shape perception. We show that head tracking has a more important effect on shape perception than stereoscopic vision, especially on depth perception, because the subject is able to move around the scene. The study also shows that an object between the subject and the screen is perceived better than an object on the screen, even though the latter is better in terms of eye strain.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Chen, Z; Nath, R

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin or initialize motion compensation if it is out of the margin.
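
    As one concrete illustration of the classification step, a logistic regression can be trained to flag time points whose tracking error is likely to exceed the threshold. The features and synthetic training data below are assumptions made for the example, standing in for the breathing-trace data described in the abstract.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 5000
      # illustrative features: previous 2D tracking error (mm), tumor speed (mm/s), |cos| of trajectory/beam angle
      prev_err = rng.exponential(0.8, n)
      speed = rng.exponential(5.0, n)
      cos_angle = np.abs(rng.uniform(-1, 1, n))
      X = np.column_stack([prev_err, speed, cos_angle])
      # synthetic label: current error exceeds a 2.5 mm threshold (stand-in for real training data)
      logit = 1.5 * prev_err + 0.2 * speed + 1.0 * cos_angle - 4.0
      y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

      clf = LogisticRegression().fit(X, y)
      p_exceed = clf.predict_proba(X[:3])[:, 1]   # probability that the tracking error is out of threshold
      print(p_exceed)                              # high values could trigger additional kV imaging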

  18. Improved Space Surveillance Network (SSN) Scheduling using Artificial Intelligence Techniques

    NASA Astrophysics Data System (ADS)

    Stottler, D.

    There are close to 20,000 cataloged manmade objects in space, the large majority of which are not active, functioning satellites. These are tracked by phased array and mechanical radars and ground and space-based optical telescopes, collectively known as the Space Surveillance Network (SSN). A better SSN schedule of observations could, using exactly the same legacy sensor resources, improve space catalog accuracy through more complementary tracking, provide better responsiveness to real-time changes, better track small debris in low earth orbit (LEO) through efficient use of applicable sensors, efficiently track deep space (DS) frequent revisit objects, handle increased numbers of objects and new types of sensors, and take advantage of future improved communication and control to globally optimize the SSN schedule. We have developed a scheduling algorithm that takes as input the space catalog and the associated covariance matrices and produces a globally optimized schedule for each sensor site as to what objects to observe and when. This algorithm is able to schedule more observations with the same sensor resources and have those observations be more complementary, in terms of the precision with which each orbit metric is known, to produce a satellite observation schedule that, when executed, minimizes the covariances across the entire space object catalog. If used operationally, the results would be significantly increased accuracy of the space catalog with fewer lost objects with the same set of sensor resources. This approach inherently can also trade-off fewer high priority tasks against more lower-priority tasks, when there is benefit in doing so. Currently the project has completed a prototyping and feasibility study, using open source data on the SSN's sensors, that showed significant reduction in orbit metric covariances. The algorithm techniques and results will be discussed along with future directions for the research.

  19. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.

    PubMed

    Rottmann, Joerg; Keall, Paul; Berbeco, Ross

    2013-09-01

    To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.

  20. The case for a Supersite for real-time GNSS hazard monitoring on a global scale

    NASA Astrophysics Data System (ADS)

    Bar-Sever, Y. E.

    2017-12-01

    Real-time measurements from many hundreds of GNSS tracking sites around the world are publicly available today, and the amount of streaming data is steadily increasing as national agencies densify their local and global infrastructure for natural hazard monitoring and a variety of geodetic, cadastral, and other civil applications. Thousands of such sites can soon be expected on a global scale. It is a challenge to manage and make optimal use of this massive amount of real-time data. We advocate the creation of Supersite(s), in the parlance of the U.N. Global Earth Observation System of Systems (https://www.earthobservations.org/geoss.php), to generate high-level real-time data products from the raw GNSS measurements from all available sources (many thousands of sites). These products include:
    • High-rate, real-time positioning time series for assessing rapid crustal motion due to earthquakes, volcanic activity, landslides, etc.
    • Co-seismic displacement to help resolve earthquake mechanism and moment magnitude
    • Real-time total electron content (TEC) fluctuations to augment DART buoys in detecting and tracking tsunamis
    • Aggregation of the many disparate raw data dispensation servers (Casters)
    Recognizing that natural hazards transcend national boundaries in terms of direct and indirect (e.g., economic, security) impact, the benefits of centralized, authoritative processing of GNSS measurements are manifold:
    • Offers a one-stop shop to less developed nations and institutions for raw and high-level products, in support of research and applications
    • Promotes the installation of tracking sites and the contribution of data from nations without the ability to process the data
    • Reduces dependency on local responsible agencies impacted by a natural disaster
    • Reliable 24/7 operations, independent of voluntary, best-effort contributions from good-willing scientific organizations
    The JPL GNSS Real-Time Earthquake and Tsunami (GREAT) Alert system has been operating as a prototype for such a Supersite for nearly a decade, processing in real time data from hundreds of global and regional GNSS tracking sites. The existing operational infrastructure, complete self-sufficiency, and proven reliability can be leveraged at low cost to provide valuable natural hazard monitoring to the U.S. and the world.

  1. Results of field testing with the FightSight infrared-based projectile tracking and weapon-fire characterization technology

    NASA Astrophysics Data System (ADS)

    Snarski, Steve; Menozzi, Alberico; Sherrill, Todd; Volpe, Chris; Wille, Mark

    2010-04-01

    This paper describes experimental results from recent live-fire data collects that demonstrate the capability of a prototype system for projectile detection and tracking. This system, which is being developed at Applied Research Associates, Inc., under the FightSight program, consists of a high-speed thermal camera and sophisticated image processing algorithms to detect and track projectiles. The FightSight operational vision is automated situational intelligence to detect, track, and graphically map large-scale firefights and individual shooting events onto command and control (C2) systems in real time (shot location and direction, weapon ID, movements and trends). Gaining information on enemy-fire trajectories allows educated inferences on the enemy's intent, disposition, and strength. Our prototype projectile detection and tracking system was tested at the Joint Readiness Training Center (Ft Polk, LA) during live-fire convoy and mortar registration exercises in the summer of 2009. It was also tested during staged military-operations-on-urban-terrain (MOUT) firefight events at Aberdeen Test Center (Aberdeen, MD) under the Hostile Fire Defeat Army Technology Objective midterm experiment, also in the summer of 2009, where we introduced fusion with acoustic and EO sensors to provide 3D localization and near-real-time display of firing events. Results are presented in this paper that demonstrate effective and accurate detection and localization of weapon fire (5.56mm, 7.62mm, .50cal, 81/120mm mortars, 40mm) in diverse and challenging environments (dust, heat, day and night, rain, arid open terrain, urban clutter). FightSight's operational capabilities demonstrated in these live-fire data collects can support close-combat scenarios. As development continues, FightSight will be able to feed C2 systems with a symbolic map of enemy actions.

  2. Real-time multiple human perception with color-depth cameras on a mobile robot.

    PubMed

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce the novel information concept, depth of interest, which we use to identify candidates for detection, and that avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), human-object and human-human interaction. We conclude with the observation that, by incorporating depth information and using modern techniques in new ways, we are able to create an accurate system for real-time 3-D perception of humans by a mobile robot.

  3. Real-Time Atmospheric Phase Fluctuation Correction Using a Phased Array of Widely Separated Antennas: X-Band Results and Ka-Band Progress

    NASA Astrophysics Data System (ADS)

    Geldzahler, B.; Birr, R.; Brown, R.; Grant, K.; Hoblitzell, R.; Miller, M.; Woods, G.; Argueta, A.; Ciminera, M.; Cornish, T.; D'Addario, L.; Davarian, F.; Kocz, J.; Lee, D.; Morabito, D.; Tsao, P.; Jakeman-Flores, H.; Ott, M.; Soloff, J.; Denn, G.; Church, K.; Deffenbaugh, P.

    2016-09-01

    NASA is pursuing a demonstration of coherent uplink arraying at 7.145-7.190 GHz (X-band) and 30-31 GHz (Ka-band) using three 12m diameter COTS antennas separated by 60m at the Kennedy Space Center in Florida. In addition, we have used up to three 34m antennas separated by 250m at the Goldstone Deep Space Communication Complex in California at X-band 7.1 GHz, incorporating real-time correction for tropospheric phase fluctuations. Such a demonstration can enable NASA to design and establish a high power, high resolution, 24/7 availability radar system for (a) tracking and characterizing observations of Near Earth Objects (NEOs), (b) tracking, characterizing and determining the statistics of small-scale (≤10cm) orbital debris, (c) incorporating the capability into its space communication and navigation tracking stations for emergency spacecraft commanding in the Ka-band era which NASA is entering, and (d) fielding capabilities of interest to other US government agencies. We present herein the results of our phased-array uplink combining demonstrations near 7.17 and 8.3 GHz using widely separated antennas at both locales, the results of a study to upgrade from a communication to a radar system, and our vision for going forward in implementing a high performance, low lifecycle cost multi-element radar array.

  4. Real-time WAMI streaming target tracking in fog

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Blasch, Erik; Chen, Ning; Deng, Anna; Ling, Haibin; Chen, Genshe

    2016-05-01

    Real-time information fusion based on WAMI (Wide-Area Motion Imagery), FMV (Full Motion Video), and Text data is highly desired for many mission critical emergency or security applications. Cloud Computing has been considered promising to achieve big data integration from multi-modal sources. In many mission critical tasks, however, powerful Cloud technology cannot satisfy the tight latency tolerance because the servers are located far from the sensing platform; indeed, there may be no guaranteed connection at all in emergency situations. Therefore, data processing, information fusion, and decision making are required to be executed on-site (i.e., near the data collection). Fog Computing, a recently proposed extension and complement for Cloud Computing, enables computing on-site without outsourcing jobs to a remote Cloud. In this work, we have investigated the feasibility of processing streaming WAMI in the Fog for real-time, online, uninterrupted target tracking. Using a single target tracking algorithm, we studied the performance of a Fog Computing prototype. The experimental results are very encouraging and validate the effectiveness of our Fog approach in achieving real-time frame rates.

  5. Real-time path planning and autonomous control for helicopter autorotation

    NASA Astrophysics Data System (ADS)

    Yomchinda, Thanan

    Autorotation is a descending maneuver that can be used to recover helicopters in the event of total loss of engine power; however, it is an extremely difficult and complex maneuver. The objective of this work is to develop a real-time system which provides full autonomous control for autorotation landing of helicopters. The work includes the development of an autorotation path planning method and integration of the path planner with a primary flight control system. The trajectory is divided into three parts: entry, descent and flare. Three different optimization algorithms are used to generate trajectories for each of these segments. The primary flight control is designed using a linear dynamic inversion control scheme, and a path following control law is developed to track the autorotation trajectories. Details of the path planning algorithm, trajectory following control law, and autonomous autorotation system implementation are presented. The integrated system is demonstrated in real-time high fidelity simulations. Results indicate that the algorithms are capable of operating in real time and that the integrated system can provide safe autorotation landings. Preliminary simulations of autonomous autorotation on a small UAV are presented which will lead to a final hardware demonstration of the algorithms.

  6. Real-Time Charging Strategies for an Electric Vehicle Aggregator to Provide Ancillary Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, George; Negrete-Pincetic, Matias; Olivares, Daniel E.

    Real-time charging strategies, in the context of vehicle to grid (V2G) technology, are needed to enable the use of electric vehicle (EV) fleets batteries to provide ancillary services (AS). Here, we develop tools to manage charging and discharging in a fleet to track an Automatic Generation Control (AGC) signal when aggregated. We also propose a real-time controller that considers bidirectional charging efficiency and extend it to study the effect of looking ahead when implementing Model Predictive Control (MPC). Simulations show that the controller improves tracking error as compared with benchmark scheduling algorithms, as well as regulation capacity and battery cycling.

  7. Real-Time Charging Strategies for an Electric Vehicle Aggregator to Provide Ancillary Services

    DOE PAGES

    Wenzel, George; Negrete-Pincetic, Matias; Olivares, Daniel E.; ...

    2017-03-13

    Real-time charging strategies, in the context of vehicle to grid (V2G) technology, are needed to enable the use of electric vehicle (EV) fleets batteries to provide ancillary services (AS). Here, we develop tools to manage charging and discharging in a fleet to track an Automatic Generation Control (AGC) signal when aggregated. We also propose a real-time controller that considers bidirectional charging efficiency and extend it to study the effect of looking ahead when implementing Model Predictive Control (MPC). Simulations show that the controller improves tracking error as compared with benchmark scheduling algorithms, as well as regulation capacity and battery cycling.
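
    To illustrate the aggregation problem, the sketch below dispatches an AGC power request proportionally across a toy EV fleet subject to per-vehicle power and energy limits, and reports the aggregate tracking error. The fleet size, battery parameters and charging efficiency are invented values, and the authors' actual controller is an MPC formulation rather than this greedy rule.

      import numpy as np

      class Fleet:
          # toy EV aggregator: dispatch an AGC power request across batteries within limits
          def __init__(self, n=50, capacity_kwh=24.0, p_max_kw=6.6, eff=0.92):
              self.soc = np.full(n, 0.5 * capacity_kwh)     # state of charge per vehicle (kWh)
              self.capacity, self.p_max, self.eff = capacity_kwh, p_max_kw, eff

          def dispatch(self, agc_kw, dt_h=4 / 3600):
              # headroom for charging (+) or discharging (-) each vehicle over one AGC interval
              up = np.minimum(self.p_max, (self.capacity - self.soc) / dt_h)
              down = np.minimum(self.p_max, self.soc * self.eff / dt_h)
              limit = up if agc_kw >= 0 else down
              share = limit / limit.sum() if limit.sum() > 0 else np.zeros_like(limit)
              p = np.clip(share * abs(agc_kw), 0, limit) * np.sign(agc_kw)
              # account for bidirectional charging efficiency when updating stored energy
              self.soc += np.where(p >= 0, p * self.eff, p / self.eff) * dt_h
              return p.sum()                                 # aggregate power actually delivered

      fleet = Fleet()
      agc = 120 * np.sin(np.linspace(0, 6 * np.pi, 900))     # synthetic AGC regulation signal (kW)
      tracked = np.array([fleet.dispatch(a) for a in agc])
      print(f"RMS tracking error: {np.sqrt(np.mean((agc - tracked) ** 2)):.2f} kW")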

  8. A secure mobile crowdsensing (MCS) location tracker for elderly in smart city

    NASA Astrophysics Data System (ADS)

    Shien, Lau Khai; Singh, Manmeet Mahinderjit

    2017-10-01

    According to the UN's (United Nations) projection, Malaysia will reach ageing-population status by 2030. The challenge of a growing ageing population lies in health and social care services. As the population lives longer, the cost of institutional care rises, and many elderly people cannot live independently in their own homes without caregivers; this also restricts their activity area, safety and freedom in daily life. Hence, a tracking system that lets caregivers efficiently monitor their real-time location is valuable. Current tracking and monitoring systems are unable to satisfy the needs of the community, so the Indoor-Outdoor Elderly Secure and Tracking care system (IOET) is proposed to track and monitor the elderly. This mobile crowdsensing system uses indoor and outdoor positioning to locate the elderly, utilizing RFID, NFC, a biometric system and GPS to secure their safety in both indoor and outdoor environments. A mobile application and a web-based application are designed for the system. The system provides real-time tracking by combining GPS and NFC for outdoor coverage, ideally in a smart city; for indoor coverage, it utilizes active RFID to track elderly movement. The system notifies the caregiver of the elderly person's movements or requests through a real-time notification service, and the caregiver can also review the places visited by the elderly person and trace back their movements.

  9. Active eye-tracking for an adaptive optics scanning laser ophthalmoscope

    PubMed Central

    Sheehy, Christy K.; Tiruveedhula, Pavan; Sabesan, Ramkumar; Roorda, Austin

    2015-01-01

    We demonstrate a system that combines a tracking scanning laser ophthalmoscope (TSLO) and an adaptive optics scanning laser ophthalmoscope (AOSLO) system resulting in both optical (hardware) and digital (software) eye-tracking capabilities. The hybrid system employs the TSLO for active eye-tracking at a rate up to 960 Hz for real-time stabilization of the AOSLO system. AOSLO videos with active eye-tracking signals showed, at most, an amplitude of motion of 0.20 arcminutes for horizontal motion and 0.14 arcminutes for vertical motion. Subsequent real-time digital stabilization limited residual motion to an average of only 0.06 arcminutes (a 95% reduction). By correcting for high amplitude, low frequency drifts of the eye, the active TSLO eye-tracking system enabled the AOSLO system to capture high-resolution retinal images over a larger range of motion than previously possible with just the AOSLO imaging system alone. PMID:26203370

  10. [Research on fuzzy proportional-integral-derivative control of master-slave minimally invasive operation robot driver].

    PubMed

    Zhao, Ximei; Ren, Chengyi; Liu, Hao; Li, Haogyi

    2014-12-01

    Robotic catheter minimally invasive surgery requires a driver control system with quick response, strong disturbance rejection and real-time tracking of the target trajectory. Since the catheter's own parameters, the movement environment and other factors change continuously, a traditional proportional-integral-derivative (PID) controller, whose gains are fixed once the PID parameters are set, cannot adapt to changes in the plant parameters and environmental disturbances; this degrades position tracking accuracy and may produce a large overshoot that endangers the patient's vessel. Therefore, this paper adopts a fuzzy PID control method that adjusts the PID gains during tracking in order to improve the system's disturbance rejection, dynamic performance and tracking accuracy. The simulation results showed that the fuzzy PID control method achieves fast tracking and strong robustness. Compared with traditional PID control, the feasibility and practicability of fuzzy PID control for robotic catheter minimally invasive surgery are verified.
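
    The core of a fuzzy PID controller is a rule base that reshapes the PID gains from the fuzzified error and error rate. The sketch below shows one such entirely illustrative rule base with triangular membership functions; the membership ranges, rules and baseline gains are assumptions, not the paper's design.

      def tri(x, a, b, c):
          # triangular membership function on [a, c] with peak at b
          return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

      def fuzzy_gain_adjust(err, derr, kp0=2.0, ki0=0.5, kd0=0.1):
          # adjust PID gains from fuzzified |error| and |error rate| (illustrative rule base)
          e, de = abs(err), abs(derr)
          large_e = tri(e, 0, 1, 2)
          small_e = tri(e, -1, 0, 1)
          large_de = tri(de, 0, 1, 2)
          small_de = tri(de, -1, 0, 1)
          # rules: large error -> raise Kp, lower Kd; large error rate -> raise Kd, lower Ki
          kp = kp0 * (1 + 0.5 * large_e)
          kd = kd0 * (1 + 0.5 * large_de - 0.3 * large_e * small_de)
          ki = ki0 * (1 - 0.4 * large_de) * (1 + 0.2 * small_e)
          return kp, ki, kd

      print(fuzzy_gain_adjust(err=1.5, derr=0.2))   # large error, small error rate
      print(fuzzy_gain_adjust(err=0.1, derr=1.8))   # small error, large error rate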

  11. Tracking Objects with Networked Scattered Directional Sensors

    NASA Astrophysics Data System (ADS)

    Plarre, Kurt; Kumar, P. R.

    2007-12-01

    We study the problem of object tracking using highly directional sensors—sensors whose field of vision is a line or a line segment. A network of such sensors monitors a certain region of the plane. Sporadically, objects moving in straight lines and at a constant speed cross the region. A sensor detects an object when it crosses its line of sight, and records the time of the detection. No distance or angle measurements are available. The task of the sensors is to estimate the directions and speeds of the objects, and the sensor lines, which are unknown a priori. This estimation problem involves the minimization of a highly nonconvex cost function. To overcome this difficulty, we introduce an algorithm, which we call "adaptive basis algorithm." This algorithm is divided into three phases: in the first phase, the algorithm is initialized using data from six sensors and four objects; in the second phase, the estimates are updated as data from more sensors and objects are incorporated. The third phase is an optional coordinate transformation. The estimation is done in an "ad-hoc" coordinate system, which we call "adaptive coordinate system." When more information is available, for example, the location of six sensors, the estimates can be transformed to the "real-world" coordinate system. This constitutes the third phase.

  12. A benchmark for comparison of cell tracking algorithms

    PubMed Central

    Maška, Martin; Ulman, Vladimír; Svoboda, David; Matula, Pavel; Matula, Petr; Ederra, Cristina; Urbiola, Ainhoa; España, Tomás; Venkatesan, Subramanian; Balak, Deepak M.W.; Karas, Pavel; Bolcková, Tereza; Štreitová, Markéta; Carthel, Craig; Coraluppi, Stefano; Harder, Nathalie; Rohr, Karl; Magnusson, Klas E. G.; Jaldén, Joakim; Blau, Helen M.; Dzyubachyk, Oleh; Křížek, Pavel; Hagen, Guy M.; Pastor-Escuredo, David; Jimenez-Carretero, Daniel; Ledesma-Carbayo, Maria J.; Muñoz-Barrutia, Arrate; Meijering, Erik; Kozubek, Michal; Ortiz-de-Solorzano, Carlos

    2014-01-01

    Motivation: Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2013 Cell Tracking Challenge. In this article, we present the logistics, datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. Results: The main contributions of the challenge include the creation of a comprehensive video dataset repository and the definition of objective measures for comparison and ranking of the algorithms. With this benchmark, six algorithms covering a variety of segmentation and tracking paradigms have been compared and ranked based on their performance on both synthetic and real datasets. Given the diversity of the datasets, we do not declare a single winner of the challenge. Instead, we present and discuss the results for each individual dataset separately. Availability and implementation: The challenge Web site (http://www.codesolorzano.com/celltrackingchallenge) provides access to the training and competition datasets, along with the ground truth of the training videos. It also provides access to Windows and Linux executable files of the evaluation software and most of the algorithms that competed in the challenge. Contact: codesolorzano@unav.es Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24526711

  13. Accurate estimation of influenza epidemics using Google search data via ARGO

    PubMed Central

    Yang, Shihao; Santillana, Mauricio; Kou, S. C.

    2015-01-01

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search–based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people’s online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions. PMID:26553980

  14. ITC/USA/'90; Proceedings of the International Telemetering Conference, Las Vegas, NV, Oct. 29-Nov. 2, 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1990-01-01

    This conference presents papers in the fields of airborne telemetry, measurement technology, video instrumentation and monitoring, tracking and receiving systems, and real-time processing in telemetry. Topics presented include packet telemetry ground station simulation, a predictable performance wideband noise generator, an improved drone tracking control system transponder, the application of neural networks to drone control, and an integrated real-time turbine engine flight test system.

  15. Achieving Real-Time Tracking Mobile Wireless Sensors Using SE-KFA

    NASA Astrophysics Data System (ADS)

    Kadhim Hoomod, Haider, Dr.; Al-Chalabi, Sadeem Marouf M.

    2018-05-01

    Nowadays, real-time performance is important in many fields, such as automatic transport control, some medical applications, celestial body tracking, controlling agent movements, and detection and monitoring. It can be achieved with different kinds of detection devices, called sensors, such as infrared sensors, ultrasonic sensors, radar in general, and laser-light sensors. The ultrasonic sensor is the most fundamental one, and it presents particular challenges compared with the others, especially when used for navigation (as an agent). In this paper, ultrasonic sensors detect and delimit an area by themselves and then navigate an agent inside this limited area, estimating its position in real time using a speed equation combined with the Kalman filter algorithm as an intelligent estimator. The error is then calculated by comparison with the actual tracking data. This work uses the HC-SR04 ultrasonic sensor with an Arduino UNO as the microcontroller.
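
    A constant-velocity Kalman filter over noisy ultrasonic range readings captures the estimation step described above; the sample period, noise levels and synthetic HC-SR04-like readings below are assumptions made for the illustration.

      import numpy as np

      def kalman_track(ranges_cm, dt=0.06, meas_std=1.0, accel_std=30.0):
          # 1D constant-velocity Kalman filter over noisy ultrasonic range readings
          F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition (position, velocity)
          H = np.array([[1.0, 0.0]])                       # only range is measured
          Q = accel_std**2 * np.array([[dt**4 / 4, dt**3 / 2],
                                       [dt**3 / 2, dt**2]])
          R = np.array([[meas_std**2]])
          x = np.array([[ranges_cm[0]], [0.0]])
          P = np.eye(2) * 10.0
          estimates = []
          for z in ranges_cm:
              x, P = F @ x, F @ P @ F.T + Q                # predict
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
              x = x + K @ (np.array([[z]]) - H @ x)        # update with the new measurement
              P = (np.eye(2) - K @ H) @ P
              estimates.append((x[0, 0], x[1, 0]))         # filtered range and speed
          return estimates

      # synthetic target moving away at 20 cm/s with HC-SR04-like measurement noise
      rng = np.random.default_rng(2)
      truth = 30 + 20 * 0.06 * np.arange(100)
      readings = truth + rng.normal(0, 1.0, 100)
      print(kalman_track(readings)[-1])                    # last (range, speed) estimate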

  16. Prototype of a single probe Compton camera for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Koyama, A.; Nakamura, Y.; Shimazoe, K.; Takahashi, H.; Sakuma, I.

    2017-02-01

    Image-guided surgery (IGS) is performed using a real-time surgery navigation system with three-dimensional (3D) position tracking of surgical tools. IGS is fast becoming an important technology for high-precision laparoscopic surgeries, in which the field of view is limited. In particular, recent developments in intraoperative imaging using radioactive biomarkers may enable advanced IGS for supporting malignant tumor removal surgery. In this light, we develop a novel intraoperative probe with a Compton camera and a position tracking system for performing real-time radiation-guided surgery. A prototype probe consisting of Ce:Gd3Al2Ga3O12 (GAGG) crystals and silicon photomultipliers was fabricated, and its reconstruction algorithm was optimized to enable real-time position tracking. The results demonstrated the visualization capability of the radiation source with ARM ≈ 22.1° and the effectiveness of the proposed system.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Michael; Nemati, Bijan; Zhai, Chengxing

    We present an approach that significantly increases the sensitivity for finding and tracking small and fast near-Earth asteroids (NEAs). This approach relies on a combined use of a new generation of high-speed cameras which allow short, high frame-rate exposures of moving objects, effectively 'freezing' their motion, and a computationally enhanced implementation of the 'shift-and-add' data processing technique that helps to improve the signal-to-noise ratio (SNR) for detection of NEAs. The SNR of a single short exposure of a dim NEA is insufficient to detect it in one frame, but by computationally searching for an appropriate velocity vector, shifting successive frames relative to each other and then co-adding the shifted frames in post-processing, we synthetically create a long-exposure image as if the telescope were tracking the object. This approach, which we call 'synthetic tracking,' enhances the familiar shift-and-add technique with the ability to do a wide blind search, detect, and track dim and fast-moving NEAs in near real time. We discuss also how synthetic tracking improves the astrometry of fast-moving NEAs. We apply this technique to observations of two known asteroids conducted on the Palomar 200 inch telescope and demonstrate improved SNR and a 10-fold improvement of astrometric precision over the traditional long-exposure approach. In the past 5 yr, about 150 NEAs with absolute magnitudes H = 28 (∼10 m in size) or fainter have been discovered. With an upgraded version of our camera and a field of view of (28 arcmin)^2 on the Palomar 200 inch telescope, synthetic tracking could allow detecting up to 180 such objects per night, including very small NEAs with sizes down to 7 m.
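
    The essence of synthetic tracking is a blind search over candidate velocity vectors, shifting each short exposure back along the candidate motion before co-adding. A brute-force Python sketch on synthetic frames is given below; the frame sizes, injected object brightness and crude detection statistic are illustrative assumptions, not the authors' pipeline.

      import numpy as np

      def synthetic_track(frames, vx_range, vy_range):
          # shift-and-add over a grid of candidate velocities; return the best stack and velocity
          n, h, w = frames.shape
          best = (None, -np.inf, None)
          for vx in vx_range:
              for vy in vy_range:
                  stack = np.zeros((h, w))
                  for k, frame in enumerate(frames):
                      # shift frame k back along the candidate motion before co-adding
                      stack += np.roll(np.roll(frame, -vy * k, axis=0), -vx * k, axis=1)
                  snr = (stack.max() - stack.mean()) / stack.std()   # crude detection statistic
                  if snr > best[1]:
                      best = (stack, snr, (vx, vy))
          return best

      # synthetic data: a faint object moving (+2, +1) pixels/frame through noise
      rng = np.random.default_rng(3)
      frames = rng.normal(0, 1.0, (20, 64, 64))
      for k in range(20):
          frames[k, 10 + k, 20 + 2 * k] += 1.5                       # too faint for single-frame detection
      stack, snr, vel = synthetic_track(frames, range(-3, 4), range(-3, 4))
      print(f"recovered velocity {vel}, stacked SNR {snr:.1f}")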

  18. Human tracking in thermal images using adaptive particle filters with online random forest learning

    NASA Astrophysics Data System (ADS)

    Ko, Byoung Chul; Kwak, Joon-Young; Nam, Jae-Yeal

    2013-11-01

    This paper presents a fast and robust human tracking method for use with a moving long-wave infrared thermal camera under poor illumination with the existence of shadows and cluttered backgrounds. To improve the human tracking performance while minimizing the computation time, this study proposes online learning of classifiers based on particle filters and the combination of a local intensity distribution (LID) with oriented center-symmetric local binary patterns (OCS-LBP). Specifically, we design a real-time random forest (RF), which is an ensemble of decision trees for confidence estimation, and the confidences of the RF are converted into a likelihood function of the target state. First, the target model is selected by the user and particles are sampled. Then, RFs are generated using the positive and negative examples with LID and OCS-LBP features by online learning. The learned RF classifiers are used to detect the most likely target position in the subsequent frame in the next stage. Then, the RFs are learned again by means of fast retraining with the tracked object and background appearance in the new frame. The proposed algorithm is successfully applied to various thermal videos as tests and its tracking performance is better than those of other methods.

  19. WiFi RFID demonstration for resource tracking in a statewide disaster drill.

    PubMed

    Cole, Stacey L; Siddiqui, Javeed; Harry, David J; Sandrock, Christian E

    2011-01-01

    To investigate the capabilities of Radio Frequency Identification (RFID) tracking of patients and medical equipment during a simulated disaster response scenario. RFID infrastructure was deployed at two small rural hospitals, in one large academic medical center and in two vehicles. Several item types from the mutual aid equipment list were selected for tracking during the demonstration. A central database server was installed at the UC Davis Medical Center (UCDMC) that collected RFID information from all constituent sites. The system was tested during a statewide disaster drill. During the drill, volunteers at UCDMC were selected to locate assets using the traditional method of locating resources and then using the RFID system. This study demonstrated the effectiveness of RFID infrastructure in real-time resource identification and tracking. Volunteers at UCDMC were able to locate assets substantially faster using RFID, demonstrating that real-time geolocation can be substantially more efficient and accurate than traditional manual methods. A mobile, Global Positioning System (GPS)-enabled RFID system was installed in a pediatric ambulance and connected to the central RFID database via secure cellular communication. This system is unique in that it provides for seamless region-wide tracking that adaptively uses and seamlessly integrates both outdoor cellular-based mobile tracking and indoor WiFi-based tracking. RFID tracking can provide a real-time picture of the medical situation across medical facilities and other critical locations, leading to a more coordinated deployment of resources. The RFID system deployed during this study demonstrated the potential to improve the ability to locate and track victims, healthcare professionals, and medical equipment during a region-wide disaster.

  20. Tangible imaging systems

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2013-03-01

    We are developing tangible imaging systems [1-4] that enable natural interaction with virtual objects. Tangible imaging systems are based on consumer mobile devices that incorporate electronic displays, graphics hardware, accelerometers, gyroscopes, and digital cameras, in laptop or tablet-shaped form-factors. Custom software allows the orientation of a device and the position of the observer to be tracked in real-time. Using this information, realistic images of three-dimensional objects with complex textures and material properties are rendered to the screen, and tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. Tangible imaging systems thus allow virtual objects to be observed and manipulated as naturally as real ones with the added benefit that object properties can be modified under user control. In this paper we describe four tangible imaging systems we have developed: the tangiBook - our first implementation on a laptop computer; tangiView - a more refined implementation on a tablet device; tangiPaint - a tangible digital painting application; and phantoView - an application that takes the tangible imaging concept into stereoscopic 3D.

  1. Model-based registration of multi-rigid-body for augmented reality

    NASA Astrophysics Data System (ADS)

    Ikeda, Sei; Hori, Hajime; Imura, Masataka; Manabe, Yoshitsugu; Chihara, Kunihiro

    2009-02-01

    Geometric registration between a virtual object and the real space is the most basic problem in augmented reality. Model-based tracking methods allow us to estimate the three-dimensional (3-D) position and orientation of a real object by using a textured 3-D model instead of visual markers. However, it is difficult to apply existing model-based tracking methods to objects that have movable parts, such as the display of a mobile phone, because these methods suppose a single, rigid-body model. In this research, we propose a novel model-based registration method for multi-rigid-body objects. For each frame, the 3-D models of each rigid part of the object are first rendered according to the estimated motion and transformation from the previous frame. Second, control points are determined by detecting the edges of the rendered image and sampling pixels on these edges. Motion and transformation are then simultaneously calculated from distances between the edges and the control points. The validity of the proposed method is demonstrated through experiments using synthetic videos.

  2. Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; Landsmeer, Sander; Kruszynski, Chris; van Antwerpen, Gert; Dijk, Judith

    2013-05-01

    The capability to track individuals in CCTV cameras is important for e.g. surveillance applications at large areas such as train stations, airports and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. This system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields-of-view and have different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently. The misses are reduced by 37%, which is a significant improvement.

  3. Privacy-protecting video surveillance

    NASA Astrophysics Data System (ADS)

    Wickramasuriya, Jehan; Alhazzazi, Mohanned; Datt, Mahesh; Mehrotra, Sharad; Venkatasubramanian, Nalini

    2005-02-01

    Forms of surveillance are very quickly becoming an integral part of crime control policy, crisis management, social control theory and community consciousness. In turn, it has been used as a simple and effective solution to many of these problems. However, privacy-related concerns have been expressed over the development and deployment of this technology. Used properly, video cameras help expose wrongdoing but typically come at the cost of privacy to those not involved in any maleficent activity. This work describes the design and implementation of a real-time, privacy-protecting video surveillance infrastructure that fuses additional sensor information (e.g. Radio-frequency Identification) with video streams and an access control framework in order to make decisions about how and when to display the individuals under surveillance. This video surveillance system is a particular instance of a more general paradigm of privacy-protecting data collection. In this paper we describe in detail the video processing techniques used in order to achieve real-time tracking of users in pervasive spaces while utilizing the additional sensor data provided by various instrumented sensors. In particular, we discuss background modeling techniques, object tracking and implementation techniques that pertain to the overall development of this system.

  4. A RSSI-based parameter tracking strategy for constrained position localization

    NASA Astrophysics Data System (ADS)

    Du, Jinze; Diouris, Jean-François; Wang, Yide

    2017-12-01

    In this paper, a received signal strength indicator (RSSI)-based parameter tracking strategy for constrained position localization is proposed. To estimate the channel model parameters, the least mean squares (LMS) method is combined with the trilateration method. In the context of applications where the positions are constrained on a grid, a novel tracking strategy is proposed to determine the real position and obtain the actual parameters in the monitored region. Based on practical data acquired from a real localization system, an experimental channel model is constructed to provide RSSI values and verify the proposed tracking strategy. Quantitative criteria are given to guarantee the efficiency of the proposed tracking strategy by providing a trade-off between the grid resolution and parameter variation. The simulation results show a good behavior of the proposed tracking strategy in the presence of space-time variation of the propagation channel. Compared with the existing RSSI-based algorithms, the proposed tracking strategy exhibits better localization accuracy but consumes more calculation time. In addition, a tracking test is performed to validate the effectiveness of the proposed tracking strategy.
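
    The two ingredients — converting RSSI to distance through a channel model and solving for position from several anchor distances — can be sketched as follows. The path-loss exponent, reference RSSI and anchor layout are illustrative assumptions, and the paper's LMS parameter tracking over a constrained grid is not reproduced here.

      import numpy as np

      def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exp=2.5):
          # log-distance path-loss model: RSSI = RSSI(1 m) - 10 * n * log10(d)
          return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exp))

      def trilaterate(anchors, distances):
          # linearized least-squares position estimate from >= 3 anchor distances
          (x0, y0), d0 = anchors[0], distances[0]
          A, b = [], []
          for (xi, yi), di in zip(anchors[1:], distances[1:]):
              A.append([2 * (xi - x0), 2 * (yi - y0)])
              b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
          sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
          return sol

      anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
      true_pos = np.array([3.0, 4.0])
      rssi = [-40.0 - 25.0 * np.log10(np.linalg.norm(true_pos - np.array(a))) for a in anchors]
      dists = [rssi_to_distance(r) for r in rssi]
      print(trilaterate(anchors, dists))               # ≈ [3, 4]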

  5. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery

    PubMed Central

    Rottmann, Joerg; Keall, Paul; Berbeco, Ross

    2013-01-01

    Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time. PMID:24007146

  6. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its hand motion tracking capability. The evaluation was assessed in terms of position accuracy of the tracking trajectory in the x, y and z directions in the camera space and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand and finally the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study has demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking, but for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
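
    The simple mean-shift iteration referred to above moves a search window to the weighted centroid of a per-pixel confidence map until it converges. The sketch below runs it on a synthetic weight image; how such weights would be derived from range-camera data is not specified here, and the window size and test blob are assumptions.

      import numpy as np

      def mean_shift(weights, window_center, window_size=(30, 30), max_iter=20, eps=0.5):
          # shift a window toward the centroid of per-pixel weights until it converges
          cy, cx = window_center
          hh, hw = window_size[0] // 2, window_size[1] // 2
          for _ in range(max_iter):
              y0, y1 = int(max(cy - hh, 0)), int(min(cy + hh, weights.shape[0]))
              x0, x1 = int(max(cx - hw, 0)), int(min(cx + hw, weights.shape[1]))
              patch = weights[y0:y1, x0:x1]
              if patch.sum() == 0:
                  break
              ys, xs = np.mgrid[y0:y1, x0:x1]
              new_cy = (ys * patch).sum() / patch.sum()      # weighted centroid inside the window
              new_cx = (xs * patch).sum() / patch.sum()
              if np.hypot(new_cy - cy, new_cx - cx) < eps:   # converged
                  break
              cy, cx = new_cy, new_cx
          return cy, cx

      # toy weight image: a blob of "hand-like" confidence centred at (80, 120)
      yy, xx = np.mgrid[0:200, 0:200]
      weights = np.exp(-((yy - 80) ** 2 + (xx - 120) ** 2) / (2 * 15 ** 2))
      print(mean_shift(weights, window_center=(60, 100)))    # converges near (80, 120)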

  7. Adaptive object tracking via both positive and negative models matching

    NASA Astrophysics Data System (ADS)

    Li, Shaomei; Gao, Chao; Wang, Yawen

    2015-03-01

    To improve tracking drift, which often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Secondly, the object is tracked frame by frame via particle filtering. Thirdly, the tracking reliability is validated based on both positive and negative model matching. Finally, the object is relocated based on SIFT feature matching and voting when drift occurs, and the object appearance model is updated at the same time. The algorithm can not only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences.

  8. Microsoft kinect-based artificial perception system for control of functional electrical stimulation assisted grasping.

    PubMed

    Strbac, Matija; Kočović, Slobodan; Marković, Marko; Popović, Dejan B

    2014-01-01

    We present a computer vision algorithm that incorporates a heuristic model which mimics a biological control system for the estimation of control signals used in functional electrical stimulation (FES) assisted grasping. The developed processing software acquires the data from Microsoft Kinect camera and implements real-time hand tracking and object analysis. This information can be used to identify temporal synchrony and spatial synergies modalities for FES control. Therefore, the algorithm acts as artificial perception which mimics human visual perception by identifying the position and shape of the object with respect to the position of the hand in real time during the planning phase of the grasp. This artificial perception used within the heuristically developed model allows selection of the appropriate grasp and prehension. The experiments demonstrate that correct grasp modality was selected in more than 90% of tested scenarios/objects. The system is portable, and the components are low in cost and robust; hence, it can be used for the FES in clinical or even home environment. The main application of the system is envisioned for functional electrical therapy, that is, intensive exercise assisted with FES.

  9. Microsoft Kinect-Based Artificial Perception System for Control of Functional Electrical Stimulation Assisted Grasping

    PubMed Central

    Kočović, Slobodan; Popović, Dejan B.

    2014-01-01

    We present a computer vision algorithm that incorporates a heuristic model which mimics a biological control system for the estimation of control signals used in functional electrical stimulation (FES) assisted grasping. The developed processing software acquires the data from Microsoft Kinect camera and implements real-time hand tracking and object analysis. This information can be used to identify temporal synchrony and spatial synergies modalities for FES control. Therefore, the algorithm acts as artificial perception which mimics human visual perception by identifying the position and shape of the object with respect to the position of the hand in real time during the planning phase of the grasp. This artificial perception used within the heuristically developed model allows selection of the appropriate grasp and prehension. The experiments demonstrate that correct grasp modality was selected in more than 90% of tested scenarios/objects. The system is portable, and the components are low in cost and robust; hence, it can be used for the FES in clinical or even home environment. The main application of the system is envisioned for functional electrical therapy, that is, intensive exercise assisted with FES. PMID:25202707

  10. A fiducial detection algorithm for real-time image guided IMRT based on simultaneous MV and kV imaging

    PubMed Central

    Mao, Weihua; Riaz, Nadeem; Lee, Louis; Wiersma, Rodney; Xing, Lei

    2008-01-01

    The advantage of highly conformal dose techniques such as 3DCRT and IMRT is limited by intrafraction organ motion. A new approach to gain near real-time 3D positions of internally implanted fiducial markers is to analyze simultaneous onboard kV beam and treatment MV beam images (from fluoroscopic or electronic portal image devices). Before we can use this real-time image guidance for clinical 3DCRT and IMRT treatments, four outstanding issues need to be addressed. (1) How will fiducial motion blur the image and hinder tracking fiducials? kV and MV images are acquired while the tumor is moving at various speeds. We find that a fiducial can be successfully detected at a maximum linear speed of 1.6 cm/s. (2) How does MV beam scattering affect kV imaging? We investigate this by varying MV field size and kV source to imager distance, and find that common treatment MV beams do not hinder fiducial detection in simultaneous kV images. (3) How can one detect fiducials on images from 3DCRT and IMRT treatment beams when the MV fields are modified by a multileaf collimator (MLC)? The presented analysis is capable of segmenting a MV field from the blocking MLC and detecting visible fiducials. This enables the calculation of nearly real-time 3D positions of markers during a real treatment. (4) Is the analysis fast enough to track fiducials in nearly real time? Multiple methods are adopted to predict marker positions and reduce search regions. The average detection time per frame for three markers in a 1024×768 image was reduced to 0.1 s or less. Solving these four issues paves the way to tracking moving fiducial markers throughout a 3DCRT or IMRT treatment. Altogether, these four studies demonstrate that our algorithm can track fiducials in real time, on degraded kV images (MV scatter), in rapidly moving tumors (fiducial blurring), and even provide useful information in the case when some fiducials are blocked from view by the MLC. This technique can provide a gating signal or be used for intra-fractional tumor tracking on a Linac equipped with a kV imaging system. Any motion exceeding a preset threshold can warn the therapist to suspend a treatment session and reposition the patient. PMID:18777916

  11. Quantitative analysis of the improvement in omnidirectional maritime surveillance and tracking due to real-time image enhancement

    NASA Astrophysics Data System (ADS)

    de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.

    2011-05-01

    Tracking targets in a panoramic image is in many senses the inverse problem of tracking targets with a narrow field of view camera on a pan-tilt pedestal. In a narrow field of view camera tracking a moving target, the object is constant and the background is changing. A panoramic camera is able to model the entire scene, or background, and those areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground truth data-set was created by geo-aligning the camera array and placing a differential global positioning system receiver on a small target boat, thus allowing its position in the array's field of view to be determined. Common tracking techniques including level-sets, Kalman filters and particle filters were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea glint, low-contrast visibility and sea clutter (such as whitecaps) is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.
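
    The enhancement stage lends itself to a short illustration. The sketch below is not the authors' GPU implementation: it assumes OpenCV is available, uses contrast-limited adaptive histogram equalisation (CLAHE) as a stand-in for the interpolated local histogram equalisation named above, and adds a simple unsharp mask for sharpening. A tracker would then be run on both raw and enhanced frames to compare robustness.

    import cv2
    import numpy as np

    def enhance_frame(gray_frame: np.ndarray, clip_limit: float = 2.0, tiles: int = 8) -> np.ndarray:
        """Locally equalise an 8-bit grayscale frame (CLAHE, a stand-in for the paper's method)."""
        clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tiles, tiles))
        return clahe.apply(gray_frame)

    def unsharp_mask(gray_frame: np.ndarray, sigma: float = 3.0, amount: float = 1.0) -> np.ndarray:
        """Sharpen by adding back the high-frequency residual of a Gaussian blur."""
        blurred = cv2.GaussianBlur(gray_frame, (0, 0), sigma)
        return cv2.addWeighted(gray_frame, 1.0 + amount, blurred, -amount, 0)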

  12. CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.

    PubMed

    Bray, Mark-Anthony; Carpenter, Anne E

    2015-11-04

    Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.

  13. Real-time auto-adaptive margin generation for MLC-tracked radiotherapy

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; Fast, M. F.; de Senneville, B. Denis; Nill, S.; Oelfke, U.; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2017-01-01

    In radiotherapy, abdominal and thoracic sites are candidates for performing motion tracking. With real-time control it is possible to adjust the multileaf collimator (MLC) position to the target position. However, positions are not perfectly matched, and position errors arise from system delays and the complicated response of the electromechanical MLC system. Although it is possible to compensate for part of these errors by using predictors, residual errors remain and need to be compensated to retain target coverage. This work presents a method to statistically describe tracking errors and to automatically derive a patient-specific, per-segment margin to compensate for the arising underdosage on-line, i.e., during plan delivery. The statistics of the geometric error between intended and actual machine position are derived using kernel density estimators. Subsequently, a margin is calculated on-line according to a selected coverage parameter, which determines the amount of accepted underdosage. The margin is then applied to the actual segment to accommodate the positioning errors in the enlarged segment. The proof-of-concept was tested in an on-line tracking experiment and showed the ability to recover underdosages for two test cases, increasing V90% in the underdosed area by about 47% and 41%, respectively. The dose model used was able to predict the loss of dose due to tracking errors and could be used to infer the necessary margins. The implementation had a running time of 23 ms, which is compatible with the real-time requirements of MLC tracking systems. The auto-adaptivity to machine and patient characteristics makes the technique a generic yet intuitive candidate to avoid underdosages due to MLC tracking errors.
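
    As a minimal sketch of the statistical idea above (not the published implementation), the snippet below assumes a set of recent leaf-position errors in millimetres, estimates their distribution with SciPy's Gaussian kernel density estimator, and bisects for the smallest symmetric margin whose probability mass reaches a chosen coverage parameter. The function name auto_margin and the example noise values are illustrative.

    import numpy as np
    from scipy.stats import gaussian_kde

    def auto_margin(errors_mm: np.ndarray, coverage: float = 0.95) -> float:
        """Smallest symmetric margin (mm) whose KDE probability mass reaches `coverage`."""
        kde = gaussian_kde(errors_mm)
        lo, hi = 0.0, float(np.max(np.abs(errors_mm))) + 5.0   # bracket in mm
        for _ in range(50):                                    # bisection on the KDE mass
            mid = 0.5 * (lo + hi)
            if kde.integrate_box_1d(-mid, mid) >= coverage:
                hi = mid
            else:
                lo = mid
        return hi

    # Example with synthetic tracking errors (mm); real input would come from the tracking log
    errors = np.random.normal(0.0, 1.2, size=500)
    print(f"95% coverage margin: {auto_margin(errors):.2f} mm")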

  14. Toward the development of intrafraction tumor deformation tracking using a dynamic multi-leaf collimator

    PubMed Central

    Ge, Yuanyuan; O’Brien, Ricky T.; Shieh, Chun-Chien; Booth, Jeremy T.; Keall, Paul J.

    2014-01-01

    Purpose: Intrafraction deformation limits targeting accuracy in radiotherapy. Studies show tumor deformation of over 10 mm for both single tumor deformation and system deformation (due to differential motion between primary tumors and involved lymph nodes). Such deformation cannot be adapted to with current radiotherapy methods. The objective of this study was to develop and experimentally investigate the ability of a dynamic multi-leaf collimator (DMLC) tracking system to account for tumor deformation. Methods: To compensate for tumor deformation, the DMLC tracking strategy is to warp the planned beam aperture directly to conform to the new tumor shape based on real-time tumor deformation input. Two deformable phantoms that correspond to a single tumor and a tumor system were developed. The planar deformations derived from the phantom images in beam's eye view were used to guide the aperture warping. An in-house deformable image registration software was developed to automatically trigger the registration once a new target image was acquired and send the computed deformation to the DMLC tracking software. Because the registration speed was not fast enough to run the experiment in a real-time manner, the phantom deformation proceeded to the next position only after registration of the current deformation position was completed. The deformation tracking accuracy was evaluated by a geometric target coverage metric defined as the sum of the areas incorrectly outside and inside the ideal aperture (Au+Ao). The individual contributions from the deformable registration algorithm and the finite leaf width to the tracking uncertainty were analyzed. A clinical proof-of-principle experiment of deformation tracking using previously acquired MR images of a lung cancer patient was implemented to represent the MRI-Linac environment. Intensity-modulated radiation therapy (IMRT) treatment delivered with deformation tracking enabled was simulated and demonstrated. Results: The first experimental investigation of adapting to tumor deformation has been performed using simple deformable phantoms. For the single tumor deformation, the Au+Ao was reduced by over 56% when deformation was larger than 2 mm. Overall, the total improvement was 82%. For the tumor system deformation, the Au+Ao reductions were all above 75% and the total Au+Ao improvement was 86%. Similar coverage improvement was also found in simulating deformation tracking during IMRT delivery. The deformable image registration algorithm was identified as the dominant contributor to the tracking error rather than the finite leaf width. The discrepancy between the warped beam shape and the ideal beam shape due to the deformable registration was observed to be partially compensated during leaf fitting due to the finite leaf width. The clinical proof-of-principle experiment demonstrated the feasibility of intrafraction deformable tracking for clinical scenarios. Conclusions: For the first time, we developed and demonstrated an experimental system that is capable of adapting the MLC aperture to account for tumor deformation. This work provides a potentially widely available management method to effectively account for intrafractional tumor deformation. This proof-of-principle study is the first experimental step toward the development of an image-guided radiotherapy system to treat deforming tumors in real time. PMID:24877798

  15. Real-time open-loop frequency response analysis of flight test data

    NASA Technical Reports Server (NTRS)

    Bosworth, J. T.; West, J. C.

    1986-01-01

    A technique has been developed to compare the open-loop frequency response of a flight test aircraft in real time with linear analysis predictions. The result is direct feedback to the flight control systems engineer on the validity of predictions and adds confidence for proceeding with envelope expansion. Further, gain and phase margins can be tracked for trends in a manner similar to the techniques used by structural dynamics engineers in tracking structural modal damping.

  16. Strain measurement of abdominal aortic aneurysm with real-time 3D ultrasound speckle tracking.

    PubMed

    Bihari, P; Shelke, A; Nwe, T H; Mularczyk, M; Nelson, K; Schmandra, T; Knez, P; Schmitz-Rixen, T

    2013-04-01

    Abdominal aortic aneurysm rupture is caused by mechanical vascular tissue failure. Although mechanical properties within the aneurysm vary, currently available ultrasound methods assess only one cross-sectional segment of the aorta. This study aims to establish real-time 3-dimensional (3D) speckle tracking ultrasound to explore local displacement and strain parameters of the whole abdominal aortic aneurysm. Validation was performed on a silicone aneurysm model, perfused in a pulsatile artificial circulatory system. Wall motion of the silicone model was measured simultaneously with a commercial real-time 3D speckle tracking ultrasound system and either with laser-scan micrometry or with video photogrammetry. After validation, 3D ultrasound data were collected from abdominal aortic aneurysms of five patients and displacement and strain parameters were analysed. Displacement parameters measured in vitro by 3D ultrasound and laser scan micrometer or video analysis were significantly correlated at pulse pressures between 40 and 80 mmHg. Strong local differences in displacement and strain were identified within the aortic aneurysms of patients. Local wall strain of the whole abdominal aortic aneurysm can be analysed in vivo with real-time 3D ultrasound speckle tracking imaging, offering the prospect of individual non-invasive rupture risk analysis of abdominal aortic aneurysms. Copyright © 2013 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  17. Real-time image-processing algorithm for markerless tumour tracking using X-ray fluoroscopic imaging.

    PubMed

    Mori, S

    2014-05-01

    To ensure accuracy in respiratory-gating treatment, X-ray fluoroscopic imaging is used to detect tumour position in real time. Detection accuracy is strongly dependent on image quality, particularly on positional differences between the patient and the treatment couch. We developed a new algorithm to improve the quality of images obtained in X-ray fluoroscopic imaging and report the preliminary results. Two oblique X-ray fluoroscopic images were acquired using a dynamic flat panel detector (DFPD) for two patients with lung cancer. A weighting factor was applied to each column of the DFPD image, because most anatomical structures, as well as the treatment couch and port cover edge, were aligned in the superior-inferior direction when the patient lay on the treatment couch. The weighting factors for the respective columns were varied until the standard deviation of the pixel values within the image region was minimized. Once the weighting factors were calculated, the quality of the DFPD image was improved by applying the factors to multiframe images. Applying the image-processing algorithm produced a substantial improvement in image quality, and the image contrast was increased. The treatment couch and irradiation port edge, which were not related to the patient's position, were removed. The average image-processing time was 1.1 ms, showing that this fast image processing can be applied to real-time tumour-tracking systems. These findings indicate that this image-processing algorithm improves the image quality in patients with lung cancer and successfully removes objects not related to the patient. Our image-processing algorithm might be useful in improving gated-treatment accuracy.
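
    The column-weighting idea can be illustrated with a simple closed-form variant (not the author's code, which iteratively minimises the regional standard deviation): scaling every column of a reference frame to a common mean suppresses column-aligned structures such as the couch and port cover edges, and the same weights can then be applied to subsequent frames. NumPy is assumed.

    import numpy as np

    def column_weights(reference_frame: np.ndarray, eps: float = 1e-6) -> np.ndarray:
        """Per-column weights that equalise the column means of a reference fluoroscopy frame."""
        col_means = reference_frame.mean(axis=0)
        return reference_frame.mean() / (col_means + eps)

    def apply_weights(frames: np.ndarray, weights: np.ndarray) -> np.ndarray:
        """Apply the same column weights to a stack of frames with shape (T, H, W)."""
        return frames * weights[np.newaxis, np.newaxis, :]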

  18. Real-time inference of word relevance from electroencephalogram and eye gaze

    NASA Astrophysics Data System (ADS)

    Wenzel, M. A.; Bogojeski, M.; Blankertz, B.

    2017-10-01

    Objective. Brain-computer interfaces can potentially map the subjective relevance of the visual surroundings, based on neural activity and eye movements, in order to infer the interest of a person in real-time. Approach. Readers looked for words belonging to one out of five semantic categories, while a stream of words passed at different locations on the screen. It was estimated in real-time which words and thus which semantic category interested each reader based on the electroencephalogram (EEG) and the eye gaze. Main results. Words that were subjectively relevant could be decoded online from the signals. The estimation resulted in an average rank of 1.62 for the category of interest among the five categories after a hundred words had been read. Significance. It was demonstrated that the interest of a reader can be inferred online from EEG and eye tracking signals, which can potentially be used in novel types of adaptive software, which enrich the interaction by adding implicit information about the interest of the user to the explicit interaction. The study is characterised by the following novelties. Interpretation with respect to the word meaning was necessary in contrast to the usual practice in brain-computer interfacing where stimulus recognition is sufficient. The typical counting task was avoided because it would not be sensible for implicit relevance detection. Several words were displayed at the same time, in contrast to the typical sequences of single stimuli. Neural activity was related with eye tracking to the words, which were scanned without restrictions on the eye movements.

  19. Respiratory-Induced Prostate Motion Using Wavelet Decomposition of the Real-Time Electromagnetic Tracking Signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Yuting; Liu, Tian; Yang, Xiaofeng

    2013-10-01

    Purpose: The objective of this work is to characterize and quantify the impact of respiratory-induced prostate motion. Methods and Materials: Real-time intrafraction motion is observed with the Calypso 4-dimensional nonradioactive electromagnetic tracking system (Calypso Medical Technologies, Inc., Seattle, Washington). We report the results from a total of 1024 fractions from 31 prostate cancer patients. A wavelet transform was used to decompose the signal to extract and isolate the respiratory-induced prostate motion from the total prostate displacement. Results: Our results show that average respiratory motion larger than 0.5 mm can be observed in 68% of the fractions. Fewer than 1% of the patients showed average respiratory motion of less than 0.2 mm, whereas 99% of the patients showed average respiratory-induced motion ranging between 0.2 and 2 mm. The maximum respiratory range of motion of 3 mm or greater was seen in only 25% of the fractions. In addition, about 2% of patients showed anxiety, indicated by a breathing frequency above 24 times per minute. Conclusions: Prostate motion is influenced by respiration in most fractions. Real-time intrafraction data are sensitive enough to measure the impact of respiration by use of wavelet decomposition methods. Although the average respiratory amplitude observed in this study is small, this technique provides a tool that can be useful if one moves to smaller treatment margins (≤5 mm). This also opens up the possibility of developing patient-specific margins, knowing that prostate motion is not unpredictable.
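
    The wavelet step can be sketched as below; this is an illustration rather than the study's code. It assumes the PyWavelets package, a displacement trace sampled at a known rate, and keeps only the detail levels whose approximate pass-band overlaps a nominal respiratory band; the wavelet choice and band limits are placeholder parameters.

    import numpy as np
    import pywt

    def respiratory_component(displacement: np.ndarray, fs_hz: float,
                              wavelet: str = "db4", band=(0.1, 0.5)) -> np.ndarray:
        """Reconstruct only the wavelet detail levels overlapping the respiratory band."""
        max_level = pywt.dwt_max_level(len(displacement), pywt.Wavelet(wavelet).dec_len)
        coeffs = pywt.wavedec(displacement, wavelet, level=max_level)
        kept = [np.zeros_like(coeffs[0])]                  # drop the approximation (slow drift)
        for j, detail in enumerate(coeffs[1:], start=1):
            level = max_level - j + 1                      # decomposition level of this entry
            f_hi = fs_hz / 2.0 ** level                    # approximate pass-band [f_hi/2, f_hi]
            keep = f_hi >= band[0] and f_hi / 2.0 <= band[1]
            kept.append(detail if keep else np.zeros_like(detail))
        return pywt.waverec(kept, wavelet)[: len(displacement)]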

  20. Finger tracking for hand-held device interface using profile-matching stereo vision

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Ping; Lee, Dah-Jye; Moore, Jason; Desai, Alok; Tippetts, Beau

    2013-01-01

    Hundreds of millions of people use hand-held devices frequently and control them by touching the screen with their fingers. If this method of operation is being used by people who are driving, the probability of deaths and accidents occurring substantially increases. With a non-contact control interface, people do not need to touch the screen. As a result, people will not need to pay as much attention to their phones and thus drive more safely than they would otherwise. This interface can be achieved with real-time stereovision. A novel Intensity Profile Shape-Matching Algorithm is able to obtain 3-D information from a pair of stereo images in real time. While this algorithm does have a trade-off between accuracy and processing speed, the results show that its accuracy is sufficient for the practical use of recognizing human poses and finger movement tracking. By choosing an interval of disparity, an object at a certain distance range can be segmented. In other words, we detect the object by its distance to the cameras. The advantage of this profile shape-matching algorithm is that detection of correspondences relies on the shape of the profile and not on intensity values, which are subject to lighting variations. Based on the resulting 3-D information, the movement of fingers in space from a specific distance can be determined. Finger location and movement can then be analyzed for non-contact control of hand-held devices.
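
    A minimal sketch of the disparity-interval segmentation step follows (our illustration, not the authors' algorithm). It assumes a dense disparity map from any stereo matcher and OpenCV's connected-components routine, and returns the largest blob whose disparity falls inside the interval corresponding to the expected hand distance.

    import cv2
    import numpy as np

    def segment_by_disparity(disparity: np.ndarray, d_min: float, d_max: float) -> np.ndarray:
        """Binary mask of the largest object whose disparity lies in [d_min, d_max]."""
        mask = ((disparity >= d_min) & (disparity <= d_max)).astype(np.uint8)
        num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        if num <= 1:                                        # nothing but background found
            return np.zeros_like(mask)
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))   # skip background label 0
        return (labels == largest).astype(np.uint8)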

  1. GEO Optical Data Association with Concurrent Metric and Photometric Information

    NASA Astrophysics Data System (ADS)

    Dao, P.; Monet, D.

    Data association in a congested area of the GEO belt with occasional visits by non-resident objects can be treated as a Multi-Target-Tracking (MTT) problem. For a stationary sensor surveilling the GEO belt, geosynchronous and near GEO objects are not completely motionless in the earth-fixed frame and can be observed as moving targets. In some clusters, metric or positional information is insufficiently accurate or up-to-date to associate the measurements. In the presence of measurements with uncertain origin, star tracks (residuals) and other sensor artifacts, heuristic techniques based on hard decision assignment do not perform adequately. In the MTT community, Bar-Shalom [2009 Bar-Shalom] was the first to introduce the use of measurements to update the state of the target of interest in the tracking filter, e.g., a Kalman filter. Following Bar-Shalom’s idea, we use the Probabilistic Data Association Filter (PDAF), but to make use of all information obtainable in the measurement of three-axis-stabilized GEO satellites, we combine photometric with metric measurements to update the filter. Therefore, our technique, Concurrent Spatio-Temporal and Brightness (COSTB), has the stand-alone ability of associating a track with its identity for resident objects. That is possible because the light curve of a stabilized GEO satellite changes minimally from night to night. We exercised COSTB on camera cadence data to associate measurements, correct mistags and detect non-residents in a simulated near-real-time cadence. Data on GEO clusters were used.

  2. Design, implementation and accuracy of a prototype for medical augmented reality.

    PubMed

    Pandya, Abhilash; Siadat, Mohammad-Reza; Auner, Greg

    2005-01-01

    This paper is focused on prototype development and accuracy evaluation of a medical Augmented Reality (AR) system. The accuracy of such a system is of critical importance for medical use, and is hence considered in detail. We analyze the individual error contributions and the system accuracy of the prototype. A passive articulated arm is used to track a calibrated end-effector-mounted video camera. The live video view is superimposed in real time with the synchronized graphical view of CT-derived segmented object(s) of interest within a phantom skull. The AR accuracy mostly depends on the accuracy of the tracking technology, the registration procedure, the camera calibration, and the image scanning device (e.g., a CT or MRI scanner). The accuracy of the Microscribe arm was measured to be 0.87 mm. After mounting the camera on the tracking device, the AR accuracy was measured to be 2.74 mm on average (standard deviation = 0.81 mm). After using data from a 2-mm-thick CT scan, the AR error remained essentially the same at an average of 2.75 mm (standard deviation = 1.19 mm). For neurosurgery, the acceptable error is approximately 2-3 mm, and our prototype approaches these accuracy requirements. The accuracy could be increased with a higher-fidelity tracking system and improved calibration and object registration. The design and methods of this prototype device can be extrapolated to current medical robotics (due to the kinematic similarity) and neuronavigation systems.

  3. U.S. Energy Disruptions

    EIA Publications

    EIA tracks and reports on selected significant storms that impact or could potentially impact energy infrastructure. See past historical events reported or real-time storm tracking with energy infrastructure maps.

  4. Developing an electronic system to manage and track emergency medications.

    PubMed

    Hamm, Mark W; Calabrese, Samuel V; Knoer, Scott J; Duty, Ashley M

    2018-03-01

    The development of a Web-based program to track and manage emergency medications with radio frequency identification (RFID) is described. At the Cleveland Clinic, medication kit restocking records and dispense locations were historically documented using a paper record-keeping system. The Cleveland Clinic investigated options to replace the paper-based tracking logs with a Web-based program that could track the real-time location and inventory of emergency medication kits. Vendor collaboration with a board of pharmacy (BOP) compliance inspector and pharmacy personnel resulted in the creation of a dual barcoding system using medication and pocket labels. The Web-based program was integrated with a Cleveland Clinic-developed asset tracking system using active RFID tags to give the real-time location of the medication kit. The Web-based program and the asset tracking system allowed identification of kits nearing expiration or containing recalled medications. Conversion from a paper-based system to a Web-based program began in October 2013. After 119 days, data were evaluated to assess the success of the conversion. Pharmacists spent an average of 27 minutes per day approving medication kits during the postimplementation period versus 102 minutes daily using the paper-based system, representing a 74% decrease in pharmacist time spent on this task. Prospective reports are generated monthly to allow the manager to assess the expected workload and adjust staffing for the next month. Implementation of a BOP-approved Web-based system for managing and tracking emergency medications with RFID integration decreased pharmacist review time, minimized compliance risk, and increased access to real-time data. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  5. Respiratory sinus arrhythmia as a measure of cognitive workload.

    PubMed

    Muth, Eric R; Moss, Jason D; Rosopa, Patrick J; Salley, James N; Walker, Alexander D

    2012-01-01

    The current standard for measuring cognitive workload is the NASA Task-load Index (TLX) questionnaire. Although this measure has a high degree of reliability, diagnosticity, and sensitivity, a reliable physiological measure of cognitive workload could provide a non-invasive, objective measure of workload that could be tracked in real or near real-time without interrupting the task. This study investigated changes in respiratory sinus arrhythmia (RSA) during seven different sub-sections of a proposed selection test for Navy aviation and compared them to changes reported on the NASA-TLX. 201 healthy participants performed the seven tasks of the Navy's Performance Based Measure. RSA was measured during each task and the NASA-TLX was administered after each task. Multi-level modeling revealed that RSA significantly predicted NASA-TLX scores. A moderate within-subject correlation was also found between RSA and NASA TLX scores. The findings support the potential development of RSA as a real-time measure of cognitive workload. Copyright © 2011. Published by Elsevier B.V.

  6. HARPs: Tracked Active Region Patch Data Product from SDO/HMI

    NASA Astrophysics Data System (ADS)

    Turmon, M.; Hoeksema, J. T.; Sun, X.; Bobra, M.

    2012-12-01

    We describe an HMI data product consisting of tracked magnetic features on the scale of solar active regions, abbreviated HARPs (HMI Active Region Patches). The HARP data series has been helpful for subsetting individual active regions, for development of near-real-time (NRT) space weather indices for individual active regions, and for defining closed magnetic structures for computationally-intensive algorithms like vector field disambiguation. The data series builds upon the 720s cadence activity masks, which identify large-scale instantaneously-observed magnetic features. Using these masks as a starting point, large spatially-coherent structures are identified using convolution with a longitudinally-extended kernel on a spherical domain. The resulting set of identified regions is then tracked from image to image. The metric for inter-image association is area of overlap between the best current estimate of AR location, as predicted by temporally extrapolating each currently tracked object, and the set of instantaneously-observed magnetic structures. Once completed tracks have been extracted, they are made into a standardized HARP data series by finding the smallest constant-angular-velocity box, of constant width in latitude and longitude, that encompasses all appearances of the active region. This data product is currently available, in definitive and near-real-time forms, with accompanying region-strength, location, and NOAA-AR metadata, on HMI's Joint Science Operations Center (JSOC) data portal.
    Figure caption: HARP outlines for three days (2001 February 14, 15, and 16, 00:00 TAI; flipped N-S; selected from the 12-minute cadence original data product). HARPs are shown in the same color (some colors repeated) with a thin white box surrounding each HARP. HARPs are tracked and associated from image to image. HARPs, such as the yellow one in the images, need not be connected regions. Merges and splits, such as the light blue region, are accounted for automatically.
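
    The overlap metric used for inter-image association can be sketched as follows; this is our illustration under the assumption of binary region masks on a common pixel grid, not the HMI pipeline code.

    import numpy as np
    from typing import List, Optional

    def associate_by_overlap(predicted_mask: np.ndarray,
                             candidate_masks: List[np.ndarray],
                             min_pixels: int = 1) -> Optional[int]:
        """Index of the candidate region overlapping the predicted region most, or None."""
        overlaps = [int(np.logical_and(predicted_mask, m).sum()) for m in candidate_masks]
        if not overlaps or max(overlaps) < min_pixels:
            return None
        return int(np.argmax(overlaps))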

  7. Real and virtual explorations of the environment and interactive tracking of movable objects for the blind on the basis of tactile-acoustical maps and 3D environment models.

    PubMed

    Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas

    2008-01-01

    PURPOSE: This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning sensor (GPS), then the user establishes an exact position at a specific landmark, like a door. This location initialises indoor navigation, based on an inertial sensor, a step recognition algorithm and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on the touch screen of a small portable tablet PC. The names of rooms, building features and hazards, modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.

  8. Development and evaluation of a prototype tracking system using the treatment couch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Stephanie, E-mail: stephanie.lang@usz.ch; Riesterer, Oliver; Klöck, Stephan

    2014-02-15

    Purpose: Tumor motion increases safety margins around the clinical target volume and leads to an increased dose to the surrounding healthy tissue. The authors have developed and evaluated a one-dimensional treatment couch tracking system to counter steer respiratory tumor motion. Three different motion detection sensors with different lag times were evaluated. Methods: The couch tracking system consists of a motion detection sensor, which can be the topometrical system Topos (Cyber Technologies, Germany), the respiratory gating system RPM (Varian Medical Systems) or a laser triangulation system (Micro Epsilon), and the Protura treatment couch (Civco Medical Systems). The control of the treatment couch was implemented in the block diagram environment Simulink (MathWorks). To achieve real time performance, the Simulink models were executed on a real time engine, provided by Real-Time Windows Target (MathWorks). A proportional-integral control system was implemented. The lag time of the couch tracking system using the three different motion detection sensors was measured. The geometrical accuracy of the system was evaluated by measuring the mean absolute deviation from the reference (static position) during motion tracking. This deviation was compared to the mean absolute deviation without tracking and a reduction factor was defined. A hexapod system was moving according to seven respiration patterns previously acquired with the RPM system as well as according to a sin^6 function with two different frequencies (0.33 and 0.17 Hz) and the treatment table compensated the motion. Results: A prototype system for treatment couch tracking of respiratory motion was developed. The laser based tracking system with a small lag time of 57 ms reduced the residual motion by a factor of 11.9 ± 5.5 (mean value ± standard deviation). An increase in delay time from 57 to 130 ms (RPM based system) resulted in a reduction by a factor of 4.7 ± 2.6. The Topos based tracking system with the largest lag time of 300 ms achieved a mean reduction by a factor of 3.4 ± 2.3. The increase in the penumbra of a profile (1 × 1 cm²) for a motion of 6 mm was 1.4 mm. With tracking applied there was no increase in the penumbra. Conclusions: Couch tracking with the Protura treatment couch is achievable. To reliably track all possible respiration patterns without prediction filters a short lag time below 100 ms is needed. More scientific work is necessary to extend our prototype to tracking of internal motion.
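
    A minimal sketch of the proportional-integral control loop is given below; the gains, the 50 Hz sample rate, the breathing trace and the simplified kinematic couch model are assumptions for illustration, not the values or plant model used in the Simulink implementation.

    import numpy as np

    class PIController:
        def __init__(self, kp: float, ki: float, dt: float):
            self.kp, self.ki, self.dt = kp, ki, dt
            self.integral = 0.0

        def step(self, target_pos: float, couch_pos: float) -> float:
            """Velocity command driving the couch to follow the measured displacement."""
            error = target_pos - couch_pos
            self.integral += error * self.dt
            return self.kp * error + self.ki * self.integral

    # Example: follow a sin^6-like breathing trace (6 mm amplitude) sampled at an assumed 50 Hz
    dt = 0.02
    t = np.arange(0.0, 30.0, dt)
    target = 6.0 * np.sin(2.0 * np.pi * 0.17 * t) ** 6
    controller, couch = PIController(kp=4.0, ki=1.0, dt=dt), 0.0
    for target_k in target:
        couch += controller.step(target_k, couch) * dt      # integrate the velocity command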

  9. Paleotempestological Record of Intense Storms for the Northern Gulf of Mexico, United States

    NASA Astrophysics Data System (ADS)

    Bregy, J. C.; Wallace, D. J.

    2016-12-01

    Real-time measurements from many hundreds of GNSS tracking sites around the world are publicly available today, and the amount of streaming data is steadily increasing as national agencies densify their local and global infrastructure for natural hazard monitoring and a variety of geodetic, cadastral, and other civil applications. Thousands of such sites can soon be expected on a global scale. It is a challenge to manage and make optimal use of this massive amount of real-time data. We advocate the creation of Supersite(s), in the parlance of the U.N. Global Earth Observation System of Systems (https://www.earthobservations.org/geoss.php), to generate high-level real-time data products from the raw GNSS measurements from all available sources (many thousands of sites). These products include:
    • High-rate, real-time positioning time series for assessing rapid crustal motion due to earthquakes, volcanic activity, landslides, etc.
    • Co-seismic displacement to help resolve earthquake mechanism and moment magnitude
    • Real-time total electron content (TEC) fluctuations to augment DART buoys in detecting and tracking tsunamis
    • Aggregation of the many disparate raw data dispensation servers (Casters)
    Recognizing that natural hazards transcend national boundaries in terms of direct and indirect (e.g., economic, security) impact, the benefits from centralized, authoritative processing of GNSS measurements are manifold:
    • Offers a one-stop shop to less developed nations and institutions for raw and high-level products, in support of research and applications
    • Promotes the installation of tracking sites and the contribution of data from nations without the ability to process the data
    • Reduces dependency on local responsible agencies impacted by a natural disaster
    • Reliable 24/7 operations, independent of voluntary, best-effort contributions from good-willing scientific organizations
    The JPL GNSS Real-Time Earthquake and Tsunami (GREAT) Alert has been operating as a prototype for such a Supersite for nearly a decade, processing in real time data from hundreds of global and regional GNSS tracking sites. The existing operational infrastructure, complete self-sufficiency, and proven reliability can be leveraged at low cost to provide valuable natural hazard monitoring to the U.S. and the world.

  10. Indoor Pedestrian Navigation Using Foot-Mounted IMU and Portable Ultrasound Range Sensors

    PubMed Central

    Girard, Gabriel; Côté, Stéphane; Zlatanova, Sisi; Barette, Yannick; St-Pierre, Johanne; van Oosterom, Peter

    2011-01-01

    Many solutions have been proposed for indoor pedestrian navigation. Some rely on pre-installed sensor networks, which offer good accuracy but are limited to areas that have been prepared for that purpose, thus requiring an expensive and possibly time-consuming process. Such methods are therefore inappropriate for navigation in emergency situations since the power supply may be disturbed. Other types of solutions track the user without requiring a prepared environment. However, they may have low accuracy. Offline tracking has been proposed to increase accuracy, however this prevents users from knowing their position in real time. This paper describes a real time indoor navigation system that does not require prepared building environments and provides tracking accuracy superior to previously described tracking methods. The system uses a combination of four techniques: foot-mounted IMU (Inertial Motion Unit), ultrasonic ranging, particle filtering and model-based navigation. The very purpose of the project is to combine these four well-known techniques in a novel way to provide better indoor tracking results for pedestrians. PMID:22164034

  11. Validation of a method for real time foot position and orientation tracking with Microsoft Kinect technology for use in virtual reality and treadmill based gait training programs.

    PubMed

    Paolini, Gabriele; Peruzzi, Agnese; Mirelman, Anat; Cereatti, Andrea; Gaukrodger, Stephen; Hausdorff, Jeffrey M; Della Croce, Ugo

    2014-09-01

    The use of virtual reality for the provision of motor-cognitive gait training has been shown to be effective for a variety of patient populations. The interaction between the user and the virtual environment is achieved by tracking the motion of the body parts and replicating it in the virtual environment in real time. In this paper, we present the validation of a novel method for tracking foot position and orientation in real time, based on the Microsoft Kinect technology, to be used for gait training combined with virtual reality. The validation of the motion tracking method was performed by comparing the tracking performance of the new system against a stereo-photogrammetric system used as gold standard. Foot position errors were in the order of a few millimeters (average RMSD from 4.9 to 12.1 mm in the medio-lateral and vertical directions, from 19.4 to 26.5 mm in the anterior-posterior direction); the foot orientation errors were also small (average %RMSD from 5.6% to 8.8% in the medio-lateral and vertical directions, from 15.5% to 18.6% in the anterior-posterior direction). The results suggest that the proposed method can be effectively used to track feet motion in virtual reality and treadmill-based gait training programs.

  12. A Protocol for Real-time 3D Single Particle Tracking.

    PubMed

    Hou, Shangguo; Welsher, Kevin

    2018-01-03

    Real-time three-dimensional single particle tracking (RT-3D-SPT) has the potential to shed light on fast, 3D processes in cellular systems. Although various RT-3D-SPT methods have been put forward in recent years, tracking high speed 3D diffusing particles at low photon count rates remains a challenge. Moreover, RT-3D-SPT setups are generally complex and difficult to implement, limiting their widespread application to biological problems. This protocol presents a RT-3D-SPT system named 3D Dynamic Photon Localization Tracking (3D-DyPLoT), which can track particles with high diffusive speed (up to 20 µm²/s) at low photon count rates (down to 10 kHz). 3D-DyPLoT employs a 2D electro-optic deflector (2D-EOD) and a tunable acoustic gradient (TAG) lens to drive a single focused laser spot dynamically in 3D. Combined with an optimized position estimation algorithm, 3D-DyPLoT can lock onto single particles with high tracking speed and high localization precision. Owing to the single excitation and single detection path layout, 3D-DyPLoT is robust and easy to set up. This protocol discusses how to build 3D-DyPLoT step by step. First, the optical layout is described. Next, the system is calibrated and optimized by raster scanning a 190 nm fluorescent bead with the piezoelectric nanopositioner. Finally, to demonstrate real-time 3D tracking ability, 110 nm fluorescent beads are tracked in water.

  13. Along-Track Reef Imaging System (ATRIS)

    USGS Publications Warehouse

    Brock, John; Zawada, Dave

    2006-01-01

    "Along-Track Reef Imaging System (ATRIS)" describes the U.S. Geological Survey's Along-Track Reef Imaging System, a boat-based sensor package for rapidly mapping shallow water benthic environments. ATRIS acquires high resolution, color digital images that are accurately geo-located in real-time.

  14. Real-time eye tracking for the assessment of driver fatigue.

    PubMed

    Xu, Junli; Min, Jianliang; Hu, Jianfeng

    2018-04-01

    Eye-tracking is an important approach to collect evidence regarding some participants' driving fatigue. In this contribution, the authors present a non-intrusive system for evaluating driver fatigue by tracking eye movement behaviours. A real-time eye-tracker was used to monitor participants' eye state for collecting eye-movement data. These data are useful to get insights into assessing participants' fatigue state during monotonous driving. Ten healthy subjects performed continuous simulated driving for 1-2 h with eye state monitoring on a driving simulator in this study, and the measured features of fixation time and pupil area were recorded using an eye movement tracking device. For achieving a good cost-performance ratio and fast computation time, the fuzzy K-nearest neighbour was employed to evaluate and analyse the influence of different participants on the variations in the fixation duration and pupil area of drivers. The findings of this study indicated that there are significant differences in the domain value distribution of the pupil area between the normal and fatigued driving states. Results also suggest that the recognition accuracy by jackknife validation reaches about 89% on average, indicating significant potential for real-time application of the proposed approach and its capability to detect driver fatigue.
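
    For illustration only (not the authors' code), a common fuzzy K-nearest-neighbour formulation with inverse-distance membership weights can be sketched on top of scikit-learn's neighbour search; here X would hold features such as fixation duration and pupil area, and y the alert/fatigued labels.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def fuzzy_knn_predict(X_train, y_train, X_test, k=5, m=2.0, eps=1e-9):
        """Return predicted labels and per-class membership scores of shape (n_test, n_classes)."""
        y_train = np.asarray(y_train)
        classes = np.unique(y_train)
        nn = NearestNeighbors(n_neighbors=k).fit(X_train)
        dist, idx = nn.kneighbors(X_test)
        weights = 1.0 / (dist ** (2.0 / (m - 1.0)) + eps)         # inverse-distance weights
        memberships = np.zeros((len(X_test), len(classes)))
        for c_i, c in enumerate(classes):
            is_c = (y_train[idx] == c).astype(float)              # crisp neighbour labels
            memberships[:, c_i] = (weights * is_c).sum(axis=1) / weights.sum(axis=1)
        return classes[np.argmax(memberships, axis=1)], memberships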

  15. Real-time eye tracking for the assessment of driver fatigue

    PubMed Central

    Xu, Junli; Min, Jianliang

    2018-01-01

    Eye-tracking is an important approach to collect evidence regarding some participants’ driving fatigue. In this contribution, the authors present a non-intrusive system for evaluating driver fatigue by tracking eye movement behaviours. A real-time eye-tracker was used to monitor participants’ eye state for collecting eye-movement data. These data are useful to get insights into assessing participants’ fatigue state during monotonous driving. Ten healthy subjects performed continuous simulated driving for 1–2 h with eye state monitoring on a driving simulator in this study, and the measured features of fixation time and pupil area were recorded using an eye movement tracking device. For achieving a good cost-performance ratio and fast computation time, the fuzzy K-nearest neighbour was employed to evaluate and analyse the influence of different participants on the variations in the fixation duration and pupil area of drivers. The findings of this study indicated that there are significant differences in the domain value distribution of the pupil area between the normal and fatigued driving states. Results also suggest that the recognition accuracy by jackknife validation reaches about 89% on average, indicating significant potential for real-time application of the proposed approach and its capability to detect driver fatigue. PMID:29750113

  16. DSN Resource Scheduling

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Baldwin, John

    2007-01-01

    TIGRAS is client-side software, which provides tracking-station equipment planning, allocation, and scheduling services to the DSMS (Deep Space Mission System). TIGRAS provides functions for schedulers to coordinate the DSN (Deep Space Network) antenna usage time and to resolve resource usage conflicts among tracking passes, antenna calibrations, maintenance, and system testing activities. TIGRAS provides a fully integrated multi-pane graphical user interface for all scheduling operations. This is a great improvement over the legacy VAX VMS command line user interface. TIGRAS has the capability to handle all DSN resource scheduling aspects from long-range to real time. TIGRAS assists NASA mission operations with DSN tracking-station equipment resource request processes, from long-range load forecasts (ten years or longer) to midrange, short-range, and real-time (less than one week) emergency tracking plan changes. TIGRAS can be operated by NASA mission operations worldwide to make schedule requests for the DSN station equipment.

  17. On-Line Tracking Controller for Brushless DC Motor Drives Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Rubaai, Ahmed

    1996-01-01

    A real-time control architecture is developed for time-varying nonlinear brushless dc motors operating in a high performance drives environment. The developed control architecture possesses the capabilities of simultaneous on-line identification and control. The dynamics of the motor are modeled on-line and controlled using an artificial neural network, as the system runs. The control architecture combines the experience and dependability of adaptive tracking systems with the potential and promise of neural computing technology. The sensitivity of the real-time controller to parametric changes that occur during training is investigated. Such changes are usually manifested by rapid changes in the load of the brushless motor drives. This sudden change in the external load is simulated for the sigmoidal and sinusoidal reference tracks. The ability of the neuro-controller to maintain reasonable tracking accuracy in the presence of external noise is also verified for a number of desired reference trajectories.

  18. Detecting multiple moving objects in crowded environments with coherent motion regions

    DOEpatents

    Cheriyadat, Anil M.; Radke, Richard J.

    2013-06-11

    Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks, as a result of enforcing the constraint that selected coherent motion regions contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. An algorithm operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, each a measure of the maximum distance between a pair of feature point tracks.

  19. Telerobotic system concept for real-time soft-tissue imaging during radiotherapy beam delivery.

    PubMed

    Schlosser, Jeffrey; Salisbury, Kenneth; Hristov, Dimitre

    2010-12-01

    The curative potential of external beam radiation therapy is critically dependent on having the ability to accurately aim radiation beams at intended targets while avoiding surrounding healthy tissues. However, existing technologies are incapable of real-time, volumetric, soft-tissue imaging during radiation beam delivery, when accurate target tracking is most critical. The authors address this challenge in the development and evaluation of a novel, minimally interfering, telerobotic ultrasound (U.S.) imaging system that can be integrated with existing medical linear accelerators (LINACs) for therapy guidance. A customized human-safe robotic manipulator was designed and built to control the pressure and pitch of an abdominal U.S. transducer while avoiding LINAC gantry collisions. A haptic device was integrated to remotely control the robotic manipulator motion and U.S. image acquisition outside the LINAC room. The ability of the system to continuously maintain high quality prostate images was evaluated in volunteers over extended time periods. Treatment feasibility was assessed by comparing a clinically deployed prostate treatment plan to an alternative plan in which beam directions were restricted to sectors that did not interfere with the transabdominal U.S. transducer. To demonstrate imaging capability concurrent with delivery, robot performance and U.S. target tracking in a phantom were tested with a 15 MV radiation beam active. Remote image acquisition and maintenance of image quality with the haptic interface was successfully demonstrated over 10 min periods in representative treatment setups of volunteers. Furthermore, the robot's ability to maintain a constant probe force and desired pitch angle was unaffected by the LINAC beam. For a representative prostate patient, the dose-volume histogram (DVH) for a plan with restricted sectors remained virtually identical to the DVH of a clinically deployed plan. With reduced margins, as would be enabled by real-time imaging, gross tumor volume coverage was identical while notable reductions of bladder and rectal volumes exposed to large doses were possible. The quality of U.S. images obtained during beam operation was not appreciably degraded by radiofrequency interference and 2D tracking of a phantom object in U.S. images obtained with the beam on/off yielded no significant differences. Remotely controlled robotic U.S. imaging is feasible in the radiotherapy environment and for the first time may offer real-time volumetric soft-tissue guidance concurrent with radiotherapy delivery.

  20. Aerial video mosaicking using binary feature tracking

    NASA Astrophysics Data System (ADS)

    Minnehan, Breton; Savakis, Andreas

    2015-05-01

    Unmanned Aerial Vehicles are becoming an increasingly attractive platform for many applications, as their cost decreases and their capabilities increase. Creating detailed maps from aerial data requires fast and accurate video mosaicking methods. Traditional mosaicking techniques rely on inter-frame homography estimations that are cascaded through the video sequence. Computationally expensive keypoint matching algorithms are often used to determine the correspondence of keypoints between frames. This paper presents a video mosaicking method that uses an object tracking approach for matching keypoints between frames to improve both efficiency and robustness. The proposed tracking method matches local binary descriptors between frames and leverages the spatial locality of the keypoints to simplify the matching process. Our method is robust to cascaded errors by determining the homography between each frame and the ground plane rather than the prior frame. The frame-to-ground homography is calculated based on the relationship of each point's image coordinates and its estimated location on the ground plane. Robustness to moving objects is integrated into the homography estimation step through detecting anomalies in the motion of keypoints and eliminating the influence of outliers. The resulting mosaics are of high accuracy and can be computed in real time.
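
    The matching step can be sketched with standard OpenCV primitives (an illustration under our own assumptions, not the authors' tracker); it shows frame-to-frame matching of binary ORB descriptors with a spatial-locality gate and a RANSAC homography, whereas the paper goes one step further and relates each frame to the ground plane to avoid cascading errors.

    import cv2
    import numpy as np

    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def frame_homography(prev_gray, curr_gray, max_shift_px=60.0):
        """Robust homography between consecutive frames from spatially gated ORB matches."""
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)
        src, dst = [], []
        for match in matcher.match(des1, des2):
            p = np.array(kp1[match.queryIdx].pt)
            q = np.array(kp2[match.trainIdx].pt)
            if np.linalg.norm(p - q) < max_shift_px:          # spatial-locality gate
                src.append(p)
                dst.append(q)
        if len(src) < 4:                                      # too few matches for a homography
            return None
        H, _ = cv2.findHomography(np.float32(src), np.float32(dst), cv2.RANSAC, 3.0)
        return H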

  1. The Real-Time ObjectAgent Software Architecture for Distributed Satellite Systems

    DTIC Science & Technology

    2001-01-01

    real-time operating system selection are also discussed. The fourth section describes a simple demonstration of real-time ObjectAgent. Finally, the...experience with C++. After selecting the programming language, it was necessary to select a target real-time operating system (RTOS) and embedded...ObjectAgent software to run on the OSE Real Time Operating System. In addition, she is responsible for the integration of ObjectAgent

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caillet, V; Colvill, E; Royal North Shore Hospital, Sydney, NSW

    Purpose: The objective of this study was to investigate the dosimetric benefits of multi-leaf collimator (MLC) tracking for lung SABR treatments in end-to-end clinically realistic planning and delivery scenarios. Methods: The clinical benefits of MLC tracking were assessed using previously delivered treatment plans and physical experiments. The 10 most recent single lesion lung SABR patients were re-planned following a 4D-GTV-based real-time adaptive protocol (PTV defined as the end-of-exhalation GTV plus 5.0 mm margins). The plans were delivered on a Trilogy Varian linac. Electromagnetic transponders (Calypso, Varian Medical Systems, USA) were embedded into a programmable moving phantom (HexaMotion platform) tracked with the Varian Calypso system. For each physical experiment, the MLC positions were collected and used as input for dose reconstruction. For both planned and physical experiments, the OAR dose metrics from the conventional and real-time adaptive SABR plans (Mean Lung Dose (MLD), V20 for lung, and near-maximum dose (D2%) for spine and heart) were statistically compared. The Wilcoxon test was used to compare plan and physical experiment dose metrics. Results: While maintaining target coverage, percentage reductions in dose metrics to the OARs were observed for both planned and physical experiments. Comparing the two plans showed an MLD percentage reduction (MLDr) of 25.4% (absolute difference of 1.41 Gy) and 28.9% (1.29%) for the V20r. D2% percentage reductions for spine and heart were respectively 27.9% (0.3 Gy) and 20.2% (0.3 Gy). For the physical experiments, MLDr was 23.9% (1.3 Gy), and V20r 37.4% (1.6%). D2% reductions for spine and heart were respectively 27.3% (0.3 Gy) and 19.6% (0.3 Gy). For both plans and physical experiments, significant OAR dose differences (p<0.05) were found between the conventional SABR and real-time adaptive plans. Conclusion: Application of MLC tracking for lung SABR patients has the potential to reduce the dose to OARs during radiation therapy.

  3. Fast leaf-fitting with generalized underdose/overdose constraints for real-time MLC tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Douglas, E-mail: douglas.moore@utsouthwestern.edu; Sawant, Amit; Ruan, Dan

    2016-01-15

    Purpose: Real-time multileaf collimator (MLC) tracking is a promising approach to the management of intrafractional tumor motion during thoracic and abdominal radiotherapy. MLC tracking is typically performed in two steps: transforming a planned MLC aperture in response to patient motion and refitting the leaves to the newly generated aperture. One of the challenges of this approach is the inability to faithfully reproduce the desired motion-adapted aperture. This work presents an optimization-based framework with which to solve this leaf-fitting problem in real-time. Methods: This optimization framework is designed to facilitate the determination of leaf positions in real-time while accounting for the trade-off between coverage of the PTV and avoidance of organs at risk (OARs). Derived within this framework, an algorithm is presented that can account for general linear transformations of the planned MLC aperture, particularly 3D translations and in-plane rotations. This algorithm, together with algorithms presented in Sawant et al. [“Management of three-dimensional intrafraction motion through real-time DMLC tracking,” Med. Phys. 35, 2050–2061 (2008)] and Ruan and Keall [Presented at the 2011 IEEE Power Engineering and Automation Conference (PEAM) (2011) (unpublished)], was applied to apertures derived from eight lung intensity modulated radiotherapy plans subjected to six-degree-of-freedom motion traces acquired from lung cancer patients using the kilovoltage intrafraction monitoring system developed at the University of Sydney. A quality-of-fit metric was defined, and each algorithm was evaluated in terms of quality-of-fit and computation time. Results: This algorithm is shown to perform leaf-fittings of apertures, each with 80 leaf pairs, in 0.226 ms on average as compared to 0.082 and 64.2 ms for the algorithms of Sawant et al., Ruan, and Keall, respectively. The algorithm shows approximately 12% improvement in quality-of-fit over the Sawant et al. approach, while performing comparably to Ruan and Keall. Conclusions: This work improves upon the quality of the Sawant et al. approach, but does so without sacrificing run-time performance. In addition, using this framework allows for complex leaf-fitting strategies that can be used to account for PTV/OAR trade-off during real-time MLC tracking.
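
    A hedged sketch of the underlying trade-off (not the published algorithm): sample the transformed aperture as a binary profile for one leaf pair and choose the open interval minimising a weighted sum of underdose and overdose. With per-cell scores alpha*d - beta*(1-d) this reduces to a maximum-sum contiguous subarray, solvable in O(n) per leaf row.

    import numpy as np

    def fit_leaf_row(desired_row: np.ndarray, alpha: float = 1.0, beta: float = 1.0):
        """Return (start, end) sample indices of the open interval, or None to keep leaves closed."""
        scores = alpha * desired_row - beta * (1.0 - desired_row)
        best_sum, best = 0.0, None          # the empty interval (closed leaves) scores 0
        run_sum, run_start = 0.0, 0
        for i, s in enumerate(scores):      # Kadane's maximum-sum subarray
            if run_sum <= 0.0:
                run_sum, run_start = s, i
            else:
                run_sum += s
            if run_sum > best_sum:
                best_sum, best = run_sum, (run_start, i + 1)
        return best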

  4. Subjective evaluation of two stereoscopic imaging systems exploiting visual attention to improve 3D quality of experience

    NASA Astrophysics Data System (ADS)

    Hanhart, Philippe; Ebrahimi, Touradj

    2014-03-01

    Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracking system to measure real time gaze positions. The gaze points were used in conjunction with the disparity map to extract the disparity of the object-of-interest. Horizontal image translation was performed to bring the fixated object on the screen plane. The user preference between standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real time gaze determination. Depth quality is also improved, but the difference is not significant.

  5. ERP Estimation using a Kalman Filter in VLBI

    NASA Astrophysics Data System (ADS)

    Karbon, M.; Soja, B.; Nilsson, T.; Heinkelmann, R.; Liu, L.; Lu, C.; Mora-Diaz, J. A.; Raposo-Pulido, V.; Xu, M.; Schuh, H.

    2014-12-01

    Geodetic Very Long Baseline Interferometry (VLBI) is one of the primary space geodetic techniques, providing the full set of Earth Orientation Parameters (EOP), and it is unique for observing long term Universal Time (UT1). For applications such as satellite-based navigation and positioning, accurate and continuous Earth rotation parameters (ERP) obtained in near real-time are essential. They also allow the precise tracking of interplanetary spacecraft. One of the goals of VGOS (VLBI Global Observing System) is to provide such near real-time ERP. With the launch of this next generation VLBI system, the International VLBI Service for Geodesy and Astrometry (IVS) increased its efforts not only to reach 1 mm accuracy on a global scale but also to substantially reduce the time span between the collection of VLBI observations and the availability of the final results. Project VLBI-ART contributes to these objectives by implementing an elaborate Kalman filter, which represents a perfect tool for analyzing VLBI data in quasi real-time. The goal is to implement it in the GFZ version of the Vienna VLBI Software (VieVS) as a completely automated tool, i.e., with no need for human interaction. Here we present the methodology and first results of Kalman filtered EOP from VLBI data.
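
    An illustrative Kalman filter for a single ERP component, modeled as an offset plus a rate driven by random-walk process noise; the state model, noise levels, and function name are assumptions for the sketch and are not the VLBI-ART/VieVS settings.

        import numpy as np

        def kalman_erp(observations, dt, sigma_obs=0.1, sigma_rw=0.01):
            """Filter a sequence of estimates of one ERP component (arbitrary units)."""
            x = np.array([observations[0], 0.0])              # state: [offset, rate]
            P = np.eye(2)
            F = np.array([[1.0, dt], [0.0, 1.0]])             # constant-rate dynamics
            Q = sigma_rw**2 * np.array([[dt**3 / 3, dt**2 / 2],
                                        [dt**2 / 2, dt]])     # random-walk-on-rate noise
            H = np.array([[1.0, 0.0]])
            R = np.array([[sigma_obs**2]])
            filtered = []
            for z in observations:
                x = F @ x                                     # predict
                P = F @ P @ F.T + Q
                S = H @ P @ H.T + R                           # update
                K = P @ H.T @ np.linalg.inv(S)
                x = x + (K @ (np.atleast_1d(z) - H @ x)).ravel()
                P = (np.eye(2) - K @ H) @ P
                filtered.append(x[0])
            return filtered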

  6. Evaluation of the Intel RealSense SR300 camera for image-guided interventions and application in vertebral level localization

    NASA Astrophysics Data System (ADS)

    House, Rachael; Lasso, Andras; Harish, Vinyas; Baum, Zachary; Fichtinger, Gabor

    2017-03-01

    PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive devices. Compact 3D vision systems, such as Intel RealSense cameras, can capture 3D pose information at several orders of magnitude lower cost, size, and weight. We propose to use the Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and where limited range and tracking accuracy are acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce the risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparison with a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer and tracked using the Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align the CT image with the phantom using landmark registration and to display the CT image overlaid on the optical image. RESULTS: The camera yielded a median position error of 3.3 mm (95th percentile 6.7 mm) and an orientation error of 1.6° (95th percentile 4.3°) in a 20x16x10 cm workspace, while constantly maintaining proper marker orientation. The model and surface aligned correctly, demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.
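
    A sketch of the accuracy comparison, assuming time-synchronized pose pairs from the SR300 and the reference optical tracker expressed in a common frame; the array layouts and names are illustrative.

        import numpy as np

        def pose_errors(pos_cam, pos_ref, ang_cam_deg, ang_ref_deg):
            """pos_*: Nx3 positions (mm); ang_*: N rotation angles (deg) about a common axis."""
            pos_err = np.linalg.norm(pos_cam - pos_ref, axis=1)
            ang_err = np.abs(ang_cam_deg - ang_ref_deg)
            return {
                "position_median_mm": np.median(pos_err),
                "position_p95_mm": np.percentile(pos_err, 95),
                "orientation_median_deg": np.median(ang_err),
                "orientation_p95_deg": np.percentile(ang_err, 95),
            }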

  7. Can You Help Me with My Pitch? Studying a Tool for Real-Time Automated Feedback

    ERIC Educational Resources Information Center

    Schneider, Jan; Borner, Dirk; van Rosmalen, Peter; Specht, Marcus

    2016-01-01

    In our pursuit to study effective real-time feedback in Technology Enhanced Learning, we developed the Presentation Trainer, a tool designed to support the practice of nonverbal communication skills for public speaking. The tool tracks the user's voice and body to analyze her performance, and selects the type of real-time feedback to be presented.…

  8. A discrete time-varying internal model-based approach for high precision tracking of a multi-axis servo gantry.

    PubMed

    Zhang, Zhen; Yan, Peng; Jiang, Huan; Ye, Peiqing

    2014-09-01

    In this paper, we consider the discrete time-varying internal model-based control design for high precision tracking of complicated reference trajectories generated by time-varying systems. Based on a novel parallel time-varying internal model structure, asymptotic tracking conditions for the design of internal model units are developed, and a low order robust time-varying stabilizer is further synthesized. In a discrete time setting, the high precision tracking control architecture is deployed on a Voice Coil Motor (VCM) actuated servo gantry system, where numerical simulations and real-time experimental results are provided, achieving tracking errors of around 3.5‰ for frequency-varying signals. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Video-CRM: understanding customer behaviors in stores

    NASA Astrophysics Data System (ADS)

    Haritaoglu, Ismail; Flickner, Myron; Beymer, David

    2013-03-01

    This paper describes two real-time computer vision systems created 10 years ago that detect and track people in stores to obtain insights into customer behavior while shopping. The first system uses a single color camera to identify shopping groups in the checkout line. Shopping groups are identified by analyzing the inter-body distances coupled with the cashier's activities to detect checkout transaction start and end times. The second system uses multiple overhead narrow-baseline stereo cameras to detect and track people, their body posture, and body parts to understand customer interactions with products such as "customer picking a product from a shelf". In pilot studies both systems demonstrated real-time performance and sufficient accuracy to enable more detailed understanding of customer behavior and extract actionable real-time retail analytics.

  10. Detecting method of subjects' 3D positions and experimental advanced camera control system

    NASA Astrophysics Data System (ADS)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a detecting method of subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.
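
    A sketch of the two-camera 3D position recovery, assuming both sensor cameras are calibrated (known 3x4 projection matrices) and the colored target has already been detected in each view; this is standard linear (DLT) triangulation, not necessarily the authors' implementation.

        import numpy as np

        def triangulate(P1, P2, uv1, uv2):
            """P1, P2: 3x4 camera projection matrices; uv1, uv2: (u, v) pixel
            coordinates of the same subject in each view. Returns XYZ (world frame)."""
            u1, v1 = uv1
            u2, v2 = uv2
            A = np.vstack([
                u1 * P1[2] - P1[0],
                v1 * P1[2] - P1[1],
                u2 * P2[2] - P2[0],
                v2 * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]                      # homogeneous solution (smallest singular value)
            return X[:3] / X[3]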

  11. The Way Point Planning Tool: Real Time Flight Planning for Airborne Science

    NASA Technical Reports Server (NTRS)

    He, Yubin; Blakeslee, Richard; Goodman, Michael; Hall, John

    2012-01-01

    Airborne real-time observations are a major component of NASA's Earth Science research and satellite ground validation studies. For mission scientists, planning a research aircraft mission within the context of meeting the science objectives is a complex task because it requires real-time situational awareness of the weather conditions that affect the aircraft track. Multiple aircraft are often involved in NASA field campaigns; the coordination of the aircraft with satellite overpasses, other airplanes, and the constantly evolving weather conditions often determines the success of the campaign. A flight planning tool is needed to provide situational awareness information to the mission scientists and help them plan and modify the flight tracks successfully. Scientists at the University of Alabama in Huntsville and the NASA Marshall Space Flight Center developed the Waypoint Planning Tool (WPT), an interactive software tool that enables scientists to develop their own flight plans (also known as waypoints) with point-and-click mouse capabilities on a digital map filled with real-time raster and vector data. The development of this Waypoint Planning Tool demonstrates the significance of mission support in responding to the challenges presented during NASA field campaigns. Analyses during and after each campaign helped identify both issues and new requirements, initiating the next wave of development. Currently the Waypoint Planning Tool has gone through three rounds of development and analysis processes. The development of this waypoint tool is directly affected by technology advances in GIS/mapping technologies. From the standalone Google Earth application and simple KML functionalities to the Google Earth Plugin and Java Web Start/Applet on the web platform, as well as to the rising open-source GIS tools with new JavaScript frameworks, the Waypoint Planning Tool has entered its third phase of technology advancement. The newly innovated, cross-platform, modularly designed, JavaScript-controlled Waypoint tool is planned to be integrated with the NASA Airborne Science Mission Tool Suite. Adapting new technologies for the Waypoint Planning Tool ensures its success in helping scientists reach their mission objectives. This presentation will discuss the development process of the Waypoint Planning Tool in responding to field campaign challenges, identifying new information technologies, and describing the capabilities and features of the Waypoint Planning Tool, including its real-time aspect, interactive nature, and the resultant benefits to the airborne science community.

  12. SLS-PLAN-IT: A knowledge-based blackboard scheduling system for Spacelab life sciences missions

    NASA Technical Reports Server (NTRS)

    Kao, Cheng-Yan; Lee, Seok-Hua

    1992-01-01

    The primary scheduling tool in use during the Spacelab Life Sciences (SLS-1) planning phase was the operations research (OR)-based, tabular-form Experiment Scheduling System (ESS) developed by NASA Marshall. PLAN-IT is an artificial intelligence-based interactive graphic timeline editor for ESS developed by JPL. The PLAN-IT software was enhanced for use in the scheduling of Spacelab experiments to support the SLS missions. The enhanced software, the SLS-PLAN-IT system, was used to support the real-time reactive scheduling task during the SLS-1 mission. SLS-PLAN-IT is a frame-based blackboard scheduling shell which, from scheduling input, creates resource-requiring event duration objects and resource-usage duration objects. The blackboard structure keeps track of the effects of event duration objects on the resource usage objects. Various scheduling heuristics are coded in procedural form and can be invoked at any time at the user's request. The system architecture is described along with what has been learned from the SLS-PLAN-IT project.

  13. Analysis of simulated image sequences from sensors for restricted-visibility operations

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar

    1991-01-01

    A real time model of the visible output from a 94 GHz sensor, based on a radiometric simulation of the sensor, was developed. A sequence of images as seen from an aircraft as it approaches for landing was simulated using this model. Thirty frames from this sequence of 200 x 200 pixel images were analyzed to identify and track objects in the image using the Cantata image processing package within the visual programming environment provided by the Khoros software system. The image analysis operations are described.

  14. Resisting Temptation: Tracking How Self-Control Conflicts Are Successfully Resolved in Real Time.

    PubMed

    Stillman, Paul E; Medvedev, Danila; Ferguson, Melissa J

    2017-09-01

    Across four studies, we used mouse tracking to identify the dynamic, on-line cognitive processes that underlie successful self-control decisions. First, we showed that individuals display real-time conflict when choosing options consistent with their long-term goal over short-term temptations. Second, we found that individuals who are more successful at self-control-whether measured or manipulated-show significantly less real-time conflict in only self-control-relevant choices. Third, we demonstrated that successful individuals who choose a long-term goal over a short-term temptation display movements that are smooth rather than abrupt, which suggests dynamic rather than stage-based resolution of self-control conflicts. These findings have important implications for contemporary theories of self-control.

  15. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    PubMed

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
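
    A minimal sketch of the velocity-driven torque computation described above, with an illustrative gain and torque limit (not the paper's values); in the described pipeline, the desired joint angular velocities would come from the pendulum-adjusted reference motion and the torques would drive the dynamics simulation.

        import numpy as np

        def velocity_driven_torques(qdot_desired, qdot_current, gain=50.0, tau_max=200.0):
            """qdot_*: arrays of joint angular velocities (rad/s); returns joint torques."""
            tau = gain * (np.asarray(qdot_desired) - np.asarray(qdot_current))
            return np.clip(tau, -tau_max, tau_max)   # saturate for physical plausibility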

  16. Real-Time Correction By Optical Tracking with Integrated Geometric Distortion Correction for Reducing Motion Artifacts in fMRI

    NASA Astrophysics Data System (ADS)

    Rotenberg, David J.

    Artifacts caused by head motion are a substantial source of error in fMRI that limits its use in neuroscience research and clinical settings. Real-time scan-plane correction by optical tracking has been shown to correct slice misalignment and non-linear spin-history artifacts, however residual artifacts due to dynamic magnetic field non-uniformity may remain in the data. A recently developed correction technique, PLACE, can correct for absolute geometric distortion using the complex image data from two EPI images, with slightly shifted k-space trajectories. We present a correction approach that integrates PLACE into a real-time scan-plane update system by optical tracking, applied to a tissue-equivalent phantom undergoing complex motion and an fMRI finger tapping experiment with overt head motion to induce dynamic field non-uniformity. Experiments suggest that including volume by volume geometric distortion correction by PLACE can suppress dynamic geometric distortion artifacts in a phantom and in vivo and provide more robust activation maps.

  17. Vision-based guidance for an automated roving vehicle

    NASA Technical Reports Server (NTRS)

    Griffin, M. D.; Cunningham, R. T.; Eskenazi, R.

    1978-01-01

    A controller designed to guide an automated vehicle to a specified target without external intervention is described. The intended application is to the requirements of planetary exploration, where substantial autonomy is required because of the prohibitive time lags associated with closed-loop ground control. The guidance algorithm consists of a set of piecewise-linear control laws for velocity and steering commands, and is executable in real time with fixed-point arithmetic. The use of a previously-reported object tracking algorithm for the vision system to provide position feedback data is described. Test results of the control system on a breadboard rover at the Jet Propulsion Laboratory are included.
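
    A sketch of piecewise-linear velocity and steering laws of the kind described above (floating point here for readability, whereas the flight implementation used fixed-point arithmetic); the breakpoints, gains, and limits are illustrative, not the rover's actual values.

        def steering_command(bearing_error_rad, k_steer=1.0, max_steer_rad=0.5):
            # Proportional to the bearing error, saturated at the steering limit.
            cmd = k_steer * bearing_error_rad
            return max(-max_steer_rad, min(max_steer_rad, cmd))

        def velocity_command(range_m, stop_range_m=1.0, slow_range_m=5.0, v_max=0.5):
            # Full speed far from the target, linear ramp-down, stop at the target.
            if range_m <= stop_range_m:
                return 0.0
            if range_m >= slow_range_m:
                return v_max
            return v_max * (range_m - stop_range_m) / (slow_range_m - stop_range_m)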

  18. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using the attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.

  19. Fully distributed monitoring architecture supporting multiple trackees and trackers in indoor mobile asset management application.

    PubMed

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-03-21

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform in real time. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture was also evaluated.

  20. Real-time Avatar Animation from a Single Image.

    PubMed

    Saragih, Jason M; Lucey, Simon; Cohn, Jeffrey F

    2011-01-01

    A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user's facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters.

  1. Real-time Avatar Animation from a Single Image

    PubMed Central

    Saragih, Jason M.; Lucey, Simon; Cohn, Jeffrey F.

    2014-01-01

    A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user’s facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters. PMID:24598812

  2. A composite controller for trajectory tracking applied to the Furuta pendulum.

    PubMed

    Aguilar-Avelar, Carlos; Moreno-Valenzuela, Javier

    2015-07-01

    In this paper, a new composite scheme is proposed, where the total control action is the sum of a feedback-linearization-based controller and an energy-based compensation. This new proposition is applied to the rotary inverted pendulum, or Furuta pendulum. The Furuta pendulum is a well-known underactuated mechanical system with two degrees of freedom. The control objective in this case is the tracking of a desired periodic trajectory in the actuated joint, while the unactuated link is regulated at the upward position. The closed-loop system is analyzed, showing uniform ultimate boundedness of the error trajectories. The design procedure is shown in a constructive form, such that it may be applied to other underactuated mechanical systems, with the proper definitions of the output function and the energy function. Numerical simulations and real-time experiments show the practical viability of the controller. Finally, the proposed algorithm is compared with a tracking controller previously reported in the literature. The new algorithm shows better performance in both arm trajectory tracking and pendulum regulation. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Teleoperation of Robonaut Using Finger Tracking

    NASA Technical Reports Server (NTRS)

    Champoux, Rachel G.; Luo, Victor

    2012-01-01

    With the advent of new finger tracking systems, the idea of a more expressive and intuitive user interface is being explored and implemented. One practical application for this new kind of interface is that of teleoperating a robot. For humanoid robots, a finger tracking interface is required due to the level of complexity in a human-like hand, where a joystick isn't accurate. Moreover, for some tasks, using one's own hands allows the user to communicate their intentions more effectively than other input. The purpose of this project was to develop a natural user interface for someone to teleoperate a robot that is elsewhere. Specifically, this was designed to control Robonaut on the International Space Station to do tasks too dangerous and/or too trivial for human astronauts. This interface was developed by integrating and modifying 3Gear's software, which includes a library of gestures and the ability to track hands. The end result is an interface in which the user can manipulate objects in real time. The information is then relayed to a simulator, the stand-in for Robonaut, with a slight delay.

  4. Enhanced Algorithms for EO/IR Electronic Stabilization, Clutter Suppression, and Track-Before-Detect for Multiple Low Observable Targets

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Brown, A.; Brown, J.

    The paper describes the development and evaluation of a suite of advanced algorithms which provide significantly-improved capabilities for finding, fixing, and tracking multiple ballistic and flying low observable objects in highly stressing cluttered environments. The algorithms have been developed for use in satellite-based staring and scanning optical surveillance suites for applications including theatre and intercontinental ballistic missile early warning, trajectory prediction, and multi-sensor track handoff for midcourse discrimination and intercept. The functions performed by the algorithms include electronic sensor motion compensation providing sub-pixel stabilization (to 1/100 of a pixel), as well as advanced temporal-spatial clutter estimation and suppression to below sensor noise levels, followed by statistical background modeling and Bayesian multiple-target track-before-detect filtering. The multiple-target tracking is performed in physical world coordinates to allow for multi-sensor fusion, trajectory prediction, and intercept. Output of detected object cues and data visualization are also provided. The algorithms are designed to handle a wide variety of real-world challenges. Imaged scenes may be highly complex and infinitely varied -- the scene background may contain significant celestial, earth limb, or terrestrial clutter. For example, when viewing combined earth limb and terrestrial scenes, a combination of stationary and non-stationary clutter may be present, including cloud formations, varying atmospheric transmittance and reflectance of sunlight and other celestial light sources, aurora, glint off sea surfaces, and varied natural and man-made terrain features. The targets of interest may also appear to be dim, relative to the scene background, rendering much of the existing deployed software useless for optical target detection and tracking. Additionally, it may be necessary to detect and track a large number of objects in the threat cloud, and these objects may not always be resolvable in individual data frames. In the present paper, the performance of the developed algorithms is demonstrated using real-world data containing resident space objects observed from the MSX platform, with backgrounds varying from celestial to combined celestial and earth limb, with instances of extremely bright aurora clutter. Simulation results are also presented for parameterized variations in signal-to-clutter levels (down to 1/1000) and signal-to-noise levels (down to 1/6) for simulated targets against real-world terrestrial clutter backgrounds. We also discuss algorithm processing requirements and C++ software processing capabilities from our on-going MDA- and AFRL-sponsored development of an image processing toolkit (iPTK). In the current effort, the iPTK is being developed to a Technology Readiness Level (TRL) of 6 by mid-2010, in preparation for possible integration with STSS-like, SBIRS high-like and SBSS-like surveillance suites.

  5. Safety of real-time convection-enhanced delivery of liposomes to primate brain: a long-term retrospective.

    PubMed

    Krauze, Michal T; Vandenberg, Scott R; Yamashita, Yoji; Saito, Ryuta; Forsayeth, John; Noble, Charles; Park, John; Bankiewicz, Krystof S

    2008-04-01

    Convection-enhanced delivery (CED) is gaining popularity in direct brain infusions. Our group has pioneered the use of liposomes loaded with the MRI contrast reagent as a means to track and quantitate CED in the primate brain through real-time MRI. When co-infused with therapeutic nanoparticles, these tracking liposomes provide us with unprecedented precision in the management of infusions into discrete brain regions. In order to translate real-time CED into clinical application, several important parameters must be defined. In this study, we have analyzed all our cumulative animal data to answer a number of questions as to whether real-time CED in primates depends on concentration of infusate, is reproducible, allows prediction of distribution in a given anatomic structure, and whether it has long term pathological consequences. Our retrospective analysis indicates that real-time CED is highly predictable; repeated procedures yielded identical results, and no long-term brain pathologies were found. We conclude that introduction of our technique to clinical application would enhance accuracy and patient safety when compared to current non-monitored delivery trials.

  6. Anti-Runaway Prevention System with Wireless Sensors for Intelligent Track Skates at Railway Stations.

    PubMed

    Jiang, Chaozhe; Xu, Yibo; Wen, Chao; Chen, Dilin

    2017-12-19

    Anti-runaway prevention of rolling stocks at a railway station is essential in railway safety management. The traditional track skates for anti-runaway prevention of rolling stocks have some disadvantages since they are operated and monitored completely manually. This paper describes an anti-runaway prevention system (ARPS) based on intelligent track skates equipped with sensors and real-time monitoring and management system. This system, which has been updated from the traditional track skates, comprises four parts: intelligent track skates, a signal reader, a database station, and a monitoring system. This system can monitor the real-time situation of track skates without changing their workflow for anti-runaway prevention, and thus realize the integration of anti-runaway prevention information management. This system was successfully tested and practiced at Sunjia station in Harbin Railway Bureau in 2014, and the results confirmed that the system showed 100% accuracy in reflecting the usage status of the track skates. The system could meet practical demands, as it is highly reliable and supports long-distance communication.

  7. Evaluation of an image-based tracking workflow with Kalman filtering for automatic image plane alignment in interventional MRI.

    PubMed

    Neumann, M; Cuvillon, L; Breton, E; de Matheli, M

    2013-01-01

    Recently, a workflow for magnetic resonance (MR) image plane alignment based on tracking in real-time MR images was introduced. The workflow is based on a tracking device composed of 2 resonant micro-coils and a passive marker, and allows for tracking of the passive marker in clinical real-time images and automatic (re-)initialization using the micro-coils. As the Kalman filter has proven its benefit as an estimator and predictor, it is well suited for use in tracking applications. In this paper, a Kalman filter is integrated into the previously developed workflow in order to predict the position and orientation of the tracking device. Measurement noise covariances of the Kalman filter are dynamically changed in order to take into account that, according to the image plane orientation, only a subset of the 3D pose components is available. The improved tracking performance of the Kalman-extended workflow was quantified in simulation results. A first experiment in the MRI scanner was also performed, though without quantitative results yet.
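
    A sketch of the dynamically weighted measurement update, assuming a six-component pose state observed directly; components not constrained by the current image plane are given a very large measurement variance so the filter effectively ignores them. The matrices, names, and noise values are illustrative, not the authors' implementation.

        import numpy as np

        def kalman_update_partial(x, P, z, available, r_obs=1.0, r_large=1e6):
            """x: (6,) pose state; P: 6x6 covariance; z: (6,) measurement with
            placeholders for missing components; available: boolean mask of length 6."""
            H = np.eye(6)
            R = np.diag(np.where(available, r_obs, r_large))   # inflate unavailable components
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x_new = x + K @ (z - H @ x)
            P_new = (np.eye(6) - K @ H) @ P
            return x_new, P_new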

  8. Anti-Runaway Prevention System with Wireless Sensors for Intelligent Track Skates at Railway Stations

    PubMed Central

    Jiang, Chaozhe; Xu, Yibo; Chen, Dilin

    2017-01-01

    Anti-runaway prevention of rolling stocks at a railway station is essential in railway safety management. The traditional track skates for anti-runaway prevention of rolling stocks have some disadvantages since they are operated and monitored completely manually. This paper describes an anti-runaway prevention system (ARPS) based on intelligent track skates equipped with sensors and real-time monitoring and management system. This system, which has been updated from the traditional track skates, comprises four parts: intelligent track skates, a signal reader, a database station, and a monitoring system. This system can monitor the real-time situation of track skates without changing their workflow for anti-runaway prevention, and thus realize the integration of anti-runaway prevention information management. This system was successfully tested and practiced at Sunjia station in Harbin Railway Bureau in 2014, and the results confirmed that the system showed 100% accuracy in reflecting the usage status of the track skates. The system could meet practical demands, as it is highly reliable and supports long-distance communication. PMID:29257108

  9. Three-dimensional face pose detection and tracking using monocular videos: tool and application.

    PubMed

    Dornaika, Fadi; Raducanu, Bogdan

    2009-08-01

    Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods-the initialization and tracking-for enhancing the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation such as telepresence, virtual reality, and video games can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.

  10. Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm.

    PubMed

    Tombu, Michael; Seiffert, Adriane E

    2011-04-01

    People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target-distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking--one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone.

  11. Trends in Correlation-Based Pattern Recognition and Tracking in Forward-Looking Infrared Imagery

    PubMed Central

    Alam, Mohammad S.; Bhuiyan, Sharif M. A.

    2014-01-01

    In this paper, we review the recent trends and advancements on correlation-based pattern recognition and tracking in forward-looking infrared (FLIR) imagery. In particular, we discuss matched filter-based correlation techniques for target detection and tracking which are widely used for various real time applications. We analyze and present test results involving recently reported matched filters such as the maximum average correlation height (MACH) filter and its variants, and distance classifier correlation filter (DCCF) and its variants. Test results are presented for both single/multiple target detection and tracking using various real-life FLIR image sequences. PMID:25061840
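
    A sketch of the frequency-domain correlation step common to these matched-filter trackers, assuming the filter template has already been synthesized (for example by MACH training); it locates the correlation peak in one frame and is not the full MACH/DCCF pipeline.

        import numpy as np

        def correlate_and_locate(frame, template):
            """frame: HxW array; template: hxw filter. Returns peak location and value."""
            H, W = frame.shape
            F = np.fft.fft2(frame)
            T = np.fft.fft2(template, s=(H, W))           # zero-pad template to frame size
            corr = np.real(np.fft.ifft2(F * np.conj(T)))  # circular cross-correlation
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            return peak, corr[peak]                       # peak indexes the template offset (modulo wrap)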

  12. The influence of real-time rural transit tracking on traveler perception.

    DOT National Transportation Integrated Search

    2013-03-01

    Public transportation systems require accurate and reliable information as part of their : day-to-day operations and are increasingly engaging their customers through a variety of online : services and smart phone applications, such as real-time vehi...

  13. Novel real-time tumor-contouring method using deep learning to prevent mistracking in X-ray fluoroscopy.

    PubMed

    Terunuma, Toshiyuki; Tokui, Aoi; Sakae, Takeji

    2018-03-01

    Robustness to obstacles is the most important factor necessary to achieve accurate tumor tracking without fiducial markers. Some high-density structures, such as bone, are enhanced on X-ray fluoroscopic images, which cause tumor mistracking. Tumor tracking should be performed by controlling "importance recognition": the understanding that soft-tissue is an important tracking feature and bone structure is unimportant. We propose a new real-time tumor-contouring method that uses deep learning with importance recognition control. The novelty of the proposed method is the combination of the devised random overlay method and supervised deep learning to induce the recognition of structures in tumor contouring as important or unimportant. This method can be used for tumor contouring because it uses deep learning to perform image segmentation. Our results from a simulated fluoroscopy model showed accurate tracking of a low-visibility tumor with an error of approximately 1 mm, even if enhanced bone structure acted as an obstacle. A high similarity of approximately 0.95 on the Jaccard index was observed between the segmented and ground truth tumor regions. A short processing time of 25 ms was achieved. The results of this simulated fluoroscopy model support the feasibility of robust real-time tumor contouring with fluoroscopy. Further studies using clinical fluoroscopy are highly anticipated.

  14. Acting to gain information: Real-time reasoning meets real-time perception

    NASA Technical Reports Server (NTRS)

    Rosenschein, Stan

    1994-01-01

    Recent advances in intelligent reactive systems suggest new approaches to the problem of deriving task-relevant information from perceptual systems in real time. The author will describe work in progress aimed at coupling intelligent control mechanisms to real-time perception systems, with special emphasis on frame rate visual measurement systems. A model for integrated reasoning and perception will be discussed, and recent progress in applying these ideas to problems of sensor utilization for efficient recognition and tracking will be described.

  15. Dynamic ADMM for Real-Time Optimal Power Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhang, Yijian; Hong, Mingyi

    This paper considers distribution networks featuring distributed energy resources (DERs), and develops a dynamic optimization method to maximize given operational objectives in real time while adhering to relevant network constraints. The design of the dynamic algorithm is based on suitable linearization of the AC power flow equations, and it leverages the so-called alternating direction method of multipliers (ADMM). The steps of the ADMM, however, are suitably modified to accommodate appropriate measurements from the distribution network and the DERs. With the aid of these measurements, the resultant algorithm can enforce given operational constraints in spite of inaccuracies in the representation of the AC power flows, and it avoids ubiquitous metering to gather the state of noncontrollable resources. Optimality and convergence of the proposed algorithm are established in terms of tracking of the solution of a convex surrogate of the AC optimal power flow problem.
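
    A generic ADMM sketch for a problem of the same shape, a quadratic operating cost subject to box limits, split as f(x) + g(z) with the consensus constraint x = z; this is the textbook scaled-form loop, not the paper's measurement-based dynamic variant.

        import numpy as np

        def admm_box_qp(Q, c, lower, upper, rho=1.0, iters=200):
            """Minimize 0.5 x'Qx + c'x subject to lower <= x <= upper."""
            n = c.size
            x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u is the scaled dual variable
            A = Q + rho * np.eye(n)
            for _ in range(iters):
                x = np.linalg.solve(A, rho * (z - u) - c)       # x-update: unconstrained quadratic
                z = np.clip(x + u, lower, upper)                # z-update: projection onto the box
                u = u + x - z                                   # dual ascent on the consensus gap
            return z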

  16. Dynamic ADMM for Real-Time Optimal Power Flow: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhang, Yijian; Hong, Mingyi

    This paper considers distribution networks featuring distributed energy resources (DERs), and develops a dynamic optimization method to maximize given operational objectives in real time while adhering to relevant network constraints. The design of the dynamic algorithm is based on suitable linearizations of the AC power flow equations, and it leverages the so-called alternating direction method of multipliers (ADMM). The steps of the ADMM, however, are suitably modified to accommodate appropriate measurements from the distribution network and the DERs. With the aid of these measurements, the resultant algorithm can enforce given operational constraints in spite of inaccuracies in the representation of the AC power flows, and it avoids ubiquitous metering to gather the state of non-controllable resources. Optimality and convergence of the proposed algorithm are established in terms of tracking of the solution of a convex surrogate of the AC optimal power flow problem.

  17. Sub-micron accurate track navigation method "Navi" for the analysis of Nuclear Emulsion

    NASA Astrophysics Data System (ADS)

    Yoshioka, T.; Yoshida, J.; Kodama, K.

    2011-03-01

    Sub-micron accurate track navigation in Nuclear Emulsion is realized by using low energy signals detected by automated Nuclear Emulsion read-out systems. Using this much denser "noise", about 10⁴ times more numerous than the real tracks, the track position navigation reaches sub-micron accuracy using only the information of a single microscope field of view, 200 μm × 200 μm. This method is applied to OPERA analysis in Japan, i.e., supporting human eye-checks of candidate tracks, confirming neutrino interaction vertices, and embedding missing track segments into the track data read out by automated systems.

  18. A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots

    PubMed Central

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-01-01

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331
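
    A sketch of the Kalman prediction step with a second-order (constant-acceleration) motion model, used to propose the search region for candidate patches; the state layout and process noise are illustrative assumptions rather than the authors' settings.

        import numpy as np

        def predict_constant_acceleration(x, P, dt, q=1.0):
            """x: (6,) state [px, py, vx, vy, ax, ay]; P: 6x6 covariance."""
            F = np.eye(6)
            for i in range(2):                      # positions and velocities for x and y
                F[i, i + 2] = dt
                F[i, i + 4] = 0.5 * dt * dt
                F[i + 2, i + 4] = dt
            Q = q * np.eye(6)                       # simplified process noise
            return F @ x, F @ P @ F.T + Q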

  19. A Kinect-based real-time compressive tracking prototype system for amphibious spherical robots.

    PubMed

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-04-08

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.

  20. Real Time Target Tracking Using Dedicated Vision Hardware

    NASA Astrophysics Data System (ADS)

    Kambies, Keith; Walsh, Peter

    1988-03-01

    This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL) which was designed to provide NASA with a general purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated arm industrial robot using a camera and dedicated vision processor as the input sensor so that the robot can locate and track a moving target. The vision system is inside of the loop closure of the robot tracking system, therefore, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State of the art VME based vision boards capable of processing the image at frame rates were used with a real-time, multi-tasking operating system to achieve the performance required. This paper describes the high speed vision based tracking task, the system throughput requirements, the use of dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so there is special emphasis placed on this topic in the paper.

  1. SU-G-JeP4-12: Real-Time Organ Motion Monitoring Using Ultrasound and KV Fluoroscopy During Lung SBRT Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omari, E; Tai, A; Li, X

    Purpose: Real-time ultrasound monitoring during SBRT is advantageous in understanding and identifying motion irregularities which may cause geometric misses. In this work, we propose to utilize real-time ultrasound to track the diaphragm in conjunction with periodic kV fluoroscopy to monitor the motion of the tumor or landmarks during SBRT delivery. Methods: Transabdominal ultrasound (TAUS) B-mode images were collected from 10 healthy volunteers using the Clarity Autoscan System (Elekta). The autoscan transducer, which has a center frequency of 5 MHz, was utilized for the scans. The acquired images were contoured using the Clarity Automatic Fusion and Contouring workstation software. Monitoring sessions of 5-minute length were observed and recorded. The position correlation between tumor and diaphragm could be established with kV fluoroscopy periodically acquired during treatment with Elekta XVI. We acquired data using a tissue-mimicking ultrasound phantom with embedded spheres placed on a motion stand using ultrasound and kV fluoroscopy. MIM software was utilized for image fusion. Correlation of diaphragm and target motion was also validated using 4D-MRI and 4D-CBCT. Results: The diaphragm was visualized as a hyperechoic region on the TAUS B-mode images. Volunteer set-up can be adjusted such that the TAUS probe will not interfere with treatment beams. A segment of the diaphragm was contoured and selected as our tracking structure. Successful monitoring sessions of the diaphragm were recorded. For some volunteers, diaphragm motion over 2 times larger than the initial motion was observed during tracking. For the phantom study, we were able to register the 2D kV fluoroscopy with the US images for position comparison. Conclusion: We demonstrated the feasibility of tracking the diaphragm using real-time ultrasound. Real-time tracking can help in identifying irregularities in the respiratory motion, which is correlated with tumor motion. We also showed the feasibility of acquiring 2D kV fluoroscopy and registering the images with ultrasound.

  2. Edge-following algorithm for tracking geological features

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.

    1977-01-01

    Sequential edge-tracking algorithm employs circular scanning to permit effective real-time tracking of coastlines and rivers from Earth resources satellites. Technique eliminates expensive high-resolution cameras. System might also be adaptable for application in monitoring automated assembly lines, inspecting conveyor belts, or analyzing thermographs or X-ray images.
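
    A minimal sketch of the circular-scan idea, assuming an edge-strength image (such as the gradient magnitude) and a starting point on the feature; a practical tracker would restrict the scan to a forward arc so the path cannot double back. Names and parameters are illustrative.

        import numpy as np

        def follow_edge(edge_strength, start, radius=3, steps=500):
            """edge_strength: HxW array; start: (row, col). Returns the traced path."""
            h, w = edge_strength.shape
            angles = np.linspace(0, 2 * np.pi, 24, endpoint=False)
            r0, c0 = start
            path = [(r0, c0)]
            for _ in range(steps):
                # Sample pixels on a small circle around the current point.
                rows = np.clip(np.round(r0 + radius * np.sin(angles)).astype(int), 0, h - 1)
                cols = np.clip(np.round(c0 + radius * np.cos(angles)).astype(int), 0, w - 1)
                k = int(np.argmax(edge_strength[rows, cols]))
                r0, c0 = int(rows[k]), int(cols[k])   # step to the strongest edge response
                path.append((r0, c0))
            return path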

  3. Detection of Ballast Damage by In-Situ Vibration Measurement of Sleepers

    NASA Astrophysics Data System (ADS)

    Lam, H. F.; Wong, M. T.; Keefe, R. M.

    2010-05-01

    Ballasted track is one of the most important elements of railway transportation systems worldwide. Owing to its importance in railway safety, many monitoring and evaluation methods have been developed. Current railway track monitoring systems are comprehensive, fast and efficient in testing railway track level and alignment, rail gauge, rail corrugation, etc. However, the monitoring of ballast condition still relies very much on visual inspection and core tests. Although extensive research has been carried out in the development of non-destructive methods for ballast condition evaluation, a commonly accepted and cost-effective method is still in demand. In Hong Kong practice, if abnormal train vibration is reported by the train operator or passengers, permanent way inspectors will locate the problem area by track geometry measurement. It must be pointed out that visual inspection can only identify ballast damage on the track surface, while track geometry deficiencies and rail twists can be detected using a track gauge. Ballast damage under the sleeper loading area and the ballast shoulder, which are the main factors affecting track stability and ride quality, are extremely difficult if not impossible to detect by visual inspection. The core test is destructive, expensive, time-consuming, and may be disruptive to traffic. A fast real-time ballast damage detection method that can be implemented by permanent way inspectors with simple equipment can certainly provide valuable information for engineers in assessing the safety and riding quality of ballasted track systems. The main objective of this paper is to study the feasibility of using the vibration characteristics of sleepers to quantify the ballast condition under the sleepers, and to explore the possibility of developing a handy method for the detection of ballast damage based on the measured vibration of sleepers.

  4. Toward the development of intrafraction tumor deformation tracking using a dynamic multi-leaf collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ge, Yuanyuan; O’Brien, Ricky T.; Shieh, Chun-Chien

    Purpose: Intrafraction deformation limits targeting accuracy in radiotherapy. Studies show tumor deformation of over 10 mm for both single tumor deformation and system deformation (due to differential motion between primary tumors and involved lymph nodes). Such deformation cannot be adapted to with current radiotherapy methods. The objective of this study was to develop and experimentally investigate the ability of a dynamic multi-leaf collimator (DMLC) tracking system to account for tumor deformation. Methods: To compensate for tumor deformation, the DMLC tracking strategy is to warp the planned beam aperture directly to conform to the new tumor shape based on real-time tumor deformation input. Two deformable phantoms that correspond to a single tumor and a tumor system were developed. The planar deformations derived from the phantom images in beam's eye view were used to guide the aperture warping. An in-house deformable image registration software was developed to automatically trigger the registration once a new target image was acquired and send the computed deformation to the DMLC tracking software. Because the registration speed is not fast enough to implement the experiment in a real-time manner, the phantom deformation proceeded to the next position only after registration of the current deformation position was completed. The deformation tracking accuracy was evaluated by a geometric target coverage metric defined as the sum of the areas incorrectly outside and inside the ideal aperture (A_u + A_o). The individual contributions from the deformable registration algorithm and the finite leaf width to the tracking uncertainty were analyzed. A clinical proof-of-principle experiment of deformation tracking using previously acquired MR images of a lung cancer patient was implemented to represent the MRI-Linac environment. Intensity-modulated radiation therapy (IMRT) treatment delivered with deformation tracking enabled was simulated and demonstrated. Results: The first experimental investigation of adapting to tumor deformation has been performed using simple deformable phantoms. For the single tumor deformation, A_u + A_o was reduced by over 56% when deformation was larger than 2 mm. Overall, the total improvement was 82%. For the tumor system deformation, the A_u + A_o reductions were all above 75% and the total A_u + A_o improvement was 86%. Similar coverage improvement was also found in simulating deformation tracking during IMRT delivery. The deformable image registration algorithm was identified as the dominant contributor to the tracking error rather than the finite leaf width. The discrepancy between the warped beam shape and the ideal beam shape due to the deformable registration was observed to be partially compensated during leaf fitting due to the finite leaf width. The clinical proof-of-principle experiment demonstrated the feasibility of intrafraction deformable tracking for clinical scenarios. Conclusions: For the first time, we developed and demonstrated an experimental system that is capable of adapting the MLC aperture to account for tumor deformation. This work provides a potentially widely available management method to effectively account for intrafractional tumor deformation. This proof-of-principle study is the first experimental step toward the development of an image-guided radiotherapy system to treat deforming tumors in real-time.
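
    A sketch of the geometric coverage metric used above: the area incorrectly left outside the ideal aperture (A_u) plus the area incorrectly included inside it (A_o), computed from binary aperture masks on a common grid; the names and pixel size are illustrative.

        import numpy as np

        def coverage_error(fitted_mask, ideal_mask, pixel_area_mm2=1.0):
            """fitted_mask, ideal_mask: boolean HxW arrays of the delivered and ideal apertures."""
            a_u = np.logical_and(ideal_mask, ~fitted_mask).sum() * pixel_area_mm2   # underexposed target area
            a_o = np.logical_and(~ideal_mask, fitted_mask).sum() * pixel_area_mm2   # overexposed surrounding area
            return a_u, a_o, a_u + a_o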

  5. Multiple Hypothesis Tracking (MHT) for Space Surveillance: Results and Simulation Studies

    NASA Astrophysics Data System (ADS)

    Singh, N.; Poore, A.; Sheaff, C.; Aristoff, J.; Jah, M.

    2013-09-01

    With the anticipated installation of more accurate sensors and the increased probability of future collisions between space objects, the potential number of observable space objects is likely to increase by an order of magnitude within the next decade, thereby placing an ever-increasing burden on current operational systems. Moreover, the need to track closely-spaced objects due, for example, to breakups as illustrated by the recent Chinese ASAT test or the Iridium-Kosmos collision, requires new, robust, and autonomous methods for space surveillance to enable the development and maintenance of the present and future space catalog and to support the overall space surveillance mission. The problem of correctly associating a stream of uncorrelated tracks (UCTs) and uncorrelated optical observations (UCOs) into common objects is critical to mitigating the number of UCTs and is a prerequisite to subsequent space catalog maintenance. Presently, such association operations are mainly performed using non-statistical simple fixed-gate association logic. In this paper, we report on the salient features and the performance of a newly-developed statistically-robust system-level multiple hypothesis tracking (MHT) system for advanced space surveillance. The multiple-frame assignment (MFA) formulation of MHT, together with supporting astrodynamics algorithms, provides a new joint capability for space catalog maintenance, UCT/UCO resolution, and initial orbit determination. The MFA-MHT framework incorporates multiple hypotheses for report to system track data association and uses a multi-arc construction to accommodate recently developed algorithms for multiple hypothesis filtering (e.g., AEGIS, CAR-MHF, UMAP, and MMAE). This MHT framework allows us to evaluate the benefits of many different algorithms ranging from single- and multiple-frame data association to filtering and uncertainty quantification. In this paper, it will be shown that the MHT system can provide superior tracking performance compared to existing methods at a lower computational cost, especially for closely-spaced objects, in realistic multi-sensor multi-object tracking scenarios over multiple regimes of space. Specifically, we demonstrate that the prototype MHT system can accurately and efficiently process tens of thousands of UCTs and angles-only UCOs emanating from thousands of objects in LEO, GEO, MEO and HELO, many of which are closely-spaced, in real-time on a single laptop computer, thereby making it well-suited for large-scale breakup and tracking scenarios. This is possible in part because complexity reduction techniques are used to control the runtime of MHT without sacrificing accuracy. We assess the performance of MHT in relation to other tracking methods in multi-target, multi-sensor scenarios ranging from easy to difficult (i.e., widely-spaced objects to closely-spaced objects), using realistic physics and probabilities of detection less than one. In LEO, it is shown that the MHT system is able to address the challenges of processing breakups by analyzing multiple frames of data simultaneously in order to improve association decisions, reduce cross-tagging, and reduce unassociated UCTs. As a result, the multi-frame MHT system can establish orbits up to ten times faster than single-frame methods. Finally, it is shown that in GEO, MEO and HELO, the MHT system is able to address the challenges of processing angles-only optical observations by providing a unified multi-frame framework.

  6. Dosimetry of heavy ions by use of CCD detectors

    NASA Technical Reports Server (NTRS)

    Schott, J. U.

    1994-01-01

    The design and atomic composition of Charge Coupled Devices (CCDs) make them unique for investigations of single energetic particle events. As a detector system for ionizing particles, they detect single particles with local resolution and near real-time particle tracking. In combination with their properties as optical sensors, traversals of single particles can be correlated to any objects attached to the light-sensitive surface of the sensor by simple imaging of their shadow and subsequent image analysis of both the optical image and the particle effects observed in affected pixels. With biological objects it is possible for the first time to investigate effects of single heavy ions in tissue or extinguished organs of metabolizing (i.e., moving) systems with a local resolution better than 15 microns. Calibration data for particle detection in CCDs are presented for low-energy protons and heavy ions.

  7. Real-time simulation of thermal shadows with EMIT

    NASA Astrophysics Data System (ADS)

    Klein, Andreas; Oberhofer, Stefan; Schätz, Peter; Nischwitz, Alfred; Obermeier, Paul

    2016-05-01

    Modern missile systems use infrared imaging for tracking or target detection algorithms. The development and validation processes of these missile systems need high fidelity simulations capable of stimulating the sensors in real-time with infrared image sequences from a synthetic 3D environment. The Extensible Multispectral Image Generation Toolset (EMIT) is a modular software library developed at MBDA Germany for the generation of physics-based infrared images in real-time. EMIT is able to render radiance images in full 32-bit floating point precision using state of the art computer graphics cards and advanced shader programs. An important functionality of an infrared image generation toolset is the simulation of thermal shadows as these may cause matching errors in tracking algorithms. However, for real-time simulations, such as hardware in the loop simulations (HWIL) of infrared seekers, thermal shadows are often neglected or precomputed as they require a thermal balance calculation in four-dimensions (3D geometry in one-dimensional time up to several hours in the past). In this paper we will show the novel real-time thermal simulation of EMIT. Our thermal simulation is capable of simulating thermal effects in real-time environments, such as thermal shadows resulting from the occlusion of direct and indirect irradiance. We conclude our paper with the practical use of EMIT in a missile HWIL simulation.

  8. The Smallest R/V: A Small-scale Ocean Exploration Demonstration of Real-time Bathymetric Measurements

    NASA Astrophysics Data System (ADS)

    Howell, S. M.; Boston, B.; Maher, S. M.; Sleeper, J. D.; Togia, H.; Tree, J. P.

    2014-12-01

    In October 2013, graduate student members of the University of Hawaii Geophysical Society designed a small-scale model research vessel (R/V) that uses sonar to create 3D maps of a model seafloor in real-time. This pilot project was presented to the public at the School of Ocean and Earth Science and Technology's (SOEST) Biennial Open House weekend. An estimated 7,600 people attended the two-day event, including children and teachers from Hawaii's schools, home school students, community groups, families, and science enthusiasts. Our exhibit demonstrated real-time sonar mapping of a cardboard volcano using a toy-size research vessel on a fixed 2D model ship track suspended above a model seafloor. Sound wave travel times were recorded using an ultrasonic emitter/receiver attached to an Arduino microcontroller platform, while the same system measured displacement along the ship track. This data was streamed through a USB connection to a PC running MatLab, where a 3D model was updated as the ship collected data. Our exhibit demonstrates the practical use of complicated concepts, like wave physics and data processing, in a way that even the youngest elementary students are able to understand. It provides an accessible avenue to learn about sonar mapping, and could easily be adapted to talk about bat and marine mammal echolocation by replacing the model ship and volcano. The exhibit received an overwhelmingly positive response from attendees, and has inspired the group to develop a more interactive model for future exhibitions, using multiple objects to be mapped that participants could arrange, and a more robust ship movement system that participants could operate.
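
    The core calculation behind the exhibit is converting an echo's round-trip travel time into a range. A minimal sketch (Python, assuming an ultrasonic sensor operating in air; the constant and function names are illustrative):

    ```python
    SPEED_OF_SOUND_AIR = 343.0  # m/s near room temperature (assumed)

    def range_from_echo(round_trip_time_s, speed=SPEED_OF_SOUND_AIR):
        """One-way distance to the model seafloor from a round-trip echo time."""
        return 0.5 * speed * round_trip_time_s

    # Example: an echo returning after 2.9 ms implies a depth of about 0.5 m.
    print(range_from_echo(0.0029))  # ~0.50
    ```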

  9. The dosimetric impact of inversely optimized arc radiotherapy plan modulation for real-time dynamic MLC tracking delivery.

    PubMed

    Falk, Marianne; Larsson, Tobias; Keall, Paul; Chul Cho, Byung; Aznar, Marianne; Korreman, Stine; Poulsen, Per; Munck Af Rosenschold, Per

    2012-03-01

    Real-time dynamic multileaf collimator (MLC) tracking for management of intrafraction tumor motion can be challenging for highly modulated beams, as the leaves need to travel far to adjust for target motion perpendicular to the leaf travel direction. The plan modulation can be reduced by using a leaf position constraint (LPC) that reduces the difference in the position of adjacent MLC leaves in the plan. The purpose of this study was to investigate the impact of the LPC on the quality of inversely optimized arc radiotherapy plans and the effect of the MLC motion pattern on the dosimetric accuracy of MLC tracking delivery. Specifically, the possibility of predicting the accuracy of MLC tracking delivery based on the plan modulation was investigated. Inversely optimized arc radiotherapy plans were created on CT-data of three lung cancer patients. For each case, five plans with a single 358° arc were generated with LPC priorities of 0 (no LPC), 0.25, 0.5, 0.75, and 1 (highest possible LPC), respectively. All the plans had a prescribed dose of 2 Gy × 30, used 6 MV, a maximum dose rate of 600 MU/min and a collimator angle of 45° or 315°. To quantify the plan modulation, an average adjacent leaf distance (ALD) was calculated by averaging the mean adjacent leaf distance for each control point. The linear relationship between the plan quality [i.e., the calculated dose distributions and the number of monitor units (MU)] and the LPC was investigated, and the linear regression coefficient as well as a two-tailed confidence level of 95% was used in the evaluation. The effect of the plan modulation on the performance of MLC tracking was tested by delivering the plans to a cylindrical diode array phantom moving with sinusoidal motion in the superior-inferior direction with a peak-to-peak displacement of 2 cm and a cycle time of 6 s. The delivery was adjusted to the target motion using MLC tracking, guided in real-time by an infrared optical system. The dosimetric results were evaluated using gamma index evaluation with static target measurements as reference. The plan quality parameters did not depend significantly on the LPC (p ≥ 0.066), whereas the ALD depended significantly on the LPC (p < 0.001). The gamma index failure rate depended significantly on the ALD, weighted to the percentage of the beam delivered in each control point of the plan (ALD(w)) when MLC tracking was used (p < 0.001), but not for delivery without MLC tracking (p ≥ 0.342). The gamma index failure rate with the criteria of 2% and 2 mm was decreased from > 33.9% without MLC tracking to < 31.4% (LPC 0) and < 2.2% (LPC 1) with MLC tracking. The results indicate that the dosimetric robustness of MLC tracking delivery of an inversely optimized arc radiotherapy plan can be improved by incorporating leaf position constraints in the objective function without otherwise affecting the plan quality. The dosimetric robustness may be estimated prior to delivery by evaluating the ALD(w) of the plan.
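
    A minimal sketch of the modulation metric described above (Python/NumPy; the data layout and function names are assumptions): the adjacent leaf distance is averaged within each control point and then either averaged over control points (ALD) or weighted by the fraction of beam delivered at each control point (ALD(w)).

    ```python
    import numpy as np

    def ald(leaf_positions_per_cp, beam_weight_per_cp=None):
        """leaf_positions_per_cp: one 1D array of ordered leaf-tip positions per
        control point; beam_weight_per_cp: fraction of MU per control point."""
        per_cp = np.array([np.mean(np.abs(np.diff(pos))) for pos in leaf_positions_per_cp])
        if beam_weight_per_cp is None:
            return per_cp.mean()                  # plain ALD
        w = np.asarray(beam_weight_per_cp, dtype=float)
        return np.sum(per_cp * w) / np.sum(w)     # delivery-weighted ALD(w)
    ```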

  10. Evaluation of a Chlamydia trachomatis-specific, commercial, real-time PCR for use with ocular swabs.

    PubMed

    Pickering, Harry; Holland, Martin J; Last, Anna R; Burton, Matthew J; Burr, Sarah E

    2018-02-20

    Trachoma, the leading infectious cause of blindness worldwide, is caused by conjunctival Chlamydia trachomatis infection. Trachoma is diagnosed clinically by observation of conjunctival inflammation and/or scarring; however, there is evidence that monitoring C. trachomatis infection may be required for elimination programmes. There are many commercial and 'in-house' nucleic acid amplification tests for the detection of C. trachomatis DNA, but the majority have not been validated for use with ocular swabs. This study evaluated a commercial assay, the Fast-Track Vaginal swab kit, using conjunctival samples from trachoma-endemic areas. An objective, biostatistical-based method for binary classification of continuous PCR data was developed, to limit potential user-bias in diagnostic settings. The Fast-Track Vaginal swab assay was run on 210 ocular swab samples from Guinea-Bissau and Tanzania. Fit of individual amplification curves to exponential or sigmoid models, derivative and second derivative of the curves and final fluorescence value were examined for utility in thresholding for determining positivity. The results from the Fast-Track Vaginal swab assay were evaluated against a commercial test (Amplicor CT/NG) and a non-commercial test (in-house droplet digital PCR), both of whose performance has previously been evaluated. Significant evidence of exponential amplification (R² > 0.99) and final fluorescence > 0.15 were combined for thresholding. This objective approach identified a population of positive samples; however, there was a subset of samples that amplified towards the end of the cycling protocol (at or later than 35 cycles), which were less clearly defined. The Fast-Track Vaginal swab assay showed good sensitivity against the commercial (95.71%) and non-commercial (97.18%) tests. Specificity was lower against both (90.00% and 96.55%, respectively). This study defined a simple, automated protocol for binary classification of continuous, real-time qPCR data, for use in an end-point diagnostic test. This method identified a population of positive samples; however, as with manual thresholding, a subset of samples that amplified towards the end of the cycling program was less easily classified. When used with ocular swabs, the Fast-Track Vaginal swab assay had good sensitivity for C. trachomatis detection, but lower specificity than the commercial and non-commercial assays it was evaluated against, possibly leading to false positives.
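
    A sketch of the two-criterion classification rule described above (Python with NumPy/SciPy; the exponential model, starting values and parameter names are assumptions, not the authors' code): a curve is called positive only if an exponential fit explains the amplification curve with R² > 0.99 and the final fluorescence exceeds 0.15.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def _exponential(cycle, a, b, c):
        return a * np.exp(b * cycle) + c

    def call_positive(fluorescence, r2_min=0.99, final_min=0.15):
        """Binary classification of one real-time qPCR amplification curve."""
        y = np.asarray(fluorescence, dtype=float)
        cycles = np.arange(y.size, dtype=float)
        try:
            popt, _ = curve_fit(_exponential, cycles, y,
                                p0=(1e-3, 0.1, y[0]), maxfev=5000)
        except RuntimeError:      # no convergent exponential fit -> negative
            return False
        residuals = y - _exponential(cycles, *popt)
        r2 = 1.0 - np.sum(residuals**2) / np.sum((y - y.mean())**2)
        return (r2 > r2_min) and (y[-1] > final_min)
    ```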

  11. Tracking the Spatiotemporal Neural Dynamics of Real-world Object Size and Animacy in the Human Brain.

    PubMed

    Khaligh-Razavi, Seyed-Mahdi; Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude

    2018-06-07

    Animacy and real-world size are properties that describe any object and thus bring basic order into our perception of the visual world. Here, we investigated how the human brain processes real-world size and animacy. For this, we applied representational similarity to fMRI and MEG data to yield a view of brain activity with high spatial and temporal resolutions, respectively. Analysis of fMRI data revealed that a distributed and partly overlapping set of cortical regions extending from occipital to ventral and medial temporal cortex represented animacy and real-world size. Within this set, parahippocampal cortex stood out as the region representing animacy and size stronger than most other regions. Further analysis of the detailed representational format revealed differences among regions involved in processing animacy. Analysis of MEG data revealed overlapping temporal dynamics of animacy and real-world size processing starting at around 150 msec and provided the first neuromagnetic signature of real-world object size processing. Finally, to investigate the neural dynamics of size and animacy processing simultaneously in space and time, we combined MEG and fMRI with a novel extension of MEG-fMRI fusion by representational similarity. This analysis revealed partly overlapping and distributed spatiotemporal dynamics, with parahippocampal cortex singled out as a region that represented size and animacy persistently when other regions did not. Furthermore, the analysis highlighted the role of early visual cortex in representing real-world size. A control analysis revealed that the neural dynamics of processing animacy and size were distinct from the neural dynamics of processing low-level visual features. Together, our results provide a detailed spatiotemporal view of animacy and size processing in the human brain.

  12. Learning an intrinsic-variable preserving manifold for dynamic visual tracking.

    PubMed

    Qiao, Hong; Zhang, Peng; Zhang, Bo; Zheng, Suiwu

    2010-06-01

    Manifold learning is a hot topic in the field of computer science, particularly since nonlinear dimensionality reduction based on manifold learning was proposed in Science in 2000. The work has achieved great success. The main purpose of current manifold-learning approaches is to search for independent intrinsic variables underlying high dimensional inputs which lie on a low dimensional manifold. In this paper, a new manifold is built up in the training step of the process, on which the input training samples are set to be close to each other if the values of their intrinsic variables are close to each other. Then, the process of dimensionality reduction is transformed into a procedure of preserving the continuity of the intrinsic variables. By utilizing the new manifold, the dynamic tracking of a human who can move and rotate freely is achieved. From the theoretical point of view, it is the first approach to transfer the manifold-learning framework to dynamic tracking. From the application point of view, a new and low dimensional feature for visual tracking is obtained and successfully applied to the real-time tracking of a free-moving object from a dynamic vision system. Experimental results from a dynamic tracking system which is mounted on a dynamic robot validate the effectiveness of the new algorithm.

  13. Track-to-track association for object matching in an inter-vehicle communication system

    NASA Astrophysics Data System (ADS)

    Yuan, Ting; Roth, Tobias; Chen, Qi; Breu, Jakob; Bogdanovic, Miro; Weiss, Christian A.

    2015-09-01

    Autonomous driving poses unique challenges for vehicle environment perception due to the complex driving environment the autonomous vehicle finds itself in and must differentiate from remote vehicles. Due to the inherent uncertainty of traffic environments and incomplete knowledge caused by sensor limitations, an autonomous driving system using only local onboard sensor information is generally not sufficient for reliable intelligent driving with guaranteed safety. In order to overcome limitations of the local (host) vehicle sensing system and to increase the likelihood of correct detections and classifications, collaborative information from cooperative remote vehicles could substantially facilitate the effectiveness of the vehicle decision-making process. The Dedicated Short Range Communication (DSRC) system provides a powerful inter-vehicle wireless communication channel to enhance the host vehicle's environment-perceiving capability with the aid of transmitted information from remote vehicles. However, there is a major challenge before one can fuse the DSRC-transmitted remote information and host vehicle Radar-observed information (in the present case): the remote DSRC data must be correctly associated with the corresponding onboard Radar data; namely, an object matching problem. Direct raw data association (i.e., measurement-to-measurement association - M2MA) is straightforward but error-prone, due to the inherent uncertain nature of the observation data. The uncertainties could lead to serious difficulty in matching decisions, especially when using non-stationary data. In this study, we present an object matching algorithm based on track-to-track association (T2TA) and evaluate the proposed approach with prototype vehicles in real traffic scenarios. To fully exploit the potential of the DSRC system, only GPS position data from the remote vehicle are used in the fusion center (at the host vehicle), i.e., we try to get what we need from the least amount of information; additional feature information can help the data association but is not currently considered. Compared to M2MA, the benefits of the T2TA object matching approach are: i) tracks taking into account important statistical information can provide more reliable inference results; ii) the track-formed smoothed trajectories can be used for easier shape matching; iii) each local vehicle can design its own tracker and send only tracks to the fusion center to alleviate communication constraints. A real traffic study with different driving environments, based on a statistical hypothesis test, shows promising object matching results of significant practical implications.
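
    The association step of T2TA is commonly formulated as a statistical gate on the difference between two track estimates. A minimal sketch (Python with NumPy/SciPy; this is a generic chi-square gate, not the exact hypothesis test used in the paper):

    ```python
    import numpy as np
    from scipy.stats import chi2

    def tracks_match(x_host, P_host, x_remote, P_remote, prob=0.95):
        """Accept the pairing if the squared Mahalanobis distance of the state
        difference falls inside a chi-square gate (cross-covariance neglected)."""
        d = np.asarray(x_host, dtype=float) - np.asarray(x_remote, dtype=float)
        S = np.asarray(P_host, dtype=float) + np.asarray(P_remote, dtype=float)
        m2 = d @ np.linalg.solve(S, d)
        return m2 <= chi2.ppf(prob, df=d.size)
    ```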

  14. Development of Waypoint Planning Tool in Response to NASA Field Campaign Challenges

    NASA Technical Reports Server (NTRS)

    He, Matt; Hardin, Danny; Conover, Helen; Graves, Sara; Meyer, Paul; Blakeslee, Richard; Goodman, Michael

    2012-01-01

    Airborne real time observations are a major component of NASA's Earth Science research and satellite ground validation studies. For mission scientists, planning a research aircraft mission within the context of meeting the science objectives is a complex task because it requires real time situational awareness of the weather conditions that affect the aircraft track. Multiple aircrafts are often involved in NASA field campaigns. The coordination of the aircrafts with satellite overpasses, other airplanes and the constantly evolving, dynamic weather conditions often determines the success of the campaign. A flight planning tool is needed to provide situational awareness information to the mission scientists, and help them plan and modify the flight tracks. Scientists at the University of Alabama-Huntsville and the NASA Marshall Space Flight Center developed the Waypoint Planning Tool, an interactive software tool that enables scientists to develop their own flight plans (also known as waypoints) with point-and-click mouse capabilities on a digital map filled with real time raster and vector data. The development of this Waypoint Planning Tool demonstrates the significance of mission support in responding to the challenges presented during NASA field campaigns. Analysis during and after each campaign helped identify both issues and new requirements, and initiated the next wave of development. Currently the Waypoint Planning Tool has gone through three rounds of development and analysis processes. The development of this waypoint tool is directly affected by the technology advances on GIS/Mapping technologies. From the standalone Google Earth application and simple KML functionalities, to Google Earth Plugin and Java Web Start/Applet on web platform, and to the rising open source GIS tools with new JavaScript frameworks, the Waypoint Planning Tool has entered its third phase of technology advancement. The newly innovated, cross-platform, modular designed JavaScript-controlled Way Point Tool is planned to be integrated with NASA Airborne Science Mission Tool Suite. Adapting new technologies for the Waypoint Planning Tool ensures its success in helping scientists reach their mission objectives. This presentation will discuss the development processes of the Waypoint Planning Tool in responding to field campaign challenges, identify new information technologies, and describe the capabilities and features of the Waypoint Planning Tool with the real time aspect, interactive nature, and the resultant benefits to the airborne science community.

  15. Development of Way Point Planning Tool in Response to NASA Field Campaign Challenges

    NASA Astrophysics Data System (ADS)

    He, M.; Hardin, D. M.; Conover, H.; Graves, S. J.; Meyer, P.; Blakeslee, R. J.; Goodman, M. L.

    2012-12-01

    Airborne real time observations are a major component of NASA's Earth Science research and satellite ground validation studies. For mission scientists, planning a research aircraft mission within the context of meeting the science objectives is a complex task because it requires real time situational awareness of the weather conditions that affect the aircraft track. Multiple aircrafts are often involved in NASA field campaigns. The coordination of the aircrafts with satellite overpasses, other airplanes and the constantly evolving, dynamic weather conditions often determines the success of the campaign. A flight planning tool is needed to provide situational awareness information to the mission scientists, and help them plan and modify the flight tracks. Scientists at the University of Alabama-Huntsville and the NASA Marshall Space Flight Center developed the Waypoint Planning Tool, an interactive software tool that enables scientists to develop their own flight plans (also known as waypoints) with point-and-click mouse capabilities on a digital map filled with real time raster and vector data. The development of this Waypoint Planning Tool demonstrates the significance of mission support in responding to the challenges presented during NASA field campaigns. Analysis during and after each campaign helped identify both issues and new requirements, and initiated the next wave of development. Currently the Waypoint Planning Tool has gone through three rounds of development and analysis processes. The development of this waypoint tool is directly affected by the technology advances on GIS/Mapping technologies. From the standalone Google Earth application and simple KML functionalities, to Google Earth Plugin and Java Web Start/Applet on web platform, and to the rising open source GIS tools with new JavaScript frameworks, the Waypoint Planning Tool has entered its third phase of technology advancement. The newly innovated, cross-platform, modular designed JavaScript-controlled Way Point Tool is planned to be integrated with NASA Airborne Science Mission Tool Suite. Adapting new technologies for the Waypoint Planning Tool ensures its success in helping scientists reach their mission objectives. This presentation will discuss the development processes of the Waypoint Planning Tool in responding to field campaign challenges, identify new information technologies, and describe the capabilities and features of the Waypoint Planning Tool with the real time aspect, interactive nature, and the resultant benefits to the airborne science community.

  16. Self-paced model learning for robust visual tracking

    NASA Astrophysics Data System (ADS)

    Huang, Wenhui; Gu, Jason; Ma, Xin; Li, Yibin

    2017-01-01

    In visual tracking, learning a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and the frequency of model updating, which contains many details that could affect the tracking results. Self-paced learning (SPL) has recently been attracting considerable interest in the fields of machine learning and computer vision. SPL is inspired by the learning principle underlying the cognitive process of humans, whose learning process is generally from easier samples to more complex aspects of a task. We propose a tracking method that integrates the learning paradigm of SPL into visual tracking, so reliable samples can be automatically selected for model learning. In contrast to many existing model learning strategies in visual tracking, we discover the missing link between sample selection and model learning, which are combined into a single objective function in our approach. Sample weights and model parameters can be learned by minimizing this single objective function. Additionally, to solve the real-valued learning weight of samples, an error-tolerant self-paced function that considers the characteristics of visual tracking is proposed. We demonstrate the robustness and efficiency of our tracker on a recent tracking benchmark data set with 50 video sequences.
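
    The alternation between sample selection and model learning can be sketched as follows (Python/NumPy; this shows the standard hard self-paced weighting, not the paper's error-tolerant self-paced function, and the loss/fit callables are placeholders):

    ```python
    import numpy as np

    def spl_weights(losses, age):
        """Hard self-paced weights: keep a sample only if its loss is below the
        current age parameter, so easier samples are learned first."""
        return (np.asarray(losses, dtype=float) < age).astype(float)

    def spl_step(model, samples, loss_fn, fit_fn, age):
        """One alternation: fix the model to pick sample weights, then refit the
        model on the selected (weighted) samples."""
        v = spl_weights(loss_fn(model, samples), age)
        return fit_fn(samples, v), v
    ```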

  17. A Novel Real-Time Path Servo Control of a Hardware-in-the-Loop for a Large-Stroke Asymmetric Rod-Less Pneumatic System under Variable Loads.

    PubMed

    Lin, Hao-Ting

    2017-06-04

    This project aims to develop a novel large stroke asymmetric pneumatic servo system of a hardware-in-the-loop for path tracking control under variable loads based on the MATLAB Simulink real-time system. High pressure compressed air provided by the air compressor is utilized for the pneumatic proportional servo valve to drive the large stroke asymmetric rod-less pneumatic actuator. Due to the pressure differences between two chambers, the pneumatic actuator will operate. The highly nonlinear mathematical models of the large stroke asymmetric pneumatic system were analyzed and developed. The functional approximation technique based on the sliding mode controller (FASC) is developed as a controller to solve the uncertain time-varying nonlinear system. The MATLAB Simulink real-time system was the main control unit of the hardware-in-the-loop system, establishing driver blocks for analog and digital I/O, a linear encoder, a CPU and a large stroke asymmetric pneumatic rod-less system. By the position sensor, the position signals of the cylinder will be measured immediately. The measured signals will be viewed as the feedback signals of the pneumatic servo system for the study of real-time positioning control and path tracking control. Finally, real-time control of a large stroke asymmetric pneumatic servo system with measuring system, a large stroke asymmetric pneumatic servo system, data acquisition system and the control strategy software will be implemented. Thus, high position precision and trajectory tracking performance of the large stroke asymmetric pneumatic servo system are realized. Experimental results show that fifth order paths in various strokes and the sine wave path are successfully implemented in the test rig. Results for variable loads at different angles were also demonstrated experimentally.

  18. A Novel Real-Time Path Servo Control of a Hardware-in-the-Loop for a Large-Stroke Asymmetric Rod-Less Pneumatic System under Variable Loads

    PubMed Central

    Lin, Hao-Ting

    2017-01-01

    This project aims to develop a novel large stroke asymmetric pneumatic servo system of a hardware-in-the-loop for path tracking control under variable loads based on the MATLAB Simulink real-time system. High pressure compressed air provided by the air compressor is utilized for the pneumatic proportional servo valve to drive the large stroke asymmetric rod-less pneumatic actuator. Due to the pressure differences between two chambers, the pneumatic actuator will operate. The highly nonlinear mathematical models of the large stroke asymmetric pneumatic system were analyzed and developed. The functional approximation technique based on the sliding mode controller (FASC) is developed as a controller to solve the uncertain time-varying nonlinear system. The MATLAB Simulink real-time system was the main control unit of the hardware-in-the-loop system, establishing driver blocks for analog and digital I/O, a linear encoder, a CPU and a large stroke asymmetric pneumatic rod-less system. By the position sensor, the position signals of the cylinder will be measured immediately. The measured signals will be viewed as the feedback signals of the pneumatic servo system for the study of real-time positioning control and path tracking control. Finally, real-time control of a large stroke asymmetric pneumatic servo system with measuring system, a large stroke asymmetric pneumatic servo system, data acquisition system and the control strategy software will be implemented. Thus, high position precision and trajectory tracking performance of the large stroke asymmetric pneumatic servo system are realized. Experimental results show that fifth order paths in various strokes and the sine wave path are successfully implemented in the test rig. Results for variable loads at different angles were also demonstrated experimentally. PMID:28587220

  19. Evaluating a robust contour tracker on echocardiographic sequences.

    PubMed

    Jacob, G; Noble, J A; Mulet-Parada, M; Blake, A

    1999-03-01

    In this paper we present an evaluation of a robust visual image tracker on echocardiographic image sequences. We show how the tracking framework can be customized to define an appropriate shape space that describes heart shape deformations that can be learnt from a training data set. We also investigate energy-based temporal boundary enhancement methods to improve image feature measurement. Results are presented demonstrating real-time tracking on real normal heart motion data sequences and abnormal synthesized and real heart motion data sequences. We conclude by discussing some of our current research efforts.

  20. Driver face tracking using semantics-based feature of eyes on single FPGA

    NASA Astrophysics Data System (ADS)

    Yu, Ying-Hao; Chen, Ji-An; Ting, Yi-Siang; Kwok, Ngaiming

    2017-06-01

    Tracking the driver's face is essential for driving safety control. Such systems are usually designed with complicated algorithms that recognize the driver's face by means of powerful computers. The design problem concerns not only the detection rate but also component damage in harsh environments caused by vibration, heat, and humidity. A feasible strategy to counteract these damages is to integrate the entire system into a single chip in order to achieve minimum installation dimension, weight, power consumption, and exposure to air. Meanwhile, an effective methodology is also indispensable to overcome the dilemma between the low computing capability and the real-time performance required on a low-end chip. In this paper, a novel driver face tracking system is proposed that employs semantics-based vague image representation (SVIR) for minimum hardware resource usage on an FPGA, while real-time performance is guaranteed at the same time. Our experimental results indicate that the proposed face tracking system is viable and promising for future smart car design.

  1. A novel track-before-detect algorithm based on optimal nonlinear filtering for detecting and tracking infrared dim target

    NASA Astrophysics Data System (ADS)

    Tian, Yuexin; Gao, Kun; Liu, Ying; Han, Lu

    2015-08-01

    Aiming at the nonlinear and non-Gaussian features of real infrared scenes, an optimal nonlinear filtering based algorithm for the infrared dim target track-before-detect application is proposed. It uses nonlinear theory to construct the state and observation models and uses the spectral-separation-based Wiener chaos expansion method to solve the stochastic differential equation of the constructed models. To improve computational efficiency, the most time-consuming operations, which are independent of the observation data, are processed before the observation stage; the remaining observation-dependent computations are fast and implemented subsequently. Simulation results show that the algorithm possesses excellent detection performance and is suitable for real-time processing.

  2. SU-G-BRA-02: Development of a Learning Based Block Matching Algorithm for Ultrasound Tracking in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shepard, A; Bednarz, B

    Purpose: To develop an ultrasound learning-based tracking algorithm with the potential to provide real-time motion traces of anatomy-based fiducials that may aid in the effective delivery of external beam radiation. Methods: The algorithm was developed in Matlab R2015a and consists of two main stages: reference frame selection, and localized block matching. Immediately following frame acquisition, a normalized cross-correlation (NCC) similarity metric is used to determine a reference frame most similar to the current frame from a series of training set images that were acquired during a pretreatment scan. Segmented features in the reference frame provide the basis for the localized block matching to determine the feature locations in the current frame. The boundary points of the reference frame segmentation are used as the initial locations for the block matching and NCC is used to find the most similar block in the current frame. The best matched block locations in the current frame comprise the updated feature boundary. The algorithm was tested using five features from two sets of ultrasound patient data obtained from MICCAI 2014 CLUST. Due to the lack of a training set associated with the image sequences, the first 200 frames of the image sets were considered a valid training set for preliminary testing, and tracking was performed over the remaining frames. Results: Tracking of the five vessel features resulted in an average tracking error of 1.21 mm relative to predefined annotations. The average analysis rate was 15.7 FPS with analysis for one of the two patients reaching real-time speeds. Computations were performed on an i5-3230M at 2.60 GHz. Conclusion: Preliminary tests show tracking errors comparable with similar algorithms at close to real-time speeds. Extension of the work onto a GPU platform has the potential to achieve real-time performance, making tracking for therapy applications a feasible option. This work is partially funded by NIH grant R01CA190298.
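
    A sketch of the localized block-matching step for a single boundary point (Python with OpenCV; the block and search-window sizes are illustrative, grayscale frames and the zero-mean NCC variant are assumed):

    ```python
    import cv2

    def match_boundary_point(ref_frame, cur_frame, pt, block=15, search=41):
        """Find the best NCC match for one reference boundary point, assuming the
        point lies far enough from the image border for full-size windows."""
        x, y = pt
        hb, hs = block // 2, search // 2
        template = ref_frame[y - hb:y + hb + 1, x - hb:x + hb + 1]
        window = cur_frame[y - hs:y + hs + 1, x - hs:x + hs + 1]
        score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(score)
        # map the best template placement back to image coordinates (block centre)
        return (x - hs + max_loc[0] + hb, y - hs + max_loc[1] + hb)
    ```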

  3. Contribution of explosion and future collision fragments to the orbital debris environment

    NASA Technical Reports Server (NTRS)

    Su, S.-Y.; Kessler, D. J.

    1985-01-01

    The time evolution of the near-earth man-made orbital debris environment modeled by numerical simulation is presented in this paper. The model starts with a data base of orbital debris objects which are tracked by the NORAD ground radar system. The current untrackable small objects are assumed to result from explosions and are predicted from data collected from a ground explosion experiment. Future collisions between earth orbiting objects are handled by the Monte Carlo method to simulate the range of collision possibilities that may occur in the real world. The collision fragmentation process between debris objects is calculated using an empirical formula derived from a laboratory spacecraft impact experiment to obtain the number versus size distribution of the newly generated debris population. The evolution of the future space debris environment is compared with the natural meteoroid background for the relative spacecraft penetration hazard.

  4. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    ERIC Educational Resources Information Center

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  5. Real-time real-sky dual-conjugate adaptive optics experiment

    NASA Astrophysics Data System (ADS)

    Knutsson, Per; Owner-Petersen, Mette

    2006-06-01

    The current status of a real-time real-sky dual-conjugate adaptive optics experiment is presented. This experiment is a follow-up on a lab experiment at Lund Observatory that demonstrated dual-conjugate adaptive optics on a static atmosphere. The setup is to be placed at Lund Observatory. This means that the setup will be available 24h a day and does not have to share time with other instruments. The optical design of the experiment is finalized. A siderostat will be used to track the guide object and all other optical components are placed on an optical table. A small telescope, 35 cm aperture, is used and following this a tip-tilt mirror and two deformable mirrors are placed. The wave-front sensor is a Shack-Hartmann sensor using a SciMeasure Li'l Joe CCD39 camera system. The maximum update rate of the setup will be 0.5 kHz and the control system will be running under Linux. The effective wavelength will be 750 nm. All components in the setup have been acquired and the completion of the setup is underway. Collaborating partners in this project are the Applied Optics Group at National University of Ireland, Galway and the Swedish Defense Research Agency.

  6. Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm

    PubMed Central

    Tombu, Michael

    2014-01-01

    People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target–distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking—one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone. PMID:21264704
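
    The stimulus geometry can be written as an epicycle: each object rotates about a local centre that itself rotates about the screen centre, so speed and proximity can be varied independently. A small sketch (Python/NumPy; the radii and angular rates are illustrative, not the study's values):

    ```python
    import numpy as np

    def planet_moon_position(t, R=200.0, w_global=0.5, r=40.0, w_local=2.0, phase=0.0):
        """Screen position of one object on a rotating target-distractor pair."""
        cx = R * np.cos(w_global * t + phase)   # pair centre orbits the screen centre
        cy = R * np.sin(w_global * t + phase)
        x = cx + r * np.cos(w_local * t)        # object orbits the pair centre
        y = cy + r * np.sin(w_local * t)
        return x, y
    ```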

  7. Automated tracking of lava lake level using thermal images at Kīlauea Volcano, Hawai’i

    USGS Publications Warehouse

    Patrick, Matthew R.; Swanson, Don; Orr, Tim R.

    2016-01-01

    Tracking the level of the lava lake in Halema‘uma‘u Crater, at the summit of Kīlauea Volcano, Hawai’i, is an essential part of monitoring the ongoing eruption and forecasting potentially hazardous changes in activity. We describe a simple automated image processing routine that analyzes continuously-acquired thermal images of the lava lake and measures lava level. The method uses three image segmentation approaches, based on edge detection, short-term change analysis, and composite temperature thresholding, to identify and track the lake margin in the images. These relative measurements from the images are periodically calibrated with laser rangefinder measurements to produce real-time estimates of lake elevation. Continuous, automated tracking of the lava level has been an important tool used by the U.S. Geological Survey’s Hawaiian Volcano Observatory since 2012 in real-time operational monitoring of the volcano and its hazard potential.
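
    One of the three segmentation approaches, composite temperature thresholding, can be sketched as follows (Python with OpenCV/NumPy; the threshold, the largest-component heuristic and the function names are illustrative, not the USGS routine):

    ```python
    import cv2
    import numpy as np

    def lake_margin(thermal_image, temp_threshold):
        """Outline the lava lake as the largest region above a temperature threshold."""
        mask = (np.asarray(thermal_image) >= temp_threshold).astype(np.uint8)
        # OpenCV 4.x return signature assumed
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if not contours:
            return None
        lake = max(contours, key=cv2.contourArea)   # assume the lake is the largest hot region
        return lake.reshape(-1, 2)                  # (x, y) pixels of the lake margin
    ```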

  8. Objects Architecture: A Comprehensive Design Approach for Real-Time, Distributed, Fault-Tolerant, Reactive Operating Systems.

    DTIC Science & Technology

    1987-09-01

    ...a real-time operating system should be efficient from the real-time point... [5,8]) system naming scheme. 3.2 Protecting Objects: Real-time embedded systems usually neglect protection mechanisms. However, a real-time operating system cannot... allocation mechanism should adhere to application constraints. This strong relationship between a real-time operating system and the application...

  9. Novel intelligent real-time position tracking system using FPGA and fuzzy logic.

    PubMed

    Soares dos Santos, Marco P; Ferreira, J A F

    2014-03-01

    The main aim of this paper is to test if FPGAs are able to achieve better position tracking performance than software-based soft real-time platforms. For comparison purposes, the same controller design was implemented in these architectures. A Multi-state Fuzzy Logic controller (FLC) was implemented both in a Xilinx(®) Virtex-II FPGA (XC2v1000) and in a soft real-time platform NI CompactRIO(®)-9002. The same sampling time was used. The comparative tests were conducted using a servo-pneumatic actuation system. Steady-state errors lower than 4 μm were reached for an arbitrary vertical positioning of a 6.2 kg mass when the controller was embedded into the FPGA platform. Performance gains up to 16 times in the steady-state error, up to 27 times in the overshoot and up to 19.5 times in the settling time were achieved by using the FPGA-based controller over the software-based FLC controller. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  10. SU-G-JeP3-10: Update On a Real-Time Treatment Guidance System Using An IR Navigation System for Pleural PDT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, M; Penjweini, R; Zhu, T

    Purpose: Photodynamic therapy (PDT) is used in conjunction with surgical debulking of tumorous tissue during treatment for pleural mesothelioma. One of the key components of effective PDT is uniform light distribution. Currently, light is monitored with 8 isotropic light detectors that are placed at specific locations inside the pleural cavity. A tracking system with real-time feedback software can be utilized to improve the uniformity of light in addition to the existing detectors. Methods: An infrared (IR) tracking camera is used to monitor the movement of the light source. The same system determines the pleural geometry of the treatment area. Software upgrades allow visualization of the pleural cavity as a two-dimensional volume. The treatment delivery wand was upgraded for ease of light delivery while incorporating the IR system. Isotropic detector locations are also displayed. Data from the tracking system is used to calculate the light fluence rate delivered. This data is also compared with in vivo data collected via the isotropic detectors. Furthermore, treatment volume information will be used to form light dose volume histograms of the pleural cavity. Results: In a phantom study, the light distribution was improved by using real-time guidance compared to the distribution when using detectors without guidance. With the tracking system, 2D data can be collected regarding light fluence rather than just the 8 discrete locations inside the pleural cavity. Light fluence distribution on the entire cavity can be calculated at every time in the treatment. Conclusion: The IR camera has been used successfully during pleural PDT patient treatment to track the motion of the light source and provide real-time display of 2D light fluence. It is possible to use the feedback system to deliver a more uniform dose of light throughout the pleural cavity.

  11. Real-time tracking of liver motion and deformation using a flexible needle

    PubMed Central

    Lei, Peng; Moeslein, Fred; Wood, Bradford J.

    2012-01-01

    Purpose A real-time 3D image guidance system is needed to facilitate treatment of liver masses using radiofrequency ablation, for example. This study investigates the feasibility and accuracy of using an electromagnetically tracked flexible needle inserted into the liver to track liver motion and deformation. Methods This proof-of-principle study was conducted both ex vivo and in vivo with a CT scanner taking the place of an electromagnetic tracking system as the spatial tracker. Deformations of excised livers were artificially created by altering the shape of the stage on which the excised livers rested. Free breathing or controlled ventilation created deformations of live swine livers. The positions of the needle and test targets were determined through CT scans. The shape of the needle was reconstructed using data simulating multiple embedded electromagnetic sensors. Displacement of liver tissues in the vicinity of the needle was derived from the change in the reconstructed shape of the needle. Results The needle shape was successfully reconstructed with tracking information of two on-needle points. Within 30 mm of the needle, the registration error of implanted test targets was 2.4 ± 1.0 mm ex vivo and 2.8 ± 1.5 mm in vivo. Conclusion A practical approach was developed to measure the motion and deformation of the liver in real time within a region of interest. The approach relies on redesigning the often-used seeker needle to include embedded electromagnetic tracking sensors. With the nonrigid motion and deformation information of the tracked needle, a single- or multimodality 3D image of the intraprocedural liver, now clinically obtained with some delay, can be updated continuously to monitor intraprocedural changes in hepatic anatomy. This capability may be useful in radiofrequency ablation and other percutaneous ablative procedures. PMID:20700662

  12. Super-resolution imaging applied to moving object tracking

    NASA Astrophysics Data System (ADS)

    Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi

    2017-10-01

    Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. Visual quality and precision of the tracked target are highly desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise. The reasons include low-quality video, system noise, small objects, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequences; this was done by cropping several frames or all of the frames. The second step is tracking on the super-resolved images. Super-resolution is a technique for obtaining high-resolution images from low-resolution images. In this research a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation time. The method used for tracking is Camshift, whose advantage is a simple calculation based on the HSV color histogram that remains usable when the color of the object varies. The computational complexity and large memory requirements of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and in good lighting conditions.
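
    The tracking step can be sketched with OpenCV's CamShift on a hue histogram (Python; the super-resolution step is assumed to have already produced the frames, and the window/parameter choices are illustrative):

    ```python
    import cv2

    def track_camshift(frames, init_window):
        """Track one target through a list of BGR frames, returning rotated boxes."""
        x, y, w, h = init_window
        hsv_roi = cv2.cvtColor(frames[0][y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])   # hue histogram
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        window, boxes = init_window, []
        for frame in frames[1:]:
            back_proj = cv2.calcBackProject([cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)],
                                            [0], hist, [0, 180], 1)
            rot_box, window = cv2.CamShift(back_proj, window, term)
            boxes.append(rot_box)
        return boxes
    ```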

  13. The first clinical implementation of electromagnetic transponder-guided MLC tracking.

    PubMed

    Keall, Paul J; Colvill, Emma; O'Brien, Ricky; Ng, Jin Aun; Poulsen, Per Rugaard; Eade, Thomas; Kneebone, Andrew; Booth, Jeremy T

    2014-02-01

    We report on the clinical process, quality assurance, and geometric and dosimetric results of the first clinical implementation of electromagnetic transponder-guided MLC tracking which occurred on 28 November 2013 at the Northern Sydney Cancer Centre. An electromagnetic transponder-based positioning system (Calypso) was modified to send the target position output to in-house-developed MLC tracking code, which adjusts the leaf positions to optimally align the treatment beam with the real-time target position. Clinical process and quality assurance procedures were developed and performed. The first clinical implementation of electromagnetic transponder-guided MLC tracking was for a prostate cancer patient being treated with dual-arc VMAT (RapidArc). For the first fraction of the first patient treatment of electromagnetic transponder-guided MLC tracking we recorded the in-room time and transponder positions, and performed dose reconstruction to estimate the delivered dose and also the dose received had MLC tracking not been used. The total in-room time was 21 min with 2 min of beam delivery. No additional time was needed for MLC tracking and there were no beam holds. The average prostate position from the initial setup was 1.2 mm, mostly an anterior shift. Dose reconstruction analysis of the delivered dose with MLC tracking showed similar isodose and target dose volume histograms to the planned treatment and a 4.6% increase in the fractional rectal V60. Dose reconstruction without motion compensation showed a 30% increase in the fractional rectal V60 from that planned, even for the small motion. The real-time beam-target correction method, electromagnetic transponder-guided MLC tracking, has been translated to the clinic. This achievement represents a milestone in improving geometric and dosimetric accuracy, and by inference treatment outcomes, in cancer radiotherapy.

  14. The first clinical implementation of electromagnetic transponder-guided MLC tracking

    PubMed Central

    Keall, Paul J.; Colvill, Emma; O’Brien, Ricky; Ng, Jin Aun; Poulsen, Per Rugaard; Eade, Thomas; Kneebone, Andrew; Booth, Jeremy T.

    2014-01-01

    Purpose: We report on the clinical process, quality assurance, and geometric and dosimetric results of the first clinical implementation of electromagnetic transponder-guided MLC tracking which occurred on 28 November 2013 at the Northern Sydney Cancer Centre. Methods: An electromagnetic transponder-based positioning system (Calypso) was modified to send the target position output to in-house-developed MLC tracking code, which adjusts the leaf positions to optimally align the treatment beam with the real-time target position. Clinical process and quality assurance procedures were developed and performed. The first clinical implementation of electromagnetic transponder-guided MLC tracking was for a prostate cancer patient being treated with dual-arc VMAT (RapidArc). For the first fraction of the first patient treatment of electromagnetic transponder-guided MLC tracking we recorded the in-room time and transponder positions, and performed dose reconstruction to estimate the delivered dose and also the dose received had MLC tracking not been used. Results: The total in-room time was 21 min with 2 min of beam delivery. No additional time was needed for MLC tracking and there were no beam holds. The average prostate position from the initial setup was 1.2 mm, mostly an anterior shift. Dose reconstruction analysis of the delivered dose with MLC tracking showed similar isodose and target dose volume histograms to the planned treatment and a 4.6% increase in the fractional rectal V60. Dose reconstruction without motion compensation showed a 30% increase in the fractional rectal V60 from that planned, even for the small motion. Conclusions: The real-time beam-target correction method, electromagnetic transponder-guided MLC tracking, has been translated to the clinic. This achievement represents a milestone in improving geometric and dosimetric accuracy, and by inference treatment outcomes, in cancer radiotherapy. PMID:24506591

  15. Activity-based exploitation of Full Motion Video (FMV)

    NASA Astrophysics Data System (ADS)

    Kant, Shashi

    2012-06-01

    Video has been a game-changer in how US forces are able to find, track and defeat their adversaries. With millions of minutes of video being generated from an increasing number of sensor platforms, the DOD has stated that the rapid increase in video is overwhelming its analysts. The manpower required to view and garner usable information from the flood of video is unaffordable, especially in light of current fiscal restraints. "Search" within full-motion video has traditionally relied on human tagging of content, and on video metadata, to provide filtering and locate segments of interest in the context of an analyst query. Our approach utilizes novel machine vision to index FMV, using object recognition and tracking together with event and activity detection. This enables FMV exploitation in real time, as well as forensic look-back within archives. It can help get the most information out of video sensor collection, focus the attention of overburdened analysts, form connections in activity over time, and conserve national fiscal resources in exploiting FMV.

  16. Real-time x-ray fluoroscopy-based catheter detection and tracking for cardiac electrophysiology interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma Yingliang; Housden, R. James; Razavi, Reza

    2013-07-15

    Purpose: X-ray fluoroscopically guided cardiac electrophysiology (EP) procedures are commonly carried out to treat patients with arrhythmias. X-ray images have poor soft tissue contrast and, for this reason, overlay of a three-dimensional (3D) roadmap derived from preprocedural volumetric images can be used to add anatomical information. It is useful to know the position of the catheter electrodes relative to the cardiac anatomy, for example, to record ablation therapy locations during atrial fibrillation therapy. Also, the electrode positions of the coronary sinus (CS) catheter or lasso catheter can be used for road map motion correction. Methods: In this paper, the authors present a novel unified computational framework for image-based catheter detection and tracking without any user interaction. The proposed framework includes fast blob detection, shape-constrained searching and model-based detection. In addition, catheter tracking methods were designed based on the customized catheter models input from the detection method. Three real-time detection and tracking methods are derived from the computational framework to detect or track the three most common types of catheters in EP procedures: the ablation catheter, the CS catheter, and the lasso catheter. Since the proposed methods use the same blob detection method to extract key information from x-ray images, the ablation, CS, and lasso catheters can be detected and tracked simultaneously in real-time. Results: The catheter detection methods were tested on 105 different clinical fluoroscopy sequences taken from 31 clinical procedures. Two-dimensional (2D) detection errors of 0.50 ± 0.29, 0.92 ± 0.61, and 0.63 ± 0.45 mm as well as success rates of 99.4%, 97.2%, and 88.9% were achieved for the CS catheter, ablation catheter, and lasso catheter, respectively. With the tracking method, accuracies were increased to 0.45 ± 0.28, 0.64 ± 0.37, and 0.53 ± 0.38 mm and success rates increased to 100%, 99.2%, and 96.5% for the CS, ablation, and lasso catheters, respectively. Subjective clinical evaluation by three experienced electrophysiologists showed that the detection and tracking results were clinically acceptable. Conclusions: The proposed detection and tracking methods are automatic and can detect and track CS, ablation, and lasso catheters simultaneously and in real-time. The accuracy of the proposed methods is sub-mm and the methods are robust toward low-dose x-ray fluoroscopic images, which are mainly used during EP procedures to maintain low radiation dose.

  17. Fully Distributed Monitoring Architecture Supporting Multiple Trackees and Trackers in Indoor Mobile Asset Management Application

    PubMed Central

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-01-01

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform in real time. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture was also evaluated. PMID:24662407

  18. A robust and non-obtrusive automatic event tracking system for operating room management to improve patient care.

    PubMed

    Huang, Albert Y; Joerger, Guillaume; Salmon, Remi; Dunkin, Brian; Sherman, Vadim; Bass, Barbara L; Garbey, Marc

    2016-08-01

    Optimization of OR management is a complex problem as each OR has different procedures throughout the day, inevitably resulting in scheduling delays, variations in time durations and overall suboptimal performance. There exists a need for a system that automatically tracks procedural progress in real time in the OR. This would allow for efficient monitoring of operating room states and help target sources of inefficiency and points of improvement. We placed three wireless sensors (floor-mounted pressure sensor, ventilator-mounted bellows motion sensor and ambient light detector, and a general room motion detector) in two ORs at our institution and tracked cases 24 h a day for over 4 months. We collected data on 238 total cases (107 laparoscopic cases). A total of 176 turnover times were also captured, and we found that the average turnover time between cases was 35 min while the institutional goal was 30 min. Deeper examination showed that 38% of laparoscopic cases had some aspect of suboptimal activity, with the time between extubation and patient exiting the OR being the biggest contributor (16%). Our automated system allows for robust, wireless real-time OR monitoring as well as data collection and retrospective data analyses. We plan to continue expanding our system and to project the data in real time for all OR personnel to see. At the same time, we plan on adding key pieces of technology such as RFID and other radio-frequency systems to track patients and physicians to further increase efficiency and patient safety.

  19. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are a high detection accuracy and an adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to achieve the two requirements together by implementing an accurate and time-efficient tracking system. In this paper, an eye-to-hand visual system that has the ability to automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on the use of a colour segmentation method (Delta E) with the CHT algorithm to find the proper colour of the tracked target; the target was attached to the six-degree-of-freedom (DOF) robot end-effector that performs a pick-and-place task. Two eye-to-hand cameras, each with an image averaging filter, are used to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed for the calculation of the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot with an overall tracking error of 0.25 mm, and the effectiveness of the CRCHT technique in saving up to 60% of the overall time required for image processing. PMID:28067860
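
    To make the colour-plus-CHT idea more tangible, the sketch below combines a simple Delta-E colour mask with OpenCV's HoughCircles inside a search window, roughly in the spirit of the CRCHT step. It is an assumption-laden illustration rather than the authors' code; the thresholds, window handling and colour scaling are placeholders.

    ```python
    # Illustrative sketch: colour segmentation (approximate Delta E in CIELAB) followed by a
    # Circular Hough Transform restricted to a region of interest.  All thresholds are assumed.
    import cv2
    import numpy as np

    def find_ball(frame_bgr, target_lab, roi, delta_e_max=25.0):
        """Locate a spherical target of known colour inside roi = (x, y, w, h)."""
        x, y, w, h = roi
        patch = frame_bgr[y:y + h, x:x + w]
        lab = cv2.cvtColor(patch, cv2.COLOR_BGR2LAB).astype(np.float32)
        delta_e = np.linalg.norm(lab - np.array(target_lab, np.float32), axis=2)
        mask = ((delta_e < delta_e_max) * 255).astype(np.uint8)    # colour segmentation
        mask = cv2.medianBlur(mask, 5)
        circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                                   param1=100, param2=15, minRadius=5, maxRadius=60)
        if circles is None:
            return None
        cx, cy, r = circles[0][0]
        return (x + cx, y + cy, r)      # centre and radius in full-frame coordinates
    ```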

  20. Moving Object Detection in Heterogeneous Conditions in Embedded Systems.

    PubMed

    Garbo, Alessandro; Quer, Stefano

    2017-07-01

    This paper presents a system for moving object exposure, focusing on pedestrian detection, in external, unfriendly, and heterogeneous environments. The system manipulates and accurately merges information coming from subsequent video frames, making small computational efforts in each single frame. Its main characterizing feature is to combine several well-known movement detection and tracking techniques, and to orchestrate them in a smart way to obtain good results in diversified scenarios. It uses dynamically adjusted thresholds to characterize different regions of interest, and it also adopts techniques to efficiently track movements, and detect and correct false positives. Accuracy and reliability mainly depend on the overall recipe, i.e., on how the software system is designed and implemented, on how the different algorithmic phases communicate information and collaborate with each other, and on how concurrency is organized. The application is specifically designed to work with inexpensive hardware devices, such as off-the-shelf video cameras and small embedded computational units, eventually forming an intelligent urban grid. As a matter of fact, the major contribution of the paper is the presentation of a tool for real-time applications in embedded devices with finite computational (time and memory) resources. We run experimental results on several video sequences (both home-made and publicly available), showing the robustness and accuracy of the overall detection strategy. Comparisons with state-of-the-art strategies show that our application has similar tracking accuracy but much higher frame-per-second rates.

  1. Moving Object Detection in Heterogeneous Conditions in Embedded Systems

    PubMed Central

    Garbo, Alessandro

    2017-01-01

    This paper presents a system for moving object exposure, focusing on pedestrian detection, in external, unfriendly, and heterogeneous environments. The system manipulates and accurately merges information coming from subsequent video frames, making small computational efforts in each single frame. Its main characterizing feature is to combine several well-known movement detection and tracking techniques, and to orchestrate them in a smart way to obtain good results in diversified scenarios. It uses dynamically adjusted thresholds to characterize different regions of interest, and it also adopts techniques to efficiently track movements, and detect and correct false positives. Accuracy and reliability mainly depend on the overall recipe, i.e., on how the software system is designed and implemented, on how the different algorithmic phases communicate information and collaborate with each other, and on how concurrency is organized. The application is specifically designed to work with inexpensive hardware devices, such as off-the-shelf video cameras and small embedded computational units, eventually forming an intelligent urban grid. As a matter of fact, the major contribution of the paper is the presentation of a tool for real-time applications in embedded devices with finite computational (time and memory) resources. We run experimental results on several video sequences (both home-made and publicly available), showing the robustness and accuracy of the overall detection strategy. Comparisons with state-of-the-art strategies show that our application has similar tracking accuracy but much higher frame-per-second rates. PMID:28671582
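
    A minimal sketch of the kind of per-region adaptive thresholding on frame differences that the abstract alludes to is given below; the grid layout and the mean-plus-k-sigma rule are assumptions made for illustration, not the paper's actual recipe.

    ```python
    # Hedged sketch: frame differencing where each grid cell adapts its threshold to the
    # local noise level, in the spirit of "dynamically adjusted thresholds" per region.
    import cv2
    import numpy as np

    def detect_motion(prev_gray, curr_gray, grid=(4, 4), k=3.0):
        """Return a binary motion mask; each grid cell uses its own threshold."""
        diff = cv2.absdiff(curr_gray, prev_gray).astype(np.float32)
        mask = np.zeros(diff.shape, dtype=np.uint8)
        h, w = diff.shape
        gh, gw = h // grid[0], w // grid[1]
        for i in range(grid[0]):
            for j in range(grid[1]):
                cell = diff[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
                thr = cell.mean() + k * cell.std()      # adapts to local noise
                mask[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw] = \
                    ((cell > thr) * 255).astype(np.uint8)
        return mask
    ```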

  2. Development of a real-time wave field reconstruction TEM system (II): correction of coma aberration and 3-fold astigmatism, and real-time correction of 2-fold astigmatism.

    PubMed

    Tamura, Takahiro; Kimura, Yoshihide; Takai, Yoshizo

    2018-02-01

    In this study, a function for the correction of coma aberration, 3-fold astigmatism and real-time correction of 2-fold astigmatism was newly incorporated into a recently developed real-time wave field reconstruction TEM system. The aberration correction function was developed by modifying the image-processing software previously designed for auto focus tracking, as described in the first article of this series. Using the newly developed system, the coma aberration and 3-fold astigmatism were corrected using the aberration coefficients obtained experimentally before the processing was carried out. In this study, these aberration coefficients were estimated from an apparent 2-fold astigmatism induced under tilted-illumination conditions. In contrast, 2-fold astigmatism could be measured and corrected in real time from the reconstructed wave field. Here, the measurement precision for 2-fold astigmatism was found to be ±0.4 nm and ±2°. All of these aberration corrections, as well as auto focus tracking, were performed at a video frame rate of 1/30 s. Thus, the proposed novel system is promising for quantitative and reliable in situ observations, particularly in environmental TEM applications.

  3. Development of SPIES (Space Intelligent Eyeing System) for smart vehicle tracing and tracking

    NASA Astrophysics Data System (ADS)

    Abdullah, Suzanah; Ariffin Osoman, Muhammad; Guan Liyong, Chua; Zulfadhli Mohd Noor, Mohd; Mohamed, Ikhwan

    2016-06-01

    SPIES, or Space-based Intelligent Eyeing System, is an intelligent technology which can be utilized for various applications such as gathering spatial information of features on Earth, tracking the movement of an object, tracing historical information, monitoring driving behavior, and acting as a real-time observer for security and alarm systems, among many others. SPIES will be developed and supplied modularly, which will encourage usage based on the needs and affordability of users. SPIES is a complete system with camera, GSM, GPS/GNSS and G-Sensor modules with intelligent functions and capabilities. The camera is mainly used to capture pictures and video, sometimes with audio, of an event. Its use is not limited to nostalgic purposes; it can also serve as a reference for security and as material evidence when an undesirable event such as a crime occurs. When integrated with the space-based technology of the Global Navigation Satellite System (GNSS), photos and videos can be recorded together with positioning information. Integrating these technologies with Information and Communication Technology (ICT) and a Geographic Information System (GIS) produces an innovative method of gathering still pictures or video with positioning information that can be conveyed in real time via the web to display location on a map, hence creating an intelligent eyeing system based on space technology. Providing global positioning information is a challenge, but SPIES overcomes it even in areas without GNSS signal reception, enabling continuous tracking and tracing.

  4. A Deep-Structured Conditional Random Field Model for Object Silhouette Tracking

    PubMed Central

    Shafiee, Mohammad Javad; Azimifar, Zohreh; Wong, Alexander

    2015-01-01

    In this work, we introduce a deep-structured conditional random field (DS-CRF) model for the purpose of state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined based on inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that can change greatly over time, as well as under different situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios such as occlusion and multiple targets showed that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering. PMID:26313943

  5. Augmented Virtual Reality Laboratory

    NASA Technical Reports Server (NTRS)

    Tully-Hanson, Benjamin

    2015-01-01

    Until recently, real-time motion tracking hardware has for the most part been too cost prohibitive for research to take place regularly. With the release of the Microsoft Kinect in November 2010, researchers now have access to a device that for a few hundred dollars is capable of providing red-green-blue (RGB), depth, and skeleton data. It is also capable of tracking multiple people in real time. For its original intended purposes, i.e. gaming, being used with the Xbox 360 and eventually Xbox One, it performs quite well. However, researchers soon found that although the sensor is versatile, it has limitations in real world applications. I was brought aboard this summer by William Little in the Augmented Virtual Reality (AVR) Lab at Kennedy Space Center to find solutions to these limitations.

  6. Enabling private and public sector organizations as agents of homeland security

    NASA Astrophysics Data System (ADS)

    Glassco, David H. J.; Glassco, Jordan C.

    2006-05-01

    Homeland security and defense applications seek to reduce the risk of undesirable eventualities across physical space in real-time. With that functional requirement in mind, our work focused on the development of IP based agent telecommunication solutions for heterogeneous sensor / robotic intelligent "Things" that could be deployed across the internet. This paper explains how multi-organization information and device sharing alliances may be formed to enable organizations to act as agents of homeland security (in addition to other uses). Topics include: (i) using location-aware, agent based, real-time information sharing systems to integrate business systems, mobile devices, sensor and actuator based devices and embedded devices used in physical infrastructure assets, equipment and other man-made "Things"; (ii) organization-centric real-time information sharing spaces using on-demand XML schema formatted networks; (iii) object-oriented XML serialization as a methodology for heterogeneous device glue code; (iv) how complex requirements for inter / intra organization information and device ownership and sharing, security and access control, mobility and remote communication service, tailored solution life cycle management, service QoS, service and geographic scalability and the projection of remote physical presence (through sensing and robotics) and remote informational presence (knowledge of what is going on elsewhere) can be more easily supported through feature inheritance with a rapid agent system development methodology; (v) how remote object identification and tracking can be supported across large areas; (vi) how agent synergy may be leveraged with analytics to complement heterogeneous device networks.

  7. Label-free evanescent microscopy for membrane nano-tomography in living cells.

    PubMed

    Bon, Pierre; Barroca, Thomas; Lévèque-Fort, Sandrine; Fort, Emmanuel

    2014-11-01

    We show that through-the-objective evanescent microscopy (epi-EM) is a powerful technique to image membranes in living cells. Readily implementable on a standard inverted microscope, this technique enables full-field and real-time tracking of membrane processes without labeling and thus signal fading. In addition, we demonstrate that the membrane/interface distance can be retrieved with 10 nm precision using a multilayer Fresnel model. We apply this nano-axial tomography of living cell membranes to retrieve quantitative information on membrane invagination dynamics. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

  8. Optically Phase-Locked Electronic Speckle Pattern Interferometer (OPL-ESPI)

    NASA Astrophysics Data System (ADS)

    Moran, Steven E.; Law, Robert L.; Craig, Peter N.; Goldberg, Warren M.

    1986-10-01

    This report describes the design, theory, operation, and characteristics of the OPL-ESPI, which generates real-time equal-Doppler speckle contours of vibrating objects from unstable sensor platforms with a Doppler resolution of 30 Hz and a maximum tracking range of ±5 MHz. The optical phase-locked loop compensates for the deleterious effects of ambient background vibration and provides the basis for a new ESPI video signal processing technique, which produces high contrast speckle contours. The OPL-ESPI system has local oscillator phase modulation capability, offering the potential for detection of vibrations with amplitudes less than λ/100.

  9. Localized Detection of Abandoned Luggage

    NASA Astrophysics Data System (ADS)

    Chang, Jing-Ying; Liao, Huei-Hung; Chen, Liang-Gee

    2010-12-01

    Abandoned luggage represents a potential threat to public safety. Identifying objects as luggage, identifying the owners of such objects, and identifying whether owners have left luggage behind are the three main problems requiring solution. This paper proposes two techniques: "foreground-mask sampling" to detect luggage with arbitrary appearance, and "selective tracking" to locate and track owners by looking only at the neighborhood of the luggage. Experimental results demonstrate that once an owner abandons luggage and leaves the scene, the alarm fires within a few seconds. The average processing speed of the approach is 17.37 frames per second, which is sufficient for real-world applications.

  10. A Comparative Analysis of Three Monocular Passive Ranging Methods on Real Infrared Sequences

    NASA Astrophysics Data System (ADS)

    Bondžulić, Boban P.; Mitrović, Srđan T.; Barbarić, Žarko P.; Andrić, Milenko S.

    2013-09-01

    Three monocular passive ranging methods are analyzed and tested on real infrared sequences. The first method exploits scale changes of an object in successive frames, while the other two use the Beer-Lambert law. The ranging methods are evaluated by comparison with reference data obtained simultaneously at the test site. The research addresses scenarios where multiple sensor views or active measurements are not possible. The results show that these methods for range estimation can provide the fidelity required for object tracking. Maximum values of relative distance estimation errors in near-ideal conditions are less than 8%.

  11. XpertTrack: Precision Autonomous Measuring Device Developed for Real Time Shipments Tracker

    PubMed Central

    Viman, Liviu; Daraban, Mihai; Fizesan, Raul; Iuonas, Mircea

    2016-01-01

    This paper proposes a software and hardware solution for real time condition monitoring applications. The proposed device, called XpertTrack, exchanges data through the GPRS protocol over a GSM network and monitors the temperature and vibrations of critical merchandise during commercial shipments anywhere on the globe. Another feature of this real time tracker is to provide GPS and GSM positioning with a precision of 10 m or less. In order to interpret the condition of the merchandise, the data acquisition, analysis and visualization are done with 0.1 °C accuracy for the temperature sensor, and 10 levels of shock sensitivity for the acceleration sensor. In addition to this, the architecture allows increasing the number and the types of sensors, so that companies can use this flexible solution to monitor a large percentage of their fleet. PMID:26978360

  12. XpertTrack: Precision Autonomous Measuring Device Developed for Real Time Shipments Tracker.

    PubMed

    Viman, Liviu; Daraban, Mihai; Fizesan, Raul; Iuonas, Mircea

    2016-03-10

    This paper proposes a software and hardware solution for real time condition monitoring applications. The proposed device, called XpertTrack, exchanges data through the GPRS protocol over a GSM network and monitors the temperature and vibrations of critical merchandise during commercial shipments anywhere on the globe. Another feature of this real time tracker is to provide GPS and GSM positioning with a precision of 10 m or less. In order to interpret the condition of the merchandise, the data acquisition, analysis and visualization are done with 0.1 °C accuracy for the temperature sensor, and 10 levels of shock sensitivity for the acceleration sensor. In addition to this, the architecture allows increasing the number and the types of sensors, so that companies can use this flexible solution to monitor a large percentage of their fleet.

  13. Real-Time Imaging System for the OpenPET

    NASA Astrophysics Data System (ADS)

    Tashima, Hideaki; Yoshida, Eiji; Kinouchi, Shoko; Nishikido, Fumihiko; Inadama, Naoko; Murayama, Hideo; Suga, Mikio; Haneishi, Hideaki; Yamaya, Taiga

    2012-02-01

    The OpenPET and its real-time imaging capability have great potential for real-time tumor tracking in medical procedures such as biopsy and radiation therapy. For the real-time imaging system, we intend to use the one-pass list-mode dynamic row-action maximum likelihood algorithm (DRAMA) and implement it using general-purpose computing on graphics processing units (GPGPU) techniques. However, it is difficult to make consistent reconstructions in real-time because the amount of list-mode data acquired in PET scans may be large depending on the level of radioactivity, and the reconstruction speed depends on the amount of the list-mode data. In this study, we developed a system to control the data used in the reconstruction step while retaining quantitative performance. In the proposed system, the data transfer control system limits the event counts to be used in the reconstruction step according to the reconstruction speed, and the reconstructed images are properly intensified by using the ratio of the used counts to the total counts. We implemented the system on a small OpenPET prototype system and evaluated the performance in terms of the real-time tracking ability by displaying reconstructed images in which the intensity was compensated. The intensity of the displayed images correlated properly with the original count rate and a frame rate of 2 frames per second was achieved with average delay time of 2.1 s.
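
    The count-limiting and intensity-compensation idea can be summarised in a few lines. The sketch below is a hedged illustration, not the OpenPET code: reconstruct() stands in for the one-pass list-mode DRAMA step, and the compensation simply rescales the image by the ratio of total to used counts.

    ```python
    # Hypothetical sketch of count limiting with intensity compensation for real-time
    # list-mode reconstruction.  `reconstruct` is a placeholder for the DRAMA/GPGPU step.
    def reconstruct_frame(list_mode_events, max_events, reconstruct):
        """Use at most `max_events` events, then rescale the image so its intensity
        still reflects the true count rate."""
        total = len(list_mode_events)
        used = min(total, max_events)                 # cap events to keep reconstruction real-time
        image = reconstruct(list_mode_events[:used])  # e.g. one-pass list-mode DRAMA on GPU
        if used < total:
            image *= total / used                     # compensate intensity for discarded events
        return image
    ```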

  14. A Framework for Realistic Modeling and Display of Object Surface Appearance

    NASA Astrophysics Data System (ADS)

    Darling, Benjamin A.

    With advances in screen and video hardware technology, the type of content presented on computers has progressed from text and simple shapes to high-resolution photographs, photorealistic renderings, and high-definition video. At the same time, there have been significant advances in the area of content capture, with the development of devices and methods for creating rich digital representations of real-world objects. Unlike photo or video capture, which provide a fixed record of the light in a scene, these new technologies provide information on the underlying properties of the objects, allowing their appearance to be simulated for novel lighting and viewing conditions. These capabilities provide an opportunity to continue the computer display progression, from high-fidelity image presentations to digital surrogates that recreate the experience of directly viewing objects in the real world. In this dissertation, a framework was developed for representing objects with complex color, gloss, and texture properties and displaying them onscreen to appear as if they are part of the real-world environment. At its core, there is a conceptual shift from a traditional image-based display workflow to an object-based one. Instead of presenting the stored patterns of light from a scene, the objective is to reproduce the appearance attributes of a stored object by simulating its dynamic patterns of light for the real viewing and lighting geometry. This is accomplished using a computational approach where the physical light sources are modeled and the observer and display screen are actively tracked. Surface colors are calculated for the real spectral composition of the illumination with a custom multispectral rendering pipeline. In a set of experiments, the accuracy of color and gloss reproduction was evaluated by measuring the screen directly with a spectroradiometer. Gloss reproduction was assessed by comparing gonio measurements of the screen output to measurements of the real samples in the same measurement configuration. A chromatic adaptation experiment was performed to evaluate color appearance in the framework and explore the factors that contribute to differences when viewing self-luminous displays as opposed to reflective objects. A set of sample applications was developed to demonstrate the potential utility of the object display technology for digital proofing, psychophysical testing, and artwork display.

  15. Operational Data Quality Assessment of the Combined PBO, TLALOCNet and COCONet Real-Time GNSS Networks

    NASA Astrophysics Data System (ADS)

    Hodgkinson, K. M.; Mencin, D.; Fox, O.; Walls, C. P.; Mann, D.; Blume, F.; Berglund, H. T.; Phillips, D.; Meertens, C. M.; Mattioli, G. S.

    2015-12-01

    The GAGE facility, managed by UNAVCO, currently operates a network of ~460 real-time, high-rate GNSS (RT-GNSS) stations. The majority of these RT stations are part of the Earthscope PBO network, which spans the western US Pacific North-American plate boundary. Approximately 50, funded by the TLALOCNet and COCONet projects, are distributed throughout the Mexico and Caribbean region. The entire network is processed in real-time at UNAVCO using Precise Point Positioning (PPP). The real-time streams are freely available to all, and user demand has grown almost exponentially since 2010. Data usage is multidisciplinary, including tectonic and volcanic deformation studies, meteorological applications, and atmospheric science research, in addition to use by national, state and commercial entities. Twenty-one RT-GNSS sites in California now include 200-sps accelerometers for the development of Earthquake Early Warning systems. All categories of users of real-time streams have similar requirements: reliable, low-latency, high-rate, and complete data sets. To meet these requirements, UNAVCO tracks the latency and completeness of the incoming raw observations and is also developing tools to monitor the quality of the processed data streams. UNAVCO is currently assessing the precision, accuracy and latency of solutions from various PPP software packages. Also under review are the data formats UNAVCO distributes; for example, the PPP solutions are currently distributed in NMEA format, but other formats such as SEED or GeoJSON may be preferred by different user groups to achieve specific mission objectives. In this presentation we will share our experiences of the challenges involved in the data operations of a continental-scale, multi-project, real-time GNSS network, summarize the network's performance in terms of latency and completeness, and present comparisons of PPP solutions using different PPP processing techniques.

  16. Forward collision warning based on kernelized correlation filters

    NASA Astrophysics Data System (ADS)

    Pu, Jinchuan; Liu, Jun; Zhao, Yong

    2017-07-01

    A vehicle detection and tracking system is one of the indispensable means of reducing the occurrence of traffic accidents. The nearest vehicle is the most likely to cause harm to us, so this paper concentrates on the nearest vehicle in the region of interest (ROI). For such a system, high accuracy, real-time operation and intelligence are the basic requirements. In this paper, we set up a system that combines the advanced KCF tracking algorithm with the Haar-AdaBoost detection algorithm. The KCF algorithm reduces computation time and increases speed through cyclic shifts and diagonalization, which satisfies the real-time requirement. At the same time, Haar features have the same advantages of simple operation and high speed for detection. The combination of these two algorithms yields an obvious improvement in the system's running rate compared with previous work. The detection result of the Haar-AdaBoost classifier provides the initial value for the KCF algorithm; this remedies the KCF algorithm's drawback of requiring manual vehicle marking in the initial phase, making the system more scientific and more intelligent. Haar detection and KCF tracking with Histogram of Oriented Gradient (HOG) features ensure the accuracy of the system. We evaluate the performance of the framework on a self-collected dataset. The experimental results demonstrate that the proposed method is robust and real-time. The algorithm can effectively adapt to illumination variation; even at night it can meet the detection and tracking requirements, which is an improvement over previous work.
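
    The detection-initialises-tracking loop can be sketched as follows. This is an illustration rather than the paper's implementation: the cascade file name and video path are placeholders, and cv2.TrackerKCF_create requires an OpenCV build that includes the contrib tracking module.

    ```python
    # Hedged sketch: a Haar/AdaBoost cascade supplies the initial vehicle box, which then
    # seeds a KCF tracker; when tracking fails, the system falls back to detection.
    import cv2

    detector = cv2.CascadeClassifier("cars_haar_cascade.xml")   # assumed pre-trained cascade
    tracker = None

    cap = cv2.VideoCapture("road.mp4")                          # placeholder input video
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if tracker is None:
            boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
            if len(boxes) > 0:
                tracker = cv2.TrackerKCF_create()
                tracker.init(frame, tuple(int(v) for v in boxes[0]))  # detection seeds tracking
        else:
            ok, box = tracker.update(frame)                     # fast KCF update on later frames
            if not ok:
                tracker = None                                  # lost target: re-detect next frame
    ```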

  17. Tracking of multiple targets using online learning for reference model adaptation.

    PubMed

    Pernkopf, Franz

    2008-12-01

    Recently, much work has been done on multiple object tracking on the one hand, and on reference model adaptation for single-object trackers on the other. In this paper, we do both: tracking of multiple objects (faces of people) in a meeting scenario, and online learning to incrementally update the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Many methods, unlike our approach, assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our implemented tracker based on particle filters and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference model learning during tracking of indoor and outdoor scenes, which supports more robust tracking. Therefore, we report the average of the standard deviation of the trajectories over numerous tracking runs depending on the learning rate.

  18. Real time analysis of voiced sounds

    NASA Technical Reports Server (NTRS)

    Hong, J. P. (Inventor)

    1976-01-01

    A power spectrum analysis of the harmonic content of a voiced sound signal is conducted in real time by phase-lock-loop tracking of the fundamental frequency (f₀) of the signal and successive harmonics (h₁ through hₙ) of the fundamental frequency. The analysis also includes measuring the quadrature power and phase of each frequency tracked, differentiating the power measurements of the harmonics in adjacent pairs, and analyzing successive differentials to determine peak power points in the power spectrum for display or use in analysis of voiced sound, such as for voice recognition.
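
    The quadrature power and phase measurement at the tracked frequencies can be illustrated with a short lock-in-style sketch; the phase-lock loop that tracks f₀ itself is assumed and omitted here, and the snippet is only a rough numerical analogue of the patented analog scheme.

    ```python
    # Hedged numerical sketch of quadrature power/phase measurement at the fundamental and
    # its harmonics, assuming f0 has already been tracked by a phase-lock loop.
    import numpy as np

    def harmonic_powers(signal, fs, f0, n_harmonics):
        """Return (relative power, phase) for k*f0, k = 1 (fundamental) .. n_harmonics."""
        t = np.arange(len(signal)) / fs
        results = []
        for k in range(1, n_harmonics + 1):
            f = k * f0
            i = np.mean(signal * np.cos(2 * np.pi * f * t))   # in-phase component
            q = np.mean(signal * np.sin(2 * np.pi * f * t))   # quadrature component
            results.append((i * i + q * q, np.arctan2(q, i)))
        return results
    ```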

  19. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors, without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating resolution issues found in digital vision systems. In this paper, we will discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We will also discuss the process of developing an analog based sensor that mimics the characteristics of interest in the biological vision system. This paper will conclude with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.

  20. Design of a real-time system of moving ship tracking on-board based on FPGA in remote sensing images

    NASA Astrophysics Data System (ADS)

    Yang, Tie-jun; Zhang, Shen; Zhou, Guo-qing; Jiang, Chuan-xian

    2015-12-01

    With the broad attention of countries to sea transportation and trade safety, the requirements for efficiency and accuracy in moving ship tracking are becoming higher. Therefore, a systematic design for on-board moving ship tracking based on FPGA is proposed, which uses the Adaptive Inter Frame Difference (AIFD) method to track ships of different speeds. Since the Frame Difference (FD) method is simple but computationally intensive, it is well suited to parallel implementation on an FPGA. However, the Frame Intervals (FIs) of the traditional FD method are fixed, and in remote sensing images a ship looks very small (depicted by only dozens of pixels) and moves slowly; with invariant FIs, the accuracy of FD for moving ship tracking is not satisfactory and the calculation is highly redundant. We therefore use an adaptation of FD based on adaptive extraction of key frames for moving ship tracking. An FPGA development board of the Xilinx Kintex-7 series is used for simulation. The experiments show that compared with the traditional FD method, the proposed one can achieve higher accuracy of moving ship tracking, and can meet the requirement of real-time tracking at high image resolution.
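
    The adaptive key-frame idea behind AIFD can be sketched in software as follows; the interval bounds and motion thresholds are assumptions for illustration, and the actual FPGA design is a parallel hardware implementation rather than this sequential loop.

    ```python
    # Hedged sketch of an adaptive inter-frame difference: the key-frame interval is lengthened
    # for slow targets and shortened for fast ones so the difference image stays informative.
    import cv2

    def adaptive_key_frame_difference(frames, motion_low=2.0, motion_high=10.0):
        """Yield (frame index, difference image); `frames` is a list of grayscale images."""
        key = 0
        interval = 5                                   # initial frame interval (assumed)
        i = interval
        while i < len(frames):
            diff = cv2.absdiff(frames[i], frames[key])
            motion = float(diff.mean())
            if motion < motion_low:
                interval = min(interval * 2, 40)       # slow ship: wait longer between key frames
            elif motion > motion_high:
                interval = max(interval // 2, 1)       # fast ship: difference more often
            yield i, diff
            key = i
            i += interval
    ```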

  1. A real-time optical tracking and measurement processing system for flying targets.

    PubMed

    Guo, Pengyu; Ding, Shaowen; Zhang, Hongliang; Zhang, Xiaohu

    2014-01-01

    Optical tracking and measurement for flying targets is unlike close-range photography under a controllable observation environment: it brings extreme conditions such as diverse target changes as a result of high maneuverability and long cruising range. This paper first designs and realizes a distributed image interpretation and measurement processing system to achieve centralized resource management, multisite simultaneous interpretation, and adaptive estimation algorithm selection; it then proposes a real-time interpretation method that contains automatic foreground detection, online target tracking, multiple-feature location, and human guidance. An experiment is carried out to evaluate the performance and efficiency of the method on semisynthetic video. The system can be used in the field of aerospace tests for target analysis including dynamic parameters, transient states, and optical physics characteristics, with security control.

  2. A Real-Time Optical Tracking and Measurement Processing System for Flying Targets

    PubMed Central

    Guo, Pengyu; Ding, Shaowen; Zhang, Hongliang; Zhang, Xiaohu

    2014-01-01

    Optical tracking and measurement for flying targets is unlike close-range photography under a controllable observation environment: it brings extreme conditions such as diverse target changes as a result of high maneuverability and long cruising range. This paper first designs and realizes a distributed image interpretation and measurement processing system to achieve centralized resource management, multisite simultaneous interpretation, and adaptive estimation algorithm selection; it then proposes a real-time interpretation method that contains automatic foreground detection, online target tracking, multiple-feature location, and human guidance. An experiment is carried out to evaluate the performance and efficiency of the method on semisynthetic video. The system can be used in the field of aerospace tests for target analysis including dynamic parameters, transient states, and optical physics characteristics, with security control. PMID:24987748

  3. Transient imaging for real-time tracking around a corner

    NASA Astrophysics Data System (ADS)

    Klein, Jonathan; Laurenzis, Martin; Hullin, Matthias

    2016-10-01

    Non-line-of-sight imaging is a fascinating emerging area of research and is expected to have an impact in numerous application fields including civilian and military sensing. Performance of human perception and situational awareness can be extended by the sensing of shapes and movement around a corner in future scenarios. Rather than seeing through obstacles directly, non-line-of-sight imaging relies on analyzing indirect reflections of light that traveled around the obstacle. In previous work, transient imaging was established as the key mechanic to enable the extraction of useful information from such reflections. So far, a number of different approaches based on transient imaging have been proposed, with back projection being the most prominent one. Different hardware setups were used for the acquisition of the required data; however, all of them have severe drawbacks such as limited image quality, long capture time or very high prices. In this paper we propose the analysis of synthetic transient renderings to gain more insights into the transient light transport. With this simulated data, we are no longer bound to the imperfect data of real systems and gain more flexibility and control over the analysis. In a second part, we use the insights of our analysis to formulate a novel reconstruction algorithm. It uses an adapted light simulation to formulate an inverse problem which is solved in an analysis-by-synthesis fashion. Through rigorous optimization of the reconstruction, it then becomes possible to track known objects outside the line of sight in real time. Due to the forward formulation of the light transport, the algorithm is easily expandable to more general scenarios or different hardware setups. We therefore expect it to become a viable alternative to the classic back projection approach in the future.

  4. Architecture for an integrated real-time air combat and sensor network simulation

    NASA Astrophysics Data System (ADS)

    Criswell, Evans A.; Rushing, John; Lin, Hong; Graves, Sara

    2007-04-01

    An architecture for an integrated air combat and sensor network simulation is presented. The architecture integrates two components: a parallel real-time sensor fusion and target tracking simulation, and an air combat simulation. By integrating these two simulations, it becomes possible to experiment with scenarios in which one or both sides in a battle have very large numbers of primitive passive sensors, and to assess the likely effects of those sensors on the outcome of the battle. Modern Air Power is a real-time theater-level air combat simulation that is currently being used as a part of the USAF Air and Space Basic Course (ASBC). The simulation includes a variety of scenarios from the Vietnam war to the present day, and also includes several hypothetical future scenarios. Modern Air Power includes a scenario editor, an order of battle editor, and full AI customization features that make it possible to quickly construct scenarios for any conflict of interest. The scenario editor makes it possible to place a wide variety of sensors including both high fidelity sensors such as radars, and primitive passive sensors that provide only very limited information. The parallel real-time sensor network simulation is capable of handling very large numbers of sensors on a computing cluster of modest size. It can fuse information provided by disparate sensors to detect and track targets, and produce target tracks.

  5. Preferential Inspection of Recent Real-World Events Over Future Events: Evidence from Eye Tracking during Spoken Sentence Comprehension

    PubMed Central

    Knoeferle, Pia; Carminati, Maria Nella; Abashidze, Dato; Essig, Kai

    2011-01-01

    Eye-tracking findings suggest people prefer to ground their spoken language comprehension by focusing on recently seen events more than anticipating future events: When the verb in NP1-VERB-ADV-NP2 sentences was referentially ambiguous between a recently depicted and an equally plausible future clipart action, listeners fixated the target of the recent action more often at the verb than the object that hadn’t yet been acted upon. We examined whether this inspection preference generalizes to real-world events, and whether it is (vs. isn’t) modulated by how often people see recent and future events acted out. In a first eye-tracking study, the experimenter performed an action (e.g., sugaring pancakes), and then a spoken sentence either referred to that action or to an equally plausible future action (e.g., sugaring strawberries). At the verb, people more often inspected the pancakes (the recent target) than the strawberries (the future target), thus replicating the recent-event preference with these real-world actions. Adverb tense, indicating a future versus past event, had no effect on participants’ visual attention. In a second study we increased the frequency of future actions such that participants saw 50/50 future and recent actions. During the verb people mostly inspected the recent action target, but subsequently they began to rely on tense, and anticipated the future target more often for future than past tense adverbs. A corpus study showed that the verbs and adverbs indicating past versus future actions were equally frequent, suggesting long-term frequency biases did not cause the recent-event preference. Thus, (a) recent real-world actions can rapidly influence comprehension (as indexed by eye gaze to objects), and (b) people prefer to first inspect a recent action target (vs. an object that will soon be acted upon), even when past and future actions occur with equal frequency. A simple frequency-of-experience account cannot accommodate these findings. PMID:22207858

  6. Mechanical Serial-Sectioning Data Assistant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poulter, Gregory A.; Madison, Jonathan D.

    Mechanical Serial-Sectioning Data Assistant (MECH-SSDA) is real-time data analytics software with a graphical user interface that: 1) tracks and visualizes material removal rates for mechanical serial-sectioning experiments using at least two height measurement methods; 2) tracks process time for specific segments of the serial-sectioning experiment; and 3) alerts the user to anomalies in expected removal rate, process time, or unanticipated operational pauses.

  7. Usefulness of Guided Breathing for Dose Rate-Regulated Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han-Oh, Sarah; Department of Radiation Oncology, University of Maryland Medical System, Baltimore, MD; Yi, Byong Yong

    2009-02-01

    Purpose: To evaluate the usefulness of guided breathing for dose rate-regulated tracking (DRRT), a new technique to compensate for intrafraction tumor motion. Methods and Materials: DRRT uses a preprogrammed multileaf collimator sequence that tracks the tumor motion derived from four-dimensional computed tomography and the corresponding breathing signals measured before treatment. Because the multileaf collimator speed can be controlled by adjusting the dose rate, the multileaf collimator positions are adjusted in real time during treatment by dose rate regulation, thereby maintaining synchrony with the tumor motion. DRRT treatment was simulated with free, audio-guided, and audiovisual-guided breathing signals acquired from 23 lung cancer patients. The tracking error and duty cycle for each patient were determined as a function of the system time delay (range, 0-1.0 s). Results: The tracking error and duty cycle averaged for all 23 patients was 1.9 ± 0.8 mm and 92% ± 5%, 1.9 ± 1.0 mm and 93% ± 6%, and 1.8 ± 0.7 mm and 92% ± 6% for the free, audio-guided, and audiovisual-guided breathing, respectively, for a time delay of 0.35 s. The small differences in both the tracking error and the duty cycle with guided breathing were not statistically significant. Conclusion: DRRT by its nature adapts well to variations in breathing frequency, which is also the motivation for guided-breathing techniques. Because of this redundancy, guided breathing does not result in significant improvements for either the tracking error or the duty cycle when DRRT is used for real-time tumor tracking.

  8. Robust tracking of a virtual electrode on a coronary sinus catheter for atrial fibrillation ablation procedures

    NASA Astrophysics Data System (ADS)

    Wu, Wen; Chen, Terrence; Strobel, Norbert; Comaniciu, Dorin

    2012-02-01

    Catheter tracking in X-ray fluoroscopic images has become more important in interventional applications for atrial fibrillation (AF) ablation procedures. It provides real-time guidance for the physicians and can be used as reference for motion compensation applications. In this paper, we propose a novel approach to track a virtual electrode (VE), which is a non-existing electrode on the coronary sinus (CS) catheter at a more proximal location than any real electrodes. Successful tracking of the VE can provide more accurate motion information than tracking of real electrodes. To achieve VE tracking, we first model the CS catheter as a set of electrodes which are detected by our previously published learning-based approach [1]. The tracked electrodes are then used to generate the hypotheses for tracking the VE. Model-based hypotheses are fused and evaluated by a Bayesian framework. Evaluation has been conducted on a database of clinical AF ablation data including challenging scenarios such as low signal-to-noise ratio (SNR), occlusion and nonrigid deformation. Our approach obtains 0.54 mm median error and 90% of evaluated data have errors less than 1.67 mm. The speed of our tracking algorithm reaches 6 frames per second on most data. Our study on motion compensation shows that using the VE as reference provides a good point to detect non-physiological catheter motion during the AF ablation procedures [2].

  9. LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval

    NASA Astrophysics Data System (ADS)

    Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan

    2013-01-01

    As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.

  10. Developing a particle tracking surrogate model to improve inversion of ground water - Surface water models

    NASA Astrophysics Data System (ADS)

    Cousquer, Yohann; Pryet, Alexandre; Atteia, Olivier; Ferré, Ty P. A.; Delbart, Célestine; Valois, Rémi; Dupuy, Alain

    2018-03-01

    The inverse problem of groundwater models is often ill-posed and model parameters are likely to be poorly constrained. Identifiability is improved if diverse data types are used for parameter estimation. However, some models, including detailed solute transport models, are further limited by prohibitive computation times. This often precludes the use of concentration data for parameter estimation, even if those data are available. In the case of surface water-groundwater (SW-GW) models, concentration data can provide SW-GW mixing ratios, which efficiently constrain the estimate of exchange flow, but are rarely used. We propose to reduce computational limits by simulating SW-GW exchange at a sink (well or drain) based on particle tracking under steady state flow conditions. Particle tracking is used to simulate advective transport. A comparison between the particle tracking surrogate model and an advective-dispersive model shows that dispersion can often be neglected when the mixing ratio is computed for a sink, allowing for use of the particle tracking surrogate model. The surrogate model was implemented to solve the inverse problem for a real SW-GW transport problem with heads and concentrations combined in a weighted hybrid objective function. The resulting inversion showed markedly reduced uncertainty in the transmissivity field compared to calibration on head data alone.
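
    The mixing-ratio computation that the surrogate model enables can be stated very compactly. The sketch below assumes back-tracked particle endpoints are already available from a particle-tracking run (e.g., MODPATH-style output, an assumption on our part) and simply counts the fraction that terminate at the surface-water boundary; the data structures are hypothetical.

    ```python
    # Hedged sketch: estimate the SW-GW mixing ratio at a sink (well or drain) as the
    # fraction of back-tracked particles whose endpoints lie on the river boundary.
    def mixing_ratio(particle_endpoints, is_river_cell):
        """particle_endpoints: iterable of model cells where back-tracked particles end;
        is_river_cell(cell) -> True if the cell belongs to the surface-water boundary."""
        endpoints = list(particle_endpoints)
        if not endpoints:
            return 0.0
        n_river = sum(1 for cell in endpoints if is_river_cell(cell))
        return n_river / len(endpoints)
    ```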

  11. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wiersma, R. D.; Mao, W.; Luxton, G.; Xing, L.

    2008-12-01

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ~0.5 mm for the normal adult breathing pattern to ~1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general, highly accurate real-time tracking of implanted markers using hybrid MV-kV imaging is achievable and the technique should be useful to improve the beam targeting accuracy of arc therapy.

  12. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging.

    PubMed

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-12-21

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from approximately 0.5 mm for the normal adult breathing pattern to approximately 1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general, highly accurate real-time tracking of implanted markers using hybrid MV-kV imaging is achievable and the technique should be useful to improve the beam targeting accuracy of arc therapy.
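
    The MV-kV triangulation step can be illustrated with a standard two-ray least-squares intersection; the sketch below is not the calibration procedure from the paper, only the geometric core, and the ray origins and directions are assumed to come from the calibrated imaging geometry.

    ```python
    # Hedged sketch: estimate a 3D marker position as the point closest (least squares) to
    # the two rays defined by the MV and kV sources and their detected image points.
    import numpy as np

    def triangulate(p1, d1, p2, d2):
        """p1, p2: ray origins (source positions); d1, d2: unit ray directions."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, d in ((p1, d1), (p2, d2)):
            P = np.eye(3) - np.outer(d, d)     # projector onto the plane normal to the ray
            A += P
            b += P @ p
        return np.linalg.solve(A, b)           # point minimising total squared ray distance
    ```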

  13. Endoscopic feature tracking for augmented-reality assisted prosthesis selection in mitral valve repair

    NASA Astrophysics Data System (ADS)

    Engelhardt, Sandy; Kolb, Silvio; De Simone, Raffaele; Karck, Matthias; Meinzer, Hans-Peter; Wolf, Ivo

    2016-03-01

    Mitral valve annuloplasty describes a surgical procedure where an artificial prosthesis is sutured onto the anatomical structure of the mitral annulus to re-establish the valve's functionality. Choosing an appropriate commercially available ring size and shape is a difficult decision the surgeon has to make intraoperatively according to his experience. In our augmented-reality framework, digitalized ring models are superimposed onto endoscopic image streams without using any additional hardware. To place the ring model on the proper position within the endoscopic image plane, a pose estimation is performed that depends on the localization of sutures placed by the surgeon around the leaflet origins and punctured through the stiffer structure of the annulus. In this work, the tissue penetration points are tracked by the real-time capable Lucas Kanade optical flow algorithm. The accuracy and robustness of this tracking algorithm is investigated with respect to the question whether outliers influence the subsequent pose estimation. Our results suggest that optical flow is very stable for a variety of different endoscopic scenes and tracking errors do not affect the position of the superimposed virtual objects in the scene, making this approach a viable candidate for annuloplasty augmented reality-enhanced decision support.
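
    For reference, tracking a set of tissue-penetration points with pyramidal Lucas-Kanade optical flow takes only a few lines in OpenCV; the window size, pyramid depth and termination criteria below are assumed defaults rather than the values used in the paper.

    ```python
    # Hedged sketch: track suture/tissue-penetration points frame to frame with
    # pyramidal Lucas-Kanade optical flow and drop points the flow could not follow.
    import cv2
    import numpy as np

    lk_params = dict(winSize=(21, 21), maxLevel=3,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

    def track_points(prev_gray, curr_gray, points):
        """points: Nx2 float32 array of point locations in the previous frame."""
        pts = points.reshape(-1, 1, 2).astype(np.float32)
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None, **lk_params)
        good = status.ravel() == 1
        return new_pts.reshape(-1, 2)[good], good
    ```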

  14. Absolute vs. relative error characterization of electromagnetic tracking accuracy

    NASA Astrophysics Data System (ADS)

    Matinfar, Mohammad; Narayanasamy, Ganesh; Gutierrez, Luis; Chan, Raymond; Jain, Ameet

    2010-02-01

    Electromagnetic (EM) tracking systems are often used for real-time navigation of medical tools in an Image Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking within the body of a patient, where line-of-sight constraints prevent the use of conventional optical tracking. EM tracking systems are, however, very sensitive to electromagnetic field distortions. These distortions, arising from changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data unusable. We present a mapping method for the operating region over which EM tracking sensors are used, allowing for characterization of measurement errors and in turn providing physicians with visual feedback about measurement confidence or reliability of localization estimates. In this instance, we employ a calibration phantom to assess distortion within the operating field of the EM tracker and to display in real time the distribution of measurement errors, as well as the location and extent of the field associated with minimal spatial distortion. Accuracy is assessed relative to successive measurements: error is computed for a reference point, and consecutive measurement errors are displayed relative to that reference in order to characterize accuracy in near-real-time. In an initial set-up phase, the phantom geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean") EM environment. The registration yields the locations of the sensors with respect to each other and defines the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement and orientation) are computed. Based on error thresholds provided by the operator, the spatial distribution of localization errors is clustered and dynamically displayed as separate confidence zones within the operating region of the EM tracker space.
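
    The relative-error idea in this record can be illustrated with a small sketch: compare measured inter-sensor distances against the sensor geometry registered in the "clean" calibration phase and flag pairs whose error exceeds an operator-provided threshold. The sensor layout, simulated distortion, and threshold below are invented for illustration, not the phantom described in the paper.

    ```python
    import numpy as np

    # Known sensor geometry from the "clean" calibration phase (mm).
    calib_positions = np.array([[0.0, 0.0, 0.0],
                                [50.0, 0.0, 0.0],
                                [0.0, 50.0, 0.0],
                                [0.0, 0.0, 50.0]])

    def relative_distance_errors(measured_positions, calib_positions):
        """Compare measured inter-sensor distances with the calibrated ones.

        Returns the absolute error (mm) for every sensor pair; large values
        indicate field distortion in that part of the working volume.
        """
        n = len(calib_positions)
        errors = {}
        for i in range(n):
            for j in range(i + 1, n):
                d_ref = np.linalg.norm(calib_positions[i] - calib_positions[j])
                d_meas = np.linalg.norm(measured_positions[i] - measured_positions[j])
                errors[(i, j)] = abs(d_meas - d_ref)
        return errors

    # Simulated measurement in a distorted field (placeholder offsets).
    measured = calib_positions + np.array([[0.0, 0.0, 0.0],
                                           [1.2, -0.4, 0.3],
                                           [0.1, 0.2, -0.1],
                                           [2.5, 1.0, -0.8]])
    threshold_mm = 1.0   # operator-provided threshold, as in the abstract
    for pair, e in relative_distance_errors(measured, calib_positions).items():
        flag = "DISTORTED" if e > threshold_mm else "ok"
        print(f"sensors {pair}: error {e:.2f} mm -> {flag}")
    ```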

  15. Tracking target objects orbiting earth using satellite-based telescopes

    DOEpatents

    De Vries, Willem H; Olivier, Scot S; Pertica, Alexander J

    2014-10-14

    A system for tracking objects that are in earth orbit via a constellation or network of satellites having imaging devices is provided. An object tracking system includes a ground controller and, for each satellite in the constellation, an onboard controller. The ground controller receives ephemeris information for a target object and directs that ephemeris information be transmitted to the satellites. Each onboard controller receives ephemeris information for a target object, collects images of the target object based on the expected location of the target object at an expected time, identifies actual locations of the target object from the collected images, and identifies a next expected location at a next expected time based on the identified actual locations of the target object. The onboard controller processes the collected image to identify the actual location of the target object and transmits the actual location information to the ground controller.
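
    The onboard controller's loop — image at the expected location, identify the actual location, predict the next expected location — can be illustrated with a deliberately simplified prediction step. The sketch below just extrapolates linearly from the two most recent identified sky positions; a real implementation would propagate the full orbital ephemeris.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Observation:
        t: float       # observation time (s)
        ra: float      # right ascension (deg), identified from the collected image
        dec: float     # declination (deg)

    def predict_next(track, t_next):
        """Extrapolate the next expected sky position from the two most recent
        identified locations (a simplified stand-in for the onboard prediction
        step; a real system would propagate the orbit)."""
        a, b = track[-2], track[-1]
        dt = b.t - a.t
        ra_rate, dec_rate = (b.ra - a.ra) / dt, (b.dec - a.dec) / dt
        return b.ra + ra_rate * (t_next - b.t), b.dec + dec_rate * (t_next - b.t)

    track = [Observation(0.0, 120.00, -5.00), Observation(10.0, 120.08, -4.97)]
    print(predict_next(track, 20.0))   # expected pointing for the next exposure
    ```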

  16. Real Time Computer Graphics From Body Motion

    NASA Astrophysics Data System (ADS)

    Fisher, Scott; Marion, Ann

    1983-10-01

    This paper focuses on the recent emergence and development of real-time, computer-aided body tracking technologies and their use in combination with various computer graphics imaging techniques. The convergence of these technologies in our research results in an interactive display environment in which multiple representations of a given body motion can be displayed in real time. Specific reference to entertainment applications is described in the development of a real-time, interactive stage set in which dancers can 'draw' with their bodies as they move through the space of the stage or manipulate virtual elements of the set with their gestures.

  17. Long Term Safety Area Tracking (LT-SAT) with online failure detection and recovery for robotic minimally invasive surgery.

    PubMed

    Penza, Veronica; Du, Xiaofei; Stoyanov, Danail; Forgione, Antonello; Mattos, Leonardo S; De Momi, Elena

    2018-04-01

    Despite the benefits introduced by robotic systems in abdominal Minimally Invasive Surgery (MIS), major complications can still affect the outcome of the procedure, such as intra-operative bleeding. One of the causes is accidental damage to arteries or veins by the surgical tools, and some of the possible risk factors are related to the lack of sub-surface visibility. Assistive tools guiding the surgical gestures to prevent these kinds of injuries would represent a relevant step towards safer clinical procedures. However, it is still challenging to develop computer vision systems able to fulfill the main requirements: (i) long-term robustness, (ii) adaptation to environment/object variation and (iii) real-time processing. The purpose of this paper is to develop computer vision algorithms to robustly track soft tissue areas (Safety Area, SA), defined intra-operatively by the surgeon based on the real-time endoscopic images, or registered from a pre-operative surgical plan. We propose a framework that combines an optical flow algorithm with a tracking-by-detection approach in order to be robust against failures caused by: (i) partial occlusion, (ii) total occlusion, (iii) SA out of the field of view, (iv) deformation, (v) illumination changes, (vi) abrupt camera motion, (vii) blur and (viii) smoke. A Bayesian inference-based approach is used to detect failure of the tracker, based on online context information. A Model Update Strategy (MUpS) is also proposed to improve SA re-detection after failures, taking into account the changes of appearance of the SA model due to contact with instruments or image noise. The performance of the algorithm was assessed on two datasets, representing ex-vivo organs and in-vivo surgical scenarios. Results show that the proposed framework, enhanced with MUpS, is capable of maintaining high tracking performance for extended periods of time (≃4 min, containing the aforementioned events) with high precision (0.7) and recall (0.8) values, and with a recovery time after a failure of between 1 and 8 frames in the worst case. Copyright © 2017 Elsevier B.V. All rights reserved.
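
    As a rough illustration of the failure-detection idea (not the paper's inference model), the sketch below combines two online context cues — the fraction of lost optical-flow points and an appearance-match score — into a posterior probability that the Safety Area tracker has failed. The cue likelihoods and the prior are invented purely for illustration.

    ```python
    def failure_posterior(lost_fraction, appearance_score, prior_failure=0.1):
        """Toy Bayesian failure check for a tracked Safety Area.

        lost_fraction: fraction of optical-flow points lost this frame (0..1)
        appearance_score: similarity of the current patch to the SA model (0..1)
        Returns P(tracker failed | cues) under made-up likelihood models.
        """
        p_cues_given_fail = lost_fraction * (1.0 - appearance_score)
        p_cues_given_ok = (1.0 - lost_fraction) * appearance_score
        num = p_cues_given_fail * prior_failure
        den = num + p_cues_given_ok * (1.0 - prior_failure)
        return num / den if den > 0 else 1.0

    print(failure_posterior(lost_fraction=0.1, appearance_score=0.9))  # low  -> keep tracking
    print(failure_posterior(lost_fraction=0.8, appearance_score=0.2))  # high -> trigger re-detection
    ```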

  18. Statistical Models for Predicting Threat Detection From Human Behavior.

    PubMed

    Kelley, Timothy; Amon, Mary J; Bertenthal, Bennett I

    2018-01-01

    Users must regularly distinguish between secure and insecure cyber platforms in order to preserve their privacy and safety. Mouse tracking is an accessible, high-resolution measure that can be leveraged to understand the dynamics of perception, categorization, and decision-making in threat detection. Researchers have begun to utilize measures like mouse tracking in cyber security research, including in the study of risky online behavior. However, it remains an empirical question to what extent real-time information about user behavior is predictive of user outcomes and demonstrates added value compared to traditional self-report questionnaires. Participants navigated through six simulated websites, which resembled either secure "non-spoof" or insecure "spoof" versions of popular websites. Websites also varied in terms of authentication level (i.e., extended validation, standard validation, or partial encryption). Spoof websites had modified Uniform Resource Locator (URL) and authentication level. Participants chose to "login" to or "back" out of each website based on perceived website security. Mouse tracking information was recorded throughout the task, along with task performance. After completing the website identification task, participants completed a questionnaire assessing their security knowledge and degree of familiarity with the websites simulated during the experiment. Despite being primed to the possibility of website phishing attacks, participants generally showed a bias for logging in to websites versus backing out of potentially dangerous sites. Along these lines, participant ability to identify spoof websites was around the level of chance. Hierarchical Bayesian logistic models were used to compare the accuracy of two-factor (i.e., website security and encryption level), survey-based (i.e., security knowledge and website familiarity), and real-time measures (i.e., mouse tracking) in predicting risky online behavior during phishing attacks. Participant accuracy in identifying spoof and non-spoof websites was best captured using a model that included real-time indicators of decision-making behavior, as compared to two-factor and survey-based models. Findings validate three widely applicable measures of user behavior derived from mouse tracking recordings, which can be utilized in cyber security and user intervention research. Survey data alone are not as strong at predicting risky Internet behavior as models that incorporate real-time measures of user behavior, such as mouse tracking.
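
    The model comparison in this study can be illustrated, in much-simplified form, by fitting a plain logistic regression (rather than the hierarchical Bayesian models used in the paper) to survey-only versus survey-plus-mouse-tracking feature sets and comparing cross-validated accuracy. The data below are synthetic and the feature names are stand-ins, not the study's actual measures.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 400

    # Synthetic stand-ins for the study's predictors (not the real data):
    security_knowledge = rng.normal(size=n)     # survey-based
    website_familiarity = rng.normal(size=n)    # survey-based
    mouse_auc = rng.normal(size=n)              # real-time: area under cursor path
    mouse_max_dev = rng.normal(size=n)          # real-time: max deviation from direct path

    # Simulate correct spoof identification driven mostly by the mouse-tracking cues.
    logit = (0.3 * security_knowledge + 0.2 * website_familiarity
             + 1.2 * mouse_auc + 0.8 * mouse_max_dev)
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    survey_only = np.column_stack([security_knowledge, website_familiarity])
    with_mouse = np.column_stack([security_knowledge, website_familiarity,
                                  mouse_auc, mouse_max_dev])

    for name, X in [("survey-only", survey_only), ("survey + mouse tracking", with_mouse)]:
        acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
        print(f"{name:>24}: CV accuracy = {acc:.3f}")
    ```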

  19. OSSOS: X. How to use a Survey Simulator: Statistical Testing of Dynamical Models Against the Real Kuiper Belt

    NASA Astrophysics Data System (ADS)

    Lawler, Samantha M.; Kavelaars, J. J.; Alexandersen, Mike; Bannister, Michele T.; Gladman, Brett; Petit, Jean-Marc; Shankman, Cory

    2018-05-01

    All surveys include observational biases, which makes it impossible to directly compare properties of discovered trans-Neptunian Objects (TNOs) with dynamical models. However, by carefully keeping track of survey pointings on the sky, detection limits, tracking fractions, and rate cuts, the biases from a survey can be modelled in Survey Simulator software. A Survey Simulator takes an intrinsic orbital model (from, for example, the output of a dynamical Kuiper belt emplacement simulation) and applies the survey biases, so that the biased simulated objects can be directly compared with real discoveries. This methodology has been used with great success in the Outer Solar System Origins Survey (OSSOS) and its predecessor surveys. In this chapter, we give four examples of ways to use the OSSOS Survey Simulator to gain knowledge about the true structure of the Kuiper Belt. We demonstrate how to statistically compare different dynamical model outputs with real TNO discoveries, how to quantify detection biases within a TNO population, how to measure intrinsic population sizes, and how to use upper limits from non-detections. We hope this will provide a framework for dynamical modellers to statistically test the validity of their models.
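
    A toy version of the survey-simulator workflow: draw a large intrinsic model population, apply a crude detection bias (here just an apparent-magnitude cut using the rough m ≈ H + 10 log10(r) approximation at opposition), and statistically compare the biased simulated objects with the detections. Everything below — the model distributions, the magnitude limit, and the stand-in "detected" sample — is invented for illustration, and a simple two-sample KS test is used here rather than the survey's own statistical machinery.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Toy intrinsic model population standing in for a dynamical model output:
    # semimajor axes (au) and absolute magnitudes H.
    a_model = rng.uniform(39.0, 48.0, size=200_000)
    H_model = 3.0 + rng.exponential(scale=1.5, size=200_000)

    # Crude survey bias: apparent magnitude at opposition, m ~ H + 10 log10(r),
    # and a single limiting magnitude instead of per-pointing detection efficiencies.
    m_apparent = H_model + 10.0 * np.log10(a_model)
    a_biased = a_model[m_apparent < 24.5]

    # Stand-in for the real, characterized detections from a survey.
    a_detected = rng.uniform(39.0, 46.0, size=300)

    D, p = stats.ks_2samp(a_biased, a_detected)
    print(f"KS statistic = {D:.3f}, p = {p:.3g}")
    print("model rejected at 95% confidence" if p < 0.05 else "model not rejected")
    ```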

  20. Three-dimensional modeling, estimation, and fault diagnosis of spacecraft air contaminants.

    PubMed

    Narayan, A P; Ramirez, W F

    1998-01-01

    A description is given of the design and implementation of a method to track the presence of air contaminants aboard a spacecraft using an accurate physical model and of a procedure that would raise alarms when certain tolerance levels are exceeded. Because our objective is to monitor the contaminants in real time, we make use of a state estimation procedure that filters measurements from a sensor system and arrives at an optimal estimate of the state of the system. The model essentially consists of a convection-diffusion equation in three dimensions, solved implicitly using the principle of operator splitting, and uses a flowfield obtained by the solution of the Navier-Stokes equations for the cabin geometry, assuming steady-state conditions. A novel implicit Kalman filter has been used for fault detection, a procedure that is an efficient way to track the state of the system and that uses the sparse nature of the state transition matrices.
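
    The state-estimation step can be illustrated with an ordinary discrete Kalman filter on a drastically reduced surrogate model — three well-mixed cabin nodes with sensors at two of them — rather than the implicit filter on the full 3-D convection-diffusion state used in the paper. All matrices and thresholds below are invented for illustration.

    ```python
    import numpy as np

    def kalman_step(x, P, z, A, H, Q, R):
        """One predict/update cycle of a standard discrete Kalman filter."""
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy 3-node cabin: concentrations mix slightly each step (rows sum to 1).
    A = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05],
                  [0.05, 0.05, 0.90]])
    H = np.array([[1.0, 0.0, 0.0],        # sensors at nodes 1 and 3 only
                  [0.0, 0.0, 1.0]])
    Q, R = 1e-4 * np.eye(3), 1e-2 * np.eye(2)

    rng = np.random.default_rng(2)
    x, P = np.zeros(3), np.eye(3)          # filter starts with no knowledge
    true = np.array([1.0, 0.0, 0.0])       # contaminant released at node 1
    for _ in range(50):
        true = A @ true
        z = H @ true + rng.normal(scale=0.05, size=2)
        x, P = kalman_step(x, P, z, A, H, Q, R)

    tolerance = 0.2                        # alarm threshold on estimated concentration
    print("estimate:", np.round(x, 3), " true:", np.round(true, 3))
    print("alarm raised:", bool(x.max() > tolerance))
    ```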
