Sample records for robust visual tracking

  1. Self-paced model learning for robust visual tracking

    NASA Astrophysics Data System (ADS)

    Huang, Wenhui; Gu, Jason; Ma, Xin; Li, Yibin

    2017-01-01

    In visual tracking, learning a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and the frequency of model updating, which involves many details that can affect the tracking results. Self-paced learning (SPL) has recently attracted considerable interest in machine learning and computer vision. SPL is inspired by the learning principle underlying human cognition, in which learning generally progresses from easier samples to more complex aspects of a task. We propose a tracking method that integrates the SPL paradigm into visual tracking, so that reliable samples can be automatically selected for model learning. In contrast to many existing model learning strategies in visual tracking, we establish the missing link between sample selection and model learning, which are combined into a single objective function in our approach. Sample weights and model parameters can be learned by minimizing this single objective function. Additionally, to compute real-valued learning weights for samples, an error-tolerant self-paced function that considers the characteristics of visual tracking is proposed. We demonstrate the robustness and efficiency of our tracker on a recent tracking benchmark data set with 50 video sequences.
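
The alternating scheme behind SPL-style methods can be sketched as follows. This is a minimal illustration with a toy 1-D "model" (a weighted mean) and the classic hard binary weight rule; the paper instead proposes a real-valued, error-tolerant self-paced function, and all names and constants here are illustrative.

```python
# Self-paced learning sketch: alternate between (1) picking sample weights
# given the current losses and (2) refitting the model on weighted samples,
# while the "age" parameter lam grows to admit harder samples over time.

def spl_weights(losses, lam):
    """Classic hard SPL rule: keep a sample iff its loss is below lam."""
    return [1.0 if l < lam else 0.0 for l in losses]

def weighted_mean_model(samples, weights):
    """Toy 'model': the weighted mean of the selected 1-D samples."""
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * x for w, x in zip(weights, samples)) / total

def spl_fit(samples, lam=1.0, growth=1.5, iters=5):
    """Alternate weight selection and model fitting, growing lam each round."""
    model = sum(samples) / len(samples)      # start from the plain mean
    for _ in range(iters):
        losses = [(x - model) ** 2 for x in samples]
        v = spl_weights(losses, lam)         # easy (low-loss) samples first
        model = weighted_mean_model(samples, v)
        lam *= growth                        # admit harder samples later
    return model, v

# The outlier (8.0) keeps a high loss, stays excluded, and the model
# settles near the inliers around 1.0 instead of the contaminated mean.
model, v = spl_fit([1.0, 1.1, 0.9, 1.05, 8.0], lam=0.5)
```

The key property the sketch shows: sample selection (the weights `v`) and model fitting are two blocks of one objective, optimized alternately.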

  2. Robust visual tracking via multiscale deep sparse networks

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo

    2017-04-01

    In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features, and it has achieved significant success in handling tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without the offline pretraining process and effectively exploits robust and powerful features through online training on limited labeled data alone. Meanwhile, the tracker builds four deep sparse networks of different scales according to the target's profile type. During tracking, the tracker adaptively selects the matching tracking network in accordance with the initial target's profile type, which preserves inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.

  3. LEA Detection and Tracking Method for Color-Independent Visual-MIMO

    PubMed Central

    Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo

    2016-01-01

    Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is degraded by light emitting array (LEA) detection and tracking errors in the received image, because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be located, and then apply the Harris corner detection method. Next, we use Kalman filtering for robust tracking, predicting the most probable location of the LEA as the relative position between the camera and the LEA varies. In the last step of the proposed method, perspective projection is used to correct the distorted image, which improves symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement. PMID:27384563
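
The Kalman prediction step described above, which supplies the most probable next LEA location between detections, can be sketched with a constant-velocity model for one image coordinate (run two such filters, or one 4-state filter, for a 2-D pixel position). All matrices and noise values below are illustrative tuning choices, not values from the paper.

```python
import numpy as np

# Constant-velocity Kalman filter for one image coordinate of the LEA.
dt = 1.0                                  # one frame between measurements
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition: [pos, vel]
H = np.array([[1.0, 0.0]])                # we only measure position
Q = 0.01 * np.eye(2)                      # process noise (tuning parameter)
R = np.array([[1.0]])                     # measurement noise (tuning parameter)

def kalman_step(x, P, z):
    """One predict + update cycle; returns filtered state and covariance."""
    # Predict: the most probable next location before seeing the detection.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the detected LEA position z.
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Track a target moving ~2 px/frame from noisy detections; the velocity
# estimate converges toward 2 and the position follows the measurements.
x = np.array([0.0, 0.0])                  # initial [position, velocity]
P = np.eye(2)
for z in [2.1, 3.9, 6.2, 8.0, 10.1]:
    x, P = kalman_step(x, P, np.array([z]))
```

When a detection is missed, only the predict half runs, which is exactly how the filter rides out brief occlusions.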

  4. LEA Detection and Tracking Method for Color-Independent Visual-MIMO.

    PubMed

    Kim, Jai-Eun; Kim, Ji-Won; Kim, Ki-Doo

    2016-07-02

    Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is degraded by light emitting array (LEA) detection and tracking errors in the received image, because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be located, and then apply the Harris corner detection method. Next, we use Kalman filtering for robust tracking, predicting the most probable location of the LEA as the relative position between the camera and the LEA varies. In the last step of the proposed method, perspective projection is used to correct the distorted image, which improves symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement.

  5. A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots

    PubMed Central

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-01-01

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance from its visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm for processing colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly resolved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331
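
The reason CT suits power-limited robots is its compression step: a very sparse random measurement matrix projects a high-dimensional feature vector to a low-dimensional one while approximately preserving distances. A sketch of that projection (Achlioptas-style entries; dimensions and the sparsity factor are illustrative, not the paper's settings):

```python
import numpy as np

# Very sparse random measurement matrix R: each entry is +sqrt(s) or
# -sqrt(s) with probability 1/(2s) each, and 0 otherwise, so most of the
# matrix is zero and the projection v = R @ x is cheap to compute.
rng = np.random.default_rng(0)

def sparse_measurement_matrix(n_low, n_high, s=3):
    probs = [1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)]
    return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                      size=(n_low, n_high), p=probs)

R = sparse_measurement_matrix(50, 10000)
x = rng.normal(size=10000)          # stand-in for a Haar-like feature vector
v = R @ x / np.sqrt(50)             # compressed 50-D feature

# Johnson-Lindenstrauss flavor: the norm (and pairwise distances) of the
# compressed vector stay close to those of the original, so a classifier
# trained on v loses little discriminative information.
```

The same fixed `R` is reused for every frame and every candidate patch, which is what keeps the per-frame cost low.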

  6. A Kinect-based real-time compressive tracking prototype system for amphibious spherical robots.

    PubMed

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-04-08

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance from its visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm for processing colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly resolved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.

  7. Effective real-time vehicle tracking using discriminative sparse coding on local patches

    NASA Astrophysics Data System (ADS)

    Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei

    2016-01-01

    A visual tracking framework that provides an object detector and tracker, focusing on effective and efficient visual tracking in surveillance for real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which differs from appearance model-matching approaches. Through a feature representation called discriminative sparse coding on local patches (DSCLP), which trains a dictionary on clustered local patches sampled from both positive and negative datasets, discriminative power and robustness are improved remarkably, making the method more robust in complex realistic settings with all kinds of degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables the framework to achieve real-time tracking performance even in high-definition sequences with heavy traffic. Experimental results show that this work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness, and exhibits increased robustness in complex real-world scenarios with degraded image quality caused by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.

  8. Multiple feature fusion via covariance matrix for visual tracking

    NASA Astrophysics Data System (ADS)

    Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui

    2018-04-01

    To address the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of the tracking algorithm. Within the framework of a quantum genetic algorithm, this paper uses the region covariance descriptor to fuse color, edge, and texture features, and uses a fast covariance intersection algorithm to update the model. The low dimensionality of the region covariance descriptor, the fast convergence and strong global optimization ability of the quantum genetic algorithm, and the speed of the fast covariance intersection algorithm together improve the computational efficiency of the fusion, matching, and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. The experiments show that the proposed algorithm not only achieves fast and robust tracking but also effectively handles interference from occlusion, rotation, deformation, motion blur, and so on.
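
The region covariance descriptor that carries the fusion can be sketched directly: each pixel contributes a feature vector, and the region is summarized by the covariance matrix of those vectors. The feature set below (position, intensity, gradient magnitudes) is one common choice for illustration, not necessarily the exact color/edge/texture set fused in the paper.

```python
import numpy as np

# Region covariance descriptor: a fixed-size summary of a patch, independent
# of the patch's pixel count, which makes it cheap to match and to fuse.
def region_covariance(patch):
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    Iy, Ix = np.gradient(patch.astype(float))   # image derivatives
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(Ix).ravel(), np.abs(Iy).ravel()])  # 5 x n
    return np.cov(feats)                        # 5 x 5 descriptor

# A simple patch with a vertical intensity ramp.
patch = np.outer(np.arange(8.0), np.ones(8))
C = region_covariance(patch)
# C is symmetric positive semi-definite and stays 5x5 for any patch size;
# adding more cues (color channels, texture responses) just adds rows to
# feats, so multi-feature fusion enlarges C only modestly.
```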

  9. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion

    PubMed Central

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-01-01

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145
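
The balance between jitter and latency described above can be illustrated with a toy adaptive blend: integrate the high-rate inertial prediction every step, and when a (slower) visual pose arrives, pull the estimate toward it with a gain that depends on motion. This is a 1-D sketch under assumed gain values, not the paper's actual filter.

```python
# Adaptive visual-inertial blending sketch (1-D pose for brevity).
def fuse(pose, inertial_delta, visual_pose=None, speed=0.0):
    pose = pose + inertial_delta            # high-frequency inertial update
    if visual_pose is not None:             # a visual measurement arrived
        # Fast motion trusts the fresh visual fix more (reduces latency);
        # slow motion keeps the gain small (suppresses jitter).
        gain = 0.8 if abs(speed) > 1.0 else 0.2
        pose = (1.0 - gain) * pose + gain * visual_pose
    return pose

pose = 0.0
# Slow motion: visual jitter barely passes through the small gain.
pose = fuse(pose, 0.05, visual_pose=0.3, speed=0.05)
# Fast motion: the visual fix dominates to keep perceived latency low.
pose = fuse(pose, 1.0, visual_pose=1.4, speed=2.0)
```

A production system would apply this per degree of freedom (and with a smooth gain schedule rather than a hard threshold), but the trade-off is the same.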

  10. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    PubMed

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time.

  11. Effective Visual Tracking Using Multi-Block and Scale Space Based on Kernelized Correlation Filters

    PubMed Central

    Jeong, Soowoong; Kim, Guisik; Lee, Sangkeun

    2017-01-01

    Accurate scale estimation and occlusion handling are challenging problems in visual tracking. Recently, correlation filter-based trackers have shown impressive results in terms of accuracy, robustness, and speed. However, such models are not robust to scale variation and occlusion. In this paper, we address the problems associated with scale variation and occlusion by employing a scale space filter and a multi-block scheme based on a kernelized correlation filter (KCF) tracker. Furthermore, we develop a more robust algorithm using an appearance update model that approximates changes of state due to occlusion and deformation. In particular, an adaptive update scheme is presented to make each process robust. The experimental results demonstrate that the proposed method outperformed 29 state-of-the-art trackers on 100 challenging sequences. Specifically, the results obtained with the proposed scheme were improved by 8% and 18% compared to those of the KCF tracker for 49 occlusion and 64 scale variation sequences, respectively. Therefore, the proposed tracker can be a robust and useful tool for object tracking when occlusion and scale variation are involved. PMID:28241475
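
The closed-form Fourier-domain training that KCF-style trackers build on can be sketched in 1-D with a linear kernel: training solves ridge regression over all cyclic shifts of the template at once, and detection correlates the learned filter with a new sample. The Gaussian label shape and regularizer below are standard illustrative choices, not the paper's settings.

```python
import numpy as np

# Linear-kernel correlation filter (the 1-D, linear-kernel case of KCF).
def train(x, y, lam=1e-3):
    xf, yf = np.fft.fft(x), np.fft.fft(y)
    kf = xf * np.conj(xf) / len(x)         # kernel autocorrelation spectrum
    return yf / (kf + lam), xf             # alpha_f and template spectrum

def detect(alphaf, xf, z):
    zf = np.fft.fft(z)
    kzf = zf * np.conj(xf) / len(z)        # cross-correlation with template
    resp = np.real(np.fft.ifft(alphaf * kzf))
    return int(np.argmax(resp))            # cyclic shift with peak response

n = 64
d = np.minimum(np.arange(n), n - np.arange(n))   # circular distance to 0
y = np.exp(-0.5 * d ** 2 / 4.0)                  # Gaussian label, peak at 0
x = np.zeros(n); x[20:25] = 1.0                  # toy 1-D template
alphaf, xf = train(x, y)
shift = detect(alphaf, xf, np.roll(x, 5))        # same pattern, shifted by 5
```

Because every operation is an elementwise product in the Fourier domain, training and detection cost O(n log n), which is why these trackers run at high frame rates and why a separate scale filter (as in the paper) can be added cheaply.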

  12. Effective Visual Tracking Using Multi-Block and Scale Space Based on Kernelized Correlation Filters.

    PubMed

    Jeong, Soowoong; Kim, Guisik; Lee, Sangkeun

    2017-02-23

    Accurate scale estimation and occlusion handling are challenging problems in visual tracking. Recently, correlation filter-based trackers have shown impressive results in terms of accuracy, robustness, and speed. However, such models are not robust to scale variation and occlusion. In this paper, we address the problems associated with scale variation and occlusion by employing a scale space filter and a multi-block scheme based on a kernelized correlation filter (KCF) tracker. Furthermore, we develop a more robust algorithm using an appearance update model that approximates changes of state due to occlusion and deformation. In particular, an adaptive update scheme is presented to make each process robust. The experimental results demonstrate that the proposed method outperformed 29 state-of-the-art trackers on 100 challenging sequences. Specifically, the results obtained with the proposed scheme were improved by 8% and 18% compared to those of the KCF tracker for 49 occlusion and 64 scale variation sequences, respectively. Therefore, the proposed tracker can be a robust and useful tool for object tracking when occlusion and scale variation are involved.

  13. Robust visual tracking based on deep convolutional neural networks and kernelized correlation filters

    NASA Astrophysics Data System (ADS)

    Yang, Hua; Zhong, Donghong; Liu, Chenyi; Song, Kaiyou; Yin, Zhouping

    2018-03-01

    Object tracking is still a challenging problem in computer vision, as it entails learning an effective model to account for appearance changes caused by occlusion, out-of-view motion, in-plane rotation, scale change, and background clutter. This paper proposes a robust visual tracking algorithm based on deep convolutional neural networks and correlation filters (DCNNCT) to simultaneously address these challenges. The proposed DCNNCT algorithm utilizes a DCNN to extract the image features of a tracked target, and the full range of information from each convolutional layer is used to express the image features. Subsequently, kernelized correlation filters (CFs) in each convolutional layer are adaptively learned, and their correlation response maps are combined to estimate the location of the tracked target. To handle tracking failure, an online random ferns classifier is employed to redetect the tracked target, and a dual-threshold scheme is used to obtain the final target location by comparing the tracking result with the detection result. Finally, the change in scale of the target is determined by building scale pyramids and training a CF. Extensive experiments demonstrate that the proposed algorithm is effective at tracking, especially when evaluated using the overlap rate index. The DCNNCT algorithm is also highly competitive in terms of robustness with respect to state-of-the-art trackers in various challenging scenarios.

  14. Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model

    PubMed Central

    Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal

    2016-01-01

    In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomously tracking and chasing a moving target. The first main component of this algorithm is a global matching and local tracking approach: the algorithm initially finds feature correspondences, whereby an improved binary descriptor is developed for global feature matching and an iterative Lucas–Kanade optical flow algorithm is employed for local feature tracking. The second main module is an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, utilized for the first time to deal with outlier features in a visual tracking application), which further maximizes the representation of the target object; here, outlier feature detection is formulated as a binary classification problem on the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor at 640 × 512 image resolution and compares favorably with the most popular state-of-the-art trackers in terms of robustness, efficiency and accuracy. PMID:27589769
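
The forward-backward idea used by the outlier-rejection module can be sketched independently of the flow algorithm: track each feature forward one frame, then track the result backward, and keep the feature only if it returns close to where it started. The toy "tracker" below is a dictionary lookup standing in for Lucas-Kanade flow, and the threshold is an illustrative choice.

```python
# Forward-backward consistency check for feature correspondences.
def fb_filter(points, track_fwd, track_bwd, thresh=1.0):
    kept = []
    for p in points:
        q = track_fwd(p)                      # frame t -> t+1
        p_back = track_bwd(q)                 # frame t+1 -> t
        err = ((p[0] - p_back[0]) ** 2 + (p[1] - p_back[1]) ** 2) ** 0.5
        if err < thresh:                      # consistent -> keep the feature
            kept.append(p)
    return kept

# A well-tracked point returns exactly; a point lost to occlusion drifts
# and fails the round trip, so it is discarded as an outlier.
fwd = {(0, 0): (2, 1), (5, 5): (9, 9)}
bwd = {(2, 1): (0, 0), (9, 9): (2, 2)}        # (5, 5) does not come back
good = fb_filter([(0, 0), (5, 5)], fwd.get, bwd.get)
```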

  15. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with built-in cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare tracking accuracy, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame skipping.

  16. A non-disruptive technology for robust 3D tool tracking for ultrasound-guided interventions.

    PubMed

    Mung, Jay; Vignon, Francois; Jain, Ameet

    2011-01-01

    In the past decade, ultrasound (US) has become the preferred modality for a number of interventional procedures, offering excellent soft tissue visualization. Its main limitation, however, is poor visualization of surgical tools. A new method is proposed for robust 3D tracking and US image enhancement of surgical tools under US guidance. Small US sensors are mounted on existing surgical tools. As the imager emits acoustic energy, the electrical signal from the sensor is analyzed to reconstruct its 3D coordinates. These coordinates can then be used for 3D surgical navigation, similar to current-day tracking systems. A system with real-time 3D tool tracking and image enhancement was implemented on a commercial ultrasound scanner and 3D probe. Extensive water tank experiments with a tracked 0.2 mm sensor show robust performance in a wide range of imaging conditions and tool positions/orientations. The 3D tracking accuracy was 0.36 ± 0.16 mm throughout the imaging volume of 55° × 27° × 150 mm. Additionally, the tool was successfully tracked inside a beating heart phantom. This paper proposes an image enhancement and tool tracking technology with sub-mm accuracy for US-guided interventions. The technology is non-disruptive in terms of both existing clinical workflow and commercial considerations, showing promise for large-scale clinical impact.

  17. Adaptive low-rank subspace learning with online optimization for robust visual tracking.

    PubMed

    Liu, Risheng; Wang, Di; Han, Yuzhuo; Fan, Xin; Luo, Zhongxuan

    2017-04-01

    In recent years, sparse and low-rank models have been widely used to formulate appearance subspaces for visual tracking. However, most existing methods consider only the sparsity or low-rankness of the coefficients, which is not sufficient for appearance subspace learning on complex video sequences. Moreover, as both the low-rank and the column-sparse measures are tightly related to all the samples in the sequences, it is challenging to incrementally solve optimization problems with both nuclear norm and column-sparse norm on sequentially obtained video data. To address the above limitations, this paper develops a novel low-rank subspace learning with adaptive penalization (LSAP) framework for subspace-based robust visual tracking. Different from previous work, which often simply decomposes observations into low-rank features and sparse errors, LSAP simultaneously learns the subspace basis, low-rank coefficients, and column-sparse errors to formulate the appearance subspace. Within the LSAP framework, we introduce a Hadamard product-based regularization to incorporate rich generative/discriminative structure constraints that adaptively penalize the coefficients for subspace learning. It is shown that such adaptive penalization can significantly improve the robustness of LSAP on severely corrupted datasets. To utilize LSAP for online visual tracking, we also develop an efficient incremental optimization scheme for nuclear norm and column-sparse norm minimizations. Experiments on 50 challenging video sequences demonstrate that our tracker outperforms other state-of-the-art methods.
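
The building block behind nuclear-norm minimization, which incremental low-rank solvers like the one described reuse on sequentially arriving frames, is singular value thresholding (SVT): the proximal operator of the nuclear norm simply soft-thresholds the singular values. A self-contained sketch (the data and threshold are illustrative):

```python
import numpy as np

# Singular value thresholding: prox of tau * nuclear norm.
def svt(M, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)        # soft-threshold the spectrum
    return U @ np.diag(s_shrunk) @ Vt

# A rank-1 matrix plus small noise: thresholding wipes out the noise
# directions (singular values below tau) and returns a low-rank estimate.
rng = np.random.default_rng(1)
L = np.outer(rng.normal(size=20), rng.normal(size=15))   # exact rank 1
X = L + 0.01 * rng.normal(size=(20, 15))
Xhat = svt(X, tau=1.0)
rank = int(np.sum(np.linalg.svd(Xhat, compute_uv=False) > 1e-8))
```

In an iterative scheme, one SVT step per iteration alternates with the sparse-error update; the incremental variants in the paper avoid re-running a full SVD over all past frames.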

  18. Robust lane detection and tracking using multiple visual cues under stochastic lane shape conditions

    NASA Astrophysics Data System (ADS)

    Huang, Zhi; Fan, Baozheng; Song, Xiaolin

    2018-03-01

    As one of the essential components of environment perception for an intelligent vehicle, lane detection is confronted with challenges including robustness against complicated disturbances and illumination, as well as adaptability to stochastic lane shapes. To overcome these issues, we propose a robust lane detection method that applies a classification-generation-growth-based (CGG) operator to the detected lines, whereby the linear lane markings are identified by synergizing multiple visual cues with a priori knowledge and spatial-temporal information. According to the quality of the linear lane fitting, linear and linear-parabolic models are dynamically switched to describe the actual lane. A Kalman filter with adaptive noise covariance and region-of-interest (ROI) tracking are applied to improve robustness and efficiency. Experiments were conducted with images covering various challenging scenarios, and the results demonstrate the effectiveness of the presented method under complicated disturbances, illumination, and stochastic lane shapes.
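
The dynamic model switching described above can be sketched with residual-based selection: fit a linear lane model first, and fall back to a parabolic (quadratic) model only when the linear fit's residual is too large. The threshold and the toy lane data are illustrative assumptions.

```python
import numpy as np

# Switch between linear and parabolic lane models by fit quality.
def fit_lane(xs, ys, thresh=1.0):
    lin = np.polyfit(xs, ys, 1)
    rms = float(np.sqrt(np.mean((np.polyval(lin, xs) - ys) ** 2)))
    if rms < thresh:                       # straight enough: keep it simple
        return "linear", lin
    return "parabolic", np.polyfit(xs, ys, 2)

xs = np.linspace(0.0, 10.0, 20)
straight = 0.5 * xs + 1.0                  # straight lane marking
curved = 0.2 * xs ** 2 + 0.5 * xs + 1.0    # curved lane marking

kind1, _ = fit_lane(xs, straight)
kind2, _ = fit_lane(xs, curved)
```

Preferring the simpler model when it already explains the points keeps the tracker stable on straight roads, where a free quadratic term would otherwise wobble with noise.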

  19. Combined Feature Based and Shape Based Visual Tracker for Robot Navigation

    NASA Technical Reports Server (NTRS)

    Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.

    2005-01-01

    We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.

  20. The research and application of visual saliency and adaptive support vector machine in target tracking field.

    PubMed

    Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing

    2013-01-01

    Efficient target tracking algorithms have become a focus of current research on intelligent robots. The main challenge of target tracking for a mobile robot is environmental uncertainty: target states are difficult to estimate, and illumination changes, target shape changes, complex backgrounds, occlusion, and other factors all affect tracking robustness. To further improve the accuracy and reliability of target tracking, we present a novel target tracking algorithm that uses visual saliency and an adaptive support vector machine (ASVM). Furthermore, the algorithm is based on the mixture saliency of image features, including color, brightness, and motion features. During execution, visual saliency features are used and their common characteristics are expressed as the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm on video sequences in which the target objects undergo large changes in pose, scale, and illumination.

  21. Visual tracking using objectness-bounding box regression and correlation filters

    NASA Astrophysics Data System (ADS)

    Mbelwa, Jimmy T.; Zhao, Qingjie; Lu, Yao; Wang, Fasheng; Mbise, Mercy

    2018-03-01

    Visual tracking is a fundamental problem in computer vision with extensive application domains in surveillance and intelligent systems. Recently, correlation filter-based tracking methods have achieved great success in terms of robustness, accuracy, and speed. However, such methods struggle with fast motion (FM), motion blur (MB), illumination variation (IV), and drift caused by occlusion (OCC). To solve this problem, a tracking method is proposed that integrates an objectness-bounding box regression (O-BBR) model with a scheme based on the kernelized correlation filter (KCF). The KCF-based scheme is used to improve tracking performance under FM and MB. To handle the drift problem caused by OCC and IV, we propose objectness proposals trained by bounding box regression as prior knowledge to provide candidates and background suppression. Finally, the KCF scheme, serving as the base tracker, and the O-BBR model are fused to obtain the state of the target object. Extensive experimental comparisons of the developed tracking method with other state-of-the-art trackers were performed on challenging video sequences. The comparison results show that our proposed tracking method outperforms other state-of-the-art tracking methods in terms of effectiveness, accuracy, and robustness.

  22. Visual tracking of da Vinci instruments for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Kuhn, E.; Bodenstedt, S.; Röhl, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2014-03-01

    Intraoperative tracking of laparoscopic instruments is a prerequisite for realizing further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present image-based markerless 3D tracking of different da Vinci instruments in near real-time without an explicit model. The method uses different visual cues to segment the instrument tip, calculates a tip point, and uses a multiple-object particle filter for tracking. The accuracy and robustness are evaluated with in vivo data.
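
The particle-filter loop at the core of such a tracker follows a fixed three-step cycle: propagate particles with a motion model, weight them by agreement with the observation, and resample. A 1-D state and a Gaussian likelihood keep the sketch short; the paper's observation model is based on visual cues from the segmented tip instead, and all constants here are illustrative.

```python
import math
import random

random.seed(0)

def step(particles, observation, motion_std=0.5, obs_std=1.0):
    # 1. Predict: diffuse each particle according to the motion model.
    moved = [p + random.gauss(0.0, motion_std) for p in particles]
    # 2. Weight: likelihood of the observation given each particle.
    weights = [math.exp(-0.5 * ((p - observation) / obs_std) ** 2)
               for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Particles start spread over the image line, then concentrate around the
# observed tip position as observations arrive.
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for z in [4.0, 4.2, 4.4, 4.6]:            # observations of a drifting tip
    particles = step(particles, z)
estimate = sum(particles) / len(particles)   # posterior mean position
```

Because the posterior is a particle cloud rather than a single Gaussian, the filter naturally represents the multi-modal ambiguity that arises when several instruments look alike.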

  3. Online Multi-Modal Robust Non-Negative Dictionary Learning for Visual Tracking

    PubMed Central

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., a template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules that guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking both quantitatively and qualitatively. PMID:25961715
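
    The multiplicative update rules mentioned in the abstract are the standard device for keeping non-negative codes non-negative. As a toy sketch under simplified assumptions (a fixed non-negative dictionary, squared loss instead of the paper's M-estimation), the coding step looks like:

```python
import numpy as np

# Lee-Seung-style multiplicative updates: h <- h * (W^T v) / (W^T W h).
# Starting from a positive h, every factor is non-negative, so h can
# never become negative, and the fitting loss is non-increasing.

def nn_code(W, v, n_iter=500):
    h = np.full(W.shape[1], 0.5)
    for _ in range(n_iter):
        h *= (W.T @ v) / (W.T @ W @ h + 1e-12)
    return h

rng = np.random.default_rng(1)
W = rng.random((20, 5))            # non-negative templates, one per column
v = W @ rng.random(5)              # observation that has an exact code
h = nn_code(W, v)
residual = np.linalg.norm(W @ h - v) / np.linalg.norm(v)
```

    In the paper's setting one such code is learned per particle and per modality; here a single modality and a fixed dictionary suffice to show the mechanics.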

  4. Online multi-modal robust non-negative dictionary learning for visual tracking.

    PubMed

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., a template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules that guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking both quantitatively and qualitatively.

  5. Development of robust behaviour recognition for an at-home biomonitoring robot with assistance of subject localization and enhanced visual tracking.

    PubMed

    Imamoglu, Nevrez; Dorronzoro, Enrique; Wei, Zhixuan; Shi, Huangjun; Sekine, Masashi; González, José; Gu, Dongyun; Chen, Weidong; Yu, Wenwei

    2014-01-01

    Our research is focused on the development of an at-home health care biomonitoring mobile robot for people in need. The main task of the robot is to detect and track a designated subject while recognizing his or her activity for analysis and providing warnings in an emergency. In order to push the system towards real application, in this study we tested the robustness of the robot system under several major environment changes, control parameter changes, and subject variations. First, an improved color tracker was analyzed to find the limitations and constraints of the robot's visual tracking, considering suitable illumination values and tracking distance intervals. Then, regarding subject safety and continuous robot-based subject tracking, various control parameters were tested on different room layouts. Finally, since the main objective of the system is to identify different walking patterns for further analysis, we proposed a fast, simple, and person-specific activity recognition model that makes full use of localization information and is robust to partial occlusion. The proposed activity recognition algorithm was tested on different walking patterns with different subjects, and the results showed high recognition accuracy.

  6. Development of Robust Behaviour Recognition for an at-Home Biomonitoring Robot with Assistance of Subject Localization and Enhanced Visual Tracking

    PubMed Central

    Imamoglu, Nevrez; Dorronzoro, Enrique; Wei, Zhixuan; Shi, Huangjun; González, José; Gu, Dongyun; Yu, Wenwei

    2014-01-01

    Our research is focused on the development of an at-home health care biomonitoring mobile robot for people in need. The main task of the robot is to detect and track a designated subject while recognizing his or her activity for analysis and providing warnings in an emergency. In order to push the system towards real application, in this study we tested the robustness of the robot system under several major environment changes, control parameter changes, and subject variations. First, an improved color tracker was analyzed to find the limitations and constraints of the robot's visual tracking, considering suitable illumination values and tracking distance intervals. Then, regarding subject safety and continuous robot-based subject tracking, various control parameters were tested on different room layouts. Finally, since the main objective of the system is to identify different walking patterns for further analysis, we proposed a fast, simple, and person-specific activity recognition model that makes full use of localization information and is robust to partial occlusion. The proposed activity recognition algorithm was tested on different walking patterns with different subjects, and the results showed high recognition accuracy. PMID:25587560

  7. Visual Tracking Based on Extreme Learning Machine and Sparse Representation

    PubMed Central

    Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen

    2015-01-01

    Existing sparse representation-based visual trackers mostly suffer from high computational cost and poor robustness. To address these issues, a novel tracking method is presented that combines sparse representation with an emerging learning technique, namely the extreme learning machine (ELM). Specifically, visual tracking is divided into two consecutive processes. First, ELM is utilized to find the optimal separating hyperplane between target observations and background ones. The trained ELM classification function is thus able to efficiently remove most of the candidate samples related to background content, thereby reducing the total computational cost of the subsequent sparse representation. Second, to further combine ELM and sparse representation, the resultant confidence values (i.e., probabilities of being the target) of samples under the ELM classification function are used to construct a new manifold learning constraint term in the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used to derive the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix-form solution allows the candidate samples to be evaluated in parallel, leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
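
    The appeal of the ELM step above is that training reduces to a single linear solve: the hidden layer is random and fixed, and only the output weights are fitted. A minimal sketch (the toy data and hyperparameters are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Extreme learning machine: random, fixed hidden weights; output weights
# solved in closed form via the Moore-Penrose pseudo-inverse.

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=50):
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)             # random hidden-layer activations
    beta = np.linalg.pinv(H) @ y       # least-squares output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy target-vs-background separation in a 2-D feature space
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])
model = elm_train(X, y)
train_acc = np.mean(np.sign(elm_predict(model, X)) == y)
```

    In the tracker, such a classifier prunes background candidates cheaply before the expensive sparse coding stage runs.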

  8. A visual tracking method based on deep learning without online model updating

    NASA Astrophysics Data System (ADS)

    Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei

    2018-02-01

    The paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, the deep detection model SSD (Single Shot Multibox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and the HOG (Histogram of Oriented Gradients) feature are combined to select the tracking object. During tracking, a multi-scale object searching map is built to improve the detection performance of the deep model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging factors such as deformation, scale variation, rotation, illumination variation, and background clutter; moreover, its overall performance is better than that of the other six tracking methods.

  9. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising results in recent years. However, trackers under the sparse representation framework tend to overemphasize sparsity and ignore the correlation of visual information. In addition, sparse coding methods encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. First, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse coding method, which takes both the spatial neighborhood information of the image patch and the computational burden into consideration, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained within a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness. PMID:27630710

  10. Temporal Restricted Visual Tracking Via Reverse-Low-Rank Sparse Learning.

    PubMed

    Yang, Yehui; Hu, Wenrui; Xie, Yuan; Zhang, Wensheng; Zhang, Tianzhu

    2017-02-01

    An effective representation model, which aims to mine the most meaningful information in the data, plays an important role in visual tracking. Some recent particle-filter-based trackers achieve promising results by introducing the low-rank assumption into the representation model. However, their assumed low-rank structure of candidates limits robustness when facing severe challenges such as abrupt motion. To avoid this limitation, we propose a temporal restricted reverse-low-rank learning algorithm for visual tracking with the following advantages: 1) the reverse-low-rank model jointly represents target and background templates via candidates, which exploits the low-rank structure among consecutive target observations and enforces the temporal consistency of the target at a global level; 2) appearance consistency may be broken when the target undergoes sudden changes, so we propose a local constraint via an l1,2 mixed norm, which not only ensures the local consistency of the target appearance but also tolerates sudden changes between two adjacent frames; and 3) to alleviate the influence of unreasonable representation values due to outlier candidates, an adaptive weighting scheme is designed to improve the robustness of the tracker. Evaluations on 26 challenging video sequences show the effectiveness and favorable performance of the proposed algorithm against 12 state-of-the-art visual trackers.

  11. Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.

    PubMed

    Xiong, Chunshui; Huang, Lei; Liu, Changping

    2014-01-01

    Most existing vision-based methods for gaze tracking require a tedious calibration process, in which subjects must fixate on one or several specific points in space. However, such cooperation is hard to obtain, especially from children and infants. In this paper, a new calibration-free gaze tracking system and method is presented for the automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflection (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM) is employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust, and sufficient for the measurement of visual acuity in human infants.
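
    The PCCR polynomial feature named in the abstract is conventionally a low-order polynomial of the pupil-to-glint vector, fitted to known screen points by least squares. A hedged sketch with a synthetic quadratic mapping (the grid and coefficients are invented for illustration):

```python
import numpy as np

# Second-order polynomial mapping from a PCCR vector (vx, vy) to screen
# coordinates, fitted in closed form with least squares.

def poly_features(v):
    vx, vy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])

def fit_gaze_map(pccr, screen):
    coef, *_ = np.linalg.lstsq(poly_features(pccr), screen, rcond=None)
    return coef

# Synthetic 3x3 grid of PCCR vectors with a known quadratic ground truth
gx, gy = np.meshgrid(np.linspace(-1, 1, 3), np.linspace(-1, 1, 3))
pccr = np.column_stack([gx.ravel(), gy.ravel()])
screen = np.column_stack([2 + 3 * pccr[:, 0] + 0.5 * pccr[:, 0]**2,
                          1 - 2 * pccr[:, 1] + 0.2 * pccr[:, 0] * pccr[:, 1]])
coef = fit_gaze_map(pccr, screen)
err = np.abs(poly_features(pccr) @ coef - screen).max()
```

    In the calibration-free setting of the paper, such a mapping would be learned offline from cooperative subjects rather than per-infant.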

  12. Evaluating a robust contour tracker on echocardiographic sequences.

    PubMed

    Jacob, G; Noble, J A; Mulet-Parada, M; Blake, A

    1999-03-01

    In this paper we present an evaluation of a robust visual image tracker on echocardiographic image sequences. We show how the tracking framework can be customized to define an appropriate shape space, learnt from a training data set, that describes heart shape deformations. We also investigate energy-based temporal boundary enhancement methods to improve image feature measurement. Results demonstrate real-time tracking on real sequences of normal heart motion and on synthesized and real sequences of abnormal heart motion. We conclude by discussing some of our current research efforts.
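
    A shape space of the kind described is commonly learned by PCA on aligned training contours; tracked shapes are then constrained to the mean plus a few deformation modes. This is a generic sketch of that idea, not the paper's specific formulation (the synthetic contours and mode count are assumptions):

```python
import numpy as np

# Learn a linear shape space from stacked contour coordinates via PCA
# (SVD of the centered data); project a shape onto the learned space.

def learn_shape_space(shapes, n_modes=2):
    mean = shapes.mean(axis=0)
    _, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def project_to_space(shape, mean, modes):
    b = modes @ (shape - mean)        # low-dimensional shape parameters
    return mean + modes.T @ b         # closest shape inside the space

# Training contours: a base circle deformed along two fixed modes
rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
base = np.hstack([np.cos(t), np.sin(t)])
m1, m2 = rng.standard_normal((2, 32))
coeffs = rng.standard_normal((12, 2))
shapes = base + coeffs[:, :1] * m1 + coeffs[:, 1:] * m2
mean, modes = learn_shape_space(shapes)
err = np.abs(project_to_space(shapes[0], mean, modes) - shapes[0]).max()
```

    Constraining the tracker to this space is what makes contour tracking robust: implausible heart shapes simply cannot be represented.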

  13. Figure–ground discrimination behavior in Drosophila. I. Spatial organization of wing-steering responses

    PubMed Central

    Fox, Jessica L.; Aptekar, Jacob W.; Zolotova, Nadezhda M.; Shoemaker, Patrick A.; Frye, Mark A.

    2014-01-01

    The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input. PMID:24198267

  14. Discriminative object tracking via sparse representation and online dictionary learning.

    PubMed

    Xie, Yuan; Zhang, Wensheng; Li, Cuihua; Lin, Shuyang; Qu, Yanyun; Zhang, Yinghua

    2014-04-01

    We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching scheme. This algorithm consists of two parts: local sparse coding with an online-updated discriminative dictionary for tracking (SOD part), and keypoint matching refinement for enhancing the tracking performance (KP part). In the SOD part, the local image patches of the target object and background are represented by their sparse codes over an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes information about both the foreground and the background, may provide more discriminative power. Furthermore, in order to adapt the dictionary to the variation of the foreground and background during tracking, an online learning method is employed to update the dictionary. The KP part utilizes a refined keypoint matching scheme to improve the performance of the SOD part. With the help of sparse representation and the online-updated discriminative dictionary, the KP part is more robust than traditional methods at rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.
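
    Sparse coding over a dictionary, as used in the SOD part, can be illustrated with the simplest greedy algorithm, matching pursuit (a stand-in for whatever solver the paper actually uses; the orthonormal toy dictionary is an assumption for clarity):

```python
import numpy as np

# Greedy sparse coding with unit-norm atoms: repeatedly pick the atom
# most correlated with the residual and subtract its contribution.

def matching_pursuit(D, x, n_atoms=2):
    r = x.astype(float).copy()
    code = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        j = np.argmax(np.abs(D.T @ r))    # best-matching atom
        c = D[:, j] @ r
        code[j] += c
        r -= c * D[:, j]
    return code

D = np.eye(10)[:, :6]                 # orthonormal toy dictionary
x = 2 * D[:, 1] - 3 * D[:, 4]         # a 2-sparse signal
code = matching_pursuit(D, x)
```

    With orthonormal atoms the two-step pursuit recovers the code exactly; over-complete discriminative dictionaries trade this exactness for richer foreground/background separation.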

  15. Deterministic object tracking using Gaussian ringlet and directional edge features

    NASA Astrophysics Data System (ADS)

    Krieger, Evan W.; Sidike, Paheding; Aspiras, Theus; Asari, Vijayan K.

    2017-10-01

    Challenges currently existing for intensity-based histogram feature tracking methods in wide area motion imagery (WAMI) data include object structural information distortions, background variations, and object scale change. These issues are caused by different pavement or ground types and by changes in sensor or altitude. All of these challenges need to be overcome in order to have a robust object tracker, while attaining a computation time appropriate for real-time processing. To achieve this, we present a novel method, the Directional Ringlet Intensity Feature Transform (DRIFT), which employs Kirsch kernel filtering for edge features and a ringlet feature mapping for rotational invariance. The method also includes an automatic scale change component to obtain accurate object boundaries, and improvements for lowering computation times. We evaluated the DRIFT algorithm on two challenging WAMI datasets, namely Columbus Large Image Format (CLIF) and Large Area Image Recorder (LAIR), to assess its robustness and efficiency. Additional evaluations on general tracking video sequences were performed using the Visual Tracker Benchmark and Visual Object Tracking 2014 databases to demonstrate the algorithm's ability to handle additional challenges in long, complex sequences, including scale change. Experimental results show that the proposed approach yields competitive results compared to state-of-the-art object tracking methods on the testing datasets.
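
    The Kirsch filtering mentioned above uses eight directional 3x3 masks, each a rotation of one base mask's border ring; the edge map keeps the strongest directional response per pixel. A self-contained sketch (the step-edge test image is an illustration, not DRIFT itself):

```python
import numpy as np

def kirsch_masks():
    # Border ring of the base (north) mask, clockwise from the top-left;
    # the eight masks are the eight rotations of this ring.
    ring = [5, 5, 5, -3, -3, -3, -3, -3]
    pos = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    masks = []
    for r in range(8):
        m = np.zeros((3, 3))
        for (i, j), v in zip(pos, ring[r:] + ring[:r]):
            m[i, j] = v
        masks.append(m)
    return masks

def kirsch_edges(img):
    # Valid 3x3 convolution with each mask via shifted slices;
    # keep the maximum directional response at every pixel.
    img = img.astype(float)
    h, w = img.shape
    resp = np.zeros((8, h - 2, w - 2))
    for k, m in enumerate(kirsch_masks()):
        for i in range(3):
            for j in range(3):
                resp[k] += m[i, j] * img[i:i + h - 2, j:j + w - 2]
    return resp.max(axis=0)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                      # vertical step edge
edges = kirsch_edges(img)
```

    Because every Kirsch mask sums to zero, flat regions respond with exactly zero, while the step edge produces a strong response in the mask aligned with it.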

  16. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    PubMed

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  17. Scale-adaptive compressive tracking with feature integration

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Li, Jicheng; Chen, Xiao; Li, Shuxin

    2016-05-01

    Numerous tracking-by-detection methods have been proposed for robust visual tracking, among which compressive tracking (CT) has obtained promising results. A scale-adaptive CT method based on multi-feature integration is presented to improve the robustness and accuracy of CT. We introduce a keypoint-based model to achieve accurate scale estimation, which can additionally give a prior location of the target. Furthermore, exploiting the high efficiency of a data-independent random projection matrix, multiple features are integrated into an effective appearance model to construct the naïve Bayes classifier. Finally, an adaptive update scheme is proposed to update the classifier conservatively. Experiments on various challenging sequences demonstrate substantial improvements by our proposed tracker over CT and other state-of-the-art trackers in terms of dealing with scale variation, abrupt motion, deformation, and illumination changes.
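
    The data-independent random projection at the heart of compressive tracking is typically a very sparse sign matrix that approximately preserves distances (Johnson-Lindenstrauss). A hedged sketch of that compression step (dimensions and sparsity level are illustrative choices):

```python
import numpy as np

# Sparse random projection: entries are +sqrt(s), 0, -sqrt(s) with
# probabilities 1/(2s), 1 - 1/s, 1/(2s), so the matrix is mostly zeros
# yet has unit-variance entries.

def sparse_projection(k, n, s=3, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((k, n))
    R = np.zeros((k, n))
    R[u < 1 / (2 * s)] = np.sqrt(s)
    R[u > 1 - 1 / (2 * s)] = -np.sqrt(s)
    return R

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)            # high-dimensional feature vector
R = sparse_projection(64, 1024)
z = R @ x / np.sqrt(64)                  # 64-D compressed feature
ratio = np.linalg.norm(z) / np.linalg.norm(x)
```

    Because the matrix is data-independent and mostly zero, the compression is cheap enough to run per candidate window at frame rate.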

  18. Human Mobility Monitoring in Very Low Resolution Visual Sensor Network

    PubMed Central

    Bo Bo, Nyan; Deboeverie, Francis; Eldib, Mohamed; Guan, Junzhi; Xie, Xingzhe; Niño, Jorge; Van Haerenborgh, Dirk; Slembrouck, Maarten; Van de Velde, Samuel; Steendam, Heidi; Veelaert, Peter; Kleihorst, Richard; Aghajan, Hamid; Philips, Wilfried

    2014-01-01

    This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30 × 30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computational requirements and power consumption. The core of our proposed system is a robust people tracker that uses low resolution videos provided by the visual sensor network. The distributed processing architecture of our tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. In this paper, we experimentally show that reliable tracking of people is possible using very low resolution imagery. We also compare the performance of our tracker against a state-of-the-art tracking method and show that our method outperforms it. Moreover, mobility statistics of tracks, such as total distance traveled and average speed derived from trajectories, are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics. PMID:25375754

  19. Visual tracking using neuromorphic asynchronous event-based cameras.

    PubMed

    Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad

    2015-04-01

    This letter presents a novel, computationally efficient and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that this sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometries, similarities, and affine distortions, and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time, currently underexploited by most artificial vision systems, the method is able to resolve ambiguous cases of object occlusion that classical frame-based techniques handle poorly.
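
    The defining property of event-based tracking is that the estimate is updated per event rather than per frame. As a drastically simplified toy (a translation-only stand-in for the paper's full geometric-transformation estimation; the gain and event stream are invented), the incremental update can be sketched as:

```python
import numpy as np

# Each asynchronous event nudges the tracked position toward the event
# location, so the estimate evolves continuously between "frames".

def track_events(events, x0, gain=0.05):
    x = np.array(x0, dtype=float)
    trace = []
    for e in events:
        x += gain * (e - x)          # incremental update at event time
        trace.append(x.copy())
    return np.array(trace)

rng = np.random.default_rng(4)
center = np.array([10.0, 20.0])
events = center + rng.normal(0, 1.0, (400, 2))   # events around the target
trace = track_events(events, x0=[0.0, 0.0])
final_err = np.linalg.norm(trace[-1] - center)
```

    Replacing the translation update with an iterative estimate of a full isometric, similarity, or affine transform recovers the spirit of the method described above.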

  20. Adaptive particle filter for robust visual tracking

    NASA Astrophysics Data System (ADS)

    Dai, Jianghua; Yu, Shengsheng; Sun, Weiping; Chen, Xiaoping; Xiang, Jinhai

    2009-10-01

    Object tracking plays a key role in the field of computer vision. Particle filters have been widely used for visual tracking under nonlinear and/or non-Gaussian circumstances. In a standard particle filter, the state transition model for predicting the next location of the tracked object assumes the object motion is invariant, which cannot adequately approximate varying motion dynamics. In addition, the state estimate calculated as the mean of all the weighted particles is coarse or inaccurate due to various noise disturbances. Both factors may degrade tracking performance greatly. In this work, an adaptive particle filter (APF) with a velocity-updating based transition model (VTM) and an adaptive state estimate approach (ASEA) is proposed to improve object tracking. In the APF, the motion velocity embedded into the state transition model is updated continuously by a recursive equation, and the state estimate is obtained adaptively according to the state posterior distribution. The experimental results show that the APF can increase tracking accuracy and efficiency in complex environments.
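
    The predict-weight-resample loop with a velocity term in the state can be sketched in one dimension (noise levels and the constant-velocity target are illustrative assumptions, not the paper's VTM/ASEA formulation):

```python
import numpy as np

# Toy 1-D particle filter whose state is (position, velocity): the
# transition model propagates each particle with its own velocity, so
# the filter adapts to the target's motion instead of assuming it fixed.

rng = np.random.default_rng(0)

def particle_filter(observations, n=500, obs_std=1.0):
    x = np.zeros((n, 2))                                   # (position, velocity)
    estimates = []
    for z in observations:
        x[:, 0] += x[:, 1] + rng.normal(0, 0.5, n)         # predict position
        x[:, 1] += rng.normal(0, 0.1, n)                   # let velocity drift
        w = np.exp(-0.5 * ((z - x[:, 0]) / obs_std) ** 2)  # likelihood
        w /= w.sum()
        estimates.append(w @ x[:, 0])                      # weighted mean
        x = x[rng.choice(n, n, p=w)]                       # resample
    return np.array(estimates)

truth = 0.3 * np.arange(50)                 # constant-velocity target
obs = truth + rng.normal(0, 0.5, 50)        # noisy observations
est = particle_filter(obs)
final_err = abs(est[-1] - truth[-1])
```

    Resampling concentrates particles whose velocities match the target's, which is the mechanism a velocity-updating transition model formalizes.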

  1. Optimal Appearance Model for Visual Tracking

    PubMed Central

    Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao

    2016-01-01

    Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what adaptiveness means and how to realize it remain open issues. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and the current observation, a set of discrete samples is generated to approximate the foreground and background distributions. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639

  2. Robust feature tracking for endoscopic pose estimation and structure recovery

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Krappe, S.; Röhl, S.; Bodenstedt, S.; Müller-Stich, B.; Dillmann, R.

    2013-03-01

    Minimally invasive surgery is a highly complex medical discipline with several difficulties for the surgeon. To alleviate these difficulties, augmented reality can be used for intraoperative assistance. For visualization, the endoscope pose must be known which can be acquired with a SLAM (Simultaneous Localization and Mapping) approach using the endoscopic images. In this paper we focus on feature tracking for SLAM in minimally invasive surgery. Robust feature tracking and minimization of false correspondences is crucial for localizing the endoscope. As sensory input we use a stereo endoscope and evaluate different feature types in a developed SLAM framework. The accuracy of the endoscope pose estimation is validated with synthetic and ex vivo data. Furthermore we test the approach with in vivo image sequences from da Vinci interventions.

  3. Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.

    PubMed

    Rumei Zhang; Hao Liu; Jianda Han

    2017-07-01

    Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape, with integral cumulative error. The proposed fusion is anticipated to compensate for their shortcomings and improve the tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphological operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into the distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up, and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67±0.65 mm, 0.41±0.25 mm and 0.72±0.43 mm for x, y and z, respectively. The results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.
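
    One textbook way to fuse two position estimates with different noise characteristics is inverse-variance weighting; this is a generic sketch of that idea, not the paper's actual fusion rule (the sensor variances and simulation are assumptions):

```python
import numpy as np

# Inverse-variance weighting of two tip-position estimates: the fused
# estimate has lower variance than either sensor alone.

def fuse(p_a, var_a, p_b, var_b):
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return (w_a * p_a + w_b * p_b) / (w_a + w_b)

rng = np.random.default_rng(3)
true_tip = 0.0
a = true_tip + rng.normal(0, 1.0, 10000)      # noisier sensor (e.g., vision)
b = true_tip + rng.normal(0, 0.5, 10000)      # better sensor (e.g., FBG)
fused = fuse(a, 1.0, b, 0.25)
var_reduction = np.var(fused) < np.var(b)
```

    For two Gaussian sensors with variances 1.0 and 0.25, the fused variance is 1/(1/1.0 + 1/0.25) = 0.2, below the better sensor's 0.25, which is the quantitative motivation for fusing complementary modalities.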

  4. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters.

    PubMed

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-09-07

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter (CF) methods perform favorably in robustness, accuracy and speed. However, they also have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the kernelized correlation filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also solves the estimation of scale variation in many scenarios. We theoretically analyze the problem that motion poses for CFs and utilize the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional time cost. Our algorithm preserves the properties of KCF while also handling these special scenarios. Finally, extensive experimental results on VOT benchmark datasets show that our algorithm performs competitively against the top-ranked trackers.
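
    A sharpness cue of the kind described can be as simple as the mean gradient magnitude of the target patch: motion blur suppresses high frequencies and lowers the score. This sketch uses that simple proxy, not necessarily the paper's exact point sharpness function (the blur kernel and patch are assumptions):

```python
import numpy as np

# Mean gradient magnitude as a sharpness score; a horizontal box blur
# emulates motion blur along the x axis and lowers the score.

def sharpness(patch):
    gy, gx = np.gradient(patch.astype(float))
    return np.mean(np.hypot(gx, gy))

def horizontal_blur(img, k=7):
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode='same'), 1, img)

rng = np.random.default_rng(5)
patch = rng.random((32, 32))          # textured target patch
blurred = horizontal_blur(patch)
is_blurred = sharpness(blurred) < sharpness(patch)
```

    Comparing the score against a running baseline lets a tracker detect when the target has entered fast motion and switch to a more tolerant search strategy.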

  5. Good Features to Correlate for Visual Tracking

    NASA Astrophysics Data System (ADS)

    Gundogdu, Erhan; Alatan, A. Aydin

    2018-05-01

    During recent years, correlation filters have shown dominant and spectacular results for visual object tracking. The types of features employed in this family of trackers significantly affect the performance of visual tracking. The ultimate goal is to utilize robust features invariant to any kind of appearance change of the object, while predicting the object location as accurately as in the case of no appearance change. As deep learning based methods have emerged, the study of learning features for specific tasks has accelerated. For instance, discriminative visual tracking methods based on deep architectures have been studied with promising performance. Nevertheless, correlation filter based (CFB) trackers confine themselves to pre-trained networks trained for the object classification problem. To this end, in this manuscript the problem of learning deep fully convolutional features for CFB visual tracking is formulated. In order to learn the proposed model, a novel and efficient backpropagation algorithm is presented based on the loss function of the network. The proposed learning framework enables the network model to be flexible for a custom design. Moreover, it alleviates the dependency on networks trained for classification. Extensive performance analysis shows the efficacy of the proposed custom design in the CFB tracking framework. By fine-tuning the convolutional parts of a state-of-the-art network and integrating this model into a CFB tracker, which is the top performing one of VOT2016, an 18% increase is achieved in terms of expected average overlap, and tracking failures are decreased by 25%, while maintaining superiority over state-of-the-art methods on the OTB-2013 and OTB-2015 tracking datasets.

  6. Accurate mask-based spatially regularized correlation filter for visual tracking

    NASA Astrophysics Data System (ADS)

    Gu, Xiaodong; Xu, Xinping

    2017-01-01

    Recently, discriminative correlation filter (DCF)-based trackers have achieved extremely successful results in many competitions and benchmarks. These methods utilize a periodic assumption on the training samples to efficiently learn a classifier. However, this assumption produces unwanted boundary effects, which severely degrade tracking performance. Correlation filters with limited boundaries and spatially regularized DCFs were proposed to reduce boundary effects. However, these methods use a fixed mask or a predesigned weight function, respectively, which is unsuitable for large appearance variation. We propose an accurate mask-based spatially regularized correlation filter for visual tracking. Our augmented objective can reduce the boundary effect even under large appearance variation. In our algorithm, the masking matrix is converted into a regularization function that acts on the correlation filter in the frequency domain, which gives the algorithm fast convergence. Our online tracking algorithm performs favorably against state-of-the-art trackers on the OTB-2015 benchmark in terms of efficiency, accuracy, and robustness.

  7. Is there an age-related positivity effect in visual attention? A comparison of two methodologies.

    PubMed

    Isaacowitz, Derek M; Wadlinger, Heather A; Goren, Deborah; Wilson, Hugh R

    2006-08-01

    Research suggests a positivity effect in older adults' memory for emotional material, but the evidence from the attentional domain is mixed. The present study combined two methodologies for studying preferences in visual attention, eye tracking and dot-probe, as younger and older adults viewed synthetic emotional faces. Eye tracking most consistently revealed a positivity effect in older adults' attention, such that older adults showed preferential looking toward happy faces and away from sad faces. Dot-probe results were less robust, but in the same direction. Methodological and theoretical implications for the study of socioemotional aging are discussed. (c) 2006 APA, all rights reserved

  8. Interactive projection for aerial dance using depth sensing camera

    NASA Astrophysics Data System (ADS)

    Dubnov, Tammuz; Seldess, Zachary; Dubnov, Shlomo

    2014-02-01

    This paper describes an interactive performance system for Floor and Aerial Dance that controls visual and sonic aspects of the presentation via a depth sensing camera (MS Kinect). In order to detect, measure and track free movement in space, 3 degree-of-freedom (3-DOF) tracking in space (on the ground and in the air) is performed using IR markers. Gesture tracking and recognition is performed using a simplified HMM model that allows robust mapping of the actor's actions to graphics and sound. Additional visual effects are achieved by segmentation of the actor's body based on depth information, allowing projection of separate imagery on the performer and the backdrop. Artistic use of augmented reality performance relative to more traditional concepts of stage design and dramaturgy is discussed.

  9. Tracking the evolution of crossmodal plasticity and visual functions before and after sight restoration

    PubMed Central

    Dormal, Giulia; Lepore, Franco; Harissi-Dagher, Mona; Albouy, Geneviève; Bertone, Armando; Rossion, Bruno

    2014-01-01

    Visual deprivation leads to massive reorganization in both the structure and function of the occipital cortex, raising crucial challenges for sight restoration. We tracked the behavioral, structural, and neurofunctional changes occurring in an early and severely visually impaired patient before and 1.5 and 7 mo after sight restoration with magnetic resonance imaging. Robust presurgical auditory responses were found in occipital cortex despite residual preoperative vision. In primary visual cortex, crossmodal auditory responses overlapped with visual responses and remained elevated even 7 mo after surgery. However, these crossmodal responses decreased in extrastriate occipital regions after surgery, together with improved behavioral vision and with increases in both gray matter density and neural activation in low-level visual regions. Selective responses in high-level visual regions involved in motion and face processing were observable even before surgery and did not evolve after surgery. Taken together, these findings demonstrate that structural and functional reorganization of occipital regions are present in an individual with a long-standing history of severe visual impairment and that such reorganizations can be partially reversed by visual restoration in adulthood. PMID:25520432

  10. A Method for the Direct Identification of Differentiating Muscle Cells by a Fluorescent Mitochondrial Dye

    PubMed Central

    Miyake, Tetsuaki; McDermott, John C.; Gramolini, Anthony O.

    2011-01-01

    Identification of differentiating muscle cells generally requires fixation, antibodies directed against muscle specific proteins, and lengthy staining processes or, alternatively, transfection of muscle specific reporter genes driving GFP expression. In this study, we examined the possibility of using the robust mitochondrial network seen in maturing muscle cells as a marker of cellular differentiation. The mitochondrial fluorescent tracking dye, MitoTracker, which is a cell-permeable, low toxicity, fluorescent dye, allowed us to distinguish and track living differentiating muscle cells visually by epi-fluorescence microscopy. MitoTracker staining provides a robust and simple detection strategy for living differentiating cells in culture without the need for fixation or biochemical processing. PMID:22174849

  11. Robust Visual Tracking via Online Discriminative and Low-Rank Dictionary Learning.

    PubMed

    Zhou, Tao; Liu, Fanghui; Bhaskar, Harish; Yang, Jie

    2017-09-12

    In this paper, we propose a novel and robust tracking framework based on online discriminative and low-rank dictionary learning. The primary aim of this paper is to obtain compact, low-rank dictionaries that provide good discriminative representations of both target and background. We accomplish this by exploiting the recovery ability of low-rank matrices: if we assume that the data from the same class are linearly correlated, then the corresponding basis vectors learned from the training set of each class render the dictionary approximately low-rank. The proposed dictionary learning technique incorporates a reconstruction error that improves the reliability of classification. Also, a multiconstraint objective function is designed to enable active learning of a discriminative and robust dictionary. Further, an optimal solution is obtained by iteratively computing the dictionary and coefficients while simultaneously learning the classifier parameters. Finally, a simple yet effective likelihood function is implemented to estimate the optimal state of the target during tracking. Moreover, to make the dictionary adaptive to variations of the target and background during tracking, an online update criterion is employed while learning the new dictionary. Experimental results on a publicly available benchmark dataset demonstrate that the proposed tracking algorithm performs better than other state-of-the-art trackers.
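
    The classification-by-reconstruction idea underlying the dictionary approach can be illustrated in a few lines. The sketch below is a minimal stand-in, not the paper's multiconstraint objective: it assigns a sample to whichever class dictionary reconstructs it with lower least-squares error; all names and the toy dictionaries are illustrative.

```python
import numpy as np

def recon_error(D, x):
    """Least-squares reconstruction error of sample x against dictionary D
    (columns of D are the class's basis atoms)."""
    coef, *_ = np.linalg.lstsq(D, x, rcond=None)
    return float(np.linalg.norm(x - D @ coef))

def classify(D_target, D_background, x):
    """Assign x to the class whose dictionary reconstructs it more faithfully."""
    if recon_error(D_target, x) < recon_error(D_background, x):
        return "target"
    return "background"

# Toy example: target atoms span the first two axes, background the third.
D_t = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
D_b = np.array([[0.0], [0.0], [1.0]])
x = np.array([0.9, 0.5, 0.1])   # mostly in the target subspace
```

A sample lying mostly in the target subspace leaves only a small residual against `D_t`, so it is labeled "target"; the low-rank constraint in the paper keeps each dictionary close to such a compact subspace.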

  12. Robust multiperson tracking from a mobile platform.

    PubMed

    Ess, Andreas; Leibe, Bastian; Schindler, Konrad; van Gool, Luc

    2009-10-01

    In this paper, we address the problem of multiperson tracking in busy pedestrian zones using a stereo rig mounted on a mobile platform. The complexity of the problem calls for an integrated solution that extracts as much visual information as possible and combines it through cognitive feedback cycles. We propose such an approach, which jointly estimates camera position, stereo depth, object detection, and tracking. The interplay between those components is represented by a graphical model. Since the model has to incorporate object-object interactions and temporal links to past frames, direct inference is intractable. We, therefore, propose a two-stage procedure: for each frame, we first solve a simplified version of the model (disregarding interactions and temporal continuity) to estimate the scene geometry and an overcomplete set of object detections. Conditioned on these results, we then address object interactions, tracking, and prediction in a second step. The approach is experimentally evaluated on several long and difficult video sequences from busy inner-city locations. Our results show that the proposed integration makes it possible to deliver robust tracking performance in scenes of realistic complexity.

  13. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters

    PubMed Central

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-01-01

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter (CF) methods perform favorably in robustness, accuracy and speed. However, they have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) that significantly improves performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also solves the estimation of scale variation in many scenarios. We theoretically analyze the problem that motion poses for CFs and use the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF in addition to its ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs favorably compared with the top-ranked trackers. PMID:27618046

  14. Scene-Aware Adaptive Updating for Visual Tracking via Correlation Filters

    PubMed Central

    Zhang, Sirou; Qiao, Xiaoya

    2017-01-01

    In recent years, visual object tracking has been widely used in military guidance, human-computer interaction, road traffic, scene monitoring and many other fields. The tracking algorithms based on correlation filters have shown good performance in terms of accuracy and tracking speed. However, their performance is not satisfactory in scenes with scale variation, deformation, and occlusion. In this paper, we propose a scene-aware adaptive updating mechanism for visual tracking via a kernel correlation filter (KCF). First, a low-complexity scale estimation method is presented, in which the corresponding weights at five scales are employed to determine the final target scale. Then, an adaptive updating mechanism is presented based on scene classification. We classify the video scenes into four categories by video content analysis. According to the target's scene, we exploit the adaptive updating mechanism to update the kernel correlation filter to improve the robustness of the tracker, especially in scenes with scale variation, deformation, and occlusion. We evaluate our tracker on the CVPR2013 benchmark. The results obtained with the proposed algorithm improve on those of the KCF tracker by 33.3%, 15%, 6%, 21.9% and 19.8% on scenes with scale variation, partial or long-term large-area occlusion, deformation, fast motion and out-of-view targets, respectively. PMID:29140311
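
    The five-scale estimation step can be sketched as follows. The scale factors and weights below are illustrative values, not the paper's: the filter response is evaluated at each candidate scale, weighted to penalize abrupt scale changes, and the best-scoring scale wins.

```python
import numpy as np

SCALES = np.array([0.95, 0.975, 1.0, 1.025, 1.05])   # candidate scale factors (assumed)
WEIGHTS = np.array([0.9, 0.95, 1.0, 0.95, 0.9])      # mild penalty on large scale jumps

def pick_scale(responses):
    """Choose the scale whose weighted peak correlation response is largest.
    `responses` holds the filter's peak score at each candidate scale."""
    return SCALES[int(np.argmax(np.asarray(responses) * WEIGHTS))]
```

For example, responses of `[0.5, 0.6, 0.62, 0.7, 0.55]` favor the fourth scale (1.025) even after weighting, so the target box would be enlarged slightly for the next frame.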

  15. Game theory-based visual tracking approach focusing on color and texture features.

    PubMed

    Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Chen, Chuanhua; Wang, Xin

    2017-07-20

    It is difficult for a single-feature tracking algorithm to achieve strong robustness in a complex environment. To solve this problem, we propose a multifeature fusion tracking algorithm based on game theory. Treating the color and texture features as two players, this algorithm accomplishes tracking by using a mean shift iterative formula to search for the Nash equilibrium of the game. The contributions of the two features are kept in optimal balance, so the algorithm can take full advantage of feature fusion. Experimental results show that the algorithm performs well, especially under scene variation, target occlusion, and interference from similar objects.
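
    The equilibrium search over two feature weights can be illustrated with a toy fixed-point iteration. This is an assumed simplification, not the paper's mean shift formulation: each feature's weight is nudged toward its relative matching score until neither player can improve.

```python
def equilibrium_weights(s_color, s_texture, iters=20, lr=0.5):
    """Iterate the color-feature weight toward the point where each feature's
    share equals its relative matching score (a toy stand-in for the
    Nash-equilibrium search; all names and the update rule are illustrative)."""
    w = 0.5                                   # start with equal contributions
    target = s_color / (s_color + s_texture)  # score-proportional share
    for _ in range(iters):
        w += lr * (target - w)                # damped move toward equilibrium
    return w, 1.0 - w
```

If color currently matches much better than texture (say scores 0.8 vs. 0.2), the weights settle near (0.8, 0.2), so the fused mean shift step is dominated by the more reliable feature.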

  16. A μ analysis-based, controller-synthesis framework for robust bioinspired visual navigation in less-structured environments.

    PubMed

    Keshavan, J; Gremillion, G; Escobar-Alvarez, H; Humbert, J S

    2014-06-01

    Safe, autonomous navigation by aerial microsystems in less-structured environments is a difficult challenge to overcome with current technology. This paper presents a novel visual-navigation approach that combines bioinspired wide-field processing of optic flow information with control-theoretic tools for synthesis of closed loop systems, resulting in robustness and performance guarantees. Structured singular value analysis is used to synthesize a dynamic controller that provides good tracking performance in uncertain environments without resorting to explicit pose estimation or extraction of a detailed environmental depth map. Experimental results with a quadrotor demonstrate the vehicle's robust obstacle-avoidance behaviour in a straight line corridor, an S-shaped corridor and a corridor with obstacles distributed in the vehicle's path. The computational efficiency and simplicity of the current approach offers a promising alternative to satisfying the payload, power and bandwidth constraints imposed by aerial microsystems.

  17. Siamese convolutional networks for tracking the spine motion

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Sui, Xiubao; Sun, Yicheng; Liu, Chengwei; Hu, Yong

    2017-09-01

    Deep learning models have demonstrated great success in various computer vision tasks such as image classification and object tracking. However, tracking the lumbar spine in digitized video fluoroscopic imaging (DVFI), which can quantitatively analyze the motion of the spine to diagnose lumbar instability, has not yet been well developed due to the lack of a steady and robust tracking method. In this paper, we propose a novel visual tracking algorithm for lumbar vertebra motion based on a Siamese convolutional neural network (CNN) model. We train a fully convolutional neural network offline to learn generic image features. The network is trained to learn a similarity function that compares the labeled target in the first frame with candidate patches in the current frame. The similarity function returns a high score if the two images depict the same object. Once learned, the similarity function is used to track a previously unseen object without any online adaptation. In the current frame, our tracker evaluates candidate rotated patches sampled around the previous frame's target position and presents a rotated bounding box to locate the predicted target precisely. Results indicate that the proposed tracking method can detect the lumbar vertebrae steadily and robustly. Even for images with low contrast and cluttered backgrounds, the presented tracker still achieves good tracking performance. Further, the proposed algorithm operates at high speed for real-time tracking.
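
    The tracking loop around a learned similarity function is simple even when the function itself is a deep network. In the sketch below the CNN embedding is replaced by raw feature vectors and cosine similarity, purely for illustration; the selection logic (score every candidate against the exemplar, keep the best) mirrors the Siamese scheme described above.

```python
import numpy as np

def similarity(z, x):
    """Cosine similarity between exemplar features z and candidate features x
    (stands in for the Siamese network's learned similarity function)."""
    return float(np.dot(z, x) / (np.linalg.norm(z) * np.linalg.norm(x)))

def best_candidate(exemplar, candidates):
    """Return the index of the candidate patch most similar to the exemplar."""
    return int(np.argmax([similarity(exemplar, c) for c in candidates]))

# Toy features: candidate 1 nearly matches the exemplar.
exemplar = np.array([1.0, 0.0, 1.0])
candidates = np.array([[0.0, 1.0, 0.0],
                       [1.0, 0.0, 0.9],
                       [0.5, 0.5, 0.0]])
```

In the real tracker the candidates would be rotated patches sampled around the previous position, so the winning index also determines the rotated bounding box.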

  18. Nonlinear dynamic model for visual object tracking on Grassmann manifolds with partial occlusion handling.

    PubMed

    Khan, Zulfiqar Hasan; Gu, Irene Yu-Hua

    2013-12-01

    This paper proposes a novel Bayesian online learning and tracking scheme for video objects on Grassmann manifolds. Although manifold visual object tracking is promising, large and fast nonplanar (or out-of-plane) pose changes and long-term partial occlusions of deformable objects in video remain a challenge that limits the tracking performance. The proposed method tackles these problems with the main novelties on: 1) online estimation of object appearances on Grassmann manifolds; 2) optimal criterion-based occlusion handling for online updating of object appearances; 3) a nonlinear dynamic model for both the appearance basis matrix and its velocity; and 4) Bayesian formulations, separately for the tracking process and the online learning process, that are realized by employing two particle filters: one is on the manifold for generating appearance particles and another on the linear space for generating affine box particles. Tracking and online updating are performed in an alternating fashion to mitigate the tracking drift. Experiments using the proposed tracker on videos captured by a single dynamic/static camera have shown robust tracking performance, particularly for scenarios when target objects contain significant nonplanar pose changes and long-term partial occlusions. Comparisons with eight existing state-of-the-art/most relevant manifold/nonmanifold trackers with evaluations have provided further support to the proposed scheme.

  19. Application of a novel particle tracking algorithm in the flow visualization of an artificial abdominal aortic aneurysm.

    PubMed

    Zhang, Yang; Wang, Yuan; He, Wenbo; Yang, Bin

    2014-01-01

    A novel Particle Tracking Velocimetry (PTV) algorithm based on the Voronoi Diagram (VD) is proposed, abbreviated as VD-PTV. The robustness of VD-PTV for pulsatile flow is verified through a test that includes a widely used artificial flow and a classic reference algorithm. The proposed algorithm is then applied to visualize the flow in an artificial abdominal aortic aneurysm included in a pulsatile circulation system that simulates aortic blood flow in the human body. Results show that large particles tend to gather at the upstream boundary because of the backflow eddies that follow the pulsation. This qualitative description, together with VD-PTV, lays a foundation for future work that demands high-level quantification.
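
    The core PTV step is matching particle positions between consecutive frames. The sketch below simplifies VD-PTV's Voronoi-topology test away entirely and uses greedy nearest-neighbor matching under a displacement bound; it only illustrates the frame-to-frame association problem the algorithm solves, and all names are illustrative.

```python
import numpy as np

def match_particles(frame_a, frame_b, max_disp=2.0):
    """Greedy nearest-neighbor matching of particle positions (N x 2 arrays)
    between two frames. Each particle in frame_a claims its closest unclaimed
    particle in frame_b, provided it lies within max_disp."""
    pairs, used = [], set()
    for i, p in enumerate(frame_a):
        d = np.linalg.norm(frame_b - p, axis=1)   # distances to every candidate
        j = int(np.argmin(d))
        if d[j] <= max_disp and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs

# Two particles, each displaced by half a unit between frames.
frame_a = np.array([[0.0, 0.0], [5.0, 5.0]])
frame_b = np.array([[0.5, 0.0], [5.0, 5.5]])
```

VD-PTV replaces the bare distance test with a comparison of each particle's Voronoi neighborhood, which is what makes it robust when pulsatile flow produces large, uneven displacements.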

  20. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    PubMed

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach for tracking deformable structures in 3D ultrasound sequences. Our method obtains the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation under different ultrasound shortcomings, including speckle noise, large shadows and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. OpenCV and TYZX : video surveillance for tracking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Jim; Spencer, Andrew; Chu, Eric

    2008-08-01

    As part of the National Security Engineering Institute (NSEI) project, several sensors were developed in conjunction with an assessment algorithm. A camera system was developed in-house to track the locations of personnel within a secure room. In addition, a commercial, off-the-shelf (COTS) tracking system developed by TYZX was examined. TYZX is a Bay Area start-up that has developed its own tracking hardware and software which we use as COTS support for robust tracking. This report discusses the pros and cons of each camera system, how they work, a proposed data fusion method, and some visual results. Distributed, embedded image processing solutions show the most promise in their ability to track multiple targets in complex environments and in real time. Future work on the camera system may include three-dimensional volumetric tracking by using multiple simple cameras, Kalman or particle filtering, automated camera calibration and registration, and gesture or path recognition.

  2. Extending Correlation Filter-Based Visual Tracking by Tree-Structured Ensemble and Spatial Windowing.

    PubMed

    Gundogdu, Erhan; Ozkan, Huseyin; Alatan, A Aydin

    2017-11-01

    Correlation filters have been successfully used in visual tracking due to their modeling power and computational efficiency. However, the state-of-the-art correlation filter-based (CFB) tracking algorithms tend to quickly discard the previous poses of the target, since they consider only a single filter in their models. On the contrary, our approach is to register multiple CFB trackers for previous poses and exploit the registered knowledge when an appearance change occurs. To this end, we propose a novel tracking algorithm (of complexity O(D)) based on a large ensemble of CFB trackers. The ensemble (of size O(2^D)) is organized over a binary tree (depth D), and learns the target appearance subspaces such that each constituent tracker becomes an expert of a certain appearance. During tracking, the proposed algorithm combines only the appearance-aware relevant experts to produce boosted tracking decisions. Additionally, we propose a versatile spatial windowing technique to enhance the individual expert trackers. For this purpose, spatial windows are learned for target objects as well as the correlation filters and then the windowed regions are processed for more robust correlations. In our extensive experiments on benchmark datasets, we achieve a substantial performance increase by using the proposed tracking algorithm together with the spatial windowing.
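
    The O(D) access into an O(2^D) ensemble comes from the binary-tree organization: a root-to-leaf sequence of appearance decisions addresses one leaf expert with only D bits. The indexing sketch below is an assumed illustration of that structure, not the paper's subspace-learning procedure.

```python
def expert_index(path_bits):
    """Map a root-to-leaf path of binary appearance decisions to the index of
    one of the 2^D leaf experts in a depth-D tree (left = 0, right = 1)."""
    idx = 0
    for b in path_bits:
        idx = 2 * idx + b   # descend one level per decision
    return idx
```

With depth D = 3, the path right-left-right reaches leaf 5 of 8; the tracker consults only the experts along such a path instead of the full ensemble.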

  3. Visual control of flight speed in Drosophila melanogaster.

    PubMed

    Fry, Steven N; Rohrseitz, Nicola; Straw, Andrew D; Dickinson, Michael H

    2009-04-01

    Flight control in insects depends on self-induced image motion (optic flow), which the visual system must process to generate appropriate corrective steering maneuvers. Classic experiments in tethered insects applied rigorous system identification techniques for the analysis of turning reactions in the presence of rotating pattern stimuli delivered in open-loop. However, the functional relevance of these measurements for visual free-flight control remains equivocal due to the largely unknown effects of the highly constrained experimental conditions. To perform a systems analysis of the visual flight speed response under free-flight conditions, we implemented a 'one-parameter open-loop' paradigm using 'TrackFly' in a wind tunnel equipped with real-time tracking and virtual reality display technology. Upwind flying flies were stimulated with sine gratings of varying temporal and spatial frequencies, and the resulting flight speed responses were measured. To control flight speed, the visual system of the fruit fly extracts linear pattern velocity robustly over a broad range of spatio-temporal frequencies. The speed signal is used for a proportional control of flight speed within locomotor limits. The extraction of pattern velocity over a broad spatio-temporal frequency range may require more sophisticated motion processing mechanisms than those identified in flies so far. In Drosophila, the neuromotor pathways underlying flight speed control may be suitably explored by applying advanced genetic techniques, for which our data can serve as a baseline. Finally, the high-level control principles identified in the fly can be meaningfully transferred into a robotic context, such as for the robust and efficient control of autonomous flying micro air vehicles.
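
    The proportional control law identified in the fly is easy to state in code. The gain, setpoint and speed limit below are arbitrary illustrative values, not measurements from the study: when perceived pattern velocity exceeds the setpoint the fly slows down, and vice versa, with the command clipped to locomotor limits.

```python
def update_speed(speed, pattern_velocity, setpoint=1.0, gain=0.5, v_max=2.0):
    """One proportional-control step for flight speed. If the optic-flow
    pattern velocity exceeds the setpoint, slow down; if below, speed up.
    All parameter values here are illustrative, not from the paper."""
    speed += gain * (setpoint - pattern_velocity)   # proportional correction
    return max(0.0, min(v_max, speed))              # clip to locomotor limits
```

Flying at 1.0 m/s while perceiving a pattern velocity of 1.5 (above the setpoint) yields a reduced command of 0.75; this is the sense in which the speed signal drives a proportional controller.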

  4. Shape and texture fused recognition of flying targets

    NASA Astrophysics Data System (ADS)

    Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás

    2011-06-01

    This paper presents visual detection and recognition of flying targets (e.g. planes, missiles) based on automatically extracted shape and object texture information, for application areas like alerting, recognition and tracking. Targets are extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done by comparisons to shape and texture based query results on a previously gathered real life object dataset. Application areas involve passive defense scenarios, including automatic object detection and tracking with cheap commodity hardware components (CPU, camera and GPS).

  5. Visual servoing for a US-guided therapeutic HIFU system by coagulated lesion tracking: a phantom study.

    PubMed

    Seo, Joonho; Koizumi, Norihiro; Funamoto, Takakazu; Sugita, Naohiko; Yoshinaka, Kiyoshi; Nomiya, Akira; Homma, Yukio; Matsumoto, Yoichiro; Mitsuishi, Mamoru

    2011-06-01

    Applying ultrasound (US)-guided high-intensity focused ultrasound (HIFU) therapy to kidney tumours is currently very difficult, due to the unclearly observed tumour area and renal motion induced by human respiration. In this research, we propose new methods to track the indistinct tumour area and to compensate for respiratory tumour motion during US-guided HIFU treatment. For tracking indistinct tumour areas, we detect the US speckle change created by HIFU irradiation. In other words, HIFU thermal ablation can coagulate tissue in the tumour area, and an intraoperatively created coagulated lesion (CL) is used as a spatial landmark for US visual tracking. Specifically, the Condensation algorithm was applied for robust, real-time CL speckle-pattern tracking in the sequence of US images. Moreover, biplanar US imaging was used to locate the three-dimensional position of the CL, and a three-actuator system drives the end-effector to compensate for the motion. Finally, we tested the proposed method by using a newly devised phantom model that enables both visual tracking and a thermal response to HIFU irradiation. In the experiment, after generation of the CL in the phantom kidney, the end-effector successfully synchronized with the phantom motion, which was modelled by the captured motion data for the human kidney. The accuracy of the motion compensation was evaluated by the error between the end-effector and the respiratory motion, the RMS error of which was approximately 2 mm. This research shows that a HIFU-induced CL provides a very good landmark for target motion tracking. By using the CL tracking method, target motion compensation can be realized in the US-guided robotic HIFU system. Copyright © 2011 John Wiley & Sons, Ltd.

  6. Audio-visual imposture

    NASA Astrophysics Data System (ADS)

    Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard

    2006-05-01

    A GMM based audio visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing is accomplished on DCT based extracted features of the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM based classifier. Fusion of both audio and video modalities for audio visual speaker verification is compared with face verification and speaker verification systems. To improve the robustness of the multimodal biometric identity verification system, an audio visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with the prospect of experimenting on the newly developed PDAtabase created within the scope of the SecurePhone project.

  7. Visual tracking for multi-modality computer-assisted image guidance

    NASA Astrophysics Data System (ADS)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

    With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.

  8. Single-camera visual odometry to track a surgical X-ray C-arm base.

    PubMed

    Esfandiari, Hooman; Lichti, Derek; Anglin, Carolyn

    2017-12-01

    This study provides a framework for a single-camera odometry system for localizing a surgical C-arm base. An application-specific monocular visual odometry system (a downward-looking consumer-grade camera rigidly attached to the C-arm base) is proposed in this research. The cumulative dead-reckoning estimation of the base is extracted based on frame-to-frame homography estimation. Optical-flow results are utilized to feed the odometry. Online positional and orientation parameters are then reported. Positional accuracy of better than 2% (of the total traveled distance) for most of the cases and 4% for all the cases studied and angular accuracy of better than 2% (of absolute cumulative changes in orientation) were achieved with this method. This study provides a robust and accurate tracking framework that not only can be integrated with the current C-arm joint-tracking system (i.e. TC-arm) but also is capable of being employed for similar applications in other fields (e.g. robotics).
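
    The dead-reckoning step of such an odometry system amounts to composing per-frame motion estimates into a cumulative pose. The sketch below reduces the frame-to-frame homography of the paper to a planar rigid (SE(2)) motion, which is a reasonable simplification for a downward-looking camera on a base moving over a flat floor; all names are illustrative.

```python
import numpy as np

def se2(dx, dy, dtheta):
    """Homogeneous 2D rigid transform for one frame-to-frame motion estimate."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0,  0, 1.0]])

def dead_reckon(steps):
    """Compose per-frame (dx, dy, dtheta) estimates, each expressed in the
    previous frame's coordinates, into a cumulative base pose (x, y, theta)."""
    T = np.eye(3)
    for dx, dy, dth in steps:
        T = T @ se2(dx, dy, dth)   # right-multiply: motion is in the local frame
    return T[0, 2], T[1, 2], float(np.arctan2(T[1, 0], T[0, 0]))
```

Because errors are composed multiplicatively, small per-frame biases accumulate with distance traveled, which is why the paper reports positional accuracy as a percentage of the total traveled distance.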

  9. Visual tracking strategies for intelligent vehicle highway systems

    NASA Astrophysics Data System (ADS)

    Smith, Christopher E.; Papanikolopoulos, Nikolaos P.; Brandt, Scott A.; Richards, Charles

    1995-01-01

    The complexity and congestion of current transportation systems often produce traffic situations that jeopardize the safety of the people involved. These situations vary from maintaining a safe distance behind a leading vehicle to safely allowing a pedestrian to cross a busy street. Environmental sensing plays a critical role in virtually all of these situations. Of the sensors available, vision sensors provide information that is richer and more complete than other sensors, making them a logical choice for a multisensor transportation system. In this paper we present robust techniques for intelligent vehicle-highway applications where computer vision plays a crucial role. In particular, we demonstrate that the controlled active vision framework can be utilized to provide a visual sensing modality to a traffic advisory system in order to increase the overall safety margin in a variety of common traffic situations. We have selected two application examples, vehicle tracking and pedestrian tracking, to demonstrate that the framework can provide precisely the type of information required to effectively manage the given situation.

  10. Conditional Random Field (CRF)-Boosting: Constructing a Robust Online Hybrid Boosting Multiple Object Tracker Facilitated by CRF Learning

    PubMed Central

    Yang, Ehwa; Gwak, Jeonghwan; Jeon, Moongu

    2017-01-01

    Due to the reasonably acceptable performance of state-of-the-art object detectors, tracking-by-detection is the standard strategy for visual multi-object tracking (MOT). Online MOT is particularly demanding due to its diverse applications in time-critical situations. A central issue in realizing online MOT is how to associate noisy object detections in a new frame with previously tracked objects. In this work, we propose a multi-object tracking method called CRF-boosting, which uses a hybrid data association method based on online hybrid boosting facilitated by a conditional random field (CRF). For data association, the learned CRF generates reliable low-level tracklets, which are then used as the input to the hybrid boosting. Whereas existing boosting-based data association methods require training data with ground-truth information to improve robustness, CRF-boosting ensures sufficient robustness without such information thanks to its synergetic cascaded learning procedure. Further, a hierarchical feature association framework is adopted to further improve MOT accuracy. Experimental results on public datasets show that the benefit of the proposed hybrid approach over other competitive MOT systems is noticeable. PMID:28304366

  11. Person and gesture tracking with smart stereo cameras

    NASA Astrophysics Data System (ADS)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting, and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap, and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.

  12. Speckle tracking as a method to measure hemidiaphragm excursion.

    PubMed

    Goutman, Stephen A; Hamilton, James D; Swihart, Blake; Foerster, Bradley; Feldman, Eva L; Rubin, Jonathan M

    2017-01-01

    Diaphragm excursion measured via ultrasound may be an important imaging outcome measure of respiratory function. We developed a new method for measuring diaphragm movement and compared it to the more traditional M-mode method. Ultrasound images of the right and left hemidiaphragms were collected to compare speckle tracking and M-mode measurements of diaphragm excursion. Speckle tracking was performed using EchoInsight (Epsilon Imaging, Ann Arbor, Michigan). Six healthy subjects without a history of pulmonary diseases were included in this proof-of-concept study. Speckle tracking of the diaphragm is technically possible. Unlike M-mode, speckle tracking carries the advantage of reliable visualization and measurement of the left hemidiaphragm. Speckle tracking accounted for diaphragm movement simultaneously in the cephalocaudad and mediolateral directions, unlike M-mode, which is 1-dimensional. Diaphragm speckle tracking may represent a novel, more robust method for measuring diaphragm excursion, especially for the left hemidiaphragm. Muscle Nerve 55: 125-127, 2017. © 2016 Wiley Periodicals, Inc.

  13. Accurate motion parameter estimation for colonoscopy tracking using a regression method

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2010-03-01

    Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional Least Sum of Squares (LS) procedure, which can be unstable in the presence of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 frames in the ascending colon, and from 410 to 1316 frames in the transverse colon.
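    The LMS idea can be sketched in a few lines: repeatedly fit parameters to a random minimal subset of the data and keep the fit whose median squared residual over all points is smallest, so up to roughly half of the flow vectors can be gross outliers. A hedged, generic linear-regression version (not the authors' egomotion estimator):

```python
import numpy as np

def lms_fit(A, b, n_trials=200, seed=0):
    """Least Median of Squares fit of A x = b via random minimal subsets.
    Scores each candidate by the MEDIAN of squared residuals over ALL rows,
    so gross outliers (e.g. bad optical-flow vectors) barely affect the result.
    Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    best_x, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(m, size=n, replace=False)   # minimal subset
        try:
            x = np.linalg.solve(A[idx], b[idx])      # exact fit to the subset
        except np.linalg.LinAlgError:
            continue                                  # degenerate subset
        med = np.median((A @ x - b) ** 2)            # score on all rows
        if med < best_med:
            best_med, best_x = med, x
    return best_x
```

With a quarter of the points grossly corrupted, the median score still vanishes for the clean fit, so the true line is recovered.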

  14. An eye tracking investigation of color-location binding in infants' visual short-term memory.

    PubMed

    Oakes, Lisa M; Baumgartner, Heidi A; Kanjlia, Shipra; Luck, Steven J

    2017-01-01

    Two experiments examined 8- and 10-month-old infants' (N = 71) binding of object identity (color) and location information in visual short-term memory (VSTM) using a one-shot change detection task. Building on previous work using the simultaneous streams change detection task, we confirmed that 8- and 10-month-old infants are sensitive to changes in binding between identity and location in VSTM. Further, we demonstrated that infants recognize specifically what changed in these events. Thus, infants' VSTM for binding is robust and can be observed in different procedures and with different stimuli.

  15. Real-Time Detection and Tracking of Multiple People in Laser Scan Frames

    NASA Astrophysics Data System (ADS)

    Cui, J.; Song, X.; Zhao, H.; Zha, H.; Shibasaki, R.

    This chapter presents an approach to detect and track multiple people robustly in real time using laser scan frames. The detection and tracking of people in real time is a problem that arises in a variety of different contexts. Examples include intelligent surveillance for security purposes, scene analysis for service robots, and crowd behavior analysis for human behavior studies. Over the last several years, an increasing number of laser-based people-tracking systems have been developed on both mobile robotics platforms and fixed platforms, using one or multiple laser scanners. It has been shown that processing laser scanner data makes the tracker much faster and more robust than a vision-only based one in complex situations. In this chapter, we present a novel robust tracker to detect and track multiple people in a crowded and open area in real time. First, raw data are obtained that measure the two legs of each person at a height of 16 cm above the horizontal ground, using multiple registered laser scanners. A stable feature is extracted using the accumulated distribution of successive laser frames. In this way, the noise that generates split and merged measurements is smoothed well, and the pattern of rhythmically swinging legs is utilized to extract each leg. Second, a probabilistic tracking model is presented, and then a sequential inference process using a Bayesian rule is described. The sequential inference process is difficult to compute analytically, so two strategies are presented to simplify the computation. In the case of independent tracking, the Kalman filter is used with a more efficient measurement likelihood model based on a region coherency property. Finally, to deal with trajectory fragments, we present a concise approach that fuses just a little visual information from a synchronized video camera into the laser data. Evaluation with real data shows that the proposed method is robust and effective, achieving a significant improvement over existing laser-based trackers.
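    For the independent-tracking case, a constant-velocity Kalman filter over a leg centre in the scan plane captures the basic machinery. The noise values below are invented for illustration; the chapter's measurement likelihood additionally uses the region coherency property:

```python
import numpy as np

class LegTrack:
    """Constant-velocity Kalman filter for one leg centre in the scan plane,
    a minimal stand-in for the chapter's independent-tracking case."""
    def __init__(self, xy, dt=0.1):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])      # [px, py, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                            # observe position only
        self.Q = 0.01 * np.eye(4)                        # process noise (assumed)
        self.R = 0.05 * np.eye(2)                        # scan noise (assumed)
    def step(self, z):
        # predict under the constant-velocity model
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the extracted leg measurement z = (px, py)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

Fed measurements from a leg moving steadily along one axis, the filter learns the velocity and tracks with little lag.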

  16. Sparse Coding and Counting for Robust Visual Tracking

    PubMed Central

    Liu, Risheng; Wang, Jing; Shang, Xiaoke; Wang, Yiyang; Su, Zhixun; Cai, Yu

    2016-01-01

    In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs a combination of the L0 and L1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to handle difficult challenges, such as occlusion and image corruption, effectively. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. In addition, we provide a closed-form solution for the combined L0- and L1-regularized representation to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results in both accuracy and speed. PMID:27992474
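    The element-wise proximal operator behind such a combined L0/L1 penalty has a simple closed form: soft-threshold for the L1 part, then keep the entry only if doing so beats setting it to zero once the L0 cost is charged. An illustrative NumPy sketch of that standard formulation (the paper's exact model and APG solver are not reproduced here):

```python
import numpy as np

def prox_l0_l1(x, lam0, lam1):
    """Element-wise proximal operator of lam0*||u||_0 + lam1*||u||_1,
    i.e. argmin_u 0.5*(u - x)^2 + lam0*[u != 0] + lam1*|u| per entry.
    Illustrative closed-form sketch of a combined sparsity penalty."""
    # Candidate 1: keep the entry, soft-thresholded by the L1 weight.
    u = np.sign(x) * np.maximum(np.abs(x) - lam1, 0.0)
    keep_cost = 0.5 * (u - x) ** 2 + lam1 * np.abs(u) + lam0 * (u != 0)
    # Candidate 2: zero the entry (pure data cost, no penalty).
    zero_cost = 0.5 * x ** 2
    return np.where(keep_cost < zero_cost, u, 0.0)
```

Large coefficients survive (shrunk by the L1 weight) while small ones are zeroed outright by the L0 charge, which is exactly the "better sparsity" the combination buys.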

  17. Robust tracking control of a magnetically suspended rigid body

    NASA Technical Reports Server (NTRS)

    Lim, Kyong B.; Cox, David E.

    1994-01-01

    This study is an application of H-infinity and mu-synthesis for designing robust tracking controllers for the Large Angle Magnetic Suspension Test Facility. The modeling, design, analysis, simulation, and testing of a control law that guarantees tracking performance under external disturbances and model uncertainties is investigated. The types of uncertainties considered and the tracking performance metric used are discussed. This study demonstrates the tradeoff between tracking performance at low frequencies and robustness at high frequencies. Two sets of controllers were designed and tested. The first set emphasized performance over robustness, while the second set traded off performance for robustness. Comparisons of simulation and test results are also included. Current simulation and experimental results indicate that reasonably good robust tracking performance can be attained for this system using a multivariable robust control approach.

  18. Design and implementation of a PC-based image-guided surgical system.

    PubMed

    Stefansic, James D; Bass, W Andrew; Hartmann, Steven L; Beasley, Ryan A; Sinha, Tuhin K; Cash, David M; Herline, Alan J; Galloway, Robert L

    2002-11-01

    In interactive, image-guided surgery, current physical space position in the operating room is displayed on various sets of medical images used for surgical navigation. We have developed a PC-based surgical guidance system (ORION) which synchronously displays surgical position on up to four image sets and updates them in real time. There are three essential components which must be developed for this system: (1) accurately tracked instruments; (2) accurate registration techniques to map physical space to image space; and (3) methods to display and update the image sets on a computer monitor. For each of these components, we have developed a set of dynamic link libraries in MS Visual C++ 6.0 supporting various hardware tools and software techniques. Surgical instruments are tracked in physical space using an active optical tracking system. Several of the different registration algorithms were developed with a library of robust math kernel functions, and the accuracy of all registration techniques was thoroughly investigated. Our display was developed using the Win32 API for windows management and tomographic visualization, a frame grabber for live video capture, and OpenGL for visualization of surface renderings. We have begun to use this current implementation of our system for several surgical procedures, including open and minimally invasive liver surgery.

  19. Robust visual tracking using a contextual boosting approach

    NASA Astrophysics Data System (ADS)

    Jiang, Wanyue; Wang, Yin; Wang, Daobo

    2018-03-01

    In recent years, detection-based image trackers have been gaining ground rapidly, thanks to their capacity to incorporate a variety of image features. Nevertheless, their tracking performance may be compromised if background regions are mislabeled as foreground in the training process. To resolve this problem, we propose an online visual tracking algorithm designed to improve training label accuracy in the learning phase. In the proposed method, superpixels are used as samples, and their ambiguous labels are reassigned in accordance with both prior estimation and contextual information. The location and scale of the target are usually determined by a confidence map, which tends to shrink because background regions are always incorporated into the bounding box. To address this dilemma, we propose a cross projection scheme that projects the confidence map for target detection. Moreover, the performance of the proposed tracker can be further improved by adding rigid-structured information. The proposed method is evaluated on the OTB benchmark and the VOT2016 benchmark; compared with other trackers, the results are competitive.

  20. Advanced Engineering Technology for Measuring Performance.

    PubMed

    Rutherford, Drew N; D'Angelo, Anne-Lise D; Law, Katherine E; Pugh, Carla M

    2015-08-01

    The demand for competency-based assessments in surgical training is growing. Use of advanced engineering technology for clinical skills assessment allows for objective measures of hands-on performance. Clinical performance can be assessed in several ways via quantification of an assessee's hand movements (motion tracking), direction of visual attention (eye tracking), levels of stress (physiologic marker measurements), and location and pressure of palpation (force measurements). Innovations in video recording technology and qualitative analysis tools allow for a combination of observer- and technology-based assessments. Overall the goal is to create better assessments of surgical performance with robust validity evidence. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Principal axis-based correspondence between multiple cameras for people tracking.

    PubMed

    Hu, Weiming; Hu, Min; Zhou, Xue; Tan, Tieniu; Lou, Jianguang; Maybank, Steve

    2006-04-01

    Visual surveillance using multiple cameras has attracted increasing interest in recent years. Establishing correspondence between multiple cameras is one of the most important and basic problems that multi-camera visual surveillance raises. In this paper, we propose a simple and robust method, based on the principal axes of people, to match people across multiple cameras. A correspondence likelihood reflecting the similarity of pairs of principal axes is constructed from the relationship between the "ground-points" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties: 1) Camera calibration is not needed. 2) Accurate motion detection and segmentation are less critical, due to the robustness of the principal axis-based feature to noise. 3) Based on the fused data derived from the correspondence results, the position of each person in each camera view can be accurately located, even when the person is partially occluded in all views. Experimental results on several real video sequences from outdoor environments demonstrate the effectiveness, efficiency, and robustness of our method.

  2. Person detection, tracking and following using stereo camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofeng; Zhang, Lilian; Wang, Duo; Hu, Xiaoping

    2018-04-01

    Person detection, tracking, and following is a key enabling technology for mobile robots in many human-robot interaction applications. In this article, we present a system composed of visual human detection, video tracking, and following. The detection is based on YOLO (You Only Look Once), which applies a single convolutional neural network (CNN) to the full image and can thus predict bounding boxes and class probabilities directly in one evaluation. The bounding box then provides the initial person position in the image to initialize and train the KCF (Kernelized Correlation Filter), a video tracker based on a discriminative classifier. Finally, by using a stereo 3D sparse reconstruction algorithm, not only is the position of the person in the scene determined, but the problem of scale ambiguity in the video tracker is also elegantly solved. Extensive experiments are conducted to demonstrate the effectiveness and robustness of our human detection and tracking system.
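    The scale-ambiguity point can be made concrete with basic stereo geometry: a stereo rig recovers metric depth as Z = f·B/d, which converts the tracker's pixel-sized bounding box into metres. A hypothetical helper illustrating the idea (the function name and the numbers in the check are invented for the example, not taken from the paper):

```python
def metric_size_from_stereo(bbox_h_px, disparity_px, focal_px, baseline_m):
    """Resolve a video tracker's scale ambiguity with stereo geometry:
    depth Z = f*B/d, and a bounding box h pixels tall then spans h*Z/f metres.
    Hypothetical helper sketching the idea, not the paper's code."""
    Z = focal_px * baseline_m / disparity_px   # metric depth to the person
    return bbox_h_px * Z / focal_px            # metric height of the box
```

For a 1.7 m person 4 m away with f = 500 px and a 0.1 m baseline, the disparity is 12.5 px and the box is 212.5 px tall, and the helper recovers the 1.7 m.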

  3. Intelligent video storage of visual evidences on site in fast deployment

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Bastide, Arnaud; Delaigle, Jean-Francois

    2004-07-01

    In this article we present a generic, flexible, scalable, and robust approach for an intelligent real-time forensic visual system. The proposed implementation can be rapidly deployed and requires minimal logistic support, as it embeds low-complexity devices (PCs and cameras) that communicate through a wireless network. The goal of these advanced tools is to provide intelligent video storage of potential video evidence for fast intervention during deployment around a hazardous sector after a terrorist attack, a disaster, an air crash, or before an attempted one. Advanced video analysis tools, such as segmentation and tracking, are provided to support intelligent storage and annotation.

  4. Robust control of dielectric elastomer diaphragm actuator for human pulse signal tracking

    NASA Astrophysics Data System (ADS)

    Ye, Zhihang; Chen, Zheng; Asmatulu, Ramazan; Chan, Hoyin

    2017-08-01

    Human pulse signal tracking is an emerging technology that is needed in traditional Chinese medicine. However, tracking the human pulse signal requires soft actuation with multi-frequency tracking capability. Dielectric elastomer (DE) is one type of soft actuator that has great potential for human pulse signal tracking. In this paper, a DE diaphragm actuator was designed and fabricated to track a human pulse pressure signal. A physics-based and control-oriented model was developed to capture the dynamic behavior of the DE diaphragm actuator. Using the physical model, an H-infinity robust controller was designed for the actuator to reject high-frequency sensing noises and disturbances. The robust controller was then implemented in real time to track a multi-frequency signal, which verified the tracking capability and robustness of the control system. In the human pulse signal tracking test, a human pulse signal was measured at the City University of Hong Kong and then tracked using the DE actuator at Wichita State University in the US. Experimental results verified that the DE actuator with its robust controller is capable of tracking a human pulse signal.

  5. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    PubMed Central

    Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua

    2014-01-01

    To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative, and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also retrained to adapt to target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves a more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252

  6. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.

    PubMed

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-12-22

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control, and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) dealing with aggressive 6-DOF (Degree of Freedom) motion; (2) remaining robust to intermittent GPS (Global Positioning System), or even GPS-denied, situations; and (3) working well in both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric, and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts. The first, derived and discussed in detail, is a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry). The second is a long-range stereo visual odometry, proposed for high-altitude MAV odometry calculation, that uses both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimate. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS, and high-altitude MAV flight.
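    The loose-fusion pattern, relative measurements driving the prediction and absolute ones driving the update, can be shown with a toy one-state filter. This is only a sketch of the pattern under invented noise values; the paper's stochastic-cloning EKF carries a full 6-DOF state:

```python
class AltitudeEKF:
    """Toy 1-state filter in the spirit of the paper's loose fusion:
    relative measurements (IMU/odometry deltas) drive the prediction,
    absolute ones (barometer/GPS altitude) drive the update. Sketch only."""
    def __init__(self, z0=0.0, p0=1.0, q=0.01, r=0.25):
        self.z, self.P, self.Q, self.R = z0, p0, q, r
    def predict(self, dz):
        # relative measurement: integrate the delta, grow the uncertainty
        self.z += dz
        self.P += self.Q
    def update(self, z_abs):
        # absolute measurement: standard Kalman correction
        K = self.P / (self.P + self.R)
        self.z += K * (z_abs - self.z)
        self.P *= (1.0 - K)
```

Repeated absolute updates pull the integrated estimate toward the barometer reading while the covariance shrinks, which is how the drift of dead-reckoned odometry is bounded.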

  7. Eye-Hand Synergy and Intermittent Behaviors during Target-Directed Tracking with Visual and Non-visual Information

    PubMed Central

    Huang, Chien-Ting; Hwang, Ing-Shiou

    2012-01-01

    Visual feedback and non-visual information play different roles in tracking of an external target. This study explored the respective roles of the visual and non-visual information in eleven healthy volunteers who coupled the manual cursor to a rhythmically moving target of 0.5 Hz under three sensorimotor conditions: eye-alone tracking (EA), eye-hand tracking with visual feedback of manual outputs (EH tracking), and the same tracking without such feedback (EHM tracking). Tracking error, kinematic variables, and movement intermittency (saccade and speed pulse) were contrasted among tracking conditions. The results showed that EHM tracking exhibited larger pursuit gain, less tracking error, and less movement intermittency for the ocular plant than EA tracking. With the vision of manual cursor, EH tracking achieved superior tracking congruency of the ocular and manual effectors with smaller movement intermittency than EHM tracking, except that the rate precision of manual action was similar for both types of tracking. The present study demonstrated that visibility of manual consequences altered mutual relationships between movement intermittency and tracking error. The speed pulse metrics of manual output were linked to ocular tracking error, and saccade events were time-locked to the positional error of manual tracking during EH tracking. In conclusion, peripheral non-visual information is critical to smooth pursuit characteristics and rate control of rhythmic manual tracking. Visual information adds to eye-hand synchrony, underlying improved amplitude control and elaborate error interpretation during oculo-manual tracking. PMID:23236498

  8. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-04-15

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
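    The two-step trick rests on the fact that convolutional features are local: the convolutional layers can be run once on the whole frame, and each candidate window then only needs a crop of that shared feature map fed to the fully-connected part. A toy single-kernel NumPy sketch of the equivalence (not the tracker's actual network):

```python
import numpy as np

def conv_map(frame, kernel):
    """Valid cross-correlation of one kernel over the whole frame,
    computed ONCE per frame (step 1 of the two-step scheme)."""
    H, W = frame.shape
    k = kernel.shape[0]
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i+k, j:j+k] * kernel)
    return out

def window_score(feat, y, x, win, W_fc):
    """Step 2: crop the shared feature map at one candidate window and
    apply the fully-connected layer only to that crop."""
    patch = feat[y:y+win, x:x+win].ravel()
    return float(W_fc @ patch)
```

Because correlation is local, scoring a window from the shared map is identical to running the convolution on that window alone, so the per-window cost drops to a crop plus the fully-connected layers.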

  9. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  10. Visualization and tracking of tumour extracellular vesicle delivery and RNA translation using multiplexed reporters

    PubMed Central

    Lai, Charles P.; Kim, Edward Y.; Badr, Christian E.; Weissleder, Ralph; Mempel, Thorsten R.; Tannous, Bakhos A.; Breakefield, Xandra O.

    2015-01-01

    Accurate spatiotemporal assessment of extracellular vesicle (EV) delivery and cargo RNA translation requires specific and robust live-cell imaging technologies. Here we engineer optical reporters that label multiple EV populations for visualization and tracking of tumour EV release, uptake, and exchange between cell populations, both in culture and in vivo. Enhanced green fluorescent protein (EGFP) and tandem dimer Tomato (tdTomato) were fused at their NH2-termini with a palmitoylation signal (PalmGFP, PalmtdTomato) for EV membrane labelling. To monitor EV-RNA cargo, transcripts encoding PalmtdTomato were tagged with MS2 RNA-binding sequences and detected by co-expression of bacteriophage MS2 coat protein fused with EGFP. By multiplexing fluorescent and bioluminescent EV membrane reporters, we reveal the rapid dynamics of both EV uptake and translation of EV-delivered cargo mRNAs in cancer cells, occurring within 1 hour of horizontal transfer between cells. These studies confirm that EV-mediated communication is dynamic and multidirectional between cells, with delivery of functional mRNA. PMID:25967391

  11. Some lessons learned in three years with ADS-33C. [rotorcraft handling qualities specification

    NASA Technical Reports Server (NTRS)

    Key, David L.; Blanken, Chris L.; Hoh, Roger H.

    1993-01-01

    Three years of using the U.S. Army's rotorcraft handling qualities specification, Aeronautical Design Standard 33 (ADS-33C), has shown it to be surprisingly robust. It appears to provide an excellent basis for design and for assessment; however, as the subtleties became better understood, several areas needing refinement became apparent. Three responses to these needs are documented in this paper: (1) The yaw-axis attitude quickness for hover target acquisition and tracking can be relaxed slightly. (2) Understanding and application of the criteria for degraded visual environments needed elaboration; this, along with some guidelines for testing to obtain visual cue ratings, has been documented. (3) The flight test maneuvers were an innovation that proved very valuable; their extensive use has made it necessary to tighten definitions and testing guidance. This was accomplished for the good visual environment and is underway for degraded visual environments.

  12. Vehicle active steering control research based on two-DOF robust internal model control

    NASA Astrophysics Data System (ADS)

    Wu, Jian; Liu, Yahui; Wang, Fengbo; Bao, Chunjiang; Sun, Qun; Zhao, Youqun

    2016-07-01

    Because of a vehicle's external disturbances and model uncertainties, robust control algorithms have gained popularity in vehicle stability control. Robust control usually gives up performance in order to guarantee robustness, so an improved robust internal model control (IMC) algorithm blending model tracking and internal model control is put forward for the active steering system, in order to achieve high yaw-rate tracking performance with certain robustness. The proposed algorithm inherits the good model tracking ability of IMC and guarantees robustness to model uncertainties. To separate the model tracking design from the robustness design, the improved two-degree-of-freedom (2-DOF) robust internal model controller structure is derived from the standard Youla parameterization. Simulations of a double lane change maneuver and of crosswind disturbances are conducted to evaluate the robust control algorithm, on the basis of a nonlinear vehicle simulation model with a magic tyre model. Results show that the 2-DOF robust IMC method has better model tracking ability and a guaranteed level of robustness and robust performance, which can enhance vehicle stability and handling regardless of variations in the vehicle model parameters and external crosswind interference. The contradiction between the performance and robustness of active steering control is thus resolved, and higher control performance with certain robustness to model uncertainties is obtained.

  13. Using Visual Odometry to Estimate Position and Attitude

    NASA Technical Reports Server (NTRS)

    Maimone, Mark; Cheng, Yang; Matthies, Larry; Schoppers, Marcel; Olson, Clark

    2007-01-01

    A computer program in the guidance system of a mobile robot generates estimates of the position and attitude of the robot, using features of the terrain on which the robot is moving, by processing digitized images acquired by a stereoscopic pair of electronic cameras mounted rigidly on the robot. Developed for use in localizing the Mars Exploration Rover (MER) vehicles on Martian terrain, the program can also be used for similar purposes on terrestrial robots moving in sufficiently visually textured environments: examples include low-flying robotic aircraft and wheeled robots moving on rocky terrain or inside buildings. In simplified terms, the program automatically detects visual features and tracks them across stereoscopic pairs of images acquired by the cameras. The 3D locations of the tracked features are then robustly processed into an estimate of overall vehicle motion. Testing has shown that by use of this software, the error in the estimate of the position of the robot can be limited to no more than 2 percent of the distance traveled, provided that the terrain is sufficiently rich in features. This software has proven extremely useful on the MER vehicles during driving on sandy and highly sloped terrains on Mars.
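The step in which "the 3D locations of the tracked features are then robustly processed into an estimate of overall vehicle motion" can be sketched, for an already-outlier-free set of matches, as a least-squares rigid alignment of the matched 3-D points (the Kabsch/Procrustes method). A minimal sketch with illustrative names; the MER software's actual robust estimator also rejects outliers.

```python
import numpy as np

def estimate_motion(p_prev, p_curr):
    """Least-squares rigid motion (R, t) mapping p_prev onto p_curr.

    p_prev, p_curr: (N, 3) arrays of matched 3-D feature locations
    triangulated from consecutive stereo pairs."""
    c_prev = p_prev.mean(axis=0)
    c_curr = p_curr.mean(axis=0)
    H = (p_prev - c_prev).T @ (p_curr - c_curr)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_curr - R @ c_prev
    return R, t
```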

  14. Eye Tracking Dysfunction in Schizophrenia: Characterization and Pathophysiology

    PubMed Central

    Sereno, Anne B.; Gooding, Diane C.; O’Driscoll, Gillian A.

    2011-01-01

    Eye tracking dysfunction (ETD) is one of the most widely replicated behavioral deficits in schizophrenia and is over-represented in clinically unaffected first-degree relatives of schizophrenia patients. Here, we provide an overview of research relevant to the characterization and pathophysiology of this impairment. Deficits are most robust in the maintenance phase of pursuit, particularly during the tracking of predictable target movement. Impairments are also found in pursuit initiation and correlate with performance on tests of motion processing, implicating early sensory processing of motion signals. Taken together, the evidence suggests that ETD involves higher-order structures, including the frontal eye fields, which adjust the gain of the pursuit response to visual and anticipated target movement, as well as early parts of the pursuit pathway, including motion areas (the middle temporal area and the adjacent medial superior temporal area). Broader application of localizing behavioral paradigms in patient and family studies would be advantageous for refining the eye tracking phenotype for genetic studies. PMID:21312405

  15. Predicting 2D target velocity cannot help 2D motion integration for smooth pursuit initiation.

    PubMed

    Montagnini, Anna; Spering, Miriam; Masson, Guillaume S

    2006-12-01

    Smooth pursuit eye movements reflect the temporal dynamics of bidimensional (2D) visual motion integration. When tracking a single, tilted line, initial pursuit direction is biased toward unidimensional (1D) edge motion signals, which are orthogonal to the line orientation. Over 200 ms, tracking direction is slowly corrected to finally match the 2D object motion during steady-state pursuit. We now show that repetition of line orientation and/or motion direction neither eliminates the transient tracking direction error nor changes the time course of pursuit correction. Nonetheless, multiple successive presentations of a single orientation/direction condition elicit robust anticipatory pursuit eye movements that always go in the 2D object motion direction, not the 1D edge motion direction. These results demonstrate that predictive signals about target motion cannot be used for an efficient integration of ambiguous velocity signals at pursuit initiation.

  16. A Reliable and Real-Time Tracking Method with Color Distribution

    PubMed Central

    Zhao, Zishu; Han, Yuqi; Xu, Tingfa; Li, Xiangmin; Song, Haiping; Luo, Jiqiang

    2017-01-01

    Occlusion is a challenging problem in visual tracking. In recent years many trackers have been proposed to address it, but most of them cannot track the target in real time because of their heavy computational cost. The spatio-temporal context (STC) tracker accelerates the task by calculating context information in the Fourier domain, but its performance in handling occlusion is limited. In this paper, we take advantage of the high efficiency of the STC tracker and employ a salient prior model based on color distribution to improve robustness. Furthermore, we exploit a scale pyramid for accurate scale estimation. In particular, a new high-confidence update strategy and a re-searching mechanism are used to avoid model corruption and to handle occlusion. Extensive experimental results demonstrate that our algorithm outperforms several state-of-the-art algorithms on the OTB2015 dataset. PMID:28994748
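The Fourier-domain acceleration that STC-style trackers rely on amounts to evaluating a dense circular cross-correlation between a learned template and a search window with two FFTs; the peak of the response map gives the predicted target shift. A minimal sketch of that trick only, not the actual STC context model:

```python
import numpy as np

def response_map_peak(template, search):
    """Peak location of the circular cross-correlation between a
    template and a same-sized search window, computed in the Fourier
    domain. Returns the (row, col) shift that best aligns them."""
    T = np.fft.fft2(template)
    S = np.fft.fft2(search)
    r = np.real(np.fft.ifft2(np.conj(T) * S))   # dense response map
    return np.unravel_index(np.argmax(r), r.shape)
```

Because the correlation is circular, shifts wrap around the window edges; practical trackers window the patches to suppress this.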

  17. Visual tracking based on the sparse representation of the PCA subspace

    NASA Astrophysics Data System (ADS)

    Chen, Dian-bing; Zhu, Ming; Wang, Hui-li

    2017-09-01

    We construct a collaborative model of sparse representation and subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace; we then employ an L1 regularization term to enforce sparsity of the residual, an L2 regularization term to constrain the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. The algorithm is implemented in the particle filter framework. Furthermore, an iterative method is presented to obtain the global minimum over the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift caused by inaccurate updates. In the experiments, we test the algorithm on 9 sequences and compare the results with 5 state-of-the-art methods. The results show that our algorithm is more robust than the other methods.
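One plausible form of the iterative scheme described is alternating minimization of min over (c, e) of ||y - Uc - e||² + λ₁||e||₁ + λ₂||c||², where U holds the PCA basis, c the subspace coefficients, and e a sparse residual absorbing occlusions. This objective is my reading of the abstract; the exact penalty placement and parameter values below are assumptions.

```python
import numpy as np

def soft(x, tau):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def pca_sparse_decompose(y, U, lam1=1.0, lam2=0.01, n_iter=50):
    """Alternate exact minimization over c (a ridge step) and e (a
    soft-thresholding step); both subproblems are convex, so the
    iterations descend the joint objective."""
    e = np.zeros_like(y)
    G = np.linalg.inv(U.T @ U + lam2 * np.eye(U.shape[1])) @ U.T
    for _ in range(n_iter):
        c = G @ (y - e)                 # ridge step for the coefficients
        e = soft(y - U @ c, lam1 / 2)   # sparse step for the residual
    return c, e
```

When part of the target is occluded, the corresponding entries of y deviate strongly from the subspace and are captured by e rather than distorting c.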

  18. Consistently Sampled Correlation Filters with Space Anisotropic Regularization for Visual Tracking

    PubMed Central

    Shi, Guokai; Xu, Tingfa; Luo, Jiqiang; Li, Yuankun

    2017-01-01

    Most existing correlation filter-based tracking algorithms, which use fixed patches and cyclic shifts as training and detection measures, assume that the training samples are reliable and ignore the inconsistencies between training samples and detection samples. We propose a consistently sampled correlation filter with space anisotropic regularization (CSSAR) to solve these two problems simultaneously. Our approach constructs a spatiotemporally consistent sampling strategy to alleviate the redundancy in training samples caused by cyclic shifts and to eliminate the inconsistencies between training and detection samples, and introduces space anisotropic regularization to constrain the correlation filter and alleviate drift caused by occlusion. Moreover, an optimization strategy based on the Gauss-Seidel method was developed for robust and efficient online learning. Both qualitative and quantitative evaluations demonstrate that our tracker outperforms state-of-the-art trackers on object tracking benchmarks (OTBs). PMID:29231876
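The online learning here relies on a Gauss-Seidel solver. In its generic textbook form for a linear system Ax = b (not the CSSAR-specific filter update), each sweep solves one equation at a time while immediately reusing the freshly updated components, which is what makes it attractive for cheap incremental updates:

```python
import numpy as np

def gauss_seidel(A, b, n_iter=100, x0=None):
    """Gauss-Seidel iterations for A x = b. Convergence is guaranteed
    when A is symmetric positive definite or strictly diagonally
    dominant; each inner update uses the latest values of x."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (b[i] - s) / A[i, i]
    return x
```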

  19. The Current State and TRL Assessment of People Tracking Technology for Video Surveillance Applications

    DTIC Science & Technology

    2014-09-01

    the feature-space used to represent the target. Sometimes we trade off keeping information about one domain of the target in exchange for robustness... Kullback-Leibler distance), can be used as a similarity function between a candidate target and a template. This approach is invariant to changes in scale...basis vectors to adapt to appearance change and learns the visual information that the set of targets have in common, which is used to reduce the

  20. Application of unscented Kalman filter for robust pose estimation in image-guided surgery

    NASA Astrophysics Data System (ADS)

    Vaccarella, Alberto; De Momi, Elena; Valenti, Marta; Ferrigno, Giancarlo; Enquobahrie, Andinet

    2012-02-01

    Image-guided surgery (IGS) allows clinicians to view current, intra-operative scenes superimposed on preoperative images (typically MRI or CT scans). IGS systems use localization systems to track and visualize surgical tools overlaid on preoperative images of the patient during surgery. The localization systems most commonly used in operating rooms (ORs) are optical tracking systems (OTSs), owing to their ease of use and cost effectiveness. However, OTSs suffer from the major drawback of line-of-sight requirements. State-space approaches based on different implementations of the Kalman filter have recently been investigated to compensate for short line-of-sight occlusions. However, the proposed parameterizations of rigid-body orientation suffer from singularities at certain rotation angles. The purpose of this work is to develop a quaternion-based unscented Kalman filter (UKF) for robust optical tracking of both the position and orientation of surgical tools, in order to compensate for marker occlusion. This paper presents preliminary results toward a Kalman-based Sensor Management Engine (SME); the engine will filter and fuse multimodal streams of tracking data. This work was motivated by our experience with robot-based applications for keyhole neurosurgery (the ROBOCAST project). The algorithm was evaluated using real data from an NDI Polaris tracker. The results show that our estimation technique is able to compensate for marker occlusion with a maximum error of 2.5° in orientation and 2.36 mm in position. The proposed approach will be useful in over-crowded state-of-the-art ORs, where achieving continuous visibility of all tracked objects is difficult.

  1. Online two-stage association method for robust multiple people tracking

    NASA Astrophysics Data System (ADS)

    Lv, Jingqin; Fang, Jiangxiong; Yang, Jie

    2011-07-01

    Robust multiple people tracking is very important for many applications. It is a challenging problem due to occlusions and interactions in crowded scenarios. This paper proposes an online two-stage association method for robust multiple people tracking. In the first stage, short tracklets, generated by linking people-detection responses, are extended by particle-filter-based tracking with detection confidence embedded in the observation model; an examination scheme runs at each frame to check tracking reliability. In the second stage, multiple people tracking is achieved by linking tracklets into trajectories. An online tracklet association method is proposed to solve the linking problem, enabling applications in time-critical scenarios. The method is evaluated on the popular CAVIAR dataset, and the experimental results show that our two-stage method is robust.

  2. A comparison of visual and kinesthetic-tactual displays for compensatory tracking

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.

    1983-01-01

    Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under certain conditions it can be an effective alternative or supplement to visual displays. To better understand how KT tracking compares with visual tracking, both a critical tracking task and stationary single-axis tracking tasks were conducted with and without velocity quickening. In the critical tracking task, the visual displays were superior; however, the quickened KT display was approximately equal to the unquickened visual display. In the stationary tracking tasks, subjects adopted lag equalization with the quickened KT and visual displays, and mean-squared error scores were approximately equal. With the unquickened displays, subjects adopted lag-lead equalization, and the visual displays were superior. This superiority was partly due to the servomotor lag in the implementation of the KT display and partly due to modality differences.

  3. A comparison of tracking with visual and kinesthetic-tactual displays

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.

    1981-01-01

    Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under appropriate conditions it may be an effective means of providing visual workload relief. In order to better understand how KT tracking differs from visual tracking, both a critical tracking task and stationary single-axis tracking tasks were conducted with and without velocity quickening. On the critical tracking task, the visual displays were superior; however, the KT quickened display was approximately equal to the visual unquickened display. Mean squared error scores in the stationary tracking tasks for the visual and KT displays were approximately equal in the quickened conditions, and the describing functions were very similar. In the unquickened conditions, the visual display was superior. Subjects using the unquickened KT display exhibited a low frequency lead-lag that may be related to sensory adaptation.

  4. Motivation and short-term memory in visual search: Attention's accelerator revisited.

    PubMed

    Schneider, Daniel; Bonmassar, Claudia; Hickey, Clayton

    2018-05-01

    A cue indicating the possibility of cash reward will cause participants to perform memory-based visual search more efficiently. A recent study has suggested that this performance benefit might reflect the use of multiple memory systems: when needed, participants may maintain the to-be-remembered object in both long-term and short-term visual memory, with this redundancy benefitting target identification during search (Reinhart, McClenahan & Woodman, 2016). Here we test this compelling hypothesis. We had participants complete a memory-based visual search task involving a reward cue that either preceded presentation of the to-be-remembered target (pre-cue) or followed it (retro-cue). Following earlier work, we tracked memory representation using two components of the event-related potential (ERP): the contralateral delay activity (CDA), reflecting short-term visual memory, and the anterior P170, reflecting long-term storage. We additionally tracked attentional preparation and deployment in the contingent negative variation (CNV) and N2pc, respectively. Results show that only the reward pre-cue impacted our ERP indices of memory. However, both types of cue elicited a robust CNV, reflecting an influence on task preparation, both had equivalent impact on deployment of attention to the target, as indexed in the N2pc, and both had equivalent impact on visual search behavior. Reward prospect thus has an influence on memory-guided visual search, but this does not appear to be necessarily mediated by a change in the visual memory representations indexed by CDA. Our results demonstrate that the impact of motivation on search is not a simple product of improved memory for target templates. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Robust visual tracking via multiple discriminative models with object proposals

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanqiang; Bi, Duyan; Zha, Yufei; Li, Huanyu; Ku, Tao; Wu, Min; Ding, Wenshan; Fan, Zunlin

    2018-04-01

    Model drift is an important cause of tracking failure. In this paper, multiple discriminative models with object proposals are used to improve model discrimination and relieve this problem. Firstly, changes in target location and scale are captured by a large set of high-quality object proposals, which are represented by deep convolutional features for target semantics. Then, by sharing a feature map obtained from a pre-trained network, ROI pooling is exploited to warp object proposals of various sizes into vectors of the same length, from which a discriminative model can be conveniently learned. Lastly, these historical snapshot vectors are used to train models with different lifetimes. Based on an entropy decision mechanism, a model degraded by drift can be corrected by selecting the best discriminative model, which significantly improves the robustness of the tracker. We extensively evaluate our tracker on two popular benchmarks, the OTB 2013 benchmark and the UAV20L benchmark. On both benchmarks, our tracker achieves the best precision and success rate compared with the state-of-the-art trackers.
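An entropy-based decision among snapshot models can be illustrated as follows: each model scores the candidate proposals, and the model whose softmax-normalized response distribution is most peaked (lowest entropy, i.e. most decisive) is trusted. This is a simplified stand-in for the paper's decision mechanism; the exact criterion may differ.

```python
import numpy as np

def select_model(scores_per_model):
    """Return the index of the model whose score distribution over the
    proposals has the lowest softmax entropy. A confidently peaked
    response suggests the model still discriminates the target well."""
    def entropy(s):
        p = np.exp(s - s.max())        # stable softmax
        p /= p.sum()
        return -np.sum(p * np.log(p + 1e-12))
    return min(range(len(scores_per_model)),
               key=lambda i: entropy(np.asarray(scores_per_model[i], dtype=float)))
```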

  6. Robust fiber clustering of cerebral fiber bundles in white matter

    NASA Astrophysics Data System (ADS)

    Yao, Xufeng; Wang, Yongxiong; Zhuang, Songlin

    2014-11-01

    Diffusion tensor imaging fiber tracking (DTI-FT) has been widely accepted in the diagnosis and treatment of brain diseases. During the rendering of a specific fiber tract, image noise and the low resolution of DTI can lead to false propagations. In this paper, we propose a robust fiber clustering (FC) approach to remove false fibers from a fiber tract. Our algorithm consists of three steps. First, optimized fiber assignment continuous tracking (FACT) is implemented to reconstruct a fiber tract; then each curved fiber in the tract is mapped to a point by kernel principal component analysis (KPCA); finally, the resulting point cloud is partitioned by hierarchical clustering, which distinguishes false fibers from true ones within the tract. In our experiment, the corticospinal tract (CST) from one in vivo human dataset was used to validate the method, which showed reliable capability in removing false fibers from a tract. In conclusion, our method can effectively improve the visualization of fiber bundles and should aid fiber evaluation.
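The fiber-to-point mapping can be sketched as: build a fiber-to-fiber distance matrix, turn it into a Gaussian kernel, and embed each fiber with the top components of the double-centered kernel (kernel PCA). The symmetric mean-closest-point distance and the parameter values here are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def kpca_embed(fibers, sigma=5.0, n_comp=2):
    """Map each fiber (an (m, 3) polyline) to a low-dimensional point
    via kernel PCA, so whole curves can be clustered as points."""
    n = len(fibers)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(fibers[i][:, None] - fibers[j][None], axis=2)
            # symmetric mean closest-point distance between two polylines
            D[i, j] = D[j, i] = 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
    K = np.exp(-(D ** 2) / (2 * sigma ** 2))
    J = np.eye(n) - np.ones((n, n)) / n          # double-centering matrix
    w, V = np.linalg.eigh(J @ K @ J)
    idx = np.argsort(w)[::-1][:n_comp]           # leading components
    return V[:, idx] * np.sqrt(np.abs(w[idx]))
```

Outlying (false) fibers end up far from the main cluster in this embedding, so a standard clustering step can isolate them.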

  7. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors

    PubMed Central

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-01-01

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must handle aggressive 6 DOF (Degree Of Freedom) motion; (2) it must be robust to intermittent GPS (Global Positioning System), or even GPS-denied, situations; (3) it must work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The estimation system has two main parts: a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry), which is derived and discussed in detail; and a long-range stereo visual odometry, proposed for high-altitude MAV odometry calculation, that uses both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (the IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for updating the state estimate. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight. PMID:28025524

  8. Vision-based localization for on-orbit servicing of a partially cooperative satellite

    NASA Astrophysics Data System (ADS)

    Oumer, Nassir W.; Panin, Giorgio; Mülbauer, Quirin; Tseneklidou, Anastasia

    2015-12-01

    This paper proposes a ground-in-the-loop, model-based visual localization system, based on images transmitted to ground, to aid rendezvous and docking maneuvers between a servicer and a target satellite. In particular, we assume a partially cooperative target, i.e., passive and without fiducial markers, but supposed at least to keep a controlled attitude up to small fluctuations, so that the approach mainly involves translational motion. For localization, video cameras provide an effective and relatively inexpensive solution, working over a wide range of distances with increasing accuracy and robustness during the approach. However, illumination conditions in space are especially challenging: direct sunlight exposure and the glossy surface of a satellite create strong reflections and saturations, and therefore a high level of background clutter and missed detections. We employ a monocular camera for mid-range tracking (20-5 m) and a stereo camera at close range (5-0.5 m), with respective detection and tracking methods that both use intensity edges and robustly deal with the above issues. Our tracking system has been extensively verified at the European Proximity Operations Simulator (EPOS) facility of DLR, a highly realistic ground simulation able to reproduce sunlight conditions with a high-power floodlight source, satellite surface properties using multilayer insulation foils, and orbital motion trajectories with ground-truth data, by means of two 6 DOF industrial robots. Results on this large dataset show the effectiveness and robustness of our method against the above difficulties.

  9. Robust output tracking control of a laboratory helicopter for automatic landing

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Lu, Geng; Zhong, Yisheng

    2014-11-01

    In this paper, robust output tracking control problem of a laboratory helicopter for automatic landing in high seas is investigated. The motion of the helicopter is required to synchronise with that of an oscillating platform, e.g. the deck of a vessel subject to wave-induced motions. A robust linear time-invariant output feedback controller consisting of a nominal controller and a robust compensator is designed. The robust compensator is introduced to restrain the influences of parametric uncertainties, nonlinearities and external disturbances. It is shown that robust stability and robust tracking property can be achieved simultaneously. Experimental results on the laboratory helicopter for automatic landing demonstrate the effectiveness of the designed control approach.

  10. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    NASA Astrophysics Data System (ADS)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For the problem of real-time precise localization in urban streets, a monocular visual odometry based on extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which together with the state transition equation forms the Kalman filter. An extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu's two-step EKF method, the algorithm is more accurate and meets the needs of real-time, accurate localization in cities.

  11. Two-dimensional hidden semantic information model for target saliency detection and eyetracking identification

    NASA Astrophysics Data System (ADS)

    Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao

    2018-01-01

    Saliency detection has been applied to target acquisition. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradients descriptor is used to extract features. After encoding the image with a learned dictionary, the 2D Viterbi algorithm is applied to infer the saliency map. The model can predict fixations on the targets and further creates robust and effective depictions of the targets' changes in posture and viewpoint. To validate the model against the human visual search mechanism, two eye-tracking experiments are employed to train our model directly from eye movement data. The results show that our model achieves better performance than visual attention models, and they indicate the plausibility of utilizing eye-tracking data to identify targets.

  12. Neural network robust tracking control with adaptive critic framework for uncertain nonlinear systems.

    PubMed

    Wang, Ding; Liu, Derong; Zhang, Yun; Li, Hongyi

    2018-01-01

    In this paper, we aim to tackle the neural robust tracking control problem for a class of nonlinear systems using the adaptive critic technique. The main contribution is that a neural-network-based robust tracking control scheme is established for nonlinear systems involving matched uncertainties. The augmented system considering the tracking error and the reference trajectory is formulated and then addressed under adaptive critic optimal control formulation, where the initial stabilizing controller is not needed. The approximate control law is derived via solving the Hamilton-Jacobi-Bellman equation related to the nominal augmented system, followed by closed-loop stability analysis. The robust tracking control performance is guaranteed theoretically via Lyapunov approach and also verified through simulation illustration. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Multi-object tracking of human spermatozoa

    NASA Astrophysics Data System (ADS)

    Sørensen, Lauge; Østergaard, Jakob; Johansen, Peter; de Bruijne, Marleen

    2008-03-01

    We propose a system for tracking of human spermatozoa in phase-contrast microscopy image sequences. One of the main aims of a computer-aided sperm analysis (CASA) system is to automatically assess sperm quality based on spermatozoa motility variables. In our case, the problem of assessing sperm quality is cast as a multi-object tracking problem, where the objects being tracked are the spermatozoa. The system combines a particle filter and Kalman filters for robust motion estimation of the spermatozoa tracks. Further, the combinatorial aspect of assigning observations to labels in the particle filter is formulated as a linear assignment problem solved using the Hungarian algorithm on a rectangular cost matrix, making the algorithm capable of handling missing or spurious observations. The costs are calculated using hidden Markov models that express the plausibility of an observation being the next position in the track history of the particle labels. Observations are extracted using a scale-space blob detector utilizing the fact that the spermatozoa appear as bright blobs in a phase-contrast microscope. The output of the system is the complete motion track of each of the spermatozoa. Based on these tracks, different CASA motility variables can be computed, for example curvilinear velocity or straight-line velocity. The performance of the system is tested on three different phase-contrast image sequences of varying complexity, both by visual inspection of the estimated spermatozoa tracks and by measuring the mean squared error (MSE) between the estimated spermatozoa tracks and manually annotated tracks, showing good agreement.
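The assignment step described, a linear assignment problem over a rectangular cost matrix solved with the Hungarian algorithm, can be sketched with SciPy. The Euclidean cost and the gating threshold below are simplifications of the paper's HMM-based plausibility costs; rejecting expensive pairings is one simple way to let missing or spurious blob observations stay unmatched.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign(tracks_pred, detections, gate=50.0):
    """Match detected blobs to predicted track positions.

    tracks_pred: (T, 2) predicted positions; detections: (D, 2) blob
    centers, with T != D allowed (rectangular problem). Returns a list
    of (track_index, detection_index) pairs within the gate."""
    cost = np.linalg.norm(tracks_pred[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] < gate]
```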

  14. Motion-based prediction explains the role of tracking in motion extrapolation.

    PubMed

    Khoei, Mina A; Masson, Guillaume S; Perrinet, Laurent U

    2013-11-01

    During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by blinks of the eye. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion. For instance, it maintains the movement of the eye so as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation to deal with fragmented inputs. In this paper, we have modeled how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that, using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but that it quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acted at the global level as a gain control mechanism and that we could switch from a smooth regime to a binary tracking behavior in which the dot is either tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that the tracking behavior deteriorates for sensory noise levels above a certain value, where motion coherency and predictability no longer hold. In particular, we found that motion-based prediction leads to the emergence of tracking behavior only when enough information from the trajectory has been accumulated. Then, during tracking, trajectory estimation is robust to blanks even in the presence of relatively high levels of noise. Moreover, we found that tracking is necessary for motion extrapolation; this calls for further experimental work exploring the role of noise in motion extrapolation. Copyright © 2013 Elsevier Ltd. All rights reserved.
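The qualitative behavior, prediction carrying the trajectory through a blank and recovering on reappearance, can be reproduced with a much simpler linear stand-in: a constant-velocity Kalman filter that skips the measurement update while the stimulus is blanked. This is not the paper's probabilistic motion-based prediction model, only an illustration of the same extrapolation principle; names and noise parameters are assumptions.

```python
import numpy as np

def track_with_blanks(obs, dt=1.0, q=1e-3, r=0.1):
    """1-D constant-velocity Kalman filter; None entries in `obs` mark
    blanks. During a blank only the prediction step runs, so the
    learned velocity carries (extrapolates) the trajectory."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    x, P = np.zeros(2), np.eye(2)
    out = []
    for z in obs:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        if z is not None:                   # update only when a measurement exists
            S = H @ P @ H.T + r
            K = (P @ H.T) / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0]))
    return out
```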

  15. A comparison of kinesthetic-tactual and visual displays via a critical tracking task. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Miller, D. P.; Gilson, R. D.

    1979-01-01

    The feasibility of using the critical tracking task to evaluate kinesthetic-tactual (KT) displays was examined. The test subjects were asked to control a first-order unstable system with a continuously decreasing time constant by using either visual or tactual unidimensional displays. The results indicate that the critical tracking task is both a feasible and a reliable methodology for assessing tactual tracking. Further, the critical tracking methodology is as sensitive and valid a measure of tactual tracking as it is of visual tracking, as demonstrated by the approximately equal effects of quickening for the tactual and visual displays.

  16. Robust adaptive tracking control for nonholonomic mobile manipulator with uncertainties.

    PubMed

    Peng, Jinzhu; Yu, Jie; Wang, Jie

    2014-07-01

    In this paper, mobile manipulator is divided into two subsystems, that is, nonholonomic mobile platform subsystem and holonomic manipulator subsystem. First, the kinematic controller of the mobile platform is derived to obtain a desired velocity. Second, regarding the coupling between the two subsystems as disturbances, Lyapunov functions of the two subsystems are designed respectively. Third, a robust adaptive tracking controller is proposed to deal with the unknown upper bounds of parameter uncertainties and disturbances. According to the Lyapunov stability theory, the derived robust adaptive controller guarantees global stability of the closed-loop system, and the tracking errors and adaptive coefficient errors are all bounded. Finally, simulation results show that the proposed robust adaptive tracking controller for nonholonomic mobile manipulator is effective and has good tracking capacity. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Robust Visual Tracking Revisited: From Correlation Filter to Template Matching.

    PubMed

    Liu, Fanghui; Gong, Chen; Huang, Xiaolin; Zhou, Tao; Yang, Jie; Tao, Dacheng

    2018-06-01

    In this paper, we propose a novel matching-based tracker by investigating the relationship between template matching and the recently popular correlation filter based trackers (CFTs). Compared to the correlation operation in CFTs, a sophisticated similarity metric termed mutual buddies similarity is proposed to exploit the relationship of multiple reciprocal nearest neighbors for target matching. By doing so, our tracker obtains powerful discriminative ability in distinguishing target from background, as demonstrated by both empirical and theoretical analyses. Besides, instead of utilizing a single template with an improper updating scheme as in CFTs, we design a novel online template updating strategy named memory, which aims to select a certain amount of representative and reliable tracking results in history to construct the current stable and expressive template set. This scheme helps the proposed tracker comprehensively understand the target appearance variations and recall stable results from history. Both qualitative and quantitative evaluations on two benchmarks suggest that the proposed tracking method performs favorably against some recently developed CFTs and other competitive trackers.

  18. Video-based measurements for wireless capsule endoscope tracking

    NASA Astrophysics Data System (ADS)

    Spyrou, Evaggelos; Iakovidis, Dimitris K.

    2014-01-01

    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are being tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by the application of this method on wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for a cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute in the planning of more accurate surgical interventions.
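
    The frame-registration step described in this record can be sketched in pure NumPy (an illustrative stand-in: the paper extracts speeded up robust features and applies random sample consensus to full frames; here hypothetical point correspondences and a 2-D rigid model replace that pipeline, so the displacement and rotation between "frames" are recovered from matched interest points):

```python
import numpy as np

def rigid_from_pairs(p, q):
    """Least-squares 2-D rotation angle and translation mapping points p -> q."""
    pc, qc = p - p.mean(0), q - q.mean(0)
    H = pc.T @ qc                      # 2x2 cross-covariance of centered pairs
    theta = np.arctan2(H[0, 1] - H[1, 0], H[0, 0] + H[1, 1])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = q.mean(0) - R @ p.mean(0)
    return theta, R, t

def ransac_rigid(p, q, iters=200, tol=1.0, seed=0):
    """RANSAC over 2-point minimal samples; refits on the best inlier set."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(p), 2, replace=False)
        _, R, t = rigid_from_pairs(p[idx], q[idx])
        err = np.linalg.norm(q - (p @ R.T + t), axis=1)
        inliers = err < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return rigid_from_pairs(p[best], q[best])
```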

  19. Nonlinear robust control of hypersonic aircrafts with interactions between flight dynamics and propulsion systems.

    PubMed

    Li, Zhaoying; Zhou, Wenjie; Liu, Hao

    2016-09-01

    This paper addresses the nonlinear robust tracking controller design problem for hypersonic vehicles. This problem is challenging due to strong coupling between the aerodynamics and the propulsion system, and the uncertainties involved in the vehicle dynamics including parametric uncertainties, unmodeled model uncertainties, and external disturbances. By utilizing the feedback linearization technique, a linear tracking error system is established with prescribed references. For the linear model, a robust controller is proposed based on the signal compensation theory to guarantee that the tracking error dynamics is robustly stable. Numerical simulation results are given to show the advantages of the proposed nonlinear robust control method, compared to the robust loop-shaping control approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
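
    Feedback linearization, the first step of the design described above, can be illustrated on a toy scalar plant (not the paper's hypersonic vehicle model): cancelling the known nonlinearity reduces the tracking error dynamics to a stable linear system.

```python
import numpy as np

# Toy plant dx/dt = x**2 + u tracking x_d(t) = sin(t).
dt, k = 1e-3, 5.0
x = 0.5                                   # initial condition off the reference
err = []
for ti in np.arange(0.0, 10.0, dt):
    xd, xd_dot = np.sin(ti), np.cos(ti)   # reference and its derivative
    e = x - xd
    u = -x**2 + xd_dot - k * e            # cancel the nonlinearity, stabilize e
    x = x + dt * (x**2 + u)               # Euler step; error obeys de/dt = -k*e
    err.append(abs(e))
print(err[0], err[-1])                    # error decays from 0.5 toward ~0
```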

  20. Robust multiperson detection and tracking for mobile service and social robots.

    PubMed

    Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou

    2012-10-01

    This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
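
    The EM-like mean-shift iteration can be sketched as follows (a simplified single-object version with point detections only; the paper additionally integrates local appearance similarity and a sequential multiperson strategy). The E-step soft-assigns detections to the track, and the M-step relocates the track at their weighted mean.

```python
import numpy as np

def em_mean_shift(y0, detections, h=5.0, iters=50):
    """EM-like mean-shift localization over candidate detections."""
    y = np.asarray(y0, dtype=float)
    d = np.asarray(detections, dtype=float)
    for _ in range(iters):
        # E-step: Gaussian-kernel association weights for each detection
        w = np.exp(-np.sum((d - y) ** 2, axis=1) / (2 * h * h))
        # M-step: new position as the weighted mean of the detections
        y_new = (w[:, None] * d).sum(0) / w.sum()
        if np.linalg.norm(y_new - y) < 1e-6:
            break
        y = y_new
    return y
```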

  1. Robust infrared targets tracking with covariance matrix representation

    NASA Astrophysics Data System (ADS)

    Cheng, Jian

    2009-07-01

    Robust infrared target tracking is an important and challenging research topic in many military and security applications, such as infrared imaging guidance, infrared reconnaissance, scene surveillance, etc. To effectively tackle the nonlinear and non-Gaussian state estimation problems, particle filtering is introduced to construct the theory framework of infrared target tracking. Under this framework, the observation probabilistic model is one of main factors for infrared targets tracking performance. In order to improve the tracking performance, covariance matrices are introduced to represent infrared targets with the multi-features. The observation probabilistic model can be constructed by computing the distance between the reference target's and the target samples' covariance matrix. Because the covariance matrix provides a natural tool for integrating multiple features, and is scale and illumination independent, target representation with covariance matrices can hold strong discriminating ability and robustness. Two experimental results demonstrate the proposed method is effective and robust for different infrared target tracking, such as the sensor ego-motion scene, and the sea-clutter scene.
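
    A covariance-descriptor comparison of the kind described can be sketched as follows (a minimal version assuming a grayscale patch and the common feature set [x, y, I, |Ix|, |Iy|]; the generalized-eigenvalue metric is one standard choice of distance between covariance matrices, and the paper's exact feature set may differ):

```python
import numpy as np

def region_covariance(patch):
    """Covariance descriptor of a patch over features [x, y, I, |Ix|, |Iy|]."""
    H, W = patch.shape
    ys, xs = np.mgrid[0:H, 0:W]
    Iy, Ix = np.gradient(patch.astype(float))
    F = np.stack([xs, ys, patch, np.abs(Ix), np.abs(Iy)], axis=-1).reshape(-1, 5)
    return np.cov(F, rowvar=False)

def cov_distance(C1, C2):
    """Distance from the squared log generalized eigenvalues of (C1, C2)."""
    lam = np.linalg.eigvals(np.linalg.solve(C2, C1)).real
    return np.sqrt(np.sum(np.log(lam) ** 2))
```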

  2. Failures of Perception in the Low-Prevalence Effect: Evidence From Active and Passive Visual Search

    PubMed Central

    Hout, Michael C.; Walenchok, Stephen C.; Goldinger, Stephen D.; Wolfe, Jeremy M.

    2017-01-01

    In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%–34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled. PMID:25915073

  3. Robust visual object tracking with interleaved segmentation

    NASA Astrophysics Data System (ADS)

    Abel, Peter; Kieritz, Hilke; Becker, Stefan; Arens, Michael

    2017-10-01

    In this paper we present a new approach for tracking non-rigid, deformable objects by merging an on-line boosting-based tracker and a fast foreground/background segmentation. We extend an on-line boosting-based tracker that uses axis-aligned bounding boxes with a fixed aspect ratio as tracking states. By constructing a confidence map from the on-line boosting-based tracker and unifying it with a confidence map obtained from a foreground/background segmentation algorithm, we build a superior confidence map. To construct a rough confidence map of a new frame based on on-line boosting, we employ the responses of the strong classifier as well as the individual weak classifier responses built during the updating step. This confidence map provides a rough estimate of the object's position and dimensions. To refine it, we build a fine, pixel-wise segmented confidence map and merge both maps. Our segmentation method is color-histogram-based and provides fine and fast image segmentation. By means of back-projection and Bayes' rule, we obtain a confidence value for every pixel. The rough and fine confidence maps are merged by building an adaptively weighted sum of both maps, with weights obtained from the variances of the two confidence maps. Further, we apply morphological operators to the merged confidence map to reduce noise. In the resulting map we estimate the object's location and dimensions via continuous adaptive mean shift. Our approach provides a rotated rectangle as the tracking state, which enables a more precise description of non-rigid, deformable objects than axis-aligned bounding boxes. We evaluate our tracker on the visual object tracking (VOT) benchmark dataset 2016.
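
    The adaptively weighted merging of the two confidence maps might look like the following sketch. The weighting rule here, each map weighted by its own variance, is an assumption for illustration; the abstract says only that the weights are obtained from the variances of both maps.

```python
import numpy as np

def fuse_confidence(rough, fine, eps=1e-9):
    """Adaptively weighted sum of two confidence maps; a nearly flat map
    (low variance, weakly discriminative) receives a near-zero weight."""
    v1, v2 = rough.var() + eps, fine.var() + eps
    w1 = v1 / (v1 + v2)            # assumed rule: weight grows with variance
    return w1 * rough + (1.0 - w1) * fine
```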

  4. Model reference tracking control of an aircraft: a robust adaptive approach

    NASA Astrophysics Data System (ADS)

    Tanyer, Ilker; Tatlicioglu, Enver; Zergeroglu, Erkan

    2017-05-01

    This work presents the design and the corresponding analysis of a nonlinear robust adaptive controller for model reference tracking of an aircraft that has parametric uncertainties in its system matrices and additive state- and/or time-dependent nonlinear disturbance-like terms in its dynamics. Specifically, a robust integral of the sign of the error feedback term and an adaptive term are fused with a proportional-integral controller. Lyapunov-based stability analysis techniques are utilised to prove global asymptotic convergence of the output tracking error. Extensive numerical simulations are presented to illustrate the performance of the proposed robust adaptive controller.

  5. Visualization of the Invisible, Explanation of the Unknown, Ruggedization of the Unstable: Sensitivity Analysis, Virtual Tryout and Robust Design through Systematic Stochastic Simulation

    NASA Astrophysics Data System (ADS)

    Zwickl, Titus; Carleer, Bart; Kubli, Waldemar

    2005-08-01

    In the past decade, sheet metal forming simulation became a well established tool to predict the formability of parts. In the automotive industry, this has enabled significant reduction in the cost and time for vehicle design and development, and has helped to improve the quality and performance of vehicle parts. However, production stoppages for troubleshooting and unplanned die maintenance, as well as production quality fluctuations, continue to plague manufacturing cost and time. The focus therefore has shifted in recent times beyond mere feasibility to robustness of the product and process being engineered. Ensuring robustness is the next big challenge for the virtual tryout / simulation technology. We introduce new methods, based on systematic stochastic simulations, to visualize the behavior of the part during the whole forming process — in simulation as well as in production. Sensitivity analysis explains the response of the part to changes in influencing parameters. Virtual tryout allows quick exploration of changed designs and conditions. Robust design and manufacturing guarantees quality and process capability for the production process. While conventional simulations helped to reduce development time and cost by ensuring feasible processes, robustness engineering tools have the potential for far greater cost and time savings. Through examples we illustrate how expected and unexpected behavior of deep drawing parts may be tracked down, identified and assigned to the influential parameters. With this knowledge, defects can be eliminated or springback compensated, for example; the response of the part to uncontrollable noise can be predicted and minimized. The newly introduced methods enable more reliable and predictable stamping processes in general.
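
    The flavor of such stochastic sensitivity analysis can be conveyed with a toy Monte Carlo sketch. The response function and the parameter names below are invented stand-ins for a real forming simulation; the point is only how sampling plus correlation ranks influential versus non-influential parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
# Hypothetical scattering process parameters (illustrative only)
blank_holder_force = rng.normal(100.0, 5.0, n)   # kN
friction = rng.normal(0.10, 0.01, n)
lube_batch = rng.normal(0.0, 1.0, n)             # assumed non-influential

# Toy response standing in for a forming simulation: max thinning [%]
thinning = 0.05 * blank_holder_force + 40.0 * friction + rng.normal(0, 0.2, n)

# Rank parameters by correlation with the response
for name, xvals in [("blank holder force", blank_holder_force),
                    ("friction", friction), ("lube batch", lube_batch)]:
    r = np.corrcoef(xvals, thinning)[0, 1]
    print(f"{name:20s} r = {r:+.2f}")
```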

  6. The role of visual attention in multiple object tracking: evidence from ERPs.

    PubMed

    Doran, Matthew M; Hoffman, James E

    2010-01-01

    We examined the role of visual attention in the multiple object tracking (MOT) task by measuring the amplitude of the N1 component of the event-related potential (ERP) to probe flashes presented on targets, distractors, or empty background areas. We found evidence that visual attention enhances targets and suppresses distractors (Experiments 1 and 3). However, we also found that when tracking load was light (two targets and two distractors), accurate tracking could be carried out without any apparent contribution from the visual attention system (Experiment 2). Our results suggest that attentional selection during MOT is flexibly determined by task demands as well as tracking load and that visual attention may not always be necessary for accurate tracking.

  7. Visual attention is required for multiple object tracking.

    PubMed

    Tran, Annie; Hoffman, James E

    2016-12-01

    In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Robust tracking and distributed synchronization control of a multi-motor servomechanism with H-infinity performance.

    PubMed

    Wang, Minlin; Ren, Xuemei; Chen, Qiang

    2018-01-01

    The multi-motor servomechanism (MMS) is a multi-variable, highly coupled and nonlinear system, which makes the controller design challenging. In this paper, an adaptive robust H-infinity control scheme is proposed to achieve both the load tracking and multi-motor synchronization of MMS. This control scheme consists of two parts: a robust tracking controller and a distributed synchronization controller. The robust tracking controller is constructed by incorporating a neural network (NN) K-filter observer into the dynamic surface control, while the distributed synchronization controller is designed by combining the mean deviation coupling control strategy with the distributed technique. The proposed control scheme has several merits: 1) by using the mean deviation coupling synchronization control strategy, the tracking controller and the synchronization controller can be designed individually without any coupling problem; 2) the immeasurable states and unknown nonlinearities are handled by a NN K-filter observer, where the number of NN weights is largely reduced by using the minimal learning parameter technique; 3) the H-infinity performances of the tracking error and synchronization error are guaranteed by introducing a robust term into the tracking controller and the synchronization controller, respectively. The stabilities of the tracking and synchronization control systems are analyzed by Lyapunov theory. Simulation and experimental results based on a four-motor servomechanism are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
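
    The error definitions behind mean deviation coupling, as commonly formulated, can be sketched for scalar motor positions (a simplified illustration of merit 1), not the full controller): each motor's tracking error is taken against the load reference, and its synchronization error against the mean of all motor positions.

```python
import numpy as np

def mean_deviation_errors(positions, reference):
    """Tracking errors vs. the reference; synchronization errors vs. the
    mean of all motor positions (mean deviation coupling)."""
    x = np.asarray(positions, dtype=float)
    tracking = x - reference
    sync = x - x.mean()          # deviations from the mean sum to zero
    return tracking, sync
```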

  9. Two visual systems in monitoring of dynamic traffic: effects of visual disruption.

    PubMed

    Zheng, Xianjun Sam; McConkie, George W

    2010-05-01

    Studies from neurophysiology and neuropsychology provide support for two separate object- and location-based visual systems, ventral and dorsal. In the driving context, a study was conducted using a change detection paradigm to explore drivers' ability to monitor the dynamic traffic flow, and the effects of visual disruption on these two visual systems. While driving, a discrete change, such as vehicle location, color, or identity, was occasionally made in one of the vehicles on the road ahead of the driver. Experiment results show that without visual disruption, all changes were detected very well; yet, these equally perceivable changes were disrupted differently by a brief blank display (150 ms): the detection of location changes was especially reduced. The disruption effects were also bigger for the parked vehicle compared to the moving ones. The findings support the different roles for two visual systems in monitoring the dynamic traffic: the "where", dorsal system, tracks vehicle spatiotemporal information on perceptual level, encoding information in a coarse and transient manner; whereas the "what", ventral system, monitors vehicles' featural information, encoding information more accurately and robustly. Both systems work together contributing to the driver's situation awareness of traffic. Benefits and limitations of using the driving simulation are also discussed. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  10. Towards accurate localization: long- and short-term correlation filters for tracking

    NASA Astrophysics Data System (ADS)

    Li, Minglangjun; Tian, Chunna

    2018-04-01

    Visual tracking is a challenging problem, especially using a single model. In this paper, we propose a discriminative correlation filter (DCF) based tracking approach that exploits both the long-term and short-term information of the target, named LSTDCF, to improve the tracking performance. In addition to a long-term filter learned through the whole sequence, a short-term filter is trained using only features extracted from the most recent frames. The long-term filter tends to capture more semantics of the target as more frames are used for training. However, since the target may undergo large appearance changes, features extracted around the target in non-recent frames prevent the long-term filter from locating the target in the current frame accurately. In contrast, the short-term filter learns more spatial details of the target from recent frames but overfits easily. Thus the short-term filter is less robust to cluttered backgrounds and prone to drift. We take advantage of both filters and fuse their response maps to make the final estimation. We evaluate our approach on a widely used benchmark with 100 image sequences and achieve state-of-the-art results.
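
    The long-/short-term response fusion can be sketched with a single-channel, MOSSE-style correlation filter. This is a simplification: the paper's DCF formulation and features are richer, the "long" and "short" training sets here are synthetic stand-ins, and the fusion weight is an assumed constant.

```python
import numpy as np

def train_dcf(f, g, lam=1.0):
    """Closed-form single-channel correlation filter; returns conj(H)
    in the Fourier domain (lam is the regularizer)."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def respond(x, Hc):
    """Spatial response map of a trained filter on image x."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * Hc))

n = 32
ys, xs = np.mgrid[0:n, 0:n]
# desired response: Gaussian peak at the target centre
g = np.exp(-((xs - n // 2) ** 2 + (ys - n // 2) ** 2) / (2 * 2.0 ** 2))

rng = np.random.default_rng(0)
f = rng.normal(size=(n, n))                   # stand-in target appearance
H_long = train_dcf(f, g)                      # "whole sequence" filter
H_short = train_dcf(f + 0.1 * rng.normal(size=(n, n)), g)  # "recent frames"

def fused_response(x, w=0.6):
    """LSTDCF-style fusion: weighted sum of the two response maps."""
    return w * respond(x, H_long) + (1 - w) * respond(x, H_short)
```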

  11. A Robust Inner and Outer Loop Control Method for Trajectory Tracking of a Quadrotor

    PubMed Central

    Xia, Dunzhu; Cheng, Limei; Yao, Yanhong

    2017-01-01

    In order to achieve complicated trajectory tracking of a quadrotor, a geometric inner and outer loop control scheme is presented. The outer loop generates the desired rotation matrix for the inner loop. To improve the response speed and robustness, a geometric sliding mode control (SMC) controller is designed for the inner loop. The outer loop is also designed via SMC. By Lyapunov theory and cascade theory, the closed-loop system stability is guaranteed. Next, the tracking performance is validated by tracking three representative trajectories. Then, the robustness of the proposed control method is illustrated by trajectory tracking in the presence of model uncertainty and disturbances. Subsequently, experiments are carried out to verify the method. In the experiments, ultra-wideband (UWB) is used for indoor positioning, and an extended Kalman filter (EKF) is used to fuse inertial measurement unit (IMU) and UWB measurements. The experimental results show the feasibility of the designed controller in practice. The comparative experiments with PD and PD loop demonstrate the robustness of the proposed control method. PMID:28925984

  12. Multivariable manual control with simultaneous visual and auditory presentation of information. [for improved compensatory tracking performance of human operator

    NASA Technical Reports Server (NTRS)

    Uhlemann, H.; Geiser, G.

    1975-01-01

    Multivariable manual compensatory tracking experiments were carried out in order to determine typical strategies of the human operator and the conditions under which his performance improves when one of the visual displays of the tracking errors is supplemented by an auditory feedback. The tracking error of the exclusively visually displayed system was found to decrease, but not, in general, that of the auditorily supported system. It was therefore concluded that the auditory feedback unloads the visual system of the operator, who can then concentrate on the remaining exclusively visual displays.

  13. An autonomous robot inspired by insect neurophysiology pursues moving features in natural environments

    NASA Astrophysics Data System (ADS)

    Bagheri, Zahra M.; Cazzolato, Benjamin S.; Grainger, Steven; O'Carroll, David C.; Wiederman, Steven D.

    2017-08-01

    Objective. Many computer vision and robotic applications require the implementation of robust and efficient target-tracking algorithms on a moving platform. However, deployment of a real-time system is challenging, even with the computational power of modern hardware. Lightweight and low-powered flying insects, such as dragonflies, track prey or conspecifics within cluttered natural environments, illustrating an efficient biological solution to the target-tracking problem. Approach. We used our recent recordings from ‘small target motion detector’ neurons in the dragonfly brain to inspire the development of a closed-loop target detection and tracking algorithm. This model exploits facilitation, a slow build-up of response to targets which move along long, continuous trajectories, as seen in our electrophysiological data. To test performance in real-world conditions, we implemented this model on a robotic platform that uses active pursuit strategies based on insect behaviour. Main results. Our robot performs robustly in closed-loop pursuit of targets, despite a range of challenging conditions used in our experiments; low contrast targets, heavily cluttered environments and the presence of distracters. We show that the facilitation stage boosts responses to targets moving along continuous trajectories, improving contrast sensitivity and detection of small moving targets against textured backgrounds. Moreover, the temporal properties of facilitation play a useful role in handling vibration of the robotic platform. We also show that the adoption of feed-forward models which predict the sensory consequences of self-movement can significantly improve target detection during saccadic movements. Significance. Our results provide insight into the neuronal mechanisms that underlie biological target detection and selection (from a moving platform), as well as highlight the effectiveness of our bio-inspired algorithm in an artificial visual system.
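
    The facilitation mechanism, a slow build-up of response to targets moving along long, continuous trajectories, can be sketched as a leaky spatial trace (an abstraction for illustration, not the published dragonfly-neuron implementation; kernel width, decay, and gain below are assumed values):

```python
import numpy as np

def run_facilitation(path, size=64, rho=0.8, gain=1.0, sigma=2.0):
    """Sum of responses along a path, each boosted by a facilitation map:
    a leaky trace of past detections, spread by a Gaussian kernel."""
    ys, xs = np.mgrid[0:size, 0:size]
    fac = np.zeros((size, size))
    total = 0.0
    for (x, y) in path:
        base = 1.0                        # unit response to the target
        total += base * (1.0 + fac[y, x])
        blob = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        fac = rho * fac + gain * blob     # slow build-up, slow decay
    return total

smooth = [(x, 32) for x in range(10, 40)]                    # continuous path
rng = np.random.default_rng(0)
jumpy = [tuple(rng.integers(0, 64, 2)) for _ in range(30)]   # scattered path
# a continuous trajectory accumulates a larger facilitated response
print(run_facilitation(smooth), run_facilitation(jumpy))
```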

  14. Behaviorally Relevant Abstract Object Identity Representation in the Human Parietal Cortex

    PubMed Central

    Jeong, Su Keun

    2016-01-01

    The representation of object identity is fundamental to human vision. Using fMRI and multivoxel pattern analysis, here we report the representation of highly abstract object identity information in human parietal cortex. Specifically, in superior intraparietal sulcus (IPS), a region previously shown to track visual short-term memory capacity, we found object identity representations for famous faces varying freely in viewpoint, hairstyle, facial expression, and age; and for well known cars embedded in different scenes, and shown from different viewpoints and sizes. Critically, these parietal identity representations were behaviorally relevant as they closely tracked the perceived face-identity similarity obtained in a behavioral task. Meanwhile, the task-activated regions in prefrontal and parietal cortices (excluding superior IPS) did not exhibit such abstract object identity representations. Unlike previous studies, we also failed to observe identity representations in posterior ventral and lateral visual object-processing regions, likely due to the greater amount of identity abstraction demanded by our stimulus manipulation here. Our MRI slice coverage precluded us from examining identity representation in anterior temporal lobe, a likely region for the computing of identity information in the ventral region. Overall, we show that human parietal cortex, part of the dorsal visual processing pathway, is capable of holding abstract and complex visual representations that are behaviorally relevant. These results argue against a “content-poor” view of the role of parietal cortex in attention. Instead, the human parietal cortex seems to be “content rich” and capable of directly participating in goal-driven visual information representation in the brain. SIGNIFICANCE STATEMENT The representation of object identity (including faces) is fundamental to human vision and shapes how we interact with the world. 
Although object representation has traditionally been associated with human occipital and temporal cortices, here we show, by measuring fMRI response patterns, that a region in the human parietal cortex can robustly represent task-relevant object identities. These representations are invariant to changes in a host of visual features, such as viewpoint, and reflect an abstract level of representation that has not previously been reported in the human parietal cortex. Critically, these neural representations are behaviorally relevant as they closely track the perceived object identities. Human parietal cortex thus participates in the moment-to-moment goal-directed visual information representation in the brain. PMID:26843642

  15. Direct adaptive robust tracking control for 6 DOF industrial robot with enhanced accuracy.

    PubMed

    Yin, Xiuxing; Pan, Li

    2018-01-01

    A direct adaptive robust tracking control is proposed for trajectory tracking of a 6 DOF industrial robot in the presence of parametric uncertainties, external disturbances and uncertain nonlinearities. The controller is designed based on the dynamic characteristics in the working space of the end-effector of the 6 DOF robot. The controller includes a robust control term and a model compensation term that is developed directly from the input reference or desired motion trajectory. A projection-type parametric adaptation law is also designed to compensate for parametric estimation errors in the adaptive robust control. The feasibility and effectiveness of the proposed direct adaptive robust control law and the associated projection-type parametric adaptation law have been comparatively evaluated on two 6 DOF industrial robots. The test results demonstrate that the proposed control maintains the desired trajectory tracking even in the presence of large parametric uncertainties and external disturbances better than a PD controller and a nonlinear controller. The parametric estimates also eventually converge to the real values along with the convergence of the tracking errors, which further validates the effectiveness of the proposed parametric adaptation law. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
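
    A projection-type adaptation law of the general kind described can be sketched for a single scalar parameter. The paper's law operates on the full 6 DOF robot dynamics; the plant, gains, and bounds below are illustrative only.

```python
import numpy as np

def proj_adapt(theta_hat, phi, e, gamma, lo, hi):
    """Gradient adaptation driven by the error e, with the estimate
    projected back onto the known parameter bounds [lo, hi]."""
    return float(np.clip(theta_hat + gamma * phi * e, lo, hi))

# demo: estimate the unknown gain a in y = a*phi from the output error
a, theta = 2.0, 0.0
for t in np.arange(0.0, 50.0, 0.1):
    phi = np.sin(t)                  # persistently exciting regressor
    e = (a - theta) * phi            # output error y - y_hat
    theta = proj_adapt(theta, phi, e, gamma=0.5, lo=-5.0, hi=5.0)
print(theta)                         # converges toward a = 2.0
```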

  16. Intelligent robust control for uncertain nonlinear time-varying systems and its application to robotic systems.

    PubMed

    Chang, Yeong-Chan

    2005-12-01

    This paper addresses the problem of designing adaptive fuzzy-based (or neural network-based) robust controls for a large class of uncertain nonlinear time-varying systems. This class of systems can be perturbed by plant uncertainties, unmodeled perturbations, and external disturbances. A nonlinear H-infinity control technique, incorporated with adaptive control and variable structure control (VSC) techniques, is employed to construct an intelligent robust stabilization controller such that H-infinity control performance is achieved. The problem of robust tracking control design for uncertain robotic systems is employed to demonstrate the effectiveness of the developed robust stabilization control scheme. Therefore, an intelligent robust tracking controller for uncertain robotic systems in the presence of high-degree uncertainties can easily be implemented. Its solution requires only solving a linear algebraic matrix inequality, and satisfactory transient and asymptotic tracking performance is guaranteed. A simulation example is given to confirm the performance of the developed control algorithms.

  17. Sustained multifocal attentional enhancement of stimulus processing in early visual areas predicts tracking performance.

    PubMed

    Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K

    2013-03-20

    Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.

  18. How Visual Search Relates to Visual Diagnostic Performance: A Narrative Systematic Review of Eye-Tracking Research in Radiology

    ERIC Educational Resources Information Center

    van der Gijp, A.; Ravesloot, C. J.; Jarodzka, H.; van der Schaaf, M. F.; van der Schaaf, I. C.; van Schaik, J. P.; ten Cate, Th. J.

    2017-01-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology…

  19. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.

    PubMed

    Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-08-23

    Object tracking is a crucial subfield of computer vision with wide applications in navigation, robotics, military systems, and beyond. In this paper, real-time visualization of 3D point cloud data from a VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. Building on preprocessing, fast ground segmentation, Euclidean clustering of outliers, View Feature Histogram (VFH) feature extraction, object model construction, and search-based matching of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving target in real time. Experimental results on three kinds of scenes, under partial occlusion and interference, different moving speeds, and different trajectories, show that the Kalman filter offers high efficiency while the adaptive particle filter offers high robustness and high precision. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
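    The Kalman filtering stage of such a pipeline reduces to a predict/update loop over a motion model. A minimal 1-D constant-velocity sketch with assumed process and measurement noise values (not the paper's implementation or tuning):

```python
import numpy as np

def kalman_track(measurements, dt=0.1, q=1e-3, r=0.05):
    """Constant-velocity Kalman filter over 1-D position measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition: [pos, vel]
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y                        # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# sphere moving at 2 m/s along one axis; noise-free measurements for clarity
zs = [2.0 * 0.1 * k for k in range(50)]
est = kalman_track(zs)
```

    The same predict/update structure extends to 3D by enlarging the state and observation matrices, which is why the filter is efficient enough for real-time use.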

  20. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor

    PubMed Central

    Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-01-01

    Object tracking is a crucial subfield of computer vision with wide applications in navigation, robotics, military systems, and beyond. In this paper, real-time visualization of 3D point cloud data from a VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. Building on preprocessing, fast ground segmentation, Euclidean clustering of outliers, View Feature Histogram (VFH) feature extraction, object model construction, and search-based matching of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving target in real time. Experimental results on three kinds of scenes, under partial occlusion and interference, different moving speeds, and different trajectories, show that the Kalman filter offers high efficiency while the adaptive particle filter offers high robustness and high precision. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields. PMID:28832520

  1. 78 FR 12825 - Petition for Extension of Waiver of Compliance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-25

    ... the frequency of the required visual track inspections. FRA issued the initial waiver that granted.... SEPTA requests an extension of approval to reduce the frequency of required, visual track inspections... with continuous welded rail. SEPTA proposes to conduct one visual track inspection per week, instead of...

  2. Technical Report of Successful Deployment of Tandem Visual Tracking During Live Laparoscopic Cholecystectomy Between Novice and Expert Surgeon.

    PubMed

    Puckett, Yana; Baronia, Benedicto C

    2016-09-20

    With the recent advances in eye-tracking technology, it is now possible to track surgeons' eye movements while they are engaged in a surgical task or while surgical residents practice their surgical skills. Several studies have compared the eye movements of surgical experts and novices and have developed techniques to assess surgical skill on the basis of eye movements, using simulators and live surgery. None has evaluated simultaneous visual tracking of an expert and a novice during live surgery. Here, we describe a successful simultaneous deployment of visual tracking of an expert and a novice during live laparoscopic cholecystectomy. One expert surgeon and one chief surgical resident at an accredited surgical program in Lubbock, TX, USA performed a live laparoscopic cholecystectomy while simultaneously wearing the visual tracking devices. Their visual attitudes and movements were monitored via video recordings, which were then analyzed for correlation between the expert and the novice. The visual attitudes and movements correlated approximately 85% between the expert surgeon and the chief surgical resident. The surgery was carried out uneventfully, and the data were abstracted with ease. We conclude that simultaneous deployment of visual tracking during live laparoscopic surgery is feasible. More studies and subjects are needed to confirm our results and support further data analysis.

  3. Global motion compensated visual attention-based video watermarking

    NASA Astrophysics Data System (ADS)

    Oakes, Matthew; Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Imperceptibility and robustness are two key but conflicting requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions that degrade the visual quality of the host media. This paper proposes a video watermarking algorithm that strikes a fine balance between imperceptibility and robustness using a motion-compensated, wavelet-based visual attention model (VAM). The proposed VAM includes both spatial and temporal cues for visual saliency: the spatial modeling uses the spatial wavelet coefficients, while the temporal modeling accounts for both local and global motion, yielding a spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm in which a two-level watermarking weighting parameter map is generated from the VAM saliency maps and data are embedded into the host image according to the visual attentiveness of each region. By avoiding high-strength watermarking in visually attentive regions, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms state-of-the-art video visual attention methods in joint saliency detection and low computational complexity. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared with existing watermarking that does not use the VAM. The proposed approach yields visual quality similar to that of low-strength watermarking and robustness similar to that of high-strength watermarking.
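    The core idea, embedding weakly in visually attentive regions and strongly elsewhere, can be sketched with a two-level weighting map. The threshold and strength values below are illustrative assumptions, not the paper's tuned parameters:

```python
import numpy as np

def embed_watermark(host, watermark, saliency, a_low=0.02, a_high=0.10):
    """Two-level additive embedding: low strength where saliency is high."""
    alpha = np.where(saliency > 0.5, a_low, a_high)   # weighting parameter map
    return host + alpha * watermark

host = np.full((4, 4), 0.5)            # toy host image
wm = np.ones((4, 4))                   # toy watermark pattern
saliency = np.zeros((4, 4))
saliency[:2, :] = 1.0                  # top half is visually attentive
marked = embed_watermark(host, wm, saliency)
```

    Distortion in the salient top half (0.02) is five times smaller than in the inattentive bottom half (0.10), which is how the scheme preserves perceived quality where the eye is drawn while keeping overall embedding strength high.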

  4. Biases in rhythmic sensorimotor coordination: effects of modality and intentionality.

    PubMed

    Debats, Nienke B; Ridderikhoff, Arne; de Boer, Betteco J; Peper, C Lieke E

    2013-08-01

    Sensorimotor biases were examined for intentional (tracking task) and unintentional (distractor task) rhythmic coordination. The tracking task involved unimanual tracking of either an oscillating visual signal or the passive movements of the contralateral hand (proprioceptive signal). In both conditions the required coordination patterns (isodirectional and mirror-symmetric) were defined relative to the body midline, and the hands were not visible. For proprioceptive tracking the two patterns did not differ in stability, whereas for visual tracking the isodirectional pattern was performed more stably than the mirror-symmetric pattern. However, when visual feedback about the unimanual hand movements was provided during visual tracking, the isodirectional pattern ceased to be dominant. Together these results indicated that the stability of the coordination patterns did not depend on the modality of the target signal per se, but on the combination of sensory signals that needed to be processed (unimodal vs. cross-modal). The distractor task entailed rhythmic unimanual movements during which a rhythmic visual or proprioceptive distractor signal had to be ignored. The observed biases were similar to those for intentional coordination, suggesting that intentionality did not qualitatively affect the underlying sensorimotor processes. Intentional tracking was characterized by active sensory pursuit: muscle activity in the passively moved arm (proprioceptive tracking) and rhythmic eye movements (visual tracking). Presumably this pursuit afforded predictive information serving the coordination process. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Evaluation of kinesthetic-tactual displays using a critical tracking task

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Miller, D. P.; Gilson, R. D.; Ault, R. T.

    1977-01-01

    The study sought to investigate the feasibility of applying the critical tracking task paradigm to the evaluation of kinesthetic-tactual displays. Four subjects attempted to control a first-order unstable system with a continuously decreasing time constant by using either visual or tactual unidimensional displays. Display aiding was introduced in both modalities in the form of velocity quickening. Visual tracking performance was better than tactual tracking, and velocity aiding improved the critical tracking scores for visual and tactual tracking about equally. The results suggest that the critical task methodology holds considerable promise for evaluating kinesthetic-tactual displays.

  6. Automated 3D trajectory measuring of large numbers of moving particles.

    PubMed

    Wu, Hai Shan; Zhao, Qi; Zou, Danping; Chen, Yan Qiu

    2011-04-11

    The complex dynamics of natural particle systems, such as insect swarms, bird flocks, and fish schools, have attracted great attention from scientists for years. Measuring the 3D trajectory of each individual in a group is vital for quantitative study of their dynamic properties, yet such empirical data are rare, mainly due to the challenges of maintaining the identities of large numbers of individuals with similar visual features and frequent occlusions. We present an automatic and efficient algorithm to track the 3D motion trajectories of large numbers of moving particles using two video cameras. Our method formulates the task as three linear assignment problems (LAPs). For each video sequence, the first LAP obtains 2D tracks of moving targets and maintains target identities in the presence of occlusions; the second matches visually similar targets across the two views via a novel technique named maximum epipolar co-motion length (MECL), which both reduces matching ambiguity and further diminishes the influence of frequent occlusions; the last links 3D track segments into complete trajectories by computing a globally optimal assignment based on temporal and kinematic cues. Experiments on simulated particle swarms with various particle densities validated the accuracy and robustness of the proposed method. As a real-world case, our method successfully acquired the 3D flight paths of a fruit fly (Drosophila melanogaster) group comprising hundreds of freely flying individuals. © 2011 Optical Society of America
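    Each of the three linear assignment problems pairs one set of items (detections, cross-view targets, or track segments) with another at minimum total cost. For a sketch-sized instance, LAP can be solved by brute force over permutations; production code would use the Hungarian algorithm. The cost matrix below is an invented example, e.g. distances between track predictions and new detections:

```python
from itertools import permutations

def solve_lap(cost):
    """Brute-force minimum-cost assignment of rows to columns."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# cost[i][j]: cost of linking track i to detection j
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment, total = solve_lap(cost)   # track 0 -> det 1, track 1 -> det 0, track 2 -> det 2
```

    Because the assignment is globally optimal rather than greedy, a single ambiguous pairing cannot corrupt the rest of the matching, which is what lets identities survive occlusions.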

  7. Toward semantic-based retrieval of visual information: a model-based approach

    NASA Astrophysics Data System (ADS)

    Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman

    2002-07-01

    This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD): a multidimensional vector in which each element represents the frequency of a unique visual property of an image or region. VCD utilizes predetermined quality dimensions (i.e., feature types and quantization levels) and semantic model templates mined a priori. Both observed visual cues and contextually relevant visual features are proportionally incorporated in VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground-truth samples. Such co-occurrence analysis of visual cues requires transforming a real-valued visual feature vector (e.g., a color histogram or Gabor texture) into a discrete event (e.g., a term in text). Good-features-to-track, the rule of thirds, iterative k-means clustering, and tree-structured vector quantization (TSVQ) are involved in transforming feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since sparse sample data makes direct frequency estimation of visual cues unstable. The proposed method naturally allows the integration of heterogeneous visual, temporal, or spatial cues in a single classification or matching framework and can easily be integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.
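    The quantization step, mapping real-valued feature vectors to discrete "visual terms" and counting term frequencies, can be sketched as nearest-centroid coding against a codebook. The 2-D codebook and feature vectors below are toy values standing in for clustered image features:

```python
import numpy as np

def visual_term_histogram(features, codebook):
    """Map each feature vector to its nearest codebook centroid (a "visual
    term") and count term frequencies, bag-of-visual-words style."""
    # pairwise squared distances, shape (n_features, n_terms)
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    terms = d.argmin(axis=1)                  # index of nearest visual term
    return np.bincount(terms, minlength=len(codebook))

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])        # e.g. k-means centroids
features = np.array([[0.1, -0.1], [0.9, 1.2], [1.1, 0.8]])
hist = visual_term_histogram(features, codebook)
```

    The resulting term-frequency vector is the kind of discrete representation on which co-occurrence and correlation analysis, as described above, becomes possible.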

  8. Encoding color information for visual tracking: Algorithms and benchmark.

    PubMed

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis of several issues, including the behavior of various combinations of color model and visual tracker, the degree of difficulty of each sequence, and how different challenge factors affect tracking performance. We expect the study to provide guidance, motivation, and a benchmark for future work on encoding color in visual tracking.

  9. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    PubMed

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., a vehicle or robot) from image information and is a key component of autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function over a set of feature points is created by establishing the mathematical relationship between optical flow, depth, and camera ego-motion parameters through the camera's 3D motion and planar imaging model. The six motion parameters are then computed by minimizing this objective function with the iterative Levenberg-Marquardt method. A key point for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, feature points and their optical flows are initially detected using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then applied to remove outliers caused by KLT mismatches, and a spatial position constraint filters out moving points from the detected point set. The Random Sample Consensus (RANSAC) algorithm further refines the feature point set, eliminating the effects of remaining outliers. The surviving points are tracked to estimate the ego-motion parameters in subsequent frames. The approach is tested on real traffic videos, and the results demonstrate its robustness and precision.
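    The RANSAC refinement stage can be illustrated with a minimal one-parameter model: repeatedly fit from a random sample, count inliers within a tolerance, and keep the largest consensus set. This toy line-through-origin fit stands in for the full 6-DoF ego-motion objective; the data and tolerance are invented:

```python
import random

def ransac_slope(points, iters=100, tol=0.2, seed=1):
    """Estimate slope a of y = a*x, robust to outliers."""
    rng = random.Random(seed)
    best_a, best_inliers = None, -1
    for _ in range(iters):
        x, y = rng.choice(points)          # minimal sample: one point
        a = y / x
        inliers = sum(1 for px, py in points if abs(py - a * px) < tol)
        if inliers > best_inliers:         # keep the largest consensus set
            best_a, best_inliers = a, inliers
    return best_a

inliers = [(float(x), 2.0 * x) for x in range(1, 9)]   # true model: y = 2x
outliers = [(2.0, 9.0), (5.0, -3.0)]                   # e.g. flow on moving objects
slope = ransac_slope(inliers + outliers)
```

    In the paper's setting the "outliers" are flow vectors on independently moving objects or mismatched features; rejecting them before the Levenberg-Marquardt minimization is what keeps the ego-motion estimate stable.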

  10. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera

    PubMed Central

    Ci, Wenyan; Huang, Yingping

    2016-01-01

    Visual odometry estimates the ego-motion of an agent (e.g., a vehicle or robot) from image information and is a key component of autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function over a set of feature points is created by establishing the mathematical relationship between optical flow, depth, and camera ego-motion parameters through the camera's 3D motion and planar imaging model. The six motion parameters are then computed by minimizing this objective function with the iterative Levenberg-Marquardt method. A key point for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, feature points and their optical flows are initially detected using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then applied to remove outliers caused by KLT mismatches, and a spatial position constraint filters out moving points from the detected point set. The Random Sample Consensus (RANSAC) algorithm further refines the feature point set, eliminating the effects of remaining outliers. The surviving points are tracked to estimate the ego-motion parameters in subsequent frames. The approach is tested on real traffic videos, and the results demonstrate its robustness and precision. PMID:27763508

  11. Decontaminate feature for tracking: adaptive tracking via evolutionary feature subset

    NASA Astrophysics Data System (ADS)

    Liu, Qiaoyuan; Wang, Yuru; Yin, Minghao; Ren, Jinchang; Li, Ruizhi

    2017-11-01

    Although various visual tracking algorithms have been proposed in the last 2-3 decades, effective tracking under fast motion, deformation, occlusion, etc. remains challenging. Under complex tracking conditions, most tracking models are not discriminative and adaptive enough. When combined feature vectors are input to the visual models, redundancy may cause low efficiency and ambiguity may cause poor performance. An effective tracking algorithm is proposed to decontaminate features for each video sequence adaptively, where the visual modeling is treated as an optimization problem from the perspective of evolution. Each feature vector is treated as a biological individual and then decontaminated via classical evolutionary algorithms. With the optimized subsets of features, the "curse of dimensionality" is avoided while the accuracy of the visual model is improved. The proposed algorithm has been tested on several publicly available datasets with various tracking challenges and benchmarked against a number of state-of-the-art approaches. Comprehensive experiments have demonstrated the efficacy of the proposed methodology.
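    The evolutionary decontamination idea, treating a feature subset as an individual, mutating it, and keeping fitter variants, can be sketched as a (1+1) evolutionary algorithm. The per-feature relevance scores and the size penalty below are invented for illustration, not the paper's fitness function:

```python
import random

RELEVANCE = [0.9, 0.8, 0.7, 0.1, 0.05]   # hypothetical per-feature relevance

def fitness(mask):
    """Toy fitness: summed relevance of kept features, penalizing subsets
    larger than 3 to discourage redundant dimensions."""
    kept = [r for r, m in zip(RELEVANCE, mask) if m]
    return sum(kept) - 0.5 * max(0, len(kept) - 3)

def evolve_subset(n_generations=1000, seed=0):
    """(1+1) EA: flip one bit per generation, keep the fitter subset."""
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in RELEVANCE]
    for _ in range(n_generations):
        child = mask[:]
        i = rng.randrange(len(child))
        child[i] = not child[i]              # bit-flip mutation
        if fitness(child) >= fitness(mask):
            mask = child                     # survival of the fitter subset
    return mask

best = evolve_subset()
```

    The search converges on the three high-relevance features and drops the contaminated ones, shrinking the feature space the visual model must fit.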

  12. Unsupervised real-time speaker identification for daily movies

    NASA Astrophysics Data System (ADS)

    Li, Ying; Kuo, C.-C. Jay

    2002-07-01

    The problem of identifying speakers for movie content analysis is addressed in this paper. While most previous work on speaker identification was carried out in a supervised mode using pure audio data, more robust results can be obtained in real time by integrating knowledge from multiple media sources in an unsupervised mode. In this work, both audio and visual cues are employed and subsequently combined in a probabilistic framework to identify speakers. In particular, audio information is used to identify speakers with a maximum likelihood (ML)-based approach, while visual information is adopted to distinguish speakers by detecting and recognizing their talking faces, based on face detection/recognition and mouth tracking techniques. Moreover, to accommodate speakers' acoustic variations over time, we update their models on the fly by adapting to their newly contributed speech data. Encouraging results have been achieved through extensive experiments, which show a promising future for the proposed audiovisual-based unsupervised speaker identification system.

  13. An Immersive VR System for Sports Education

    NASA Astrophysics Data System (ADS)

    Song, Peng; Xu, Shuhong; Fong, Wee Teck; Chin, Ching Ling; Chua, Gim Guan; Huang, Zhiyong

    The development of new technologies has undoubtedly promoted the advances of modern education, among which Virtual Reality (VR) technologies have made education more visually accessible for students. However, classroom education has been the focus of VR applications, whereas little research has been done on promoting sports education with VR technologies. In this paper, an immersive VR system is designed and implemented to create a more intuitive and visual way of teaching tennis. A scalable system architecture is proposed in addition to the hardware setup layout, which can be used for various immersive interactive applications such as architecture walkthroughs, military training simulations, other sports game simulations, interactive theaters, and telepresent exhibitions. A realistic interaction experience is achieved through accurate and robust hybrid tracking technology, while the virtual human opponent is animated in real time using shader-based skin deformation. Potential future extensions are also discussed to improve the teaching/learning experience.

  14. An Automated Mouse Tail Vascular Access System by Vision and Pressure Feedback.

    PubMed

    Chang, Yen-Chi; Berry-Pusey, Brittany; Yasin, Rashid; Vu, Nam; Maraglia, Brandon; Chatziioannou, Arion X; Tsao, Tsu-Chin

    2015-08-01

    This paper develops an automated vascular access system (A-VAS) with novel vision-based vein and needle detection methods and real-time pressure feedback for murine drug delivery. Mouse tail vein injection is a routine but critical step for preclinical imaging applications. Due to the small vein diameter and external disturbances such as tail hair, pigmentation, and scales, identifying vein location is difficult and manual injections usually result in poor repeatability. To improve the injection accuracy, consistency, safety, and processing time, A-VAS was developed to overcome difficulties in vein detection noise rejection, robustness in needle tracking, and visual servoing integration with the mechatronics system.

  15. Do Multielement Visual Tracking and Visual Search Draw Continuously on the Same Visual Attention Resources?

    ERIC Educational Resources Information Center

    Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.

    2005-01-01

    Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…

  16. Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Liu, Hua; Wu, Wen

    2017-03-31

    A conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named the strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain higher accuracy than the cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing a strong tracking filter (STF) into the SSRCKF and modifying the predicted state error covariance with a time-varying fading factor, the gain matrix is adjusted online so that the robustness of the filter and its capability to deal with uncertainty are improved. In this way, the proposed algorithm combines the STF's strong robustness with the SSRCKF's high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm achieves better estimation accuracy and greater robustness for maneuvering target tracking.
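    The fading-factor mechanism can be illustrated with a scalar random-walk filter: when the squared innovation exceeds its predicted variance, the predicted covariance is inflated so the gain rises and the filter re-locks onto the abruptly changed state. This is a simplified scalar sketch, not the STF formula used in the paper:

```python
def fading_factor(innovation, expected_var):
    """Simplified fading factor: > 1 when the squared innovation exceeds
    its expected variance, signaling an abrupt state change."""
    return max(1.0, innovation ** 2 / expected_var)

def kf_step(x, P, z, q=0.01, r=1.0):
    """One scalar Kalman step (random-walk model) with covariance fading."""
    x_pred, P_pred = x, P + q                  # predict
    innov = z - x_pred
    lam = fading_factor(innov, P_pred + r)
    P_pred *= lam                              # inflate covariance on mismatch
    K = P_pred / (P_pred + r)                  # gain grows with the fading factor
    return x_pred + K * innov, (1 - K) * P_pred
```

    With a small innovation the factor stays at 1 and the step reduces to a standard Kalman update; after an abrupt jump the inflated covariance pushes the gain toward 1, so the estimate snaps to the new measurement instead of diverging slowly.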

  17. Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking

    PubMed Central

    Liu, Hua; Wu, Wen

    2017-01-01

    A conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named the strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain higher accuracy than the cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing a strong tracking filter (STF) into the SSRCKF and modifying the predicted state error covariance with a time-varying fading factor, the gain matrix is adjusted online so that the robustness of the filter and its capability to deal with uncertainty are improved. In this way, the proposed algorithm combines the STF's strong robustness with the SSRCKF's high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm achieves better estimation accuracy and greater robustness for maneuvering target tracking. PMID:28362347

  18. Technical Report of Successful Deployment of Tandem Visual Tracking During Live Laparoscopic Cholecystectomy Between Novice and Expert Surgeon

    PubMed Central

    Baronia, Benedicto C

    2016-01-01

    With the recent advances in eye-tracking technology, it is now possible to track surgeons' eye movements while they are engaged in a surgical task or while surgical residents practice their surgical skills. Several studies have compared the eye movements of surgical experts and novices and have developed techniques to assess surgical skill on the basis of eye movements, using simulators and live surgery. None has evaluated simultaneous visual tracking of an expert and a novice during live surgery. Here, we describe a successful simultaneous deployment of visual tracking of an expert and a novice during live laparoscopic cholecystectomy. One expert surgeon and one chief surgical resident at an accredited surgical program in Lubbock, TX, USA performed a live laparoscopic cholecystectomy while simultaneously wearing the visual tracking devices. Their visual attitudes and movements were monitored via video recordings, which were then analyzed for correlation between the expert and the novice. The visual attitudes and movements correlated approximately 85% between the expert surgeon and the chief surgical resident. The surgery was carried out uneventfully, and the data were abstracted with ease. We conclude that simultaneous deployment of visual tracking during live laparoscopic surgery is feasible. More studies and subjects are needed to confirm our results and support further data analysis. PMID:27774359

  19. Predicting Visual Distraction Using Driving Performance Data

    PubMed Central

    Kircher, Katja; Ahlstrom, Christer

    2010-01-01

    Behavioral variables are often used as performance indicators (PIs) of visual or internal distraction induced by secondary tasks. The objective of this study is to investigate whether visual distraction can be predicted from driving performance PIs in a naturalistic setting. Visual distraction is here defined by a gaze-based real-time distraction detection algorithm called AttenD. Seven drivers each used an instrumented vehicle for one month in a small-scale field operational test. For each of the visual distraction events detected by AttenD, seven PIs, such as steering wheel reversal rate and throttle hold, were calculated. Corresponding data were also calculated for time periods during which the drivers were classified as attentive. For each PI, means between distracted and attentive states were compared using t-tests for different time-window sizes (2-40 s), and the window width with the smallest resulting p-value was selected as optimal. Based on the optimized PIs, logistic regression was used to predict whether the drivers were attentive or distracted. The logistic regression produced predictions that were 76% correct (sensitivity = 77% and specificity = 76%). The conclusion is that there is a relationship between behavioral variables and visual distraction, but the relationship is not strong enough to accurately predict visual driver distraction. Instead, behavioral PIs are probably best suited as complements to eye-tracking-based algorithms, making them more accurate and robust. PMID:21050615
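    The prediction step, logistic regression from behavioral PIs to an attentive/distracted label, can be sketched with a single toy PI. The steering-reversal values and labels below are invented for illustration, not the study's data:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit w, b for p(distracted) = sigmoid(w*x + b) by stochastic
    gradient ascent on the log-likelihood."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x            # log-likelihood gradient step
            b += lr * (y - p)
    return w, b

# hypothetical PI: normalized steering-wheel reversal rate; label 1 = distracted
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
predict = lambda x: 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5
```

    On real data the feature vector would hold all seven optimized PIs, and the resulting probabilities would be thresholded to yield the reported sensitivity/specificity trade-off.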

  20. Close to real-time robust pedestrian detection and tracking

    NASA Astrophysics Data System (ADS)

    Lipetski, Y.; Loibner, G.; Sidla, O.

    2015-03-01

    Fully automated video-based pedestrian detection and tracking is a challenging task with many practical and important applications. We present our work aimed at robust and simultaneously close to real-time tracking of pedestrians. The presented approach is robust to occlusions and lighting conditions and generalizes to arbitrary video data. The core approach is built on the tracking-by-detection principle. We describe our cascaded HOG detector with successive CNN verification in detail. For the tracking and re-identification task, we conducted an extensive analysis of appearance-based features and their combinations. The tracker was tested on many hours of video data for different scenarios; the results are presented and discussed.
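    The HOG feature underlying the cascaded detector bins gradient orientations within small cells, weighted by gradient magnitude. A minimal sketch of one cell's histogram, using the standard 8×8 cell and 9 unsigned orientation bins (the synthetic edge patch is illustrative):

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Magnitude-weighted orientation histogram of one HOG cell
    (unsigned gradients, orientations folded into [0, pi))."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins, mag)               # each pixel votes with its magnitude
    return hist

cell = np.zeros((8, 8))
cell[:, 4:] = 1.0                            # vertical edge -> horizontal gradient
hist = hog_cell_histogram(cell)              # mass lands in the 0-radian bin
```

    A full descriptor concatenates block-normalized cell histograms over the detection window; the CNN verification stage then prunes the false positives the fast HOG cascade lets through.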

  1. The seam visual tracking method for large structures

    NASA Astrophysics Data System (ADS)

    Bi, Qilin; Jiang, Xiaomin; Liu, Xiaoguang; Cheng, Taobo; Zhu, Yulong

    2017-10-01

    In this paper, a compact and flexible weld seam visual tracking method is proposed. First, because a fixed tracking height can cause interference between the vision device and the work-piece to be welded, a weld vision system with a compact structure and adjustable tracking height is developed. Second, by analyzing the relative spatial pose among the camera, the laser, and the work-piece and applying the theory of relative geometric imaging, a mathematical model relating image feature parameters to the three-dimensional trajectory of the assembly gap to be welded is established. Third, the visual imaging parameters of the line-structured light are optimized through experiments on the weld structure. Fourth, imaging interference arises because the line-structured light scatters in bright metal regions and surface scratches also appear bright; these disturbances seriously degrade computational efficiency. An algorithm based on the human visual attention mechanism is therefore used to extract weld features efficiently and stably. Finally, experiments verify that the proposed compact and flexible weld tracking method achieves a tracking accuracy of 0.5 mm on large structural parts, indicating broad prospects for industrial application.

  2. A distributed database view of network tracking systems

    NASA Astrophysics Data System (ADS)

    Yosinski, Jason; Paffenroth, Randy

    2008-04-01

    In distributed tracking systems, multiple non-collocated trackers cooperate to fuse local sensor data into a global track picture. Generating this global track picture at a central location is fairly straightforward, but the single point of failure and excessive bandwidth requirements introduced by centralized processing motivate the development of decentralized methods. In many decentralized tracking systems, trackers communicate with their peers via a lossy, bandwidth-limited network in which dropped, delayed, and out of order packets are typical. Oftentimes the decentralized tracking problem is viewed as a local tracking problem with a networking twist; we believe this view can underestimate the network complexities to be overcome. Indeed, a subsequent 'oversight' layer is often introduced to detect and handle track inconsistencies arising from a lack of robustness to network conditions. We instead pose the decentralized tracking problem as a distributed database problem, enabling us to draw inspiration from the vast extant literature on distributed databases. Using the two-phase commit algorithm, a well known technique for resolving transactions across a lossy network, we describe several ways in which one may build a distributed multiple hypothesis tracking system from the ground up to be robust to typical network intricacies. We pay particular attention to the dissimilar challenges presented by network track initiation vs. maintenance and suggest a hybrid system that balances speed and robustness by utilizing two-phase commit for only track initiation transactions. Finally, we present simulation results contrasting the performance of such a system with that of more traditional decentralized tracking implementations.
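    The track-initiation idea above can be sketched with the classic two-phase commit protocol: an initiating tracker commits a new track only when every peer votes yes in the prepare phase. All class and method names here are hypothetical, for illustration only:

```python
# Sketch of two-phase commit applied to distributed track initiation:
# phase 1 collects votes from all peer trackers; phase 2 commits the
# track everywhere, or aborts everywhere if any peer refused.

class PeerTracker:
    def __init__(self, name):
        self.name = name
        self.committed = {}   # track_id -> track data
        self.pending = {}     # prepared but not yet committed

    def prepare(self, track_id, track):
        """Phase 1: vote no if this track conflicts with local state."""
        if track_id in self.committed:
            return False
        self.pending[track_id] = track
        return True

    def commit(self, track_id):
        """Phase 2a: make the prepared track permanent."""
        self.committed[track_id] = self.pending.pop(track_id)

    def abort(self, track_id):
        """Phase 2b: discard the prepared track."""
        self.pending.pop(track_id, None)

def initiate_track(peers, track_id, track):
    """Coordinator side: commit only on a unanimous yes vote."""
    votes = [p.prepare(track_id, track) for p in peers]
    if all(votes):
        for p in peers:
            p.commit(track_id)
        return True
    for p in peers:
        p.abort(track_id)
    return False
```

    Reserving this heavier protocol for initiation only, as the authors suggest, keeps routine track maintenance fast while still preventing two trackers from initiating inconsistent tracks for the same target.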

  3. Improved design of constrained model predictive tracking control for batch processes against unknown uncertainties.

    PubMed

    Wu, Sheng; Jin, Qibing; Zhang, Ridong; Zhang, Junfeng; Gao, Furong

    2017-07-01

    In this paper, an improved constrained tracking control design is proposed for batch processes under uncertainties. A new process model that facilitates process state and tracking error augmentation with further additional tuning is first proposed. Then a subsequent controller design is formulated using robust stable constrained MPC optimization. Unlike conventional robust model predictive control (MPC), the proposed method gives the controller design more degrees of tuning freedom so that improved tracking control can be acquired, which is important because uncertainties inevitably exist in practice and cause model/plant mismatches. An injection molding process is introduced to illustrate the effectiveness of the proposed MPC approach in comparison with conventional robust MPC. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
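    The state and tracking-error augmentation mentioned above follows the standard extended state-space construction; in generic notation (illustrative, not necessarily the paper's exact model), for a process $x(k+1)=Ax(k)+Bu(k)$, $y(k)=Cx(k)$ with reference $r(k)$, define the increment $\Delta x(k)=x(k)-x(k-1)$ and tracking error $e(k)=y(k)-r(k)$; the augmented model then evolves as

```latex
z(k) = \begin{bmatrix}\Delta x(k)\\ e(k)\end{bmatrix},\qquad
z(k+1) = \begin{bmatrix} A & 0\\ CA & I \end{bmatrix} z(k)
       + \begin{bmatrix} B\\ CB \end{bmatrix}\Delta u(k)
       - \begin{bmatrix} 0\\ I \end{bmatrix}\Delta r(k+1)
```

    Penalizing $e(k)$ directly in the MPC cost over this augmented state yields offset-free tracking, and the separate weights on $\Delta x$, $e$, and $\Delta u$ are the kind of extra tuning degrees of freedom such augmented designs provide.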

  4. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    NASA Astrophysics Data System (ADS)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multiscale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high throughput wide format video also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms is available to assist the analyst and increase human effectiveness.
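    The spatiotemporal tile-cache idea can be illustrated with a small LRU cache keyed by pyramid level, tile position, and frame time. This is a generic sketch with invented names and capacity, not KOLAM's actual internals:

```python
# Minimal LRU tile cache: tiles are keyed by (pyramid level, x, y, t);
# a hit refreshes recency, a miss loads the tile and may evict the
# least recently used entry. Capacity and keying are assumptions.
from collections import OrderedDict

class TileCache:
    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader          # called on a miss to fetch a tile
        self.tiles = OrderedDict()    # (level, x, y, t) -> tile data

    def get(self, level, x, y, t):
        key = (level, x, y, t)
        if key in self.tiles:
            self.tiles.move_to_end(key)       # mark as recently used
            return self.tiles[key]
        tile = self.loader(*key)              # miss: load from disk/net
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:   # evict least recently used
            self.tiles.popitem(last=False)
        return tile
```

    A dual-cache design along these lines would layer a temporal cache of frames over a per-frame spatial cache of pyramid tiles, so that roam/zoom within a frame and animation across frames each hit their own working set.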

  5. HyMoTrack: A Mobile AR Navigation System for Complex Indoor Environments.

    PubMed

    Gerstweiler, Georg; Vonach, Emanuel; Kaufmann, Hannes

    2015-12-24

    Navigating in unknown big indoor environments with static 2D maps is a challenge, especially when time is a critical factor. In order to provide a mobile assistant, capable of supporting people while navigating in indoor locations, an accurate and reliable localization system is required in almost every corner of the building. We present a solution to this problem through a hybrid tracking system specifically designed for complex indoor spaces, which runs on mobile devices like smartphones or tablets. The developed algorithm only uses the available sensors built into standard mobile devices, especially the inertial sensors and the RGB camera. The combination of multiple optical tracking technologies, such as 2D natural features and features of more complex three-dimensional structures guarantees the robustness of the system. All processing is done locally and no network connection is needed. State-of-the-art indoor tracking approaches use mainly radio-frequency signals like Wi-Fi or Bluetooth for localizing a user. In contrast to these approaches, the main advantage of the developed system is the capability of delivering a continuous 3D position and orientation of the mobile device with centimeter accuracy. This makes it usable for localization and 3D augmentation purposes, e.g. navigation tasks or location-based information visualization.

  6. Optimization of illumination schemes in a head-mounted display integrated with eye tracking capabilities

    NASA Astrophysics Data System (ADS)

    Pansing, Craig W.; Hua, Hong; Rolland, Jannick P.

    2005-08-01

    Head-mounted display (HMD) technologies find a variety of applications in the field of 3D virtual and augmented environments, 3D scientific visualization, as well as wearable displays. While most of the current HMDs use head pose to approximate line of sight, we propose to investigate approaches and designs for integrating eye tracking capability into HMDs from a low-level system design perspective and to explore schemes for optimizing system performance. In this paper, we particularly propose to optimize the illumination scheme, which is a critical component in designing an eye tracking-HMD (ET-HMD) integrated system. An optimal design can improve not only eye tracking accuracy, but also robustness. Using LightTools, we present the simulation of a complete eye illumination and imaging system using an eye model along with multiple near infrared LED (IRLED) illuminators and imaging optics, showing the irradiance variation of the different eye structures. The simulation of dark pupil effects along with multiple 1st-order Purkinje images will be presented. A parametric analysis is performed to investigate the relationships between the IRLED configurations and the irradiance distribution at the eye, and a set of optimal configuration parameters is recommended. The analysis will be further refined by actual eye image acquisition and processing.

  7. HyMoTrack: A Mobile AR Navigation System for Complex Indoor Environments

    PubMed Central

    Gerstweiler, Georg; Vonach, Emanuel; Kaufmann, Hannes

    2015-01-01

    Navigating in unknown big indoor environments with static 2D maps is a challenge, especially when time is a critical factor. In order to provide a mobile assistant, capable of supporting people while navigating in indoor locations, an accurate and reliable localization system is required in almost every corner of the building. We present a solution to this problem through a hybrid tracking system specifically designed for complex indoor spaces, which runs on mobile devices like smartphones or tablets. The developed algorithm only uses the available sensors built into standard mobile devices, especially the inertial sensors and the RGB camera. The combination of multiple optical tracking technologies, such as 2D natural features and features of more complex three-dimensional structures guarantees the robustness of the system. All processing is done locally and no network connection is needed. State-of-the-art indoor tracking approaches use mainly radio-frequency signals like Wi-Fi or Bluetooth for localizing a user. In contrast to these approaches, the main advantage of the developed system is the capability of delivering a continuous 3D position and orientation of the mobile device with centimeter accuracy. This makes it usable for localization and 3D augmentation purposes, e.g. navigation tasks or location-based information visualization. PMID:26712755

  8. Reliable vision-guided grasping

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent,' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that when a lower-level routine fails, the next-higher routine will be called and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
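    The fail-over hierarchy described above reduces to a simple control pattern: try the fastest, most assumption-laden routine first and escalate to slower, more reliable ones on failure. A minimal sketch, with hypothetical routine names:

```python
# Sketch of hierarchical error recovery: vision routines are ordered
# from fastest (strong assumptions, e.g. ROI windows plus motion
# prediction) to slowest and most reliable (e.g. full-frame search).

def locate_with_fallback(routines, frame):
    """Run routines in order; return the first successful result,
    or None if every level of the hierarchy fails."""
    for routine in routines:        # ordered fast -> reliable
        result = routine(frame)
        if result is not None:      # this level's assumptions held
            return result
    return None                     # total failure: caller must re-plan
```

    A plausible ordering for the strut-grasping system would be something like `[roi_window_track, feature_predict_search, full_frame_detect]` (names invented here); the fast path runs almost always, and the expensive verification path is paid only when its cheaper siblings fail.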

  9. Robust memory of where from way back when: evidence from behaviour and visual attention.

    PubMed

    Bauer, Patricia J; Stewart, Rebekah; Sirkin, Ruth E; Larkina, Marina

    2017-09-01

    Retention of events typically exhibits a sharp initial decrease followed by levelling off of forgetting. In an apparent exception to this general rule, college students have robust memory for their own locations in obscured versions of photographs of their entering classes taken during orientation-related activities, whether tested 2 months or 42 months after the event. Experiment 1 of the present research was a test for conceptual replication of this finding in photographs depicting more than twice the number of students (and thus potential distracters). There was no difference in memory accuracy for personal spatial location across retention intervals of 6-30 months. Experiment 2 featured 40-h and 2-month retention intervals, thereby providing a more fine-grained test of the forgetting function. The findings replicated Experiment 1. In Experiment 3, eye-tracking measures of visual attention revealed that participants rapidly fixated their own spatial locations within the photographs, even in the absence of explicit awareness. In all three experiments, memory for temporal features of the orientation activities (e.g., day and time the photograph was taken) followed the typical forgetting function. The findings suggest differential preservation of episodic memory for where relative to other aspects of events and experiences, such as when.

  10. Eye-Tracking in the Study of Visual Expertise: Methodology and Approaches in Medicine

    ERIC Educational Resources Information Center

    Fox, Sharon E.; Faulkner-Jones, Beverly E.

    2017-01-01

    Eye-tracking is the measurement of eye motions and point of gaze of a viewer. Advances in this technology have been essential to our understanding of many forms of visual learning, including the development of visual expertise. In recent years, these studies have been extended to the medical professions, where eye-tracking technology has helped us…

  11. Fast Deep Tracking via Semi-Online Domain Adaptation

    NASA Astrophysics Data System (ADS)

    Li, Xiaoping; Luo, Wenbing; Zhu, Yi; Li, Hanxi; Wang, Mingwen

    2018-04-01

    Deep trackers have demonstrated overwhelming superiority over shallow methods. Unfortunately, they also suffer from low FPS rates. To alleviate this problem, a number of real-time deep trackers have been proposed that remove the online updating procedure for the CNN model. However, the absence of online updating leads to a significant drop in tracking accuracy. In this work, we propose to perform domain adaptation for visual tracking in two stages, transferring information from the visual tracking domain and the instance domain, respectively. In this way, the proposed visual tracker achieves tracking accuracy comparable to state-of-the-art trackers and runs at real-time speed on an ordinary consumer GPU.

  12. The use of microperimetry in assessing visual function in age-related macular degeneration.

    PubMed

    Cassels, Nicola K; Wild, John M; Margrain, Tom H; Chong, Victor; Acton, Jennifer H

    Microperimetry is a novel technique for assessing visual function that appears particularly suitable for age-related macular degeneration (AMD). Compared with standard automated perimetry, microperimetry offers several unique features. It simultaneously images the fundus, incorporates an eye-tracking system to correct the stimulus location for fixation loss, and identifies any preferred retinal loci. We identified 52 articles that met the inclusion criteria for a systematic review of microperimetry in the assessment of visual function in AMD. We discuss microperimetry and AMD in relation to disease severity, structural imaging outcomes, other measures of visual function, and evaluation of the efficacy of surgical and/or medical therapies in clinical trials. The evidence for the use of microperimetry in the functional assessment of AMD is encouraging. Disruptions of the ellipsoid zone band and retinal pigment epithelium are clearly associated with reduced differential light sensitivity despite the maintenance of good visual acuity. Reduced differential light sensitivity is also associated with outer segment thinning and retinal pigment epithelium thickening in early AMD and with both a thickening and a thinning of the whole retina in choroidal neovascularization. Microperimetry, however, lacks the robust diffuse and focal loss age-corrected probability analyses associated with standard automated perimetry, and the technique is currently limited by this omission. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. 49 CFR 214.509 - Required visual illumination and reflective devices for new on-track roadway maintenance machines.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... devices for new on-track roadway maintenance machines. 214.509 Section 214.509 Transportation Other... TRANSPORTATION RAILROAD WORKPLACE SAFETY On-Track Roadway Maintenance Machines and Hi-Rail Vehicles § 214.509 Required visual illumination and reflective devices for new on-track roadway maintenance machines. Each new...

  14. On-patient see-through augmented reality based on visual SLAM.

    PubMed

    Mahmoud, Nader; Grasa, Óscar G; Nicolau, Stéphane A; Doignon, Christophe; Soler, Luc; Marescaux, Jacques; Montiel, J M M

    2017-01-01

    An augmented reality system to visualize a 3D preoperative anatomical model on the intra-operative patient is proposed. The hardware requirement is a commercial tablet-PC equipped with a camera. Thus, no external tracking device nor artificial landmarks on the patient are required. We resort to visual SLAM to provide markerless real-time tablet-PC camera location with respect to the patient. The preoperative model is registered with respect to the patient through 4-6 anchor points. The anchors correspond to anatomical references selected on the tablet-PC screen at the beginning of the procedure. Accurate and real-time preoperative model alignment (approximately 5-mm mean FRE and TRE) was achieved, even when anchors were not visible in the current field of view. The system has been experimentally validated on human volunteers, in vivo pigs and a phantom. The proposed system can be smoothly integrated into the surgical workflow because it: (1) operates in real time, (2) requires minimal additional hardware, only a tablet-PC with a camera, (3) is robust to occlusion, (4) requires minimal interaction from the medical staff.

  15. A Bilateral Advantage for Storage in Visual Working Memory

    ERIC Educational Resources Information Center

    Umemoto, Akina; Drew, Trafton; Ester, Edward F.; Awh, Edward

    2010-01-01

    Various studies have demonstrated enhanced visual processing when information is presented across both visual hemifields rather than in a single hemifield (the "bilateral advantage"). For example, Alvarez and Cavanagh (2005) reported that observers were able to track twice as many moving visual stimuli when the tracked items were presented…

  16. Multi-modal information processing for visual workload relief

    NASA Technical Reports Server (NTRS)

    Burke, M. W.; Gilson, R. D.; Jagacinski, R. J.

    1980-01-01

    The simultaneous performance of two single-dimensional compensatory tracking tasks, one with the left hand and one with the right hand, is discussed. The tracking performed with the left hand was considered the primary task and was performed with a visual display or a quickened kinesthetic-tactual (KT) display. The right-handed tracking was considered the secondary task and was carried out only with a visual display. Although the two primary task displays had afforded equivalent performance in a critical tracking task performed alone, in the dual-task situation the quickened KT primary display resulted in superior secondary visual task performance. Comparisons of various combinations of primary and secondary visual displays in integrated or separated formats indicate that the superiority of the quickened KT display is not simply due to the elimination of visual scanning. Additional testing indicated that quickening per se also is not the immediate cause of the observed KT superiority.

  17. Distributed Time-Varying Formation Robust Tracking for General Linear Multiagent Systems With Parameter Uncertainties and External Disturbances.

    PubMed

    Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang

    2017-05-18

    This paper investigates the time-varying formation robust tracking problems for high-order linear multiagent systems with a leader of unknown control input in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space and track the state trajectory produced by the leader simultaneously. First, a time-varying formation robust tracking protocol with a totally distributed form is proposed utilizing the neighborhood state information. With the adaptive updating mechanism, neither any global knowledge about the communication topology nor the upper bounds of the parameter uncertainties, external disturbances and leader's unknown input are required in the proposed protocol. Then, in order to determine the control parameters, an algorithm with four steps is presented, where feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on the Lyapunov-like analysis theory, it is proved that the formation tracking error can converge to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.
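    The combined objective above can be stated compactly; in generic notation (not necessarily the authors'), with follower states $x_i(t)$, leader state $x_0(t)$, and specified time-varying formation offsets $h_i(t)$, the followers must drive the formation tracking error to zero:

```latex
\xi_i(t) = x_i(t) - h_i(t) - x_0(t), \qquad
\lim_{t\to\infty} \xi_i(t) = 0, \quad i = 1,\dots,N
```

    The Lyapunov-like analysis then only needs to show asymptotic convergence of $\xi_i$ despite the parameter uncertainties, external disturbances, and the leader's unknown input.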

  18. Within-Hemifield Competition in Early Visual Areas Limits the Ability to Track Multiple Objects with Attention

    PubMed Central

    Alvarez, George A.; Cavanagh, Patrick

    2014-01-01

    It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. PMID:25164651

  19. Enhanced robust fractional order proportional-plus-integral controller based on neural network for velocity control of permanent magnet synchronous motor.

    PubMed

    Zhang, Bitao; Pi, YouGuo

    2013-07-01

    The traditional integer order proportional-integral-differential (IO-PID) controller is sensitive to parameter variation and/or external load disturbance of the permanent magnet synchronous motor (PMSM). A fractional order proportional-integral-differential (FO-PID) control scheme based on a robustness tuning method has been proposed to enhance robustness, but that robustness addresses only open-loop gain variation of the controlled plant. In this paper, an enhanced robust fractional order proportional-plus-integral (ERFOPI) controller based on a neural network is proposed. The control law of the ERFOPI controller acts on a fractional order implement function (FOIF) of the tracking error rather than on the tracking error directly, which, according to theoretical analysis, can enhance the robust performance of the system. Tuning rules and approaches, based on phase margin, crossover frequency specification and robustness to gain variation, are introduced to obtain the parameters of the ERFOPI controller, and a neural network algorithm is used to adjust the parameter of the FOIF. Simulation and experimental results show that the method proposed in this paper not only achieves favorable tracking performance but is also robust with regard to external load disturbance and parameter variation. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
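    For context, a fractional-order proportional-plus-integral law generalizes the classical PI controller with a non-integer integration order; in transfer-function form (generic notation, assumed here for illustration):

```latex
C(s) = K_p + \frac{K_i}{s^{\lambda}}, \qquad 0 < \lambda < 2
```

    Setting $\lambda = 1$ recovers the ordinary PI controller; the extra parameter $\lambda$ is precisely what phase-margin and crossover-frequency tuning rules exploit to flatten the open-loop phase and gain robustness to gain variation.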

  20. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke

    PubMed Central

    2011-01-01

    Background: Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task.

    Methods: Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis and fourteen non-impaired healthy control participants tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error.

    Results: Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke.

    Conclusions: Visual distraction decreased participants' effort during a standard robot-assisted movement training task. This effect was greater for the hemiparetic arm, suggesting that the increased demands associated with controlling an affected arm make the motor system more prone to slack when distracted. Providing an alternate sensory channel for feedback, i.e., auditory feedback of tracking error, enabled the participants to simultaneously perform the tracking task and distracter task effectively. Thus, incorporating real-time auditory feedback of performance errors might improve clinical outcomes of robotic therapy systems. PMID:21513561
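    The auditory feedback scheme, a beep whose repetition rate grows with tracking error, can be sketched in a few lines. All parameters here are invented for illustration, not taken from the study:

```python
# Sketch of error-to-beep-rate mapping for auditory feedback: the
# repetition rate rises linearly with tracking error and is clamped
# so the feedback remains interpretable. Parameters are hypothetical.

def beep_rate_hz(error, base_rate=1.0, gain=0.5, max_rate=8.0):
    """Map a non-negative tracking error to a beep repetition rate in Hz."""
    return min(base_rate + gain * max(error, 0.0), max_rate)
```

    A real-time loop would simply recompute `beep_rate_hz` on each control cycle and retrigger the beep at that rate, giving the patient a continuous, non-visual error signal that does not compete with the visual distracter task.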

  1. Expansion of direction space around the cardinal axes revealed by smooth pursuit eye movements.

    PubMed

    Krukowski, Anton E; Stone, Leland S

    2005-01-20

    It is well established that perceptual direction discrimination shows an oblique effect; thresholds are higher for motion along diagonal directions than for motion along cardinal directions. Here, we compare simultaneous direction judgments and pursuit responses for the same motion stimuli and find that both pursuit and perceptual thresholds show similar anisotropies. The pursuit oblique effect is robust under a wide range of experimental manipulations, being largely resistant to changes in trajectory (radial versus tangential motion), speed (10 versus 25 deg/s), directional uncertainty (blocked versus randomly interleaved), and cognitive state (tracking alone versus concurrent tracking and perceptual tasks). Our data show that the pursuit oblique effect is caused by an effective expansion of direction space surrounding the cardinal directions and the requisite compression of space for other directions. This expansion suggests that the directions around the cardinal directions are in some way overrepresented in the visual cortical pathways that drive both smooth pursuit and perception.

  2. Expansion of direction space around the cardinal axes revealed by smooth pursuit eye movements

    NASA Technical Reports Server (NTRS)

    Krukowski, Anton E.; Stone, Leland S.

    2005-01-01

    It is well established that perceptual direction discrimination shows an oblique effect; thresholds are higher for motion along diagonal directions than for motion along cardinal directions. Here, we compare simultaneous direction judgments and pursuit responses for the same motion stimuli and find that both pursuit and perceptual thresholds show similar anisotropies. The pursuit oblique effect is robust under a wide range of experimental manipulations, being largely resistant to changes in trajectory (radial versus tangential motion), speed (10 versus 25 deg/s), directional uncertainty (blocked versus randomly interleaved), and cognitive state (tracking alone versus concurrent tracking and perceptual tasks). Our data show that the pursuit oblique effect is caused by an effective expansion of direction space surrounding the cardinal directions and the requisite compression of space for other directions. This expansion suggests that the directions around the cardinal directions are in some way overrepresented in the visual cortical pathways that drive both smooth pursuit and perception.

  3. A robust approach towards unknown transformation, regional adjacency graphs, multigraph matching, segmentation video frames from unnamed aerial vehicles (UAV)

    NASA Astrophysics Data System (ADS)

    Gohatre, Umakant Bhaskar; Patil, Venkat P.

    2018-04-01

    In computer vision, real-time detection and tracking of multiple objects is an important research field that has gained a lot of attention in recent years for finding non-stationary entities in image sequences. Object detection is the step toward following a moving object in video, and object representation is the step used to track it. Recognizing multiple objects in a video sequence is a challenging task. Image registration has long been used as a basis for detecting moving objects: registration finds correspondences between consecutive frame pairs on the basis of image appearance under rigid and affine transformation. However, image registration is not well suited to handling event occurrences, which can result in potentially missed objects. To address such problems, this paper proposes a novel approach. Video frames are segmented using a region adjacency graph of visual appearance and geometric properties; matching between graph sequences is then performed by multigraph matching, and region labeling is obtained by a proposed graph coloring algorithm that assigns a foreground label to each respective region. The proposed design is robust to unknown transformations, with significant improvement over existing work on real-time detection of moving multiple objects.
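    The region-labeling step can be illustrated with generic greedy coloring of a region adjacency graph, so that no two adjacent segmented regions share a label. This is the textbook greedy algorithm, shown as a sketch, not the paper's exact method:

```python
# Greedy coloring of a region adjacency graph: each region receives
# the smallest label not used by any already-labeled neighbor, so
# adjacent regions always end up with distinct labels.

def color_regions(adjacency):
    """adjacency: dict mapping region id -> set of adjacent region ids.
    Returns a dict mapping region id -> color (non-negative int)."""
    colors = {}
    for region in sorted(adjacency):             # deterministic order
        taken = {colors[n] for n in adjacency[region] if n in colors}
        c = 0
        while c in taken:                        # smallest free color
            c += 1
        colors[region] = c
    return colors
```

    In a tracking context, labels assigned this way keep neighboring foreground regions distinguishable from frame to frame, which is what the multigraph matching stage needs to maintain object identities.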

  4. Establishing the fundamentals for an elephant early warning and monitoring system.

    PubMed

    Zeppelzauer, Matthias; Stoeger, Angela S

    2015-09-04

    The decline of habitat for elephants due to expanding human activity is a serious conservation problem. This has continuously escalated the human-elephant conflict in Africa and Asia. Elephants make extensive use of powerful infrasonic calls (rumbles) that travel distances of up to several kilometers. This makes elephants well-suited for acoustic monitoring because it enables detecting elephants even if they are out of sight. In sight, their distinct visual appearance makes them a good candidate for visual monitoring. We provide an integrated overview of our interdisciplinary project that established the scientific fundamentals for a future early warning and monitoring system for humans who regularly experience serious conflict with elephants. We first draw the big picture of an early warning and monitoring system, then review the developed solutions for automatic acoustic and visual detection, discuss specific challenges and present open future work necessary to build a robust and reliable early warning and monitoring system that is able to operate in situ. We present a method for the automated detection of elephant rumbles that is robust to the diverse noise sources present in situ. We evaluated the method on an extensive set of audio data recorded under natural field conditions. Results show that the proposed method outperforms existing approaches and accurately detects elephant rumbles. Our visual detection method shows that tracking elephants in wildlife videos (of different sizes and postures) is feasible and particularly robust at near distances. From our project results we draw a number of conclusions that are discussed and summarized. We clearly identified the most critical challenges and necessary improvements of the proposed detection methods and conclude that our findings have the potential to form the basis for a future automated early warning system for elephants. 
We conclude that a long-term evaluation of the presented methods in situ using real-time prototypes is the most important next step toward transferring the developed methods into practical use.

  5. Feature-based interference from unattended visual field during attentional tracking in younger and older adults.

    PubMed

    Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman

    2011-02-01

The ability to attend to multiple objects that move in the visual field is important for many aspects of daily functioning. The attentional capacity for such dynamic tracking, however, is highly limited and undergoes age-related decline. Several aspects of the tracking process can influence performance. Here, we investigated effects of feature-based interference from distractor objects that appear in unattended regions of the visual field with a hemifield-tracking task. Younger and older participants performed an attentional tracking task in one hemifield while distractor objects were concurrently presented in the unattended hemifield. Feature similarity between objects in the attended and unattended hemifields as well as motion speed and the number of to-be-tracked objects were parametrically manipulated. The results show that increasing feature overlap leads to greater interference from the unattended visual field. This effect of feature-based interference was only present in the slow speed condition, indicating that the interference is mainly modulated by perceptual demands. High-performing older adults showed an interference effect similar to that of younger adults, whereas low-performing older adults showed poor tracking performance overall.

  6. Human image tracking technique applied to remote collaborative environments

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Suzuki, Gen

    1993-10-01

To support various kinds of collaboration over long distances by using visual telecommunication, it is necessary to transmit visual information related to the participants and topical materials. When people collaborate in the same workspace, they use visual cues such as facial expressions and eye movement. The realization of coexistence in a collaborative workspace requires the support of these visual cues. Therefore, it is important that the facial images be large enough to be useful. During collaborations, especially dynamic collaborative activities such as equipment operation or lectures, the participants often move within the workspace. When people move frequently or over a wide area, the need for automatic human tracking increases. Depending on the person's movement area and the required resolution of the extracted area, we have developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results using a real-time tracking system show that the extracted area tracks the movement of the human head fairly well.

  7. A magnetic tether system to investigate visual and olfactory mediated flight control in Drosophila.

    PubMed

    Duistermars, Brian J; Frye, Mark

    2008-11-21

    It has been clear for many years that insects use visual cues to stabilize their heading in a wind stream. Many animals track odors carried in the wind. As such, visual stabilization of upwind tracking directly aids in odor tracking. But do olfactory signals directly influence visual tracking behavior independently from wind cues? Also, the recent deluge of research on the neurophysiology and neurobehavioral genetics of olfaction in Drosophila has motivated ever more technically sophisticated and quantitative behavioral assays. Here, we modified a magnetic tether system originally devised for vision experiments by equipping the arena with narrow laminar flow odor plumes. A fly is glued to a small steel pin and suspended in a magnetic field that enables it to yaw freely. Small diameter food odor plumes are directed downward over the fly's head, eliciting stable tracking by a hungry fly. Here we focus on the critical mechanics of tethering, aligning the magnets, devising the odor plume, and confirming stable odor tracking.

  8. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  9. Conditional Stochastic Models in Reduced Space: Towards Efficient Simulation of Tropical Cyclone Precipitation Patterns

    NASA Astrophysics Data System (ADS)

    Dodov, B.

    2017-12-01

Stochastic simulation of realistic and statistically robust patterns of Tropical Cyclone (TC) induced precipitation is a challenging task. It is even more challenging in a catastrophe modeling context, where tens of thousands of typhoon seasons need to be simulated in order to provide a complete view of flood risk. Ultimately, one could run a coupled global climate model and regional Numerical Weather Prediction (NWP) model, but this approach is not feasible in the catastrophe modeling context and, most importantly, may not provide TC track patterns consistent with observations. Rather, we propose to leverage NWP output for the observed TC precipitation patterns (in terms of downscaled reanalysis 1979-2015) collected on a Lagrangian frame along the historical TC tracks and reduced to the leading spatial principal components of the data. The reduced data from all TCs is then grouped according to timing, storm evolution stage (developing, mature, dissipating, ETC transitioning) and central pressure and used to build a dictionary of stationary (within a group) and non-stationary (for transitions between groups) covariance models. Provided that the stochastic storm tracks with all the parameters describing the TC evolution are already simulated, a sequence of conditional samples from the covariance models chosen according to the TC characteristics at a given moment in time is concatenated, producing a continuous non-stationary precipitation pattern in a Lagrangian framework. The simulated precipitation for each event is finally distributed along the stochastic TC track and blended with a non-TC background precipitation using a data assimilation technique. The proposed framework provides a means of efficient simulation (10000 seasons simulated in a couple of days) and robust typhoon precipitation patterns consistent with observed regional climate and visually indistinguishable from high resolution NWP output. 
The framework is used to simulate a catalog of 10000 typhoon seasons implemented in a flood risk model for Japan.

  10. Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.

    PubMed

    Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D

    2013-10-01

    Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Within-hemifield competition in early visual areas limits the ability to track multiple objects with attention.

    PubMed

    Störmer, Viola S; Alvarez, George A; Cavanagh, Patrick

    2014-08-27

It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex.
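    The frequency-tagging measurement behind such steady-state responses can be sketched numerically: each stimulus flickers at its own frequency, and the amplitude of the recorded signal at that frequency indexes processing of that stimulus. The sketch below is a toy illustration with made-up tag frequencies, amplitudes, and noise level, not the authors' analysis pipeline.

```python
import numpy as np

def ssvep_amplitude(signal, fs, freq):
    """Amplitude of the spectral component at `freq` (Hz) via the FFT."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n   # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Synthetic 2 s "EEG" trace: target tagged at 12 Hz, distractor at 15 Hz,
# with the attended target's response amplified, plus broadband noise.
fs = 500
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * 12 * t) + 0.8 * np.sin(2 * np.pi * 15 * t)
eeg += 0.3 * rng.standard_normal(t.size)

amp_target = ssvep_amplitude(eeg, fs, 12.0)
amp_distractor = ssvep_amplitude(eeg, fs, 15.0)
print(amp_target > amp_distractor)  # True: larger tagged response for the attended target
```

    Comparing the two tagged amplitudes is, in miniature, the contrast the study draws between attended and unattended stimuli.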

  12. Quantifying Pilot Visual Attention in Low Visibility Terminal Operations

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle K.; Arthur, J. J.; Latorella, Kara A.; Kramer, Lynda J.; Shelton, Kevin J.; Norman, Robert M.; Prinzel, Lawrence J.

    2012-01-01

Quantifying pilot visual behavior allows researchers to determine not only where a pilot is looking and when, but also holds implications for specific behavioral tracking when these data are coupled with flight technical performance. Remote eye tracking systems have been integrated into simulators at NASA Langley with effectively no impact on the pilot environment. This paper discusses the installation and use of a remote eye tracking system. The data collection techniques from a complex human-in-the-loop (HITL) research experiment are discussed, in particular the data reduction algorithms and logic used to transform raw eye tracking data into quantified visual behavior metrics, and the analysis methods used to interpret visual behavior. The findings suggest superior performance for Head-Up Display (HUD) and improved attentional behavior for Head-Down Display (HDD) implementations of Synthetic Vision System (SVS) technologies for low visibility terminal area operations. Keywords: eye tracking, flight deck, NextGen, human machine interface, aviation

  13. Geometric reconstruction using tracked ultrasound strain imaging

    NASA Astrophysics Data System (ADS)

    Pheiffer, Thomas S.; Simpson, Amber L.; Ondrake, Janet E.; Miga, Michael I.

    2013-03-01

The accurate identification of tumor margins during neurosurgery is a primary concern for the surgeon in order to maximize resection of malignant tissue while preserving normal function. The use of preoperative imaging for guidance is standard of care, but tumor margins are not always clear even when contrast agents are used, and so margins are often determined intraoperatively by visual and tactile feedback. Ultrasound strain imaging creates a quantitative representation of tissue stiffness which can be used in real time. The information offered by strain imaging can be placed within a conventional image-guidance workflow by tracking the ultrasound probe and calibrating the image plane, which facilitates interpretation of the data by placing it within a common coordinate space with preoperative imaging. Tumor geometry in strain imaging is then directly comparable to the geometry in preoperative imaging. This paper presents a tracked ultrasound strain imaging system capable of co-registering with preoperative tomograms and of reconstructing a 3D surface using the border of the strain lesion. In a preliminary study using four phantoms with subsurface tumors, tracked strain imaging was registered to preoperative image volumes and tumor surfaces were reconstructed using contours extracted from strain image slices. The volumes of the phantom tumors reconstructed from tracked strain imaging were approximately 1.5 to 2.4 cm3, similar to the CT volumes of 1.0 to 2.3 cm3. Future work will be done to robustly characterize the reconstruction accuracy of the system.

  14. An effective and robust method for tracking multiple fish in video image based on fish head detection.

    PubMed

    Qian, Zhi-Ming; Wang, Shuo Hong; Cheng, Xi En; Chen, Yan Qiu

    2016-06-23

Fish tracking is an important step in video-based analysis of fish behavior. Due to severe body deformation and mutual occlusion of multiple swimming fish, accurate and robust fish tracking from a video image sequence is a highly challenging problem. Current tracking methods based on motion information are not accurate and robust enough to track the waving body and handle occlusion. To better overcome these problems, we propose a multiple fish tracking method based on fish head detection. The shape and gray scale characteristics of the fish image are employed to locate the fish head position. For each detected fish head, we utilize the gray distribution of the head region to estimate the fish head direction. The position and direction information from fish detection are then combined to build a cost function of fish swimming. Based on this cost function, a global optimization method can be applied to associate targets between consecutive frames. Results show that our method can accurately detect the position and direction of fish heads and has good tracking performance for dozens of fish. The proposed method successfully obtains motion trajectories for dozens of fish, providing precise data for systematic analysis of fish behavior.
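    The association step described above — a swimming-cost function built from head position and direction, minimized globally between frames — can be sketched as follows. The cost weighting and the brute-force optimizer are illustrative assumptions, not the paper's exact formulation.

```python
import math
from itertools import permutations

def swim_cost(prev, curr, w_dir=5.0):
    """Cost of matching a detection in frame t to one in frame t+1,
    combining head-position distance and wrapped heading-angle change."""
    (x0, y0, th0), (x1, y1, th1) = prev, curr
    d_pos = math.hypot(x1 - x0, y1 - y0)
    d_dir = abs(math.atan2(math.sin(th1 - th0), math.cos(th1 - th0)))
    return d_pos + w_dir * d_dir

def associate(prev_heads, curr_heads):
    """Globally optimal one-to-one assignment (brute force; fine for a few fish)."""
    best, best_cost = None, math.inf
    for perm in permutations(range(len(curr_heads))):
        cost = sum(swim_cost(prev_heads[i], curr_heads[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best

# Three fish heads as (x, y, heading); frame t+1 detections listed out of order.
frame_t  = [(10, 10, 0.0), (50, 12, 1.5), (30, 40, 3.0)]
frame_t1 = [(31, 41, 3.1), (12, 10, 0.1), (52, 13, 1.4)]
print(associate(frame_t, frame_t1))  # (1, 2, 0): fish 0 -> det 1, fish 1 -> det 2, fish 2 -> det 0
```

    For dozens of fish, the exhaustive search would be replaced by a polynomial-time assignment solver (e.g. the Hungarian algorithm) over the same cost matrix.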

  15. Compressed multi-block local binary pattern for object tracking

    NASA Astrophysics Data System (ADS)

    Li, Tianwen; Gao, Yun; Zhao, Lei; Zhou, Hao

    2018-04-01

Both robustness and real-time performance are very important for object tracking in real environments. Trackers based on deep learning have difficulty meeting real-time requirements, whereas compressive sensing provides technical support for real-time tracking. In this paper, an object is tracked via a multi-block local binary pattern feature. The feature vector is extracted based on the multi-block local binary pattern and compressed via a sparse random Gaussian matrix used as the measurement matrix. Experiments showed that the proposed tracker runs in real time and outperforms existing compressive trackers based on Haar-like features on many challenging video sequences in terms of accuracy and robustness.
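    The feature pipeline described — a multi-block local binary pattern histogram compressed by a sparse random Gaussian measurement matrix — can be sketched roughly as below. The block grid, code definition, sparsity, and projection dimension are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def multiblock_lbp_hist(patch, grid=4):
    """Crude multi-block LBP: compare each block's mean to its 8 neighbours,
    encode the comparisons as one byte per block, and histogram the codes."""
    h, w = patch.shape
    bh, bw = h // grid, w // grid
    means = patch[:grid * bh, :grid * bw].reshape(grid, bh, grid, bw).mean(axis=(1, 3))
    padded = np.pad(means, 1, mode="edge")          # replicate borders
    codes = np.zeros((grid, grid), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[1 + dy:1 + dy + grid, 1 + dx:1 + dx + grid]
        codes |= ((neighbour >= means).astype(int) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist.astype(float)

def sparse_gaussian_matrix(m, n, density=0.1):
    """Sparse random Gaussian measurement matrix for compressive sensing."""
    mask = rng.random((m, n)) < density
    return np.where(mask, rng.standard_normal((m, n)), 0.0)

patch = rng.random((32, 32))                 # stand-in for an image patch
feature = multiblock_lbp_hist(patch)         # 256-D MB-LBP histogram
R = sparse_gaussian_matrix(16, feature.size)
compressed = R @ feature                     # 16-D compressed feature
print(compressed.shape)                      # (16,)
```

    A tracker would compare such compressed vectors between candidate windows and the target model, which is far cheaper than matching the full histograms.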

  16. Non-Static error tracking control for near space airship loading platform

    NASA Astrophysics Data System (ADS)

    Ni, Ming; Tao, Fei; Yang, Jiandong

    2018-01-01

A control scheme based on an internal model with non-static error is presented to address the uncertainty of the near space airship loading platform system. The uncertainty in the tracking table is represented as interval variations in the stability and control derivatives. By formulating the tracking problem of the uncertain system as a robust state-feedback stabilization problem of an augmented system, a sufficient condition for the existence of a robust tracking controller is derived in the form of a linear matrix inequality (LMI). Finally, simulation results show that the new method not only has better anti-jamming performance but also improves the dynamic performance of high-order systems.

  17. Improvement of Hand Movement on Visual Target Tracking by Assistant Force of Model-Based Compensator

    NASA Astrophysics Data System (ADS)

    Ide, Junko; Sugi, Takenao; Nakamura, Masatoshi; Shibasaki, Hiroshi

Human motor control is achieved by appropriate motor commands generated by the central nervous system. Visual target tracking tests are an effective method for analyzing human motor function. We previously examined, in a simulation study, the possibility of improving hand movement on visual target tracking with an additional assistant force. In this study, a method for compensating human hand movement on visual target tracking by adding an assistant force was proposed. The effectiveness of the compensation method was investigated in experiments with four healthy adults. The proposed compensator precisely improved the reaction time, the position error and the variability of the velocity of the human hand. The model-based compensator proposed in this study is constructed using measurement data on visual target tracking for each subject, so the hand-movement properties of different subjects can be reflected in the structure of the compensator. Therefore, the proposed method can potentially be adjusted to the individual properties of patients with various movement disorders caused by brain dysfunction.

  18. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    PubMed

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments without any hardware for head or eye tracking.

  19. Robust multiple cue fusion-based high-speed and nonrigid object tracking algorithm for short track speed skating

    NASA Astrophysics Data System (ADS)

    Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min

    2016-01-01

This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Nonrigid skaters move at high speed, with severe occlusions happening frequently among them, and the camera is panned quickly in order to capture the skaters in a large and dynamic scene. Automatically tracking the skaters and precisely outputting their trajectories is therefore a challenging object tracking task. We employ the global rink information to compensate for camera motion and obtain the global spatial information of the skaters, utilize a random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to label pixels to each skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.

  20. Parallel computation of level set method for 500 Hz visual servo control

    NASA Astrophysics Data System (ADS)

    Fei, Xianfeng; Igarashi, Yasunobu; Hashimoto, Koichi

    2008-11-01

We propose a 2D microorganism tracking system using a parallel level set method and a column parallel vision system (CPV). This system keeps a single microorganism in the middle of the visual field under a microscope by visually servoing an automated stage. We propose a new energy function for the level set method that constrains the amount of light intensity inside the detected object contour in order to control the number of detected objects. This algorithm is implemented in the CPV system, and the computational time for each frame is approximately 2 ms. A tracking experiment of about 25 s is demonstrated. We also demonstrate that a single paramecium can be tracked continuously even when other paramecia appear in the visual field and contact the tracked paramecium.

  1. Robust Arm and Hand Tracking by Unsupervised Context Learning

    PubMed Central

    Spruyt, Vincent; Ledda, Alessandro; Philips, Wilfried

    2014-01-01

    Hand tracking in video is an increasingly popular research field due to the rise of novel human-computer interaction methods. However, robust and real-time hand tracking in unconstrained environments remains a challenging task due to the high number of degrees of freedom and the non-rigid character of the human hand. In this paper, we propose an unsupervised method to automatically learn the context in which a hand is embedded. This context includes the arm and any other object that coherently moves along with the hand. We introduce two novel methods to incorporate this context information into a probabilistic tracking framework, and introduce a simple yet effective solution to estimate the position of the arm. Finally, we show that our method greatly increases robustness against occlusion and cluttered background, without degrading tracking performance if no contextual information is available. The proposed real-time algorithm is shown to outperform the current state-of-the-art by evaluating it on three publicly available video datasets. Furthermore, a novel dataset is created and made publicly available for the research community. PMID:25004155

  2. Intelligent robust tracking control for a class of uncertain strict-feedback nonlinear systems.

    PubMed

    Chang, Yeong-Chan

    2009-02-01

This paper addresses the problem of designing robust tracking controls for a large class of strict-feedback nonlinear systems involving plant uncertainties and external disturbances. The input and virtual input weighting matrices are perturbed by bounded time-varying uncertainties. An adaptive fuzzy-based (or neural-network-based) dynamic feedback tracking controller is developed such that all the states and signals of the closed-loop system are bounded and the trajectory tracking error is as small as possible. First, adaptive approximators with linearly parameterized models are designed, and a partitioned procedure with respect to the developed adaptive approximators is proposed such that the implementation of the fuzzy (or neural network) basis functions depends only on the state variables and not on the tuning approximation parameters. Furthermore, we extend the design to nonlinearly parameterized adaptive approximators. Consequently, the intelligent robust tracking control schemes developed in this paper possess the properties of computational simplicity and easy implementation. Finally, simulation examples are presented to demonstrate the effectiveness of the proposed control algorithms.

  3. Assessing the performance of a motion tracking system based on optical joint transform correlation

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Ben Haj Yahia, N.; Alam, M. S.

    2015-08-01

We present an optimized system specially designed for the tracking and recognition of moving subjects in a confined environment (such as an elderly person living at home). In the first step of our study, we use a VanderLugt correlator (VLC) with adapted pre-processing of the input plane and post-processing of the correlation plane via a nonlinear function, allowing us to make a robust decision. The second step is based on an optical joint transform correlation (JTC) system (NZ-NL-correlation JTC) for achieving improved detection and tracking of moving persons in a confined space. The proposed system has been found to have significantly superior discrimination and robustness capabilities, allowing it to detect an unknown target in an input scene and to determine the target's trajectory when the target is in motion. The system offers robust tracking performance of a moving target in several scenarios, such as rotational variation of input faces. Test results obtained using various real-life video sequences show that the proposed system is particularly suitable for real-time detection and tracking of moving objects.
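    The joint transform correlation at the heart of the second step can be sketched numerically: reference and scene share one input plane, and Fourier-transforming the (nonlinearly modified) joint power spectrum yields a correlation plane whose off-axis peaks reveal a match at the reference/scene separation. This is a minimal sketch with an assumed power-law nonlinearity, not the authors' NZ-NL-JTC implementation.

```python
import numpy as np

def jtc_correlation(reference, scene, k=0.3):
    """Nonlinear joint transform correlation: reference and scene are placed
    side by side in the input plane; the joint power spectrum is raised to a
    power k < 1 (the nonlinearity), and a second Fourier transform gives the
    correlation plane."""
    h, w = reference.shape
    joint = np.zeros((h, 3 * w))
    joint[:, :w] = reference               # reference on the left
    joint[:, 2 * w:] = scene               # scene on the right
    jps = np.abs(np.fft.fft2(joint)) ** 2  # joint power spectrum
    return np.abs(np.fft.fft2(jps ** k))   # correlation plane

rng = np.random.default_rng(1)
ref = rng.random((16, 16))
corr = jtc_correlation(ref, ref)           # scene contains the reference exactly

row0 = corr[0].copy()
row0[0] = 0.0                              # suppress the dominant DC term
# The cross-correlation peak sits at the reference/scene separation
# (2w = 32 pixels, i.e. column 32 or its mirror, column 16, in the 48-wide plane).
print(int(np.argmax(row0)) in (16, 32))    # True
```

    The sub-unity exponent sharpens the correlation peaks relative to the classical (linear) JTC, which is the general motivation for nonlinear JTC variants.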

  4. Integrated direct/indirect adaptive robust motion trajectory tracking control of pneumatic cylinders

    NASA Astrophysics Data System (ADS)

    Meng, Deyuan; Tao, Guoliang; Zhu, Xiaocong

    2013-09-01

This paper studies the precision motion trajectory tracking control of a pneumatic cylinder driven by a proportional-directional control valve. An integrated direct/indirect adaptive robust controller is proposed. The controller employs a physical-model-based indirect-type parameter estimation to obtain reliable estimates of unknown model parameters, and utilises a robust control method with dynamic-compensation-type fast adaptation to attenuate the effects of parameter estimation errors, unmodelled dynamics and disturbances. Due to the use of projection mapping, the robust control law and the parameter adaptation algorithm can be designed separately. Since the system model uncertainties are unmatched, the recursive backstepping technique is adopted to design the robust control law. Extensive comparative experimental results are presented to illustrate the effectiveness of the proposed controller and its performance robustness to parameter variations and sudden disturbances.

  5. Using eye tracking to test for individual differences in attention to attractive faces

    PubMed Central

    Valuch, Christian; Pflüger, Lena S.; Wallner, Bernard; Laeng, Bruno; Ansorge, Ulrich

    2015-01-01

    We assessed individual differences in visual attention toward faces in relation to their attractiveness via saccadic reaction times. Motivated by the aim to understand individual differences in attention to faces, we tested three hypotheses: (a) Attractive faces hold or capture attention more effectively than less attractive faces; (b) men show a stronger bias toward attractive opposite-sex faces than women; and (c) blue-eyed men show a stronger bias toward blue-eyed than brown-eyed feminine faces. The latter test was included because prior research suggested a high effect size. Our data supported hypotheses (a) and (b) but not (c). By conducting separate tests for disengagement of attention and attention capture, we found that individual differences exist at distinct stages of attentional processing but these differences are of varying robustness and importance. In our conclusion, we also advocate the use of linear mixed effects models as the most appropriate statistical approach for studying inter-individual differences in visual attention with naturalistic stimuli. PMID:25698993

  6. Using eye tracking to test for individual differences in attention to attractive faces.

    PubMed

    Valuch, Christian; Pflüger, Lena S; Wallner, Bernard; Laeng, Bruno; Ansorge, Ulrich

    2015-01-01

    We assessed individual differences in visual attention toward faces in relation to their attractiveness via saccadic reaction times. Motivated by the aim to understand individual differences in attention to faces, we tested three hypotheses: (a) Attractive faces hold or capture attention more effectively than less attractive faces; (b) men show a stronger bias toward attractive opposite-sex faces than women; and (c) blue-eyed men show a stronger bias toward blue-eyed than brown-eyed feminine faces. The latter test was included because prior research suggested a high effect size. Our data supported hypotheses (a) and (b) but not (c). By conducting separate tests for disengagement of attention and attention capture, we found that individual differences exist at distinct stages of attentional processing but these differences are of varying robustness and importance. In our conclusion, we also advocate the use of linear mixed effects models as the most appropriate statistical approach for studying inter-individual differences in visual attention with naturalistic stimuli.

  7. Decoding the content of visual short-term memory under distraction in occipital and parietal areas.

    PubMed

    Bettencourt, Katherine C; Xu, Yaoda

    2016-01-01

    Recent studies have provided conflicting accounts regarding where in the human brain visual short-term memory (VSTM) content is stored, with strong univariate fMRI responses being reported in superior intraparietal sulcus (IPS), but robust multivariate decoding being reported in occipital cortex. Given the continuous influx of information in everyday vision, VSTM storage under distraction is often required. We found that neither distractor presence nor predictability during the memory delay affected behavioral performance. Similarly, superior IPS exhibited consistent decoding of VSTM content across all distractor manipulations and had multivariate responses that closely tracked behavioral VSTM performance. However, occipital decoding of VSTM content was substantially modulated by distractor presence and predictability. Furthermore, we found no effect of target-distractor similarity on VSTM behavioral performance, further challenging the role of sensory regions in VSTM storage. Overall, consistent with previous univariate findings, our results indicate that superior IPS, but not occipital cortex, has a central role in VSTM storage.
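    The multivariate decoding referred to here can be illustrated with a toy nearest-centroid decoder on simulated voxel patterns. Real studies use actual fMRI data and more elaborate classifiers, so everything below (pattern sizes, noise level, decoder choice) is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated "fMRI" patterns: 40 trials x 50 voxels per remembered condition.
# Each condition has a distinct mean voxel pattern plus trial-by-trial noise.
n_trials, n_voxels = 40, 50
pattern_a = rng.standard_normal(n_voxels)
pattern_b = rng.standard_normal(n_voxels)
trials_a = pattern_a + 0.8 * rng.standard_normal((n_trials, n_voxels))
trials_b = pattern_b + 0.8 * rng.standard_normal((n_trials, n_voxels))

# Split-half cross-validation with a nearest-centroid decoder.
train_a, test_a = trials_a[:20], trials_a[20:]
train_b, test_b = trials_b[:20], trials_b[20:]
centroid_a, centroid_b = train_a.mean(axis=0), train_b.mean(axis=0)

def decode(trial):
    """Label a trial by the closer training centroid (0 = A, 1 = B)."""
    return int(np.linalg.norm(trial - centroid_b) < np.linalg.norm(trial - centroid_a))

preds = [decode(t) for t in test_a] + [decode(t) for t in test_b]
truth = [0] * 20 + [1] * 20
accuracy = np.mean([p == t for p, t in zip(preds, truth)])
print(accuracy > 0.5)  # True: memory content decoded well above chance
```

    Above-chance cross-validated accuracy in a region is the evidence pattern such studies use to argue that the region carries information about the remembered content.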

  8. Ocular dynamics and visual tracking performance after Q-switched laser exposure

    NASA Astrophysics Data System (ADS)

    Zwick, Harry; Stuck, Bruce E.; Lund, David J.; Nawim, Maqsood

    2001-05-01

    In previous investigations of Q-switched laser retinal exposure in awake, task-oriented non-human primates (NHPs), the threshold for retinal damage occurred well below the threshold for permanent visual function loss. Visual function measures used in those studies involved visual acuity and contrast sensitivity. In the present study, we examine the same relationship for Q-switched laser exposure using a visual performance task whose task dependency involves more parafoveal than foveal retina. NHPs were trained on a visual pursuit motor tracking performance task that required keeping a small HeNe laser spot (0.3 degrees) centered in a slowly moving (0.5 deg/sec) annulus. When NHPs reliably produced visual target tracking efficiencies > 80%, single Q-switched laser exposures (7 nsec) were made coaxially with the line of sight of the moving target. An infrared camera imaged the pupil during exposure to obtain the pupillary response to the laser flash. Retinal images were obtained with a scanning laser ophthalmoscope 3 days post exposure under ketamine and Nembutal anesthesia. Q-switched visible laser exposures at twice the damage threshold produced small (about 50 μm) retinal lesions temporal to the fovea; deficits in NHP visual pursuit tracking were transient, demonstrating full recovery to baseline within a single tracking session. Post-exposure analysis of the pupillary response demonstrated that the exposure flash entered the pupil, followed by a 90-msec refractory period and then a 12% pupillary contraction within 1.5 sec of the onset of laser exposure. At 6 times the morphological damage threshold for 532-nm Q-switched exposure, longer-term losses in NHP pursuit tracking performance were observed. In summary, the threshold for permanent visual performance loss from Q-switched laser exposure appears to be higher than the threshold for retinal injury. Mechanisms of neural plasticity within the retina and at higher visual brain centers may mediate this recovery.

  9. Take a look at the bright side: Effects of positive body exposure on selective visual attention in women with high body dissatisfaction.

    PubMed

    Glashouwer, Klaske A; Jonker, Nienke C; Thomassen, Karen; de Jong, Peter J

    2016-08-01

    Women with high body dissatisfaction look less at their 'beautiful' body parts than at their 'ugly' body parts. This study tested the robustness of this selective viewing pattern and examined the influence of positive body exposure on body-dissatisfied women's attention to 'ugly' and 'beautiful' body parts. In women with high body dissatisfaction (N = 28) and women with low body dissatisfaction (N = 14), eye-tracking was used to assess visual attention towards pictures of their own and other women's bodies. Participants with high body dissatisfaction were randomly assigned to 5 weeks of positive body exposure (n = 15) or a no-treatment condition (n = 13). Attention bias was assessed again after 5 weeks. Body-dissatisfied women looked longer at 'ugly' than at 'beautiful' body parts of themselves and others, while participants with low body dissatisfaction attended equally long to their own and others' 'beautiful' and 'ugly' body parts. Although positive body exposure was very effective in improving participants' body satisfaction, it did not systematically change their viewing pattern. The tendency to preferentially allocate attention towards one's 'ugly' body parts seems to be a robust phenomenon in women with body dissatisfaction. Yet modifying this selective viewing pattern appears not to be a prerequisite for successfully improving body satisfaction via positive body exposure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Studying visual attention using the multiple object tracking paradigm: A tutorial review.

    PubMed

    Meyerhoff, Hauke S; Papenmeier, Frank; Huff, Markus

    2017-07-01

    Human observers are capable of tracking multiple objects among identical distractors based only on their spatiotemporal information. Since the first report of this ability in the seminal work of Pylyshyn and Storm (1988, Spatial Vision, 3, 179-197), multiple object tracking has attracted many researchers. One reason is that the attentional processes studied with the multiple object tracking paradigm are commonly argued to match the attentional processing involved in real-world tasks such as driving or team sports. We argue that multiple object tracking provides a good means to study the broader topic of continuous and dynamic visual attention. Indeed, several (partially contradicting) theories of attentive tracking have been proposed within the almost 30 years since its first report, and a large body of research has been conducted to test these theories. Given the richness and diversity of this literature, the aim of this tutorial review is to provide researchers who are new to the field of multiple object tracking with an overview of the multiple object tracking paradigm, its basic manipulations, and its links to other paradigms investigating visual attention and working memory. Further, we aim to review current theories of tracking as well as their empirical evidence. Finally, we review the state of the art in the most prominent research fields of multiple object tracking and how this research has helped to understand visual attention in dynamic settings.

  11. Before your very eyes: the value and limitations of eye tracking in medical education.

    PubMed

    Kok, Ellen M; Jarodzka, Halszka

    2017-01-01

    Medicine is a highly visual discipline. Physicians from many specialties constantly use visual information in diagnosis and treatment. However, they are often unable to explain how they use this information. Consequently, it is unclear how to train medical students in this visual processing. Eye tracking is a research technique that may offer answers to these open questions, as it enables researchers to investigate such visual processes directly by measuring eye movements. This may help researchers understand the processes that support or hinder a particular learning outcome. In this article, we clarify the value and limitations of eye tracking for medical education researchers. For example, eye tracking can clarify how experience with medical images mediates diagnostic performance and how students engage with learning materials. Furthermore, eye tracking can also be used directly for training purposes by displaying eye movements of experts in medical images. Eye movements reflect cognitive processes, but cognitive processes cannot be directly inferred from eye-tracking data. In order to interpret eye-tracking data properly, theoretical models must always be the basis for designing experiments as well as for analysing and interpreting eye-tracking data. The interpretation of eye-tracking data is further supported by sound experimental design and methodological triangulation. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  12. Visual Contrast Sensitivity Functions Obtained from Untrained Observers Using Tracking and Staircase Procedures. Final Report.

    ERIC Educational Resources Information Center

    Geri, George A.; Hubbard, David C.

    Two adaptive psychophysical procedures (tracking and "yes-no" staircase) for obtaining human visual contrast sensitivity functions (CSF) were evaluated. The procedures were chosen based on their proven validity and the desire to evaluate the practical effects of stimulus transients, since tracking procedures traditionally employ gradual…

  13. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF

    PubMed Central

    Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan

    2016-01-01

    With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local feature representations are selected for image retrieval because SIFT is more robust to changes in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on the Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba, and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
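
    The bag-of-visual-words pipeline underlying this record (quantizing local descriptors against a learned codebook, then comparing the resulting histograms) can be sketched as follows. This is an illustrative toy, not the authors' implementation: `build_codebook` and `bovw_histogram` are hypothetical names, and real SIFT/SURF descriptors (128-D and 64-D, typically extracted with a library such as OpenCV) are assumed to already be available as NumPy arrays.

```python
import numpy as np

def build_codebook(descriptors, k, iters=20, seed=0):
    """Toy k-means codebook ("visual words") from local feature descriptors."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign every descriptor to its nearest visual word
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):            # skip empty clusters
                centers[j] = members.mean(axis=0)
    return centers

def bovw_histogram(descriptors, codebook):
    """Quantize one image's descriptors -> normalized visual-word histogram."""
    d = np.linalg.norm(descriptors[:, None] - codebook[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

    Integrating SIFT and SURF then amounts to concatenating the two per-image histograms (e.g. `np.concatenate([h_sift, h_surf])`) before computing retrieval distances.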

  14. Energy Setting and Visual Outcomes in SMILE: A Retrospective Cohort Study.

    PubMed

    Li, Liuyang; Schallhorn, Julie M; Ma, Jiaonan; Cui, Tong; Wang, Yan

    2018-01-01

    To assess the independent effect of energy setting on postoperative uncorrected distance visual acuity (UDVA) in small incision lenticule extraction (SMILE) and to investigate an optimal energy setting for the 4.5-μm spot-track-distance, which is in wide clinical use. A total of 1,130 eyes were included in a retrospective cohort study from Tianjin Eye Hospital, Tianjin Medical University, from April 2015 to July 2016. Energy settings and baseline characteristics were recorded, and 3-month UDVA was tested by a nurse blinded to the energy settings used. Multiple regression analysis and generalized estimating equations were used to account for the correlation between measurements from the two eyes. The 3-month UDVA (mean ± standard deviation) for 125 to 160 nJ (by 5-nJ increments) was 1.39 ± 0.19, 1.40 ± 0.32, 1.33 ± 0.27, 1.36 ± 0.27, 1.34 ± 0.25, 1.29 ± 0.19, 1.36 ± 0.27, and 1.19 ± 0.22, respectively. Energy was significantly associated with postoperative logMAR UDVA in different models, and the regression coefficient (β) was robust (β = 0.01, 95% confidence interval = 0.00 to 0.01). The regression coefficient β (0.01, 95% confidence interval = 0.00 to 0.02, P = .0029) of energy (125 to 150 nJ, by 5-nJ increments) at the 4.5-μm spot-track-distance remained associated with logMAR UDVA when adjusted for sex, age, myopia, astigmatism, mean keratometry, central corneal thickness, preoperative logMAR CDVA, and side spot-track-distance. The lower end of the energy range studied was associated with better postoperative UDVA in this population. The spot-track-distance of 4.5 μm with 125 nJ energy was the optimal combination within this range. [J Refract Surg. 2018;34(1):11-16.]. Copyright 2018, SLACK Incorporated.

  15. Robust Design of Biological Circuits: Evolutionary Systems Biology Approach

    PubMed Central

    Chen, Bor-Sen; Hsu, Chih-Yuan; Liou, Jing-Jia

    2011-01-01

    Artificial gene circuits have been proposed to be embedded into microbial cells, where they function as switches, timers, oscillators, and Boolean logic gates. Building more complex systems from these basic gene circuit components is one key advance for biologic circuit design and synthetic biology. However, the behavior of bioengineered gene circuits remains unstable and uncertain. In this study, a nonlinear stochastic system is proposed to model biological systems with intrinsic parameter fluctuations and environmental molecular noise from the cellular context in the host cell. Based on an evolutionary systems biology algorithm, the design parameters of target gene circuits can evolve to specific values in order to robustly track a desired biologic function in spite of intrinsic and environmental noise. The fitness function is selected to be inversely proportional to the tracking error, so that the evolutionary biological circuit can achieve optimal tracking, mimicking the evolutionary process of a gene circuit. Finally, several design examples are given in silico with Monte Carlo simulation to illustrate the design procedure and to confirm the robust performance of the proposed design method. The results show that the designed gene circuits can robustly track desired behaviors with minimal error even in the presence of nontrivial intrinsic and external noise. PMID:22187523

  16. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
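
    The tension this record describes, a large adaptive gain speeding up tracking at the risk of oscillation, can be illustrated with a scalar model-reference adaptive controller. The sketch below uses the classic sigma-modification as a stand-in damping term; it is not the paper's optimal control modification, and the plant numbers are made up for illustration.

```python
import numpy as np

def mrac_track(a=1.0, b=1.0, am=-2.0, bm=2.0, gamma=10.0, sigma=0.01,
               dt=1e-3, T=10.0):
    """Scalar model-reference adaptive control with sigma-modification damping.

    Plant:     dx/dt  = a*x + b*u        (a, b unknown to the controller)
    Reference: dxm/dt = am*xm + bm*r
    Control:   u = kx*x + kr*r, gains adapted from the tracking error e = x - xm.
    gamma is the adaptive gain; the sigma terms damp parameter drift.
    """
    x = xm = kx = kr = 0.0
    r = 1.0  # step reference command
    for _ in range(int(T / dt)):
        e = x - xm
        u = kx * x + kr * r
        # adaptive laws (sign(b) assumed known and positive here)
        kx += dt * (-gamma * x * e - sigma * kx)
        kr += dt * (-gamma * r * e - sigma * kr)
        # Euler-integrate plant and reference model
        x += dt * (a * x + b * u)
        xm += dt * (am * xm + bm * r)
    return x, xm, kx, kr
```

    With these numbers the reference state settles at -bm/am = 1.0 and the plant state tracks it closely; raising `gamma` further speeds the transient but makes the gain trajectories increasingly oscillatory, which is the behavior the paper's modification is designed to suppress.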

  18. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    PubMed

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. The databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross-sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, but did not find a significant effect on perceptual performance. 
Eye tracking literature in radiology indicates several search patterns are related to high levels of expertise, but teaching novices to search as an expert may not be effective. Experimental research is needed to find out which search strategies can improve image perception in learners.

  19. Robust adaptive uniform exact tracking control for uncertain Euler-Lagrange system

    NASA Astrophysics Data System (ADS)

    Yang, Yana; Hua, Changchun; Li, Junpeng; Guan, Xinping

    2017-12-01

    This paper offers a solution to the robust adaptive uniform exact tracking control problem for the uncertain nonlinear Euler-Lagrange (EL) system. An adaptive finite-time tracking control algorithm is designed by proposing a novel nonsingular integral terminal sliding-mode surface. Moreover, a new adaptive parameter tuning law is developed that makes use of both the system tracking errors and the adaptive parameter estimation errors. Thus, trajectory tracking and parameter estimation can be achieved simultaneously within a guaranteed time that can be adjusted arbitrarily according to practical demands. Additionally, the control result for the EL system proposed in this paper can be extended easily to high-order nonlinear systems. Finally, a test-bed 2-DOF robot arm is set up to demonstrate the performance of the new control algorithm.

  20. Robust guaranteed cost tracking control of quadrotor UAV with uncertainties.

    PubMed

    Xu, Zhiwei; Nian, Xiaohong; Wang, Haibo; Chen, Yinsheng

    2017-07-01

    In this paper, a robust guaranteed cost controller (RGCC) is proposed for a quadrotor UAV system with uncertainties to address the set-point tracking problem. A sufficient condition for the existence of the RGCC is derived via the Lyapunov stability theorem. The designed RGCC not only guarantees that the whole closed-loop system is asymptotically stable, but also gives the quadratic performance level built for the closed-loop system an upper bound irrespective of all admissible parameter uncertainties. Then, an optimal robust guaranteed cost controller is developed to minimize the upper bound of the performance level. Simulation results verify that the presented control algorithms possess small overshoot and short settling time, with which the quadrotor is able to perform the set-point tracking task well. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Nested Tracking Graphs

    DOE PAGES

    Lukasczyk, Jonas; Weber, Gunther; Maciejewski, Ross; ...

    2017-06-01

    Tracking graphs are a well-established tool in topological analysis to visualize the evolution of components and their properties over time, i.e., when components appear, disappear, merge, and split. However, tracking graphs are limited to a single level threshold, and the graphs may vary substantially even under small changes to the threshold. To examine the evolution of features for varying levels, users have to compare multiple tracking graphs without a direct visual link between them. We propose a novel, interactive, nested graph visualization based on the fact that the tracked superlevel set components for different levels are related to each other through their nesting hierarchy. This approach allows us to set multiple tracking graphs in context to each other and enables users to effectively follow the evolution of components for different levels simultaneously. We show the effectiveness of our approach on datasets from finite pointset methods, computational fluid dynamics, and cosmology simulations.
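
    A hedged, one-dimensional illustration of the per-level building block behind tracking graphs: superlevel set components at consecutive time steps are linked when they overlap spatially, so merges and splits appear as nodes with several incident edges. The paper's nested construction builds such graphs for many levels at once; the helper names below are hypothetical.

```python
import numpy as np

def superlevel_components(f, level):
    """Connected components (half-open index intervals) of {i : f[i] > level}."""
    mask = f > level
    comps, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        if not m and start is not None:
            comps.append((start, i))
            start = None
    if start is not None:
        comps.append((start, len(mask)))
    return comps

def tracking_edges(f_t, f_t1, level):
    """Tracking-graph edges: components at t and t+1 that overlap spatially."""
    a = superlevel_components(f_t, level)
    b = superlevel_components(f_t1, level)
    return [(i, j) for i, (s0, e0) in enumerate(a)
                   for j, (s1, e1) in enumerate(b)
                   if max(s0, s1) < min(e0, e1)]
```

    Two components at time t both overlapping one component at time t+1 would yield two edges into the same node, i.e., a merge event in the tracking graph.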

  2. Emerging applications of eye-tracking technology in dermatology.

    PubMed

    John, Kevin K; Jensen, Jakob D; King, Andy J; Pokharel, Manusheela; Grossman, Douglas

    2018-04-06

    Eye-tracking technology has been used within a multitude of disciplines to provide data linking eye movements to visual processing of various stimuli (e.g., x-rays, situational positioning, printed information, and warnings). Despite the benefits provided by eye-tracking in allowing for the identification and quantification of visual attention, the discipline of dermatology has yet to see broad application of the technology. Notwithstanding dermatologists' heavy reliance upon visual patterns and cues to discriminate between benign and atypical nevi, literature that applies eye-tracking to the study of dermatology is sparse, and literature specific to patient-initiated behaviors, such as skin self-examination (SSE), is largely non-existent. The current article provides a review of eye-tracking research in various medical fields, culminating in a discussion of current applications and advantages of eye-tracking for dermatology research. Copyright © 2018 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.

  3. Robust leader-follower formation tracking control of multiple underactuated surface vessels

    NASA Astrophysics Data System (ADS)

    Peng, Zhou-hua; Wang, Dan; Lan, Wei-yao; Sun, Gang

    2012-09-01

    This paper is concerned with the formation control problem of multiple underactuated surface vessels moving in a leader-follower formation. The formation is achieved by having the follower track a virtual target defined relative to the leader. A robust adaptive target tracking law is proposed using neural network and backstepping techniques. The advantage of the proposed control scheme is that the uncertain nonlinear dynamics caused by Coriolis/centripetal forces, nonlinear damping, unmodeled hydrodynamics, and disturbances from the environment can be compensated by online learning. Based on Lyapunov analysis, the proposed controller guarantees that the tracking errors converge to a small neighborhood of the origin. Simulation results demonstrate the effectiveness of the control strategy.

  4. Visual Attention for Solving Multiple-Choice Science Problem: An Eye-Tracking Analysis

    ERIC Educational Resources Information Center

    Tsai, Meng-Jung; Hou, Huei-Tse; Lai, Meng-Lung; Liu, Wan-Yi; Yang, Fang-Ying

    2012-01-01

    This study employed an eye-tracking technique to examine students' visual attention when solving a multiple-choice science problem. Six university students participated in a problem-solving task to predict occurrences of landslide hazards from four images representing four combinations of four factors. Participants' responses and visual attention…

  5. Frequency encoded auditory display of the critical tracking task

    NASA Technical Reports Server (NTRS)

    Stevenson, J.

    1984-01-01

    The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78-cm, center-marked CRT. The auditory display encoded vertical error as log frequency over a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch one-third octave wide. At asymptote, performance on the critical tracking task was slightly, but significantly, better with the combined display than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of that using the visual mode. Redundant cueing increased both the rate of improvement of tracking performance and the asymptotic performance level, and this enhancement increased with the amount of redundant cueing used. The effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper limit of controllability frequency.

  6. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
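
    The filtering stage this record describes can be sketched with a standard constant-velocity Kalman filter over 2-D image positions. The sketch assumes the registration step has already compensated observer motion, and the model and noise parameters are illustrative, not taken from the paper.

```python
import numpy as np

def kalman_cv_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over 2-D position measurements.

    State is [x, y, vx, vy]; q and r are process and measurement noise scales.
    Returns the filtered state at every time step.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)   # constant-velocity transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)   # we only observe position
    Q, R = q * np.eye(4), r * np.eye(2)
    x = np.zeros(4)
    x[:2] = measurements[0]               # initialize at the first detection
    P = 10.0 * np.eye(4)
    track = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        track.append(x.copy())
    return np.array(track)
```

    The same predict step can also be run without an update to extrapolate a target's global position through short occlusions.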

  7. Common and Innovative Visuals: A sparsity modeling framework for video.

    PubMed

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
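
    A toy decomposition conveys the common/innovation split: a per-pixel median as the 'common frame' and residuals as the 'innovative frames'. The paper instead recovers both components jointly as the sparsest solution via compressed sensing; this sketch only illustrates the modeling idea.

```python
import numpy as np

def common_innovation_split(frames):
    """Split a stack of frames (t, h, w) into a common frame and innovations.

    The per-pixel median is robust to transient foreground, so it approximates
    the visual information shared by all frames; the residuals then capture
    the dynamic behaviour of the scene.
    """
    common = np.median(frames, axis=0)
    innovations = frames - common
    return common, innovations
```

    By construction the split is lossless (`common + innovations` reproduces the input), which mirrors the model's use for tasks like object removal: editing an innovation frame leaves the shared background untouched.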

  8. Storyline Visualizations of Eye Tracking of Movie Viewing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balint, John T.; Arendt, Dustin L.; Blaha, Leslie M.

    Storyline visualizations offer an approach that promises to capture the spatio-temporal characteristics of individual observers and simultaneously illustrate emerging group behaviors. We develop a visual analytics approach to parsing, aligning, and clustering fixation sequences from eye tracking data. Visualization of the results captures the similarities and differences across a group of observers performing a common task. We apply our storyline approach to visualize gaze patterns of people watching dynamic movie clips. Storylines mitigate some of the shortcomings of existent spatio-temporal visualization techniques and, importantly, continue to highlight individual observer behavioral dynamics.

  9. Secondary visual workload capability with primary visual and kinesthetic-tactual displays

    NASA Technical Reports Server (NTRS)

    Gilson, R. D.; Burke, M. W.; Jagacinski, R. J.

    1978-01-01

    Subjects performed a cross-adaptive tracking task with a visual secondary display and either a visual or a quickened kinesthetic-tactual (K-T) primary display. The quickened K-T display resulted in superior secondary task performance. Comparisons of secondary workload capability with integrated and separated visual displays indicated that the superiority of the quickened K-T display was not simply due to the elimination of visual scanning. When subjects did not have to perform a secondary task, there was no significant difference between visual and quickened K-T displays in performing a critical tracking task.

  10. Handheld pose tracking using vision-inertial sensors with occlusion handling

    NASA Astrophysics Data System (ADS)

    Li, Juan; Slembrouck, Maarten; Deboeverie, Francis; Bernardos, Ana M.; Besada, Juan A.; Veelaert, Peter; Aghajan, Hamid; Casar, José R.; Philips, Wilfried

    2016-07-01

    Tracking of a handheld device's three-dimensional (3-D) position and orientation is fundamental to various application domains, including augmented reality (AR), virtual reality, and interaction in smart spaces. Existing systems still offer limited performance in terms of accuracy, robustness, computational cost, and ease of deployment. We present a low-cost, accurate, and robust system for handheld pose tracking using fused vision and inertial data. The integration of measurements from embedded accelerometers reduces the number of unknown parameters in the six-degree-of-freedom pose calculation. The proposed system requires two light-emitting diode (LED) markers to be attached to the device, which are tracked by external cameras through an algorithm robust to illumination changes. Three data fusion methods are proposed: a triangulation-based stereo-vision system, a constraint-based stereo-vision system with occlusion handling, and a triangulation-based multivision system. Real-time demonstrations of the proposed system applied to AR and 3-D gaming are also included. The accuracy of the proposed system is assessed by comparison with data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system achieves high accuracy of a few centimeters in position estimation and a few degrees in orientation estimation.
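
    The triangulation step shared by the stereo-vision variants can be sketched with standard linear (DLT) triangulation from two camera projection matrices. Marker detection, inertial fusion, and occlusion handling are omitted, and the camera matrices in the usage below are illustrative.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two pixel observations.

    P1, P2: 3x4 camera projection matrices; x1, x2: 2-D pixel coordinates of
    the same marker in each view. Returns the 3-D point in world coordinates.
    """
    # each view contributes two linear constraints on the homogeneous point
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # the solution is the right singular vector for the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

    With two LED markers triangulated this way, the device position follows directly and the marker-to-marker direction constrains part of its orientation.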

  11. CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.

    PubMed

    Bray, Mark-Anthony; Carpenter, Anne E

    2015-11-04

    Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.

  12. Intraoperative visualization and assessment of electromagnetic tracking error

    NASA Astrophysics Data System (ADS)

    Harish, Vinyas; Ungi, Tamas; Lasso, Andras; MacDonald, Andrew; Nanji, Sulaiman; Fichtinger, Gabor

    2015-03-01

    Electromagnetic tracking allows for increased flexibility in designing image-guided interventions; however, it is well understood that electromagnetic tracking is prone to error. Visualization and assessment of the tracking error should take place in the operating room with minimal interference with the clinical procedure. The goal was to achieve this ideal in an open-source software implementation in a plug-and-play manner, without requiring programming from the user. We use optical tracking as a ground truth. An electromagnetic sensor and optical markers are mounted onto a stylus device, pivot calibrated for both trackers. Electromagnetic tracking error is defined as the difference in tool-tip position between the electromagnetic and optical readings. Multiple measurements are interpolated into a thin-plate B-spline transform and visualized in real time using 3D Slicer. All tracked devices are used in a plug-and-play manner through the open-source SlicerIGT and PLUS extensions of the 3D Slicer platform. Tracking error was measured multiple times to assess the reproducibility of the method, both with and without ferromagnetic objects placed in the workspace. Results from exhaustive grid sampling and freehand sampling were similar, indicating that a quick freehand sampling is sufficient to detect unexpected or excessive field distortion in the operating room. The software is available as a plug-in for the 3D Slicer platform. Results demonstrate the potential for visualizing electromagnetic tracking error in real time in intraoperative environments during feasibility clinical trials in image-guided interventions.
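The error definition above (difference in tool-tip position between the electromagnetic and optical readings) can be sketched in a few lines. The 4x4 poses, the tip offset, and the 2 mm perturbation below are toy values for illustration only; the real system obtains these from pivot-calibrated trackers.

```python
import numpy as np

def tip_position(T, tip_local):
    """Map a pivot-calibrated tip offset through a 4x4 tracker pose."""
    return (T @ np.append(tip_local, 1.0))[:3]

def em_error(T_em, T_opt, tip_em, tip_opt):
    """Tracking error: Euclidean distance between the EM-reported and
    optically reported tool-tip positions (optical serves as ground truth)."""
    return np.linalg.norm(tip_position(T_em, tip_em) - tip_position(T_opt, tip_opt))

# Toy example: both trackers share the same tip offset; the EM pose is
# perturbed by 2 mm along x, emulating a small field distortion.
tip = np.array([0.0, 0.0, 100.0])   # stylus tip 100 mm along the shaft
T_opt = np.eye(4)
T_em = np.eye(4)
T_em[0, 3] = 2.0                    # hypothetical 2 mm distortion offset
err = em_error(T_em, T_opt, tip, tip)
```

Many such point errors, sampled over the workspace, would then be interpolated into the thin-plate B-spline transform the abstract describes.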

  13. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Wang, A. S.; Otake, Y.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gallia, G. L.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2014-09-01

    An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image + guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image + guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 μGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).

  14. Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.

    PubMed

    Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong

    2016-08-01

    The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.

  15. Rover-based visual target tracking validation and mission infusion

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Steele, Robert D.; Ansar, Adnan I.; Ali, Khaled; Nesnas, Issa

    2005-01-01

    The Mars Exploration Rovers (MER'03), Spirit and Opportunity, represent the state of the art in rover operations on Mars. This paper presents validation experiments of different visual tracking algorithms using the rover's navigation camera.

  16. Slushy weightings for the optimal pilot model. [considering visual tracking task

    NASA Technical Reports Server (NTRS)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.

  17. Nonlinear dynamics support a linear population code in a retinal target-tracking circuit.

    PubMed

    Leonardo, Anthony; Meister, Markus

    2013-10-23

    A basic task faced by the visual system of many organisms is to accurately track the position of moving prey. The retina is the first stage in the processing of such stimuli; the nature of the transformation here, from photons to spike trains, constrains not only the ultimate fidelity of the tracking signal but also the ease with which it can be extracted by other brain regions. Here we demonstrate that a population of fast-OFF ganglion cells in the salamander retina, whose dynamics are governed by a nonlinear circuit, serves to compute the future position of the target over hundreds of milliseconds. The extrapolated position of the target is not found by stimulus reconstruction but is instead computed by a weighted sum of ganglion cell outputs, the population vector average (PVA). The magnitude of PVA extrapolation varies systematically with target size, speed, and acceleration, such that large targets are tracked most accurately at high speeds, and small targets at low speeds, just as is seen in the motion of real prey. Tracking precision reaches the resolution of single photoreceptors, and the PVA algorithm performs more robustly than several alternative algorithms. If the salamander brain uses the fast-OFF cell circuit for target extrapolation as we suggest, the circuit dynamics should leave a microstructure on the behavior that may be measured in future experiments. Our analysis highlights the utility of simple computations that, while not globally optimal, are efficiently implemented and have close to optimal performance over a limited but ethologically relevant range of stimuli.
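The population vector average named in the abstract is simply a firing-rate-weighted mean of the cells' preferred positions. The one-dimensional receptive-field centres and spike counts below are invented for illustration; the paper's decoder operates on recorded fast-OFF ganglion cell responses.

```python
import numpy as np

def population_vector_average(centres, rates):
    """Decode position as the firing-rate-weighted mean of the cells'
    preferred positions (receptive-field centres)."""
    rates = np.asarray(rates, dtype=float)
    centres = np.asarray(centres, dtype=float)
    return (rates @ centres) / rates.sum()

# Toy population along one spatial axis: five cells, symmetric activity
# centred on x = 0, so the PVA readout should sit at the centre.
centres = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # hypothetical RF centres
rates = np.array([0.0, 1.0, 4.0, 1.0, 0.0])      # hypothetical spike counts
x_hat = population_vector_average(centres, rates)
```

Shifting the activity profile asymmetrically shifts `x_hat` accordingly, which is how circuit dynamics that advance the activity wave produce the extrapolated (future) target position.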

  18. Space debris tracking based on fuzzy running Gaussian average adaptive particle filter track-before-detect algorithm

    NASA Astrophysics Data System (ADS)

    Torteeka, Peerapong; Gao, Peng-Qi; Shen, Ming; Guo, Xiao-Zhang; Yang, Da-Tao; Yu, Huan-Huan; Zhou, Wei-Ping; Zhao, You

    2017-02-01

    Although tracking with a passive optical telescope is a powerful technique for space debris observation, it is limited by its sensitivity to dynamic background noise. Traditionally, in the field of astronomy, static background subtraction based on a median image technique has been used to extract moving space objects prior to the tracking operation, as this is computationally efficient. The main disadvantage of this technique is that it is not robust to variable illumination conditions. In this article, we propose an approach for tracking small and dim space debris in the context of a dynamic background via one of the optical telescopes that is part of the space surveillance network project, named the Asia-Pacific ground-based Optical Space Observation System or APOSOS. The approach combines a fuzzy running Gaussian average for robust moving-object extraction with dim-target tracking using a particle-filter-based track-before-detect method. The performance of the proposed algorithm is experimentally evaluated, and the results show that the scheme achieves a satisfactory level of accuracy for space debris tracking.
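A plain (non-fuzzy) running Gaussian average background model, the base of the approach described above, can be sketched as follows. The learning rate, threshold, and initial variance are hypothetical choices; the paper's fuzzy variant and its track-before-detect stage are not reproduced here.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel running Gaussian background model: mean and variance are
    updated with learning rate alpha, and a pixel is flagged as foreground
    when it deviates from the mean by more than k standard deviations."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 25.0)  # assumed initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d = frame - self.mean
        foreground = d * d > (self.k ** 2) * self.var
        # Update the model everywhere so it adapts to slow illumination drift.
        self.mean += self.alpha * d
        self.var = (1 - self.alpha) * self.var + self.alpha * d * d
        return foreground

# Toy frames: a flat background at intensity 10 with one bright "debris" pixel.
bg = RunningGaussianBackground(np.full((4, 4), 10.0))
frame = np.full((4, 4), 10.0)
frame[1, 1] = 200.0
mask = bg.apply(frame)
```

The fuzzy extension in the paper replaces the hard k-sigma decision with a soft membership, which is what makes the extraction robust under a dynamic background.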

  19. The contributions of visual and central attention to visual working memory.

    PubMed

    Souza, Alessandra S; Oberauer, Klaus

    2017-10-01

    We investigated the role of two kinds of attention, visual and central, for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention, visual or central, was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to the maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.

  20. Normal aging delays and compromises early multifocal visual attention during object tracking.

    PubMed

    Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman

    2013-02-01

    Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.

  1. A robust high-order lattice adaptive notch filter and its application to narrowband noise cancellation

    NASA Astrophysics Data System (ADS)

    Kim, Seong-woo; Park, Young-cheol; Seo, Young-soo; Youn, Dae Hee

    2014-12-01

    In this paper, we propose a high-order lattice adaptive notch filter (LANF) that can robustly track multiple sinusoids. Unlike the conventional cascade structure, the proposed high-order LANF has robust tracking characteristics regardless of the frequencies of reference sinusoids and initial notch frequencies. The proposed high-order LANF is applied to a narrowband adaptive noise cancellation (ANC) to mitigate the effect of the broadband disturbance in the reference signal. By utilizing the gradient adaptive lattice (GAL) ANC algorithm and approximately combining it with the proposed high-order LANF, a computationally efficient narrowband ANC system is obtained. Experimental results demonstrate the robustness of the proposed high-order LANF and the effectiveness of the obtained narrowband ANC system.

  2. Fast incorporation of optical flow into active polygons.

    PubMed

    Unal, Gozde; Krim, Hamid; Yezzi, Anthony

    2005-06-01

    In this paper, we first reconsider, in a different light, the addition of a prediction step to active contour-based visual tracking using an optical flow and clarify the local computation of the latter along the boundaries of continuous active contours with appropriate regularizers. We subsequently detail our contribution of computing an optical flow-based prediction step directly from the parameters of an active polygon, and of exploiting it in object tracking. This is in contrast to an explicitly separate computation of the optical flow and its ad hoc application. It also provides an inherent regularization effect resulting from integrating measurements along polygon edges. As a result, we completely avoid the need of adding ad hoc regularizing terms to the optical flow computations, and the inevitably arbitrary associated weighting parameters. This direct integration of optical flow into the active polygon framework distinguishes this technique from most previous contour-based approaches, where regularization terms are theoretically, as well as practically, essential. The greater robustness and speed due to a reduced number of parameters of this technique are additional and appealing features.

  3. A Cross Structured Light Sensor and Stripe Segmentation Method for Visual Tracking of a Wall Climbing Robot

    PubMed Central

    Zhang, Liguo; Sun, Jianguo; Yin, Guisheng; Zhao, Jing; Han, Qilong

    2015-01-01

    In non-destructive testing (NDT) of metal welds, weld line tracking is usually performed outdoors, where the structured light sources are always disturbed by various noises, such as sunlight, shadows, and reflections from the weld line surface. In this paper, we design a cross structured light (CSL) to detect the weld line and propose a robust laser stripe segmentation algorithm to overcome the noises in structured light images. An adaptive monochromatic space is applied to preprocess the image with ambient noises. In the monochromatic image, the laser stripe obtained is recovered as a multichannel signal by minimum entropy deconvolution. Lastly, the stripe centre points are extracted from the image. In experiments, the CSL sensor and the proposed algorithm are applied to guide a wall climbing robot inspecting the weld line of a wind power tower. The experimental results show that the CSL sensor can capture the 3D information of the welds with high accuracy, and the proposed algorithm contributes to the weld line inspection and the robot navigation. PMID:26110403

  4. Cross-Modal Attention Effects in the Vestibular Cortex during Attentive Tracking of Moving Objects.

    PubMed

    Frank, Sebastian M; Sun, Liwei; Forster, Lisa; Tse, Peter U; Greenlee, Mark W

    2016-12-14

    The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking. In this study we investigate cross-modal attention effects in the human vestibular cortex. We applied the visual multiple-object tracking task because it is known to evoke attentional load effects on neural activity in visual motion-processing and attention-processing areas. 
Here we demonstrate a load-dependent effect of attention on the activation in the vestibular cortex, despite constant visual motion stimulation. We find that activity in the parietoinsular vestibular cortex is more strongly suppressed the greater the attentional load on the visual tracking task. These findings suggest cross-modal attentional modulation in the vestibular cortex.

  5. PD-L1 is an activation-independent marker of brown adipocytes.

    PubMed

    Ingram, Jessica R; Dougan, Michael; Rashidian, Mohammad; Knoll, Marko; Keliher, Edmund J; Garrett, Sarah; Garforth, Scott; Blomberg, Olga S; Espinosa, Camilo; Bhan, Atul; Almo, Steven C; Weissleder, Ralph; Lodish, Harvey; Dougan, Stephanie K; Ploegh, Hidde L

    2017-09-21

    Programmed death ligand 1 (PD-L1) is expressed on a number of immune and cancer cells, where it can downregulate antitumor immune responses. Its expression has been linked to metabolic changes in these cells. Here we develop a radiolabeled camelid single-domain antibody (anti-PD-L1 VHH) to track PD-L1 expression by immuno-positron emission tomography (PET). PET-CT imaging shows a robust and specific PD-L1 signal in brown adipose tissue (BAT). We confirm expression of PD-L1 on brown adipocytes and demonstrate that signal intensity does not change in response to cold exposure or β-adrenergic activation. This is the first robust method of visualizing murine brown fat independent of its activation state. Current approaches to visualise brown adipose tissue (BAT) rely primarily on markers that reflect its metabolic activity. Here, the authors show that PD-L1 is expressed on brown adipocytes, does not change upon BAT activation, and that BAT volume in mice can be measured by PET-CT with a radiolabeled anti-PD-L1 antibody.

  6. Electromagnetic tracking of motion in the proximity of computer generated graphical stimuli: a tutorial.

    PubMed

    Schnabel, Ulf H; Hegenloh, Michael; Müller, Hermann J; Zehetleitner, Michael

    2013-09-01

    Electromagnetic motion-tracking systems have the advantage of capturing the spatio-temporal kinematics of movements independently of the visibility of the sensors. However, they are limited in that they cannot be used in the proximity of electromagnetic field sources, such as computer monitors. This prevents exploiting the tracking potential of the sensor system together with that of computer-generated visual stimulation. Here we present a solution for presenting computer-generated visual stimulation that does not distort the electromagnetic field required for precise motion tracking, by means of a back projection medium. In one experiment, we verify that cathode ray tube monitors, as well as thin-film-transistor monitors, distort electromagnetic sensor signals even at a distance of 18 cm. Our back projection medium, by contrast, leads to no distortion of the motion-tracking signals even when the sensor is touching the medium. This novel solution permits combining the advantages of electromagnetic motion tracking with computer-generated visual stimulation.

  7. Recent results in visual servoing

    NASA Astrophysics Data System (ADS)

    Chaumette, François

    2008-06-01

    Visual servoing techniques consist in using the data provided by a vision sensor in order to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots, aerial robots,… but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning or mobile-target-tracking tasks can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected from the available image measurements so as to best control the desired degrees of freedom. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, …) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. Depending on the selected visual features, the behavior of the system will have particular properties with respect to stability, robustness to noise or calibration errors, the robot's 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field by the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.

  8. Real-World Application of Robust Design Optimization Assisted by Response Surface Approximation and Visual Data-Mining

    NASA Astrophysics Data System (ADS)

    Shimoyama, Koji; Jeong, Shinkyu; Obayashi, Shigeru

    A new approach for multi-objective robust design optimization was proposed and applied to a real-world design problem with a large number of objective functions. The present approach is assisted by response surface approximation and visual data-mining, and resulted in two major gains regarding computational time and data interpretation. The Kriging model for response surface approximation can markedly reduce the computational time for predictions of robustness. In addition, the use of self-organizing maps as a data-mining technique allows visualization of complicated design information between optimality and robustness in a comprehensible two-dimensional form. Therefore, the extraction and interpretation of trade-off relations between optimality and robustness of design, and also the location of sweet spots in the design space, can be performed in a comprehensive manner.

  9. Correlation Filter Learning Toward Peak Strength for Visual Tracking.

    PubMed

    Sui, Yao; Wang, Guanghui; Zhang, Li

    2018-04-01

    This paper presents a novel visual tracking approach to correlation filter learning toward peak strength of correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, like those from occlusion and local deformation, resulting in unstable tracking performance. This paper aims at solving this issue and proposes a novel algorithm to learn the correlation filter. The proposed approach, by imposing an elastic net constraint on the filter, can adaptively eliminate those distractive features in the correlation filtering. A new peak strength metric is proposed to measure the discriminative capability of the learned correlation filter. It is demonstrated that the proposed approach effectively strengthens the peak of the correlation response, leading to more discriminative performance than previous methods. Extensive experiments on a challenging visual tracking benchmark demonstrate that the proposed tracker outperforms most state-of-the-art methods.
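Correlation-filter tracking in general can be sketched with a plain single-channel ridge-regression filter learned in the Fourier domain. Note the simple quadratic regularizer `lam` here is a generic stand-in, not the paper's elastic-net constraint, and all patch data below are synthetic.

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    """Closed-form single-channel correlation filter: ridge regression in
    the Fourier domain (generic formulation, not the paper's elastic net)."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(H, patch):
    """Correlation response map; the peak location indicates the target."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

rng = np.random.default_rng(0)
patch = rng.standard_normal((32, 32))   # synthetic appearance patch
# Desired response: a sharp Gaussian peak at the patch centre.
y, x = np.mgrid[:32, :32]
g = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / (2 * 2.0 ** 2))
H = train_filter(patch, g)
resp = respond(H, patch)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

The paper's "peak strength" metric quantifies how sharply such a response peaks over its surroundings; a filter learned with distractive features suppressed yields a stronger, more discriminative peak.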

  10. Stereo-vision system for finger tracking in breast self-examination

    NASA Astrophysics Data System (ADS)

    Zeng, Jianchao; Wang, Yue J.; Freedman, Matthew T.; Mun, Seong K.

    1997-05-01

    Early detection of breast cancer, one of the leading causes of cancer death for women in the US, is key to any strategy designed to reduce breast cancer mortality. Breast self-examination (BSE) is considered the most cost-effective approach available for early breast cancer detection because it is simple and non-invasive, and a large fraction of breast cancers are actually found by patients using this technique today. In BSE, the patient should use a proper search strategy to cover the whole breast region in order to detect all possible tumors. At present there is no objective approach or clinical data to evaluate the effectiveness of a particular BSE strategy. Even if a particular strategy is determined to be the most effective, training women to use it is still difficult because there is no objective way for them to know whether they are doing it correctly. We have developed a system using vision-based motion tracking technology to gather quantitative data about the breast palpation process for analysis of the BSE technique. By tracking the position of the fingers, the system can provide the first objective quantitative data about the BSE process, and thus can improve our knowledge of the technique and help analyze its effectiveness. By visually displaying all touched positions to the patient as the BSE is conducted, the system can provide interactive feedback to the patient and create a prototype for a computer-based BSE training system. We propose placing color features on the fingernails and tracking them, because in breast palpation the background is the breast itself, which is similar in color to the hand; this can hinder the effectiveness of other feature types when real-time performance is required. To simplify the feature-extraction process, a color transform is used instead of raw RGB values. Although the clinical environment will be well illuminated, normalization of color attributes is applied to compensate for minor changes in illumination. Neighbor search is employed to ensure real-time performance, and a three-finger pattern topology is always checked for extracted features to avoid any possible false features. After detecting the features in the images, 3D position parameters of the colored fingers are calculated using the stereo vision principle. In the experiments, a performance of 15 frames/second is obtained using an image size of 160 × 120 and an SGI Indy MIPS R4000 workstation. The system is robust and accurate, which confirms the performance and effectiveness of the proposed approach. The system can be used to quantify the search strategy of the palpation and to document it. With real-time visual feedback, it can also be used to train both patients and new physicians to improve their palpation technique and thus improve the rate of breast tumor detection.
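The color normalization described above can be illustrated with normalized (r, g) chromaticity, a common transform that divides out overall intensity. The abstract does not specify the exact transform used, so this is an assumed example, with invented RGB values for the same marker under two illumination levels.

```python
import numpy as np

def normalized_chromaticity(rgb):
    """Normalized (r, g) chromaticity: each channel is divided by the pixel's
    total intensity, so the feature is stable under moderate brightness
    changes (b = 1 - r - g is redundant and dropped)."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0  # guard against division by zero on black pixels
    return (rgb / s)[..., :2]

# Hypothetical red nail marker under strong vs. dimmed illumination:
bright = np.array([[200, 40, 40]])
dim = np.array([[100, 20, 20]])
```

Both samples map to the same chromaticity, which is why the tracker can keep locking onto the nail markers as lighting drifts slightly.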

  11. Hue distinctiveness overrides category in determining performance in multiple object tracking.

    PubMed

    Sun, Mengdan; Zhang, Xuemin; Fan, Lingxia; Hu, Luming

    2018-02-01

    The visual distinctiveness between targets and distractors can significantly facilitate performance in multiple object tracking (MOT), in which color is a feature that has been commonly used. However, the processing of color can be more than "visual." Color is continuous in chromaticity, while it is commonly grouped into discrete categories (e.g., red, green). Evidence from color perception suggested that color categories may have a unique role in visual tasks independent of its chromatic appearance. Previous MOT studies have not examined the effect of chromatic and categorical distinctiveness on tracking separately. The current study aimed to reveal how chromatic (hue) and categorical distinctiveness of color between the targets and distractors affects tracking performance. With four experiments, we showed that tracking performance was largely facilitated by the increasing hue distance between the target set and the distractor set, suggesting that perceptual grouping was formed based on hue distinctiveness to aid tracking. However, we found no color categorical effect, because tracking performance was not significantly different when the targets and distractors were from the same or different categories. It was concluded that the chromatic distinctiveness of color overrides category in determining tracking performance, suggesting a dominant role of perceptual feature in MOT.

  12. Detection of differential viewing patterns to erotic and non-erotic stimuli using eye-tracking methodology.

    PubMed

    Lykins, Amy D; Meana, Marta; Kambe, Gretchen

    2006-10-01

As a first step in the investigation of the role of visual attention in the processing of erotic stimuli, eye-tracking methodology was employed to measure eye movements during erotic scene presentation. Because eye-tracking is a novel methodology in sexuality research, we attempted to determine whether the eye-tracker could detect differences (should they exist) in visual attention to erotic and non-erotic scenes. A total of 20 men and 20 women were presented with a series of erotic and non-erotic images, and their eye movements were tracked during image presentation. Comparisons between erotic and non-erotic image groups showed significant differences on two of three dependent measures of visual attention (number of fixations and total time) in both men and women. As hypothesized, there was a significant Stimulus x Scene Region interaction, indicating that participants visually attended to the body more in the erotic stimuli than in the non-erotic stimuli, as evidenced by a greater number of fixations and longer total time devoted to that region. These findings support the application of eye-tracking methodology as a measure of visual attentional capture in sexuality research. Future applications of this methodology to expand our knowledge of the role of cognition in sexuality are suggested.

  13. Robust Target Tracking with Multi-Static Sensors under Insufficient TDOA Information.

    PubMed

    Shin, Hyunhak; Ku, Bonhwa; Nelson, Jill K; Ko, Hanseok

    2018-05-08

This paper focuses on underwater target tracking based on a multi-static sonar network composed of passive sonobuoys and an active ping. In the multi-static sonar network, the location of the target can be estimated using TDOA (Time Difference of Arrival) measurements. However, since the sensor network may obtain insufficient and inaccurate TDOA measurements due to ambient noise and other harsh underwater conditions, target tracking performance can be significantly degraded. We propose a robust target tracking algorithm designed to operate in such a scenario. First, track management with track splitting is applied to reduce performance degradation caused by insufficient measurements. Second, the target location is estimated by fusing multiple TDOA measurements using a Gaussian Mixture Model (GMM). In addition, the target trajectory is refined by a stack-based data association method that uses multiple-frame measurements to estimate the trajectory more accurately. The effectiveness of the proposed method is verified through simulations.
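For readers unfamiliar with TDOA localization, the underlying geometry is that each TDOA measurement constrains the target to a hyperbola, and a position estimate minimizes the residuals over all measurements. A coarse grid-search sketch of this principle (illustrative only; the paper's method additionally fuses measurements with a GMM and applies track splitting and stack-based association):

```python
import math

C = 1500.0  # nominal speed of sound in water (m/s); an assumed constant

def tdoa_residual(p, sensors, tdoas):
    """Sum of squared residuals between predicted and measured TDOAs
    (all TDOAs are measured relative to sensors[0])."""
    d = [math.dist(p, s) for s in sensors]
    return sum(((d[i] - d[0]) / C - tdoas[i - 1]) ** 2
               for i in range(1, len(sensors)))

def locate(sensors, tdoas, span=100):
    """Coarse grid search for the position minimizing the TDOA residual."""
    grid = [(x, y) for x in range(-span, span + 1)
                   for y in range(-span, span + 1)]
    return min(grid, key=lambda p: tdoa_residual(p, sensors, tdoas))

sensors = [(0, 0), (100, 0), (0, 100), (-100, 0)]
target = (30, -40)
d = [math.dist(target, s) for s in sensors]
tdoas = [(d[i] - d[0]) / C for i in range(1, len(sensors))]
print(locate(sensors, tdoas))  # (30, -40)
```

In practice a closed-form or iterative least-squares solver replaces the grid search, but the residual being minimized is the same.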

  14. Performance Evaluation of a UWB-RFID System for Potential Space Applications

    NASA Technical Reports Server (NTRS)

    Phan, Chan T.; Arndt, D.; Ngo, P.; Gross, J.; Ni, Jianjun; Rafford, Melinda

    2006-01-01

This talk presents a brief overview of ultra-wideband (UWB) RFID technology, with emphasis on the performance evaluation of a commercially available UWB-RFID system. Many RFID systems are available today, but most provide only basic identification for auditing and inventory tracking. For applications that require high-precision real-time tracking, UWB technology has been shown to be a viable solution. The use of extremely short bursts of RF pulses offers high immunity to interference from other RF systems, precise tracking due to sub-nanosecond time resolution, and robust performance in multipath environments. The UWB-RFID system Sapphire DART (Digital Active RFID & Tracking) is introduced in this talk. Laboratory testing of Sapphire DART is performed to evaluate its capabilities, such as coverage area, accuracy, ease of operation, and robustness. Performance evaluation of this system in an operational environment (a receiving warehouse) for inventory tracking is also conducted. Concepts for using UWB-RFID technology to track astronauts and assets are being proposed for space exploration.

  15. Long-term scale adaptive tracking with kernel correlation filters

    NASA Astrophysics Data System (ADS)

    Wang, Yueren; Zhang, Hong; Zhang, Lei; Yang, Yifan; Sun, Mingui

    2018-04-01

    Object tracking in video sequences has broad applications in both military and civilian domains. However, as the length of input video sequence increases, a number of problems arise, such as severe object occlusion, object appearance variation, and object out-of-view (some portion or the entire object leaves the image space). To deal with these problems and identify the object being tracked from cluttered background, we present a robust appearance model using Speeded Up Robust Features (SURF) and advanced integrated features consisting of the Felzenszwalb's Histogram of Oriented Gradients (FHOG) and color attributes. Since re-detection is essential in long-term tracking, we develop an effective object re-detection strategy based on moving area detection. We employ the popular kernel correlation filters in our algorithm design, which facilitates high-speed object tracking. Our evaluation using the CVPR2013 Object Tracking Benchmark (OTB2013) dataset illustrates that the proposed algorithm outperforms reference state-of-the-art trackers in various challenging scenarios.
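The kernel correlation filter at the core of such trackers learns, in the Fourier domain, a filter whose response to the target template is a sharp peak; a shift of the target shifts the peak correspondingly, which is how the target is re-localized each frame. A minimal 1-D, linear-kernel sketch of this idea (not the paper's full SURF/FHOG/color-attribute pipeline):

```python
import numpy as np

def train_filter(x, y, lam=1e-2):
    """Learn a correlation filter H such that correlating x with H
    reproduces y (ridge regression solved per frequency bin)."""
    X, Y = np.fft.fft(x), np.fft.fft(y)
    return np.conj(X) * Y / (np.abs(X) ** 2 + lam)

def respond(z, H):
    """Correlation response of a new sample z under filter H."""
    return np.real(np.fft.ifft(np.fft.fft(z) * H))

rng = np.random.default_rng(0)
x = rng.standard_normal(64)                               # target template
y = np.roll(np.exp(-0.5 * (np.arange(64) - 32) ** 2 / 4.0), -32)  # peak at 0
H = train_filter(x, y)
resp = respond(np.roll(x, 5), H)   # the target shifted by 5 samples
print(int(np.argmax(resp)))        # 5: the response peak tracks the shift
```

Training and detection each cost only a few FFTs, which is what makes correlation-filter trackers fast enough for real-time use.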

  16. A small-scale hyperacute compound eye featuring active eye tremor: application to visual stabilization, target tracking, and short-range odometry.

    PubMed

    Colonnier, Fabien; Manecy, Augustin; Juston, Raphaël; Mallot, Hanspeter; Leitel, Robert; Floreano, Dario; Viollet, Stéphane

    2015-02-25

In this study, a miniature artificial compound eye (15 mm in diameter) called the curved artificial compound eye (CurvACE) was endowed for the first time with hyperacuity, using micro-movements similar to those occurring in the fly's compound eye. A periodic micro-scanning movement of only a few degrees enables the vibrating compound eye to locate contrasting objects with a 40-fold greater resolution than that imposed by the interommatidial angle. In this study, we developed a new algorithm merging the output of 35 local processing units consisting of adjacent pairs of artificial ommatidia. The local measurements performed by each pair are processed in parallel with very few computational resources, which makes it possible to reach a high refresh rate of 500 Hz. An aerial robotic platform with two degrees of freedom equipped with the active CurvACE, placed over naturally textured panels, was able to assess its linear position accurately with respect to the environment thanks to its efficient gaze stabilization system. The algorithm was found to perform robustly under different lighting conditions as well as variations in distance to the ground, and featured small closed-loop positioning errors of the robot in the range of 45 mm. In addition, three tasks of interest were performed without having to change the algorithm: short-range odometry, visual stabilization, and tracking contrasting objects (hands) moving over a textured background.

  17. Robust inertia-free attitude takeover control of postcapture combined spacecraft with guaranteed prescribed performance.

    PubMed

    Luo, Jianjun; Wei, Caisheng; Dai, Honghua; Yin, Zeyang; Wei, Xing; Yuan, Jianping

    2018-03-01

In this paper, a robust inertia-free attitude takeover control scheme with guaranteed prescribed performance is investigated for postcapture combined spacecraft with consideration of unmeasurable states, unknown inertial properties and external disturbance torque. First, to estimate the unavailable angular velocity of the combination accurately, a novel finite-time-convergent tracking differentiator is developed with a computationally tractable structure that is free from the unknown nonlinear dynamics of the combined spacecraft. Then, a robust inertia-free prescribed performance control scheme is proposed, wherein the transient and steady-state performance of the combined spacecraft is quantitatively studied for the first time by stabilizing the filtered attitude tracking errors. Compared with existing works, the prominent advantage is that no parameter identification and no neural or fuzzy approximation of nonlinearities is needed, which dramatically decreases the complexity of robust controller design. Moreover, the prescribed performance of the combined spacecraft is guaranteed a priori without resorting to repeated tuning of the controller parameters. Finally, four illustrative examples are employed to validate the effectiveness of the proposed control scheme and tracking differentiator. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
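In prescribed performance control, the tracking error is confined inside a user-specified decaying envelope, and a transformation maps the constrained error onto an unconstrained variable that a standard controller can stabilize. A sketch with a typical exponential envelope (the constants and function names are illustrative, not taken from the paper):

```python
import math

def envelope(t, rho0=1.0, rho_inf=0.05, l=1.5):
    """Decaying performance bound rho(t): the tracking error must stay
    inside (-rho(t), rho(t)) for all t (a standard exponential choice)."""
    return (rho0 - rho_inf) * math.exp(-l * t) + rho_inf

def transformed_error(e, rho):
    """Map the constrained error e in (-rho, rho) onto an unconstrained
    variable, as commonly done in prescribed performance control."""
    z = e / rho
    return 0.5 * math.log((1 + z) / (1 - z))

print(round(envelope(0.0), 3))    # 1.0: initial bound
print(round(envelope(10.0), 3))   # 0.05: steady-state bound
```

Keeping the transformed error bounded then guarantees, by construction, that the raw error respects both the transient decay rate and the steady-state bound.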

  18. Feedback Robust Cubature Kalman Filter for Target Tracking Using an Angle Sensor.

    PubMed

    Wu, Hao; Chen, Shuxin; Yang, Binfeng; Chen, Kun

    2016-05-09

The direction of arrival (DOA) tracking problem based on an angle sensor is an important topic in many fields. In this paper, a nonlinear filter named the feedback M-estimation based robust cubature Kalman filter (FMR-CKF) is proposed to deal with measurement outliers from the angle sensor. The filter designs a new equivalent weight function with the Mahalanobis distance to combine the cubature Kalman filter (CKF) with the M-estimation method. Moreover, by embedding a feedback strategy consisting of a splitting and merging procedure, the proper sub-filter (the standard CKF or the robust CKF) can be chosen at each time step. Hence, the probability of misjudging outliers is reduced. Numerical experiments show that the FMR-CKF performs better than the CKF and conventional robust filters in terms of accuracy and robustness, with good computational efficiency. Additionally, the filter can be extended to nonlinear applications using other types of sensors.
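Combining M-estimation with a Kalman-type filter typically works by computing the Mahalanobis distance of the measurement innovation and downweighting updates whose distance flags them as outliers. A generic scalar sketch with a Huber-type weight (the FMR-CKF's actual equivalent weight function is the paper's own design; the threshold k = 1.345 is a standard tuning choice):

```python
import math

def mahalanobis(innov, S):
    """Mahalanobis distance of a scalar innovation with variance S."""
    return abs(innov) / math.sqrt(S)

def huber_weight(d, k=1.345):
    """Huber-type equivalent weight: full weight for small residuals,
    progressively downweighted (k/d) for outliers."""
    return 1.0 if d <= k else k / d

# An inlier innovation keeps full weight; an outlier is downweighted.
print(huber_weight(mahalanobis(0.5, 1.0)))   # 1.0
print(huber_weight(mahalanobis(10.0, 1.0)))  # k/d = 0.1345
```

The weight scales the measurement's influence in the filter update, so a single outlier cannot drag the state estimate far off course.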

  19. GOATS 2011 Adaptive and Collaborative Exploitation of 3-Dimensional Environmental Acoustics in Distributed Undersea Networks

    DTIC Science & Technology

    2013-09-30

Figure 13. The Unicorn AUV (yellow track) tracking a static temperature front between 18°C (blue-shaded region) and 19°C (green-shaded region) ... along the Mid-Atlantic Bight shelf break front in a modified MSEAS ocean model. Unicorn tracked the front southeast over 55 km (as the crow flies) ... robustness of the front tracking behavior. Figure 14. The Unicorn AUV (yellow track) and Macrura AUV (magenta track) tracking a dynamic

  20. Sound segregation via embedded repetition is robust to inattention.

    PubMed

    Masutomi, Keiko; Barascud, Nicolas; Kashino, Makio; McDermott, Josh H; Chait, Maria

    2016-03-01

The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process is dependent on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We utilized a dual-task design in which stimuli to be segregated were presented along with stimuli for a "decoy" task that required continuous monitoring. The task to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound. McDermott, Wrobleski, and Oxenham (2011) demonstrated that repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from 3 different decoy tasks: a visual multiple object tracking task, a rapid serial visual presentation (RSVP) digit encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small effect of attention that is nonspecific, in that it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention. (c) 2016 APA, all rights reserved.

  1. Object tracking on mobile devices using binary descriptors

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas; Quraishi, Mohammad Faiz; Minnehan, Breton

    2015-03-01

With the growing ubiquity of mobile devices, advanced applications are relying on computer vision techniques to provide novel experiences for users. Currently, few tracking approaches take into consideration the resource constraints on mobile devices. Designing efficient tracking algorithms and optimizing performance for mobile devices can result in better and more efficient tracking for applications such as augmented reality. In this paper, we use binary descriptors, including Fast Retina Keypoint (FREAK), Oriented FAST and Rotated BRIEF (ORB), Binary Robust Independent Elementary Features (BRIEF), and Binary Robust Invariant Scalable Keypoints (BRISK), to obtain real-time tracking performance on mobile devices. We consider both Google's Android and Apple's iOS operating systems to implement our tracking approach. The Android implementation is done using Android's Native Development Kit (NDK), which gives the performance benefits of using native code as well as access to legacy libraries. The iOS implementation was created using both the native Objective-C and the C++ programming languages. We also introduce simplified versions of the BRIEF and BRISK descriptors that improve processing speed without compromising tracking accuracy.
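Binary descriptors such as BRIEF, ORB, and BRISK are matched by Hamming distance, which reduces to an XOR and a popcount and is what makes them attractive on mobile hardware. A minimal brute-force matcher (descriptors shown as small packed integers for clarity; real descriptors are 256-512 bits):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def match(query, train):
    """Brute-force nearest-neighbour matching by Hamming distance,
    the standard matcher for BRIEF/ORB/BRISK-style descriptors."""
    return [min(range(len(train)), key=lambda j: hamming(q, train[j]))
            for q in query]

train = [0b10110010, 0b01101101, 0b11110000]
query = [0b10110011, 0b11110001]   # each one bit away from a train descriptor
print(match(query, train))         # [0, 2]
```

Production matchers add a ratio test or cross-check to reject ambiguous matches, but the distance computation is the same.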

  2. Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding

    PubMed Central

    Li, Xin; Guo, Rui; Chen, Chao

    2014-01-01

Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamic updating of the template/dictionary and combining of multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach. PMID:24961216
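Sparse coding represents each observation as a linear combination of a few dictionary atoms by solving an l1-regularized least-squares problem. A minimal iterative shrinkage-thresholding (ISTA) sketch of this idea (generic, with synthetic data; not the paper's tracking-and-recognition framework):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam=0.01, n_iter=1000):
    """Iterative shrinkage-thresholding: find a sparse code x with y ~ D @ x."""
    L = np.linalg.norm(D, 2) ** 2     # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft(x + D.T @ (y - D @ x) / L, lam / L)
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]         # a 2-sparse ground-truth code
y = D @ x_true
x_hat = ista(D, y)                    # x_hat is sparse and reconstructs y
```

In a tracking context, the reconstruction error of a candidate patch against the template dictionary serves as the matching score, and well-recognized patches can be fed back to update the dictionary.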

  3. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  4. Tracking with the mind's eye

    NASA Technical Reports Server (NTRS)

    Krauzlis, R. J.; Stone, L. S.

    1999-01-01

    The two components of voluntary tracking eye-movements in primates, pursuit and saccades, are generally viewed as relatively independent oculomotor subsystems that move the eyes in different ways using independent visual information. Although saccades have long been known to be guided by visual processes related to perception and cognition, only recently have psychophysical and physiological studies provided compelling evidence that pursuit is also guided by such higher-order visual processes, rather than by the raw retinal stimulus. Pursuit and saccades also do not appear to be entirely independent anatomical systems, but involve overlapping neural mechanisms that might be important for coordinating these two types of eye movement during the tracking of a selected visual object. Given that the recovery of objects from real-world images is inherently ambiguous, guiding both pursuit and saccades with perception could represent an explicit strategy for ensuring that these two motor actions are driven by a single visual interpretation.

  5. Curating and Integrating Data from Multiple Sources to Support Healthcare Analytics.

    PubMed

    Ng, Kenney; Kakkanatt, Chris; Benigno, Michael; Thompson, Clay; Jackson, Margaret; Cahan, Amos; Zhu, Xinxin; Zhang, Ping; Huang, Paul

    2015-01-01

    As the volume and variety of healthcare related data continues to grow, the analysis and use of this data will increasingly depend on the ability to appropriately collect, curate and integrate disparate data from many different sources. We describe our approach to and highlight our experiences with the development of a robust data collection, curation and integration infrastructure that supports healthcare analytics. This system has been successfully applied to the processing of a variety of data types including clinical data from electronic health records and observational studies, genomic data, microbiomic data, self-reported data from surveys and self-tracked data from wearable devices from over 600 subjects. The curated data is currently being used to support healthcare analytic applications such as data visualization, patient stratification and predictive modeling.

  6. Alcohol and disorientation-related responses. IV, Effects of different alcohol dosages and display illumination tracking performance during vestibular stimulation.

    DOT National Transportation Integrated Search

    1971-07-01

    A previous CAMI laboratory investigation showed that alcohol impairs the ability of men to suppress vestibular nystagmus while visually fixating on a cockpit instrument, thus degrading visual tracking performance (eye-hand coordination) during angula...

  7. Uncovering Everyday Rhythms and Patterns: Food tracking and new forms of visibility and temporality in health care.

    PubMed

    Ruckenstein, Minna

    2015-01-01

    This chapter demonstrates how ethnographically-oriented research on emergent technologies, in this case self-tracking technologies, adds to Techno-Anthropology's aims of understanding techno-engagements and solving problems that deal with human-technology relations within and beyond health informatics. Everyday techno-relations have been a long-standing research interest in anthropology, underlining the necessity of empirical engagement with the ways in which people and technologies co-construct their daily conditions. By focusing on the uses of a food tracking application, MealLogger, designed for photographing meals and visualizing eating rhythms to share with health care professionals, the chapter details how personal data streams support and challenge health care practices. The interviewed professionals, from doctors to nutritionists, have used food tracking for treating patients with eating disorders, weight problems, and mental health issues. In general terms, self-tracking advances the practices of visually and temporally documenting, retrieving, communicating, and understanding physical and mental processes and, by doing so, it offers a new kind of visual mediation. The professionals point out how a visual food journal opens a window onto everyday life, bypassing customary ways of seeing and treating patients, thereby highlighting how self-tracking practices can aid in escaping the clinical gaze by promoting a new kind of communication through visualization and narration. Health care professionals are also, however, acutely aware of the barriers to adopting self-tracking practices as part of existing patient care. The health care system is neither used to, nor comfortable with, personal data that originates outside the system; it is not seen as evidence and its institutional position remains insecure.

  8. Dynamix: dynamic visualization by automatic selection of informative tracks from hundreds of genomic datasets.

    PubMed

    Monfort, Matthias; Furlong, Eileen E M; Girardot, Charles

    2017-07-15

Visualization of genomic data is fundamental for gaining insights into genome function. Yet, co-visualization of a large number of datasets remains a challenge in all popular genome browsers and the development of new visualization methods is needed to improve the usability and user experience of genome browsers. We present Dynamix, a JBrowse plugin that enables the parallel inspection of hundreds of genomic datasets. Dynamix takes advantage of a priori knowledge to automatically display data tracks with signal within a genomic region of interest. As the user navigates through the genome, Dynamix automatically updates data tracks and limits all manual operations otherwise needed to adjust the data visible on screen. Dynamix also introduces a new carousel view that optimizes screen utilization by enabling users to independently scroll through groups of tracks. Dynamix is hosted at http://furlonglab.embl.de/Dynamix . Contact: charles.girardot@embl.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  9. Selective visual scaling of time-scale processes facilitates broadband learning of isometric force frequency tracking.

    PubMed

    King, Adam C; Newell, Karl M

    2015-10-01

The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks, to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed-gain condition or with selective enhancement, in the visual feedback display, of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with error lowest in the intermediate scaling condition, followed by the high scaling and fixed-gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the time scales in the learning and transfer of isometric force output frequency structures, consistent with 1/f process models of the time scales of motor output variability.
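Selectively enhancing one frequency band of a feedback signal can be done by scaling the corresponding FFT coefficients before rendering the display. A generic sketch of such band-selective scaling (illustrative; the study's actual display manipulation may differ in detail):

```python
import numpy as np

def scale_band(signal, fs, f_lo, f_hi, gain):
    """Amplify one frequency band of a feedback signal via the FFT,
    leaving the rest of the spectrum untouched."""
    F = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    F[band] *= gain
    return np.fft.irfft(F, n=len(signal))

fs = 100.0                                   # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 2 * t) + 0.2 * np.sin(2 * np.pi * 6 * t)
out = scale_band(sig, fs, 4.0, 8.0, 3.0)     # boost the 4-8 Hz band
# the 6 Hz component is now three times larger; the 2 Hz component is unchanged
```

The scaling is linear, so components outside the chosen band pass through the display transformation unaltered.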

  10. Delayed visual feedback affects both manual tracking and grip force control when transporting a handheld object.

    PubMed

    Sarlegna, Fabrice R; Baud-Bovy, Gabriel; Danion, Frédéric

    2010-08-01

    When we manipulate an object, grip force is adjusted in anticipation of the mechanical consequences of hand motion (i.e., load force) to prevent the object from slipping. This predictive behavior is assumed to rely on an internal representation of the object dynamic properties, which would be elaborated via visual information before the object is grasped and via somatosensory feedback once the object is grasped. Here we examined this view by investigating the effect of delayed visual feedback during dextrous object manipulation. Adult participants manually tracked a sinusoidal target by oscillating a handheld object whose current position was displayed as a cursor on a screen along with the visual target. A delay was introduced between actual object displacement and cursor motion. This delay was linearly increased (from 0 to 300 ms) and decreased within 2-min trials. As previously reported, delayed visual feedback altered performance in manual tracking. Importantly, although the physical properties of the object remained unchanged, delayed visual feedback altered the timing of grip force relative to load force by about 50 ms. Additional experiments showed that this effect was not due to task complexity nor to manual tracking. A model inspired by the behavior of mass-spring systems suggests that delayed visual feedback may have biased the representation of object dynamics. Overall, our findings support the idea that visual feedback of object motion can influence the predictive control of grip force even when the object is grasped.
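The load force the grip must counteract follows directly from Newton's second law: when a grasped object of mass m is oscillated vertically, LF(t) = m(g + a(t)), with a(t) the hand's acceleration. A small sketch with illustrative constants (not the study's parameters):

```python
import math

m, g = 0.3, 9.81   # object mass (kg) and gravity (m/s^2); illustrative values
A, f = 0.1, 1.0    # oscillation amplitude (m) and frequency (Hz)

def load_force(t):
    """Load force on the grip during vertical sinusoidal oscillation:
    LF(t) = m * (g + a(t)), with a(t) the hand's acceleration."""
    a = -A * (2 * math.pi * f) ** 2 * math.sin(2 * math.pi * f * t)
    return m * (g + a)

# Peak load over one cycle: the moment the grip must anticipate.
peak = max(load_force(i / 1000) for i in range(1000))
print(round(peak, 3))  # ≈ 4.127 N, versus 2.943 N when holding still
```

Anticipatory grip control means grip force peaks in phase with these load-force peaks rather than lagging them, which is why a visual delay that biases the internal model shifts the grip-load timing.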

  11. An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks.

    PubMed

    Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing

    2017-03-20

In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, regarding the characteristics and constraints of camera nodes in WCNs, some special mechanisms are put forward and integrated into this tracking scheme. First, a decentralized tracking approach is adopted so that tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected by adopting a greedy on-line decision approach based on the defined contribution decision (CD), considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts the selection problem as an optimization problem based on remaining energy and distance-to-target. Finally, we also analyze the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and observation limitations in the field of view (FOV) of camera nodes in WCNs. Simulation results show that the proposed tracking scheme achieves an obvious improvement in balancing energy consumption and tracking accuracy over existing methods.
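A cluster-head choice driven by remaining energy and distance-to-target can be sketched as a simple scored maximization (the weights, node data, and scoring form here are illustrative; the paper formulates its own optimization):

```python
import math

def select_cluster_head(nodes, target, w_e=0.7, w_d=0.3):
    """Greedy cluster-head choice: favour nodes with more remaining energy
    and a shorter distance to the target (weights are illustrative)."""
    def score(n):
        return w_e * n["energy"] - w_d * math.dist(n["pos"], target)
    return max(nodes, key=score)["id"]

nodes = [
    {"id": "A", "energy": 8.0, "pos": (0.0, 0.0)},
    {"id": "B", "energy": 5.0, "pos": (1.0, 1.0)},
    {"id": "C", "energy": 7.5, "pos": (9.0, 9.0)},
]
print(select_cluster_head(nodes, target=(1.0, 1.0)))
# "A": its high remaining energy outweighs its slightly longer distance
```

Re-evaluating this score as the target moves spreads the cluster-head burden across the network, which is the energy-balancing effect the abstract describes.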

  12. An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks

    PubMed Central

    Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing

    2017-01-01

In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, regarding the characteristics and constraints of camera nodes in WCNs, some special mechanisms are put forward and integrated into this tracking scheme. First, a decentralized tracking approach is adopted so that tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected by adopting a greedy on-line decision approach based on the defined contribution decision (CD), considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts the selection problem as an optimization problem based on remaining energy and distance-to-target. Finally, we also analyze the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and observation limitations in the field of view (FOV) of camera nodes in WCNs. Simulation results show that the proposed tracking scheme achieves an obvious improvement in balancing energy consumption and tracking accuracy over existing methods. PMID:28335537

  13. Remote Sensing of Martian Terrain Hazards via Visually Salient Feature Detection

    NASA Astrophysics Data System (ADS)

    Al-Milli, S.; Shaukat, A.; Spiteri, C.; Gao, Y.

    2014-04-01

The main objective of the FASTER remote sensing system is the detection of rocks on planetary surfaces by employing models that can efficiently characterise rocks in terms of semantic descriptions. The proposed technique abates some of the algorithmic limitations of existing methods, with no training requirements, lower computational complexity, and greater robustness for visual tracking applications over long-distance planetary terrains. Visual saliency models inspired by biological systems help to identify important regions (such as rocks) in the visual scene. Surface rocks are therefore completely described in terms of their local or global conspicuity (pop-out) characteristics. These local and global pop-out cues include (but are not limited to) colour, depth, orientation, curvature, size, luminance intensity, shape, and topology. The currently applied methods follow a purely bottom-up strategy of visual attention for selection of conspicuous regions in the visual scene, without any top-down control. Furthermore, the chosen models (tested and evaluated) are relatively fast among the state of the art and have very low computational load. Quantitative evaluation of these state-of-the-art models was carried out using benchmark datasets including the Surrey Space Centre Lab Testbed, Pangu-generated images, and the RAL Space SEEKER and CNES Mars Yard datasets. The analysis indicates that models based on visually salient information in the frequency domain (SRA, SDSR, PQFT) are the best performing ones for detecting rocks in an extra-terrestrial setting. In particular, the SRA model appears to be the best of the set, as it requires the least computational time while keeping errors competitively low. The salient objects extracted using these models can then be merged with the Digital Elevation Models (DEMs) generated from the same navigation cameras and fused into the navigation map, giving a clear indication of the rock locations.

  14. Altered transfer of visual motion information to parietal association cortex in untreated first-episode psychosis: Implications for pursuit eye tracking

    PubMed Central

    Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.

    2011-01-01

Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia (N=24), psychotic bipolar disorder patients (N=13) and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Visual motion information transfer deficits to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035

  15. Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma.

    PubMed

    Kasneci, Enkelejda; Black, Alex A; Wood, Joanne M

    2017-01-01

    To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. We discuss novel approaches for studying the visual search behavior of individuals with glaucoma that have the potential to assist individuals with glaucoma, through the use of personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior.

  16. Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma

    PubMed Central

    Black, Alex A.

    2017-01-01

    To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. We discuss novel approaches for studying the visual search behavior of individuals with glaucoma that have the potential to assist individuals with glaucoma, through the use of personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior. PMID:28293433

  17. The semantic category-based grouping in the Multiple Identity Tracking task.

    PubMed

    Wei, Liuqing; Zhang, Xuemin; Li, Zhen; Liu, Jingyao

    2018-01-01

    In the Multiple Identity Tracking (MIT) task, categorical distinctions between targets and distractors have been found to facilitate tracking (Wei, Zhang, Lyu, & Li in Frontiers in Psychology, 7, 589, 2016). The purpose of this study was to further investigate the reasons for the facilitation effect, through six experiments. The results of Experiments 1-3 excluded the potential explanations of visual distinctiveness, attentional distribution strategy, and a working memory mechanism, respectively. When objects' visual information was preserved and categorical information was removed, the facilitation effect disappeared, suggesting that the visual distinctiveness between targets and distractors was not the main reason for the facilitation effect. Moreover, the facilitation effect was not the result of strategically shifting the attentional distribution, because the targets received more attention than the distractors in all conditions. Additionally, the facilitation effect did not come about because the identities of targets were encoded and stored in visual working memory to assist in the recovery from tracking errors; when working memory was disturbed by the object identities changing during tracking, the facilitation effect still existed. Experiments 4 and 5 showed that observers grouped targets together and segregated them from distractors on the basis of their categorical information. By doing this, observers could largely avoid distractor interference with tracking and improve tracking performance. Finally, Experiment 6 indicated that category-based grouping is not an automatic, but a goal-directed and effortful, strategy. In summary, the present findings show that a semantic category-based target-grouping mechanism exists in the MIT task, which is likely to be the major reason for the tracking facilitation effect.

  18. Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors

    PubMed Central

    Everding, Lukas; Conradt, Jörg

    2018-01-01

    In order to safely navigate and orient in their local surroundings, autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling those tasks for traditional frame-based cameras, these have to deal with the fact that conventional cameras sample their environment at a fixed frequency. Most prominently, the same features have to be found in consecutive frames, and corresponding features then need to be matched using elaborate techniques, as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasi-continuous stream of vision information analogous to the information stream created by the ganglion cells in mammalian retinae. All pixels of a DVS operate asynchronously, without a periodic sampling rate, and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges, which in man-made environments often form lines, these were chosen as the feature to track. Our method is based on detecting planes of DVS address events in x-y-t-space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer; hence it is suitable for low-latency robotics. The efficacy and performance are evaluated on real-world data sets showing artificial structures in an office building, using event data for tracking and frame data for ground-truth estimation from a DAVIS240C sensor. PMID:29515386
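    The plane-of-events idea can be illustrated with a toy least-squares fit (a simplified sketch; the actual tracker detects and traces such planes incrementally): events produced by a moving line edge lie near a plane t ≈ a·x + b·y + c in x-y-t space, whose coefficients encode the line's position and apparent velocity.

    ```python
    import numpy as np

    def fit_event_plane(x, y, t):
        """Least-squares fit of the plane t = a*x + b*y + c to DVS events.
        Returns the coefficients (a, b, c) and the per-event residuals,
        which can be thresholded to reject noise events."""
        A = np.column_stack([x, y, np.ones_like(x, dtype=float)])
        coeffs, *_ = np.linalg.lstsq(A, t, rcond=None)
        return coeffs, t - A @ coeffs
    ```

    Thresholding the residuals before refitting gives a simple form of the noise robustness claimed for the method.
    
    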

  19. Incentives from Curriculum Tracking

    ERIC Educational Resources Information Center

    Koerselman, Kristian

    2013-01-01

    Curriculum tracking creates incentives in the years before its start, and we should therefore expect test scores to be higher during those years. I find robust evidence for incentive effects of tracking in the UK based on the UK comprehensive school reform. Results from the Swedish comprehensive school reform are inconclusive. Internationally, I…

  20. A simple and rapid method for high-resolution visualization of single-ion tracks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omichi, Masaaki; Center for Collaborative Research, Anan National College of Technology, Anan, Tokushima 774-0017; Choi, Wookjin

    2014-11-15

    Prompt determination of the spatial points of single-ion tracks plays a key role in high-energy-particle-induced cancer therapy and gene/plant mutations. In this study, a simple method for the high-resolution visualization of single-ion tracks without etching was developed through the use of polyacrylic acid (PAA)-N,N’-methylene bisacrylamide (MBAAm) blend films. One of the steps of the proposed method includes exposure of the irradiated films to water vapor for several minutes. Water vapor was found to promote the cross-linking reaction of PAA and MBAAm to form a bulky cross-linked structure; the ion-track scars were detectable at the nanometer scale by atomic force microscopy. This study demonstrated that each scar is easily distinguishable, and the amount of radicals generated along the ion tracks can be estimated by measuring the height of the scars, even in highly dense ion tracks. This method is suitable for the visualization of the penumbra region of a single-ion track with a high spatial resolution of 50 nm, which is sufficiently small to confirm that a single ion hits a cell nucleus with a size ranging between 5 and 20 μm.

  1. The role of vestibular and support-tactile-proprioceptive inputs in visual-manual tracking

    NASA Astrophysics Data System (ADS)

    Kornilova, Ludmila; Naumov, Ivan; Glukhikh, Dmitriy; Khabarova, Ekaterina; Pavlova, Aleksandra; Ekimovskiy, Georgiy; Sagalovitch, Viktor; Smirnov, Yuriy; Kozlovskaya, Inesa

    Sensorimotor disorders in weightlessness are caused by changes in the functioning of gravity-dependent systems, first of all the vestibular and support systems. The question arises: what are the role and the specific contribution of support afferentation in the development of the observed disorders? To determine the role and effects of vestibular, support, tactile and proprioceptive afferentation on the characteristics of visual-manual tracking (VMT), we conducted a comparative analysis of data obtained after prolonged spaceflight and in a model of weightlessness, horizontal “dry” immersion. Altogether we examined 16 Russian cosmonauts before and after prolonged spaceflights (129-215 days) and 30 subjects who stayed in an immersion bath for 5-7 days, evaluating the state of the vestibular function (VF) using videooculography and the characteristics of visual-manual tracking (VMT) using electrooculography and a joystick with biological visual feedback. Evaluation of the VF showed that, both after immersion and after prolonged spaceflight, there was a significant decrease of the static torsional otolith-cervical-ocular reflex (OCOR) and a simultaneous significant increase of the dynamic vestibular-cervical-ocular reactions (VCOR), with a negative correlation between the parameters of the otolith and canal reactions, as well as significant changes in the accuracy of perception of the subjective visual vertical, which correlated with changes in OCOR. Analysis of the VMT showed that significant disorders of visual tracking (VT) occurred from the beginning of the immersion up to 3-4 days after it, while in cosmonauts similar but much more pronounced oculomotor disorders and significant changes from baseline were observed up to day R+9 postflight. Significant changes in manual tracking (MT) were revealed only for gain and occurred on days 1 and 3 in immersion, while after spaceflight such changes were observed up to day R+5 postflight. We found correlations between the characteristics of VT and MT and between the characteristics of VF and VT, and no correlation between VF and MT. It was found that removal of support and minimization of proprioceptive afferentation have a greater impact upon the accuracy of VT than upon the accuracy of MT. Hand-tracking accuracy was higher than eye-tracking accuracy for all subjects. The hand’s motor coordination was more stable to changes in support-proprioceptive afferentation than visual tracking was. The changes observed in and after immersion are similar to, but less pronounced than, the changes observed in cosmonauts after prolonged spaceflight. Keywords: visual-manual tracking, vestibular function, weightlessness, immersion.

  2. Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features

    NASA Astrophysics Data System (ADS)

    Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique

    2011-12-01

    We propose a new multi-target tracking approach, which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model and visual reliability measures of its attributes. These reliability measures allow the contribution of noisy, erroneous or false data to be properly weighted, in order to better maintain the integrity of the object dynamics model. A new multi-target tracking algorithm then uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using publicly accessible video surveillance benchmarks. It runs in real time and its results are competitive compared with other tracking algorithms, with minimal (or no) reconfiguration effort between different videos.
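    The reliability-weighted combination of noisy attribute measurements can be sketched as follows (a hypothetical weighting rule for illustration only; the paper defines its own per-attribute visual reliability measures):

    ```python
    import numpy as np

    def fuse_attribute(values, reliabilities):
        """Reliability-weighted fusion of one object attribute (e.g. width)
        measured from several noisy moving regions. Reliabilities in [0, 1]
        down-weight noisy, erroneous or false evidence; a measurement with
        zero reliability contributes nothing to the fused estimate."""
        v = np.asarray(values, dtype=float)
        w = np.asarray(reliabilities, dtype=float)
        if w.sum() == 0.0:
            return v.mean(), 0.0          # no reliable evidence at all
        fused = np.sum(w * v) / w.sum()
        return fused, w.max()             # fused value and a confidence in it
    ```

    Weighting by reliability rather than averaging uniformly is what lets false or erroneous regions be carried along without corrupting the object dynamics model.
    
    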

  3. Cortical response tracking the conscious experience of threshold duration visual stimuli indicates visual perception is all or none

    PubMed Central

    Sekar, Krithiga; Findley, William M.; Poeppel, David; Llinás, Rodolfo R.

    2013-01-01

    At perceptual threshold, some stimuli are available for conscious access whereas others are not. Such threshold inputs are useful tools for investigating the events that separate conscious awareness from unconscious stimulus processing. Here, viewing unmasked, threshold-duration images was combined with recording magnetoencephalography to quantify differences among perceptual states, ranging from no awareness to ambiguity to robust perception. A four-choice scale was used to assess awareness: “didn’t see” (no awareness), “couldn’t identify” (awareness without identification), “unsure” (awareness with low certainty identification), and “sure” (awareness with high certainty identification). Stimulus-evoked neuromagnetic signals were grouped according to behavioral response choices. Three main cortical responses were elicited. The earliest response, peaking at ∼100 ms after stimulus presentation, showed no significant correlation with stimulus perception. A late response (∼290 ms) showed moderate correlation with stimulus awareness but could not adequately differentiate conscious access from its absence. By contrast, an intermediate response peaking at ∼240 ms was observed only for trials in which stimuli were consciously detected. That this signal was similar for all conditions in which awareness was reported is consistent with the hypothesis that conscious visual access is relatively sharply demarcated. PMID:23509248

  4. Novel Image Processing Interface to Relate DSB Spatial Distribution from Immunofluorescence Foci Experiments to the State-of-the-Art Models of DNA Breakage

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Cucinotta, F. A.

    2004-01-01

    Recently developed software (NASARadiationTrackImage) allows quick and automatic segmentation of foci that indicate the spatial localization of specific proteins visualized by immunofluorescence. Of interest are the spatial and temporal distributions of foci such as gammaH2AX, a signal of the phosphorylation of a variant of the histone H2A that has been shown to correspond to DSBs, or proteins involved in DSB processing, such as ATM, Rad51, and p53, following exposures of human cells to high charge and energy (HZE) ion irradiation. Experimental data are recorded as sets of two-dimensional color images with cells and foci of gammaH2AX, ATM, Rad51 or others shown. Different cells, levels of radiation and timings after radiation were recorded. The software allows us to calculate the number of foci per cell, the overall intensity of light in foci and their spatial organization. A simple statistical model allows for testing of foci overlap (eclipse). A more complex statistical model, previously known as DNAbreak, simulates track structure and random chromosome geometry. It has one adjustable parameter corresponding to an average intensity of DSB creation, per cubic micrometer of DNA volume, per particle track or unit dose. Its limitation is its low resolution, both in physical space and in DSBs along DNA. It works adequately on the scale of a cell and provides further insight into how the geometry of tracks and DNA affects genomic damage of the cell and subsequent repair. Future developments of the model for the description of the time evolution of DNA damage response proteins, and more robust track structure models, will be discussed.

  5. TrackMate: An open and extensible platform for single-particle tracking.

    PubMed

    Tinevez, Jean-Yves; Perry, Nick; Schindelin, Johannes; Hoopes, Genevieve M; Reynolds, Gregory D; Laplantine, Emmanuel; Bednarek, Sebastian Y; Shorte, Spencer L; Eliceiri, Kevin W

    2017-02-15

    We present TrackMate, an open source Fiji plugin for the automated, semi-automated, and manual tracking of single particles. It offers a versatile and modular solution that works out of the box for end users, through a simple and intuitive user interface. It is also easily scriptable and adaptable, operating equally well on 1D over time, 2D over time, 3D over time, or other single and multi-channel image variants. TrackMate provides several visualization and analysis tools that aid in assessing the relevance of results. The utility of TrackMate is further enhanced through its ability to be readily customized to meet specific tracking problems. TrackMate is an extensible platform where developers can easily write their own detection, particle linking, visualization or analysis algorithms within the TrackMate environment. This evolving framework provides researchers with the opportunity to quickly develop and optimize new algorithms based on existing TrackMate modules without having to write de novo user interfaces, including visualization, analysis and exporting tools. The current capabilities of TrackMate are presented in the context of three different biological problems. First, we perform Caenorhabditis elegans lineage analysis to assess how light-induced damage during imaging impairs its early development. Our TrackMate-based lineage analysis indicates the lack of a cell-specific light-sensitive mechanism. Second, we investigate the recruitment of NEMO (NF-κB essential modulator) clusters in fibroblasts after stimulation by the cytokine IL-1 and show that photodamage can generate artifacts in the shape of TrackMate-characterized movements that confuse motility analysis. Finally, we validate the use of TrackMate for quantitative lifetime analysis of clathrin-mediated endocytosis in plant cells. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  6. Trifocal Tensor-Based Adaptive Visual Trajectory Tracking Control of Mobile Robots.

    PubMed

    Chen, Jian; Jia, Bingxi; Zhang, Kaixiang

    2017-11-01

    In this paper, a trifocal tensor-based approach is proposed for the visual trajectory tracking task of a nonholonomic mobile robot equipped with a roughly installed monocular camera. The desired trajectory is expressed by a set of prerecorded images, and the robot is regulated to track the desired trajectory using visual feedback. The trifocal tensor is exploited to obtain the orientation and scaled position information used in the control system, and it works for general scenes owing to the generality of the trifocal tensor. In previous works, the start, current, and final images are required to share enough visual information to estimate the trifocal tensor. However, this requirement can be easily violated for perspective cameras with a limited field of view. In this paper, a key frame strategy is proposed to loosen this requirement, extending the workspace of the visual servo system. Considering the unknown depth and extrinsic parameters (the installation position of the camera), an adaptive controller is developed based on Lyapunov methods. The proposed control strategy works for almost all practical circumstances, including both trajectory tracking and pose regulation tasks. Simulations based on the virtual experimentation platform (V-REP) evaluate the effectiveness of the proposed approach.

  7. Weighted feature selection criteria for visual servoing of a telerobot

    NASA Technical Reports Server (NTRS)

    Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.

    1989-01-01

    Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.

  8. Learned filters for object detection in multi-object visual tracking

    NASA Astrophysics Data System (ADS)

    Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David

    2016-05-01

    We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, where tracking guides the detection process. The object detection provides a probabilistic input image calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.
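    The detection step can be sketched in a few lines (a simplified illustration: real learned filters would come from a trained network, while here they are arbitrary 2D kernels): each filter is correlated with the frame, responses are rectified, and the strongest response per pixel is normalised into a map the track-before-detect stage can treat as a pseudo-probability image.

    ```python
    import numpy as np

    def xcorr2(img, k):
        """'Same'-size 2D cross-correlation via zero padding (numpy only,
        odd-sized kernels assumed)."""
        ph, pw = k.shape[0] // 2, k.shape[1] // 2
        p = np.pad(img, ((ph, ph), (pw, pw)))
        out = np.zeros_like(img, dtype=float)
        for i in range(k.shape[0]):
            for j in range(k.shape[1]):
                out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
        return out

    def detection_map(img, filters):
        """Combine rectified filter-bank responses into a map in [0, 1]."""
        resp = np.stack([np.maximum(xcorr2(img, f), 0.0) for f in filters])
        m = resp.max(axis=0)               # strongest response per pixel
        return m / (m.max() + 1e-12)       # normalise to pseudo-probabilities
    ```

    Selecting the per-pixel maximum over the bank is one plausible way to read "selecting from features obtained using banks of learned filters"; other pooling rules would fit the same framework.
    
    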

  9. Integration of Off-Track Sonic Boom Analysis in Conceptual Design of Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Li, Wu

    2011-01-01

    A highly desired capability for the conceptual design of aircraft is the ability to rapidly and accurately evaluate new concepts to avoid adverse trade decisions that may hinder the development process in the later stages of design. Evaluating the robustness of new low-boom concepts is important for the conceptual design of supersonic aircraft. Here, robustness means that the aircraft configuration has a low-boom ground signature at both under- and off-track locations. An integrated process for off-track boom analysis is developed to facilitate the design of robust low-boom supersonic aircraft. The integrated off-track analysis can also be used to study the sonic boom impact and to plan future flight trajectories where flight conditions and ground elevation might have a significant effect on ground signatures. The key enabler for off-track sonic boom analysis is accurate computational fluid dynamics (CFD) solutions for off-body pressure distributions. To ensure the numerical accuracy of the off-body pressure distributions, a mesh study is performed with Cart3D to determine the mesh requirements for off-body CFD analysis and comparisons are made between the Cart3D and USM3D results. The variations in ground signatures that result from changes in the initial location of the near-field waveform are also examined. Finally, a complete under- and off-track sonic boom analysis is presented for two distinct supersonic concepts to demonstrate the capability of the integrated analysis process.

  10. Audio-visual interactions uniquely contribute to resolution of visual conflict in people possessing absolute pitch.

    PubMed

    Kim, Sujin; Blake, Randolph; Lee, Minyoung; Kim, Chai-Youn

    2017-01-01

    Individuals possessing absolute pitch (AP) are able to identify a given musical tone or to reproduce it without reference to another tone. The present study sought to learn whether this exceptional auditory ability impacts visual perception under stimulus conditions that provoke visual competition in the form of binocular rivalry. Nineteen adult participants with 3-19 years of musical training were divided into two groups according to their performance on a task involving identification of the specific note associated with hearing a given musical pitch. During test trials lasting just over half a minute, participants dichoptically viewed a scrolling musical score presented to one eye and a drifting sinusoidal grating presented to the other eye; throughout the trial they pressed buttons to track the alternations in visual awareness produced by these dissimilar monocular stimuli. On "pitch-congruent" trials, participants heard an auditory melody that was congruent in pitch with the visual score, on "pitch-incongruent" trials they heard a transposed auditory melody that was congruent with the score in melody but not in pitch, and on "melody-incongruent" trials they heard an auditory melody completely different from the visual score. For both groups, the visual musical scores predominated over the gratings when the auditory melody was congruent compared to when it was incongruent. Moreover, the AP participants experienced greater predominance of the visual score when it was accompanied by the pitch-congruent melody compared to the same melody transposed in pitch; for non-AP musicians, pitch-congruent and pitch-incongruent trials yielded equivalent predominance. Analysis of individual durations of dominance revealed differential effects on dominance and suppression durations for AP and non-AP participants. 
These results reveal that AP is accompanied by a robust form of bisensory interaction between tonal frequencies and musical notation that boosts the salience of a visual score.

  11. Audio-visual interactions uniquely contribute to resolution of visual conflict in people possessing absolute pitch

    PubMed Central

    Kim, Sujin; Blake, Randolph; Lee, Minyoung; Kim, Chai-Youn

    2017-01-01

    Individuals possessing absolute pitch (AP) are able to identify a given musical tone or to reproduce it without reference to another tone. The present study sought to learn whether this exceptional auditory ability impacts visual perception under stimulus conditions that provoke visual competition in the form of binocular rivalry. Nineteen adult participants with 3–19 years of musical training were divided into two groups according to their performance on a task involving identification of the specific note associated with hearing a given musical pitch. During test trials lasting just over half a minute, participants dichoptically viewed a scrolling musical score presented to one eye and a drifting sinusoidal grating presented to the other eye; throughout the trial they pressed buttons to track the alternations in visual awareness produced by these dissimilar monocular stimuli. On “pitch-congruent” trials, participants heard an auditory melody that was congruent in pitch with the visual score, on “pitch-incongruent” trials they heard a transposed auditory melody that was congruent with the score in melody but not in pitch, and on “melody-incongruent” trials they heard an auditory melody completely different from the visual score. For both groups, the visual musical scores predominated over the gratings when the auditory melody was congruent compared to when it was incongruent. Moreover, the AP participants experienced greater predominance of the visual score when it was accompanied by the pitch-congruent melody compared to the same melody transposed in pitch; for non-AP musicians, pitch-congruent and pitch-incongruent trials yielded equivalent predominance. Analysis of individual durations of dominance revealed differential effects on dominance and suppression durations for AP and non-AP participants. 
These results reveal that AP is accompanied by a robust form of bisensory interaction between tonal frequencies and musical notation that boosts the salience of a visual score. PMID:28380058

  12. Robust augmented reality registration method for localization of solid organs' tumors using CT-derived virtual biomechanical model and fluorescent fiducials.

    PubMed

    Kong, Seong-Ho; Haouchine, Nazim; Soares, Renato; Klymchenko, Andrey; Andreiuk, Bohdan; Marques, Bruno; Shabat, Galyna; Piechaud, Thierry; Diana, Michele; Cotin, Stéphane; Marescaux, Jacques

    2017-07-01

    Augmented reality (AR) is the fusion of computer-generated and real-time images. AR can be used in surgery as a navigation tool, by creating a patient-specific virtual model through 3D software manipulation of DICOM imaging (e.g., CT scan). The virtual model can be superimposed to real-time images enabling transparency visualization of internal anatomy and accurate localization of tumors. However, the 3D model is rigid and does not take into account inner structures' deformations. We present a concept of automated AR registration, while the organs undergo deformation during surgical manipulation, based on finite element modeling (FEM) coupled with optical imaging of fluorescent surface fiducials. Two 10 × 1 mm wires (pseudo-tumors) and six 10 × 0.9 mm fluorescent fiducials were placed in ex vivo porcine kidneys (n = 10). Biomechanical FEM-based models were generated from CT scan. Kidneys were deformed and the shape changes were identified by tracking the fiducials, using a near-infrared optical system. The changes were registered automatically with the virtual model, which was deformed accordingly. Accuracy of prediction of pseudo-tumors' location was evaluated with a CT scan in the deformed status (ground truth). In vivo: fluorescent fiducials were inserted under ultrasound guidance in the kidney of one pig, followed by a CT scan. The FEM-based virtual model was superimposed on laparoscopic images by automatic registration of the fiducials. Biomechanical models were successfully generated and accurately superimposed on optical images. The mean measured distance between the estimated tumor by biomechanical propagation and the scanned tumor (ground truth) was 0.84 ± 0.42 mm. All fiducials were successfully placed in in vivo kidney and well visualized in near-infrared mode enabling accurate automatic registration of the virtual model on the laparoscopic images. 
Our preliminary experiments showed the potential of a biomechanical model with fluorescent fiducials to propagate the deformation of solid organs' surfaces to their inner structures, including tumors, with good accuracy and automated, robust tracking.

  13. Tracker Toolkit

    NASA Technical Reports Server (NTRS)

    Lewis, Steven J.; Palacios, David M.

    2013-01-01

    This software can track multiple moving objects within a video stream simultaneously, use visual features to aid in the tracking, and initiate tracks based on object detection in a subregion. A simple programmatic interface allows plugging into larger image chain modeling suites. The toolkit includes sub-functionality for extracting visual features about an object identified within an image frame, for use in tracking and later analysis. Tracker Toolkit utilizes a feature extraction algorithm to tag each object with metadata features about its size, shape, color, and movement. Its functionality is independent of the scale of objects within a scene. The only assumption made about the tracked objects is that they move; there are no constraints on size within the scene, shape, or type of movement. The Tracker Toolkit is also capable of following an arbitrary number of objects in the same scene, identifying and propagating the track of each object from frame to frame. Target objects may be specified for tracking beforehand, or may be dynamically discovered within a tripwire region. Initialization of the Tracker Toolkit algorithm includes two steps: initializing the data structures for tracked target objects, including targets preselected for tracking; and initializing the tripwire region. If no tripwire region is desired, this step is skipped. The tripwire region is an area within the frames that is always checked for new objects, and all new objects discovered within the region will be tracked until lost (by leaving the frame, stopping, or blending into the background).
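    The track-propagation and tripwire behaviour described above can be sketched as follows. This is an illustrative simplification, not the toolkit's actual interface: the rectangular tripwire, the nearest-detection association, and the gating distance are all assumptions.

```python
def step_tracks(tracks, detections, tripwire, gate=20.0):
    """tracks: dict id -> last (x, y); detections: list of (x, y);
    tripwire: (xmin, ymin, xmax, ymax), a region always checked for
    new objects. Existing tracks are propagated to their nearest
    detection; leftover detections inside the tripwire start tracks."""
    xmin, ymin, xmax, ymax = tripwire
    unused = list(detections)
    # 1. propagate each existing track to its nearest detection (gated)
    for tid, (x, y) in list(tracks.items()):
        if not unused:
            break
        d = min(unused, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
        if (d[0] - x) ** 2 + (d[1] - y) ** 2 <= gate * gate:
            tracks[tid] = d
            unused.remove(d)
    # 2. new objects discovered inside the tripwire start new tracks
    for d in unused:
        if xmin <= d[0] <= xmax and ymin <= d[1] <= ymax:
            tracks[max(tracks, default=0) + 1] = d
    return tracks
```

Detections outside the tripwire that match no existing track are ignored, mirroring the described behaviour where only the tripwire region spawns new tracks.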

  14. Seismic noise attenuation using an online subspace tracking algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang

    2018-02-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be applied directly to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than traditional truncated singular value decomposition (TSVD) based subspace tracking algorithms. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while halving the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
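    Incremental gradient descent on the Grassmannian is the idea behind GROUSE-style subspace tracking. A minimal sketch of one such update, assuming fully observed vectors and an illustrative step size (the paper's algorithm and parameters may differ):

```python
import numpy as np

def grouse_step(U, v, eta=0.1):
    """One GROUSE-style incremental update of an orthonormal basis U
    (n x d) toward a newly observed vector v (n,). Simplified
    full-observation form; eta is a hypothetical step size."""
    w = U.T @ v                  # coefficients of v in the current subspace
    p = U @ w                    # projection of v onto span(U)
    r = v - p                    # residual orthogonal to span(U)
    rn, pn, wn = np.linalg.norm(r), np.linalg.norm(p), np.linalg.norm(w)
    if rn < 1e-12 or wn < 1e-12:
        return U                 # v already lies in the subspace
    sigma = rn * pn              # gradient magnitude on the Grassmannian
    step = ((np.cos(sigma * eta) - 1.0) * p / pn
            + np.sin(sigma * eta) * r / rn)
    return U + np.outer(step, w / wn)
```

The rank-one update rotates the basis toward each incoming observation while keeping the columns of U exactly orthonormal, which is what makes the per-sample cost so low compared with a TSVD.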

  15. Optimized swimmer tracking system based on a novel multi-related-targets approach

    NASA Astrophysics Data System (ADS)

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2017-02-01

    Robust tracking is a crucial step in automatic swimmer evaluation from video sequences. We designed a robust swimmer tracking system using a new multi-related-targets approach. The main idea is to consider the swimmer as a bloc of connected subtargets that advance at the same speed. If one of the subtargets is partially or totally occluded, it can be localized by knowing the position of the others. In this paper, we first introduce the two-dimensional direct linear transformation technique that we used to calibrate the videos. Then, we present the classical tracking approach based on dynamic fusion. Next, we highlight the main contribution of our work, which is the multi-related-targets tracking approach. This approach, the classical head-only approach, and the ground truth are then compared through testing on a database of high-level swimmers in training, national and international competitions (French National Championships, Limoges 2015, and World Championships, Kazan 2015). Tracking percentage and the accuracy of the instantaneous speed are evaluated, and the findings show that our new approach is significantly more accurate than the classical approach.
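    The core of the multi-related-targets idea, that an occluded subtarget can be located from the visible ones, can be sketched as below. The fixed-offset rigid layout and the part names are illustrative assumptions; the paper's bloc model is built on subtargets advancing at the same speed rather than on strictly rigid offsets.

```python
def locate_occluded(positions, offsets):
    """positions: dict part -> (x, y) for currently visible subtargets.
       offsets:   dict part -> (dx, dy), each subtarget's offset from a
                  common bloc reference point (hypothetical layout).
       Returns estimated positions for every part, averaging the bloc
       reference point implied by each visible subtarget."""
    refs = [(x - offsets[p][0], y - offsets[p][1])
            for p, (x, y) in positions.items()]
    rx = sum(r[0] for r in refs) / len(refs)
    ry = sum(r[1] for r in refs) / len(refs)
    return {p: (rx + dx, ry + dy) for p, (dx, dy) in offsets.items()}
```

With the head visible, the occluded hand is recovered from the shared bloc reference, which is the behaviour the abstract describes for partial or total occlusion of a subtarget.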

  16. Robust back-stepping output feedback trajectory tracking for quadrotors via extended state observer and sigmoid tracking differentiator

    NASA Astrophysics Data System (ADS)

    Shao, Xingling; Liu, Jun; Wang, Honglun

    2018-05-01

    In this paper, a robust back-stepping output feedback trajectory tracking controller is proposed for quadrotors subject to parametric uncertainties and external disturbances. Based on the hierarchical control principle, the quadrotor dynamics is decomposed into translational and rotational subsystems to facilitate the back-stepping control design. With given model information incorporated into the observer design, a high-order extended state observer (ESO) that relies only on position measurements is developed to simultaneously estimate the remaining unmeasurable states and the lumped disturbances in the rotational subsystem. To overcome the problem of "explosion of complexity" in the back-stepping design, the sigmoid tracking differentiator (STD) is introduced to compute the derivatives of the virtual control laws. The advantage is that the proposed output-feedback controller not only ensures good tracking performance using very limited information about the quadrotor, but also handles the undesired uncertainties. The stability analysis is established using Lyapunov theory. Simulation results demonstrate the effectiveness of the proposed control scheme in achieving guaranteed tracking performance with respect to an 8-shaped reference trajectory.
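    A tracking differentiator estimates a signal's derivative through a fast second-order filter instead of explicit numerical differentiation, which is how the STD sidesteps the "explosion of complexity". A generic sigmoid-based sketch with Euler integration; the gains and the exact sigmoid form are illustrative assumptions, not the paper's STD:

```python
import math

def sig(x):
    # smooth sigmoid saturating in (-1, 1)
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def track_derivative(signal, dt, r=20.0, a1=1.0, a2=1.0, b1=5.0, b2=5.0):
    """Generic sigmoid tracking differentiator: state x1 follows the
    sampled input, x2 estimates its derivative. r sets the filter
    speed; all gains here are illustrative."""
    x1, x2 = signal[0], 0.0
    out = []
    for v in signal:
        x1 += dt * x2
        x2 += dt * (-r * r * (a1 * sig(b1 * (x1 - v)) + a2 * sig(b2 * x2 / r)))
        out.append(x2)
    return out
```

Because the sigmoid saturates, the correction effort stays bounded for large errors while remaining smooth near zero, avoiding the chattering of sign-function differentiators.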

  17. Comparison of Predictable Smooth Ocular and Combined Eye-Head Tracking Behaviour in Patients with Lesions Affecting the Brainstem and Cerebellum

    NASA Technical Reports Server (NTRS)

    Grant, Michael P.; Leigh, R. John; Seidman, Scott H.; Riley, David E.; Hanna, Joseph P.

    1992-01-01

    We compared the ability of eight normal subjects and 15 patients with brainstem or cerebellar disease to follow a moving visual stimulus smoothly with either the eyes alone or with combined eye-head tracking. The visual stimulus was either a laser spot (horizontal and vertical planes) or a large rotating disc (torsional plane), which moved at one sinusoidal frequency for each subject. The visually enhanced Vestibulo-Ocular Reflex (VOR) was also measured in each plane. In the horizontal and vertical planes, we found that if tracking gain (gaze velocity/target velocity) for smooth pursuit was close to 1, the gain of combined eye-head tracking was similar. If the tracking gain during smooth pursuit was less than about 0.7, combined eye-head tracking was usually superior. Most patients, irrespective of diagnosis, showed combined eye-head tracking that was superior to smooth pursuit; only two patients showed the converse. In the torsional plane, in which optokinetic responses were weak, combined eye-head tracking was much superior, and this was the case in both subjects and patients. We found that a linear model, in which an internal ocular tracking signal cancelled the VOR, could account for our findings in most normal subjects in the horizontal and vertical planes, but not in the torsional plane. The model failed to account for tracking behaviour in most patients in any plane, and suggested that the brain may use additional mechanisms to reduce the internal gain of the VOR during combined eye-head tracking. Our results confirm that certain patients who show impairment of smooth-pursuit eye movements preserve their ability to smoothly track a moving target with combined eye-head tracking.

  18. Kernelized correlation tracking with long-term motion cues

    NASA Astrophysics Data System (ADS)

    Lv, Yunqiu; Liu, Kai; Cheng, Fei

    2018-04-01

    Robust object tracking is a challenging task in computer vision due to interruptions such as deformation, fast motion, and especially occlusion of the tracked object. When occlusions occur, image data become unreliable and are insufficient for the tracker to depict the object of interest; therefore, most trackers are prone to fail under occlusion. In this paper, an occlusion judgement and handling method based on segmentation of the target is proposed. If the target is occluded, its speed and direction must differ from those of the objects occluding it; hence, the value of motion features is emphasized. Considering the efficiency and robustness of Kernelized Correlation Filter (KCF) tracking, it is adopted as a pre-tracker to obtain a predicted position of the target. By analyzing long-term motion cues of objects around this position, the tracked object is labelled, so occlusion can be detected easily. Experimental results suggest that our tracker achieves favorable performance and effectively handles occlusion and drifting problems.
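    The motion-cue test, flagging occlusion when a nearby object moves with a clearly different speed or direction than the target, can be sketched as follows. The thresholds are illustrative assumptions; the paper combines such cues with target segmentation and the KCF pre-tracker.

```python
import math

def velocity(track):
    """Instantaneous velocity from the last two positions of a track."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (x1 - x0, y1 - y0)

def occlusion_suspected(target_track, other_track, speed_tol=0.5, angle_tol=0.5):
    """Flag possible occlusion when another object near the predicted
    position moves with a clearly different speed or direction than
    the target (tolerances in relative speed and radians)."""
    vx1, vy1 = velocity(target_track)
    vx2, vy2 = velocity(other_track)
    s1, s2 = math.hypot(vx1, vy1), math.hypot(vx2, vy2)
    if abs(s1 - s2) > speed_tol * max(s1, s2, 1e-9):
        return True
    angle = abs(math.atan2(vy1, vx1) - math.atan2(vy2, vx2))
    return min(angle, 2.0 * math.pi - angle) > angle_tol
```

An object moving parallel to the target at the same speed is not flagged, while one crossing its path in the opposite direction is, matching the intuition in the abstract.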

  19. Robust feedback zoom tracking for digital video surveillance.

    PubMed

    Zou, Tengyue; Tang, Xiaoqi; Song, Bao; Wang, Jin; Chen, Jihong

    2012-01-01

    Zoom tracking is an important function in video surveillance, particularly in traffic management and security monitoring. It involves keeping an object of interest in focus during the zoom operation. Zoom tracking is typically achieved by moving the zoom and focus motors in lenses following the so-called "trace curve", which shows the in-focus motor positions versus the zoom motor positions for a specific object distance. The main task of a zoom tracking approach is to accurately estimate the trace curve for the specified object. Because a proportional-integral-derivative (PID) controller has historically been considered the best controller in the absence of knowledge of the underlying process, and because of its high-quality performance in motor control, we propose a novel feedback zoom tracking (FZT) approach based on geometric trace curve estimation and a PID feedback controller. The performance of this approach is compared with existing zoom tracking methods in digital video surveillance. The real-time implementation results obtained on an actual digital video platform indicate that the developed FZT approach not only solves the traditional one-to-many mapping problem without pre-training but also improves the robustness of tracking moving or switching objects, which is the key challenge in video surveillance.
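    The feedback half of the FZT scheme is a textbook discrete PID loop driving the focus motor toward the trace-curve position. A minimal sketch; the gains and the integrator plant below are illustrative, not the paper's tuned values:

```python
class PID:
    """Textbook discrete PID controller: output = kp*e + ki*integral(e)
    + kd*d(e)/dt, sampled at a fixed period dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Feeding the controller the error between the trace-curve estimate and the current focus motor position closes the loop; the integral term removes steady-state offset when the object distance changes.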

  20. Robust model-based analysis of single-particle tracking experiments with Spot-On

    PubMed Central

    Grimm, Jonathan B; Lavis, Luke D

    2018-01-01

    Single-particle tracking (SPT) has become an important method to bridge biochemistry and cell biology since it allows direct observation of protein binding and diffusion dynamics in live cells. However, accurately inferring information from SPT studies is challenging due to biases in both data analysis and experimental design. To address analysis bias, we introduce ‘Spot-On’, an intuitive web-interface. Spot-On implements a kinetic modeling framework that accounts for known biases, including molecules moving out-of-focus, and robustly infers diffusion constants and subpopulations from pooled single-molecule trajectories. To minimize inherent experimental biases, we implement and validate stroboscopic photo-activation SPT (spaSPT), which minimizes motion-blur bias and tracking errors. We validate Spot-On using experimentally realistic simulations and show that Spot-On outperforms other methods. We then apply Spot-On to spaSPT data from live mammalian cells spanning a wide range of nuclear dynamics and demonstrate that Spot-On consistently and robustly infers subpopulation fractions and diffusion constants. PMID:29300163

  1. Robust model-based analysis of single-particle tracking experiments with Spot-On.

    PubMed

    Hansen, Anders S; Woringer, Maxime; Grimm, Jonathan B; Lavis, Luke D; Tjian, Robert; Darzacq, Xavier

    2018-01-04

    Single-particle tracking (SPT) has become an important method to bridge biochemistry and cell biology since it allows direct observation of protein binding and diffusion dynamics in live cells. However, accurately inferring information from SPT studies is challenging due to biases in both data analysis and experimental design. To address analysis bias, we introduce 'Spot-On', an intuitive web-interface. Spot-On implements a kinetic modeling framework that accounts for known biases, including molecules moving out-of-focus, and robustly infers diffusion constants and subpopulations from pooled single-molecule trajectories. To minimize inherent experimental biases, we implement and validate stroboscopic photo-activation SPT (spaSPT), which minimizes motion-blur bias and tracking errors. We validate Spot-On using experimentally realistic simulations and show that Spot-On outperforms other methods. We then apply Spot-On to spaSPT data from live mammalian cells spanning a wide range of nuclear dynamics and demonstrate that Spot-On consistently and robustly infers subpopulation fractions and diffusion constants. © 2018, Hansen et al.
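    The kinetic model at the heart of this kind of analysis fits the observed jump-length distribution with a mixture over subpopulations. A deliberately simplified sketch, omitting Spot-On's localisation-error and out-of-focus corrections, of a two-state (bound/free) jump-length density; parameter values in the test are illustrative:

```python
import math

def jump_length_density(r, f_bound, d_bound, d_free, dt):
    """Two-state jump-length density: a mixture of 2-D Brownian
    displacement densities for bound and free molecules, evaluated at
    displacement r over frame interval dt. Simplified: no defocus or
    localisation-error correction."""
    def brownian(r, d):
        return (r / (2.0 * d * dt)) * math.exp(-r * r / (4.0 * d * dt))
    return f_bound * brownian(r, d_bound) + (1.0 - f_bound) * brownian(r, d_free)
```

Fitting f_bound, d_bound, and d_free to pooled single-molecule displacements is what yields the subpopulation fractions and diffusion constants the abstract refers to; each mixture component integrates to one, so the density is properly normalised.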

  2. Attentional Resources in Visual Tracking through Occlusion: The High-Beams Effect

    ERIC Educational Resources Information Center

    Flombaum, Jonathan I.; Scholl, Brian J.; Pylyshyn, Zenon W.

    2008-01-01

    A considerable amount of research has uncovered heuristics that the visual system employs to keep track of objects through periods of occlusion. Relatively little work, by comparison, has investigated the online resources that support this processing. We explored how attention is distributed when featurally identical objects become occluded during…

  3. Combined Atropine and 2-PAM Cl Effects on Tracking Performance and Visual, Physiological, and Psychological Functions

    DTIC Science & Technology

    1988-12-01

    tracking task reveals the magnitude and duration of the... (Aviation, Space, and Environmental Medicine, December 1988; ANTIDOTE EFFECTS--PENETAR ET AL.) ... marihuana on dynamic visual acuity: I. Threshold measurements. Perception Psychophys. 1975 ... blood pressure following the combination of 2-PAM Cl

  4. Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing

    NASA Astrophysics Data System (ADS)

    Ou, Meiying; Li, Shihua; Wang, Chaoli

    2013-12-01

    This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
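    The basic ingredient behind finite-time (rather than merely asymptotic) convergence is a fractional-power feedback law. A toy scalar analogue, not the paper's distributed multi-robot controller: for u = -k*|e|^alpha*sign(e) with 0 < alpha < 1, the error reaches zero in the finite time T = |e(0)|^(1-alpha) / (k*(1-alpha)).

```python
def finite_time_step(x, k=2.0, alpha=0.5, dt=1e-4):
    """One Euler step of the scalar finite-time law
    u = -k * |x|**alpha * sign(x); gains are illustrative."""
    u = -k * abs(x) ** alpha * (1 if x > 0 else -1 if x < 0 else 0)
    return x + dt * u
```

With k = 2 and alpha = 0.5 from x(0) = 1, the analytic settling time is T = 1 s, so simulating slightly past that should leave the state at (numerically) zero, unlike a linear law, whose error only decays exponentially.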

  5. Automation trust and attention allocation in multitasking workspace.

    PubMed

    Karpinsky, Nicole D; Chancey, Eric T; Palmer, Dakota B; Yamani, Yusuke

    2018-07-01

    Previous research suggests that operators with high workload can distrust and then poorly monitor automation, which has been generally inferred from automation dependence behaviors. To test automation monitoring more directly, the current study measured operators' visual attention allocation, workload, and trust toward imperfect automation in a dynamic multitasking environment. Participants concurrently performed a manual tracking task with two levels of difficulty and a system monitoring task assisted by an unreliable signaling system. Eye movement data indicate that operators allocate less visual attention to monitor automation when the tracking task is more difficult. Participants reported reduced levels of trust toward the signaling system when the tracking task demanded more focused visual attention. Analyses revealed that trust mediated the relationship between the load of the tracking task and attention allocation in Experiment 1, an effect that was not replicated in Experiment 2. Results imply a complex process underlying task load, visual attention allocation, and automation trust during multitasking. Automation designers should consider operators' task load in multitasking workspaces to avoid reduced automation monitoring and distrust toward imperfect signaling systems. Copyright © 2018. Published by Elsevier Ltd.

  6. Sliding Mode Control of the X-33 with an Engine Failure

    NASA Technical Reports Server (NTRS)

    Shtessel, Yuri B.; Hall, Charles E.

    2000-01-01

    Ascent flight control of the X-33 is performed using two XRS-2200 linear aerospike engines, in addition to aerosurfaces. The baseline control algorithms are PID with gain scheduling. Flight control using an innovative method, Sliding Mode Control, is presented for nominal and engine-failed modes of flight. An easy to implement, robust controller, requiring no reconfiguration or gain scheduling, is demonstrated through high fidelity flight simulations. The proposed sliding mode controller utilizes a two-loop structure and provides robust, de-coupled tracking of both orientation angle command profiles and angular rate command profiles in the presence of engine failure, bounded external disturbances (wind gusts) and an uncertain matrix of inertia. Sliding mode control causes the angular rate and orientation angle tracking error dynamics to be constrained to linear, de-coupled, homogeneous, vector-valued differential equations with desired eigenvalues. Conditions that restrict engine failures to the robustness domain of the sliding mode controller are derived. Overall stability of the two-loop flight control system is assessed. Simulation results show that the designed controller provides robust, accurate, de-coupled tracking of the orientation angle command profiles in the presence of external disturbances and vehicle inertia uncertainties, as well as in the single-engine-failed case. The designed robust controller will significantly reduce the time and cost associated with flying new trajectory profiles or orbits, with new payloads, and with modified vehicles.
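    The robustness-to-bounded-disturbance property of sliding mode control can be seen in a minimal scalar sketch, assuming a double-integrator plant; this is an illustration of the principle, not the X-33's two-loop controller. With sliding surface s = de + c*e and switching gain k larger than the disturbance bound, the error is driven onto s = 0 in finite time and then decays regardless of the disturbance:

```python
import math

def simulate_smc(r=1.0, c=2.0, k=5.0, dt=1e-3, t_end=5.0):
    """Scalar sliding mode tracking of a constant reference r for a
    double integrator x'' = u + d with bounded disturbance |d| < k.
    Control: u = -c*de - k*sign(s), s = de + c*e. Gains illustrative."""
    x, dx, t = 0.0, 0.0, 0.0
    while t < t_end:
        e, de = x - r, dx
        s = de + c * e
        u = -c * de - k * (1.0 if s > 0 else -1.0 if s < 0 else 0.0)
        d = 0.3 * math.sin(5.0 * t)  # bounded disturbance, |d| < k
        dx += dt * (u + d)
        x += dt * dx
        t += dt
    return x
```

On the surface s = 0 the error obeys de = -c*e, a linear equation with a designer-chosen eigenvalue, which is the scalar analogue of the "de-coupled differential equations with desired eigenvalues" described above.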

  7. Measurement of electromagnetic tracking error in a navigated breast surgery setup

    NASA Astrophysics Data System (ADS)

    Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor

    2016-03-01

    PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed; the process from calibration to visualization takes only a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and an anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
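    Computing positional and rotational error of an electromagnetic pose against an optical ground-truth pose can be sketched as below; the function signature is a hypothetical simplification of what such software modules compute.

```python
import numpy as np

def pose_errors(p_em, R_em, p_opt, R_opt):
    """Positional error: Euclidean distance between the two tracked
    positions. Rotational error: rotation angle (degrees) of the
    relative rotation R_opt^T @ R_em, via the trace formula."""
    pos_err = float(np.linalg.norm(np.asarray(p_em, float) - np.asarray(p_opt, float)))
    R_rel = np.asarray(R_opt).T @ np.asarray(R_em)
    cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return pos_err, float(np.degrees(np.arccos(cos_a)))
```

Evaluating this over a grid of pointer poses around the field generator yields exactly the kind of positional/rotational error map the abstract reports (e.g. 0.90 mm and 0.31° in a clean field).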

  8. Tracking and recognition face in videos with incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed that employs local sparse appearance and a covariance pooling method. In the following face recognition stage, with the employment of a novel template update strategy that combines incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method achieves a face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. In tracking and recognition experiments on the challenging dataset in which faces undergo occlusion and illumination variation, and under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  9. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    PubMed Central

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388

  10. Robust cell tracking in epithelial tissues through identification of maximum common subgraphs.

    PubMed

    Kursawe, Jochen; Bardenet, Rémi; Zartman, Jeremiah J; Baker, Ruth E; Fletcher, Alexander G

    2016-11-01

    Tracking of cells in live-imaging microscopy videos of epithelial sheets is a powerful tool for investigating fundamental processes in embryonic development. Characterizing cell growth, proliferation, intercalation and apoptosis in epithelia helps us to understand how morphogenetic processes such as tissue invagination and extension are locally regulated and controlled. Accurate cell tracking requires correctly resolving cells entering or leaving the field of view between frames, cell neighbour exchanges, cell removals and cell divisions. However, current tracking methods for epithelial sheets are not robust to large morphogenetic deformations and require significant manual interventions. Here, we present a novel algorithm for epithelial cell tracking, exploiting the graph-theoretic concept of a 'maximum common subgraph' to track cells between frames of a video. Our algorithm does not require the adjustment of tissue-specific parameters, and scales in sub-quadratic time with tissue size. It does not rely on precise positional information, permitting large cell movements between frames and enabling tracking in datasets acquired at low temporal resolution due to experimental constraints such as phototoxicity. To demonstrate the method, we perform tracking on the Drosophila embryonic epidermis and compare cell-cell rearrangements to previous studies in other tissues. Our implementation is open source and generally applicable to epithelial tissues. © 2016 The Authors.
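    The graph-theoretic core, matching cells between frames by finding a maximum common subgraph of their neighbourhood graphs, can be illustrated with a brute-force version for tiny graphs. This exponential search is only a didactic stand-in; the paper's algorithm finds the mapping in sub-quadratic time for real tissues.

```python
from itertools import combinations, permutations

def max_common_subgraph(adj1, adj2):
    """Brute-force maximum common induced subgraph between two small
    graphs given as {node: set_of_neighbours} dicts. Returns a node
    mapping graph1 -> graph2 that preserves adjacency and
    non-adjacency. Exponential; illustration only."""
    n1, n2 = list(adj1), list(adj2)
    for k in range(min(len(n1), len(n2)), 0, -1):
        for sub1 in combinations(n1, k):
            for sub2 in permutations(n2, k):
                m = dict(zip(sub1, sub2))
                if all((b in adj1[a]) == (m[b] in adj2[m[a]])
                       for a in sub1 for b in sub1 if a != b):
                    return m
    return {}
```

Because the mapping is constrained only by adjacency, not by cell positions, large movements between frames do not break it, which is the property that makes the approach robust to big morphogenetic deformations.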

  11. Robust cell tracking in epithelial tissues through identification of maximum common subgraphs

    PubMed Central

    Bardenet, Rémi; Zartman, Jeremiah J.; Baker, Ruth E.

    2016-01-01

    Tracking of cells in live-imaging microscopy videos of epithelial sheets is a powerful tool for investigating fundamental processes in embryonic development. Characterizing cell growth, proliferation, intercalation and apoptosis in epithelia helps us to understand how morphogenetic processes such as tissue invagination and extension are locally regulated and controlled. Accurate cell tracking requires correctly resolving cells entering or leaving the field of view between frames, cell neighbour exchanges, cell removals and cell divisions. However, current tracking methods for epithelial sheets are not robust to large morphogenetic deformations and require significant manual interventions. Here, we present a novel algorithm for epithelial cell tracking, exploiting the graph-theoretic concept of a ‘maximum common subgraph’ to track cells between frames of a video. Our algorithm does not require the adjustment of tissue-specific parameters, and scales in sub-quadratic time with tissue size. It does not rely on precise positional information, permitting large cell movements between frames and enabling tracking in datasets acquired at low temporal resolution due to experimental constraints such as phototoxicity. To demonstrate the method, we perform tracking on the Drosophila embryonic epidermis and compare cell–cell rearrangements to previous studies in other tissues. Our implementation is open source and generally applicable to epithelial tissues. PMID:28334699

  12. Computational Methods for Tracking, Quantitative Assessment, and Visualization of C. elegans Locomotory Behavior

    PubMed Central

    Moy, Kyle; Li, Weiyu; Tran, Huu Phuoc; Simonis, Valerie; Story, Evan; Brandon, Christopher; Furst, Jacob; Raicu, Daniela; Kim, Hongkyun

    2015-01-01

    The nematode Caenorhabditis elegans provides a unique opportunity to interrogate the neural basis of behavior at single neuron resolution. In C. elegans, neural circuits that control behaviors can be formulated based on its complete neural connection map, and easily assessed by applying advanced genetic tools that allow for modulation in the activity of specific neurons. Importantly, C. elegans exhibits several elaborate behaviors that can be empirically quantified and analyzed, thus providing a means to assess the contribution of specific neural circuits to behavioral output. Particularly, locomotory behavior can be recorded and analyzed with computational and mathematical tools. Here, we describe a robust single worm-tracking system, which is based on the open-source Python programming language, and an analysis system, which implements path-related algorithms. Our tracking system was designed to accommodate worms that explore a large area with frequent turns and reversals at high speeds. As a proof of principle, we used our tracker to record the movements of wild-type animals that were freshly removed from abundant bacterial food, and determined how wild-type animals change locomotory behavior over a long period of time. Consistent with previous findings, we observed that wild-type animals show a transition from area-restricted local search to global search over time. Intriguingly, we found that wild-type animals initially exhibit short, random movements interrupted by infrequent long trajectories. This movement pattern often coincides with local/global search behavior, and visually resembles Lévy flight search, a search behavior conserved across species. Our mathematical analysis showed that while most of the animals exhibited Brownian walks, approximately 20% of the animals exhibited Lévy flights, indicating that C. elegans can use Lévy flights for efficient food search. 
In summary, our tracker and analysis software will help analyze the neural basis of the alteration and transition of C. elegans locomotory behavior in a food-deprived condition. PMID:26713869

  13. Mark Tracking: Position/orientation measurements using 4-circle mark and its tracking experiments

    NASA Technical Reports Server (NTRS)

    Kanda, Shinji; Okabayashi, Keijyu; Maruyama, Tsugito; Uchiyama, Takashi

    1994-01-01

    Future space robots require position and orientation tracking with visual feedback control to track and capture floating objects and satellites. We developed a four-circle mark that is useful for this purpose. With this mark, four geometric center positions as feature points can be extracted from the mark by simple image processing. We also developed a position and orientation measurement method that uses the four feature points in our mark. The mark gave good enough image measurement accuracy to let space robots approach and contact objects. A visual feedback control system using this mark enabled a robot arm to track a target object accurately. The control system was able to tolerate a time delay of 2 seconds.
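    Given the four extracted circle centroids, an in-plane position and orientation estimate can be sketched as below. The convention that the first circle defines the mark's heading is a hypothetical simplification; the paper's method recovers the full 3-D position and orientation from the four feature points.

```python
import math

def mark_pose(points):
    """points: four (x, y) feature centroids of the mark, the first
    taken as the orientation-defining circle (assumed convention).
    Returns (cx, cy, theta): mark centre and in-plane rotation of the
    first circle about that centre."""
    cx = sum(p[0] for p in points) / 4.0
    cy = sum(p[1] for p in points) / 4.0
    theta = math.atan2(points[0][1] - cy, points[0][0] - cx)
    return cx, cy, theta
```

Feeding (cx, cy, theta) back into a visual servo loop at each frame is the scalar analogue of the visual feedback control the abstract describes.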

  14. The first rapid assessment of avoidable blindness (RAAB) in Thailand.

    PubMed

    Isipradit, Saichin; Sirimaharaj, Maytinee; Charukamnoetkanok, Puwat; Thonginnetra, Oraorn; Wongsawad, Warapat; Sathornsumetee, Busaba; Somboonthanakij, Sudawadee; Soomsawasdi, Piriya; Jitawatanarat, Umapond; Taweebanjongsin, Wongsiri; Arayangkoon, Eakkachai; Arame, Punyawee; Kobkoonthon, Chinsuchee; Pangputhipong, Pannet

    2014-01-01

    The majority of vision loss is preventable or treatable. Population surveys are crucial for planning, implementing, and monitoring policies and interventions to eliminate avoidable blindness and visual impairment. This is the first rapid assessment of avoidable blindness (RAAB) study in Thailand. A cross-sectional study of a population in Thailand aged 50 years or over aimed to assess the prevalence and causes of blindness and visual impairment. Using the Thailand National Census 2010 as the sampling frame, a stratified four-stage cluster sampling based on probability proportional to size was conducted in 176 enumeration areas from 11 provinces. Participants received a comprehensive eye examination by ophthalmologists. The age- and sex-adjusted prevalences of blindness (presenting visual acuity (VA) <20/400), severe visual impairment (VA <20/200 but ≥20/400), and moderate visual impairment (VA <20/70 but ≥20/200) were 0.6% (95% CI: 0.5-0.8), 1.3% (95% CI: 1.0-1.6), and 12.6% (95% CI: 10.8-14.5), respectively. There was no significant difference among the four regions of Thailand. Cataract was the main cause of vision loss, accounting for 69.7% of blindness. Cataract surgical coverage in persons was 95.1% for a cut-off VA of 20/400. Refractive errors, diabetic retinopathy, glaucoma, and corneal opacities were responsible for 6.0%, 5.1%, 4.0%, and 2.0% of blindness, respectively. Thailand is on track to achieve the goal of VISION 2020. However, there is still much room for improvement. Policy refinements and innovative interventions are recommended to alleviate blindness and visual impairment, especially regarding the backlog of blinding cataract, management of non-communicable, chronic, age-related eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy, prevention of childhood blindness, and establishment of a robust eye health information system.

  15. Autonomous Visual Navigation of an Indoor Environment Using a Parsimonious, Insect Inspired Familiarity Algorithm

    PubMed Central

    Brayfield, Brad P.

    2016-01-01

    The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects’ brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path’s end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery. PMID:27119720
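    The pixel-by-pixel familiarity comparison at the heart of the NSFH can be sketched in a few lines. The function below is an illustrative minimal version (not the authors' code): it scores each stored panoramic view by its sum of squared pixel differences from the current view and treats the lowest score as most familiar. In the NSFH setting the same comparison is repeated for rotated copies of the current scene, so the minimizing rotation also yields a steering direction.

    ```python
    import numpy as np

    def familiarity(current, stored_views):
        """Return the index of the stored view most similar to `current`,
        using the sum of squared pixel differences (lower = more familiar)."""
        diffs = [np.sum((current.astype(float) - v.astype(float)) ** 2)
                 for v in stored_views]
        return int(np.argmin(diffs))

    # Toy landscape: three 1-D "panoramic" views of 64 pixels each.
    rng = np.random.default_rng(0)
    views = [rng.integers(0, 256, size=64) for _ in range(3)]

    # A noisy re-observation of view 1 should still match view 1.
    noisy = np.clip(views[1] + rng.integers(-5, 6, size=64), 0, 255)
    assert familiarity(noisy, views) == 1
    ```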

  16. Memory-Based Multiagent Coevolution Modeling for Robust Moving Object Tracking

    PubMed Central

    Wang, Yanjiang; Qi, Yujuan; Li, Yongping

    2013-01-01

The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match to the appearance of an object, and a memory-based multiagent coevolution algorithm for robustly tracking moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system, based on its own experience. A number of such memory-based agents are randomly distributed near the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object through their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can relocate the correct object after an appearance change or occlusion and outperforms traditional particle filter-based tracking methods. PMID:23843739
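    As a hedged illustration of the per-agent memory just described (the class name, capacity, and promotion rule below are invented for this sketch, not taken from the paper), a remember/retrieve/forget appearance store with short- and long-term stages might look like:

    ```python
    class MemoryAgent:
        """Toy staged appearance memory: newly seen templates enter
        short-term memory, are promoted to long-term memory after repeated
        recall, and the least-recalled template is forgotten when the
        short-term capacity is exceeded."""

        def __init__(self, capacity=3, promote_after=2):
            self.short_term = {}        # key -> (template, recall count)
            self.long_term = {}
            self.capacity = capacity
            self.promote_after = promote_after

        def remember(self, key, template):
            if len(self.short_term) >= self.capacity:
                forgotten = min(self.short_term,
                                key=lambda k: self.short_term[k][1])
                del self.short_term[forgotten]      # forget least-recalled
            self.short_term[key] = (template, 0)

        def retrieve(self, key):
            if key in self.long_term:
                return self.long_term[key]
            if key in self.short_term:
                template, count = self.short_term[key]
                if count + 1 >= self.promote_after:
                    self.long_term[key] = template  # promote to long-term
                    del self.short_term[key]
                else:
                    self.short_term[key] = (template, count + 1)
                return template
            return None
    ```

    During tracking, each agent would `remember` candidate appearance templates and `retrieve` the best match before the coevolutionary competition step.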

  17. Memory-based multiagent coevolution modeling for robust moving object tracking.

    PubMed

    Wang, Yanjiang; Qi, Yujuan; Li, Yongping

    2013-01-01

The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match to the appearance of an object, and a memory-based multiagent coevolution algorithm for robustly tracking moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system, based on its own experience. A number of such memory-based agents are randomly distributed near the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object through their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can relocate the correct object after an appearance change or occlusion and outperforms traditional particle filter-based tracking methods.

  18. Accurate object tracking system by integrating texture and depth cues

    NASA Astrophysics Data System (ADS)

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminative texture information between the object and background data. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, to reduce the risk of drift, which increases for textureless depth templates, an update mechanism is proposed that selects more precise tracking results and avoids incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system achieves the best success rate and produces more accurate tracking results than other well-known algorithms.

  19. The role of vision in odor-plume tracking by walking and flying insects.

    PubMed

    Willis, Mark A; Avondet, Jennifer L; Zheng, Elizabeth

    2011-12-15

    The walking paths of male cockroaches, Periplaneta americana, tracking point-source plumes of female pheromone often appear similar in structure to those observed from flying male moths. Flying moths use visual-flow-field feedback of their movements to control steering and speed over the ground and to detect the wind speed and direction while tracking plumes of odors. Walking insects are also known to use flow field cues to steer their trajectories. Can the upwind steering we observe in plume-tracking walking male cockroaches be explained by visual-flow-field feedback, as in flying moths? To answer this question, we experimentally occluded the compound eyes and ocelli of virgin P. americana males, separately and in combination, and challenged them with different wind and odor environments in our laboratory wind tunnel. They were observed responding to: (1) still air and no odor, (2) wind and no odor, (3) a wind-borne point-source pheromone plume and (4) a wide pheromone plume in wind. If walking cockroaches require visual cues to control their steering with respect to their environment, we would expect their tracks to be less directed and more variable if they cannot see. Instead, we found few statistically significant differences among behaviors exhibited by intact control cockroaches or those with their eyes occluded, under any of our environmental conditions. Working towards our goal of a comprehensive understanding of chemo-orientation in insects, we then challenged flying and walking male moths to track pheromone plumes with and without visual feedback. Neither walking nor flying moths performed as well as walking cockroaches when there was no visual information available.

  20. The role of vision in odor-plume tracking by walking and flying insects

    PubMed Central

    Willis, Mark A.; Avondet, Jennifer L.; Zheng, Elizabeth

    2011-01-01

The walking paths of male cockroaches, Periplaneta americana, tracking point-source plumes of female pheromone often appear similar in structure to those observed from flying male moths. Flying moths use visual-flow-field feedback of their movements to control steering and speed over the ground and to detect the wind speed and direction while tracking plumes of odors. Walking insects are also known to use flow field cues to steer their trajectories. Can the upwind steering we observe in plume-tracking walking male cockroaches be explained by visual-flow-field feedback, as in flying moths? To answer this question, we experimentally occluded the compound eyes and ocelli of virgin P. americana males, separately and in combination, and challenged them with different wind and odor environments in our laboratory wind tunnel. They were observed responding to: (1) still air and no odor, (2) wind and no odor, (3) a wind-borne point-source pheromone plume and (4) a wide pheromone plume in wind. If walking cockroaches require visual cues to control their steering with respect to their environment, we would expect their tracks to be less directed and more variable if they cannot see. Instead, we found few statistically significant differences among behaviors exhibited by intact control cockroaches or those with their eyes occluded, under any of our environmental conditions. Working towards our goal of a comprehensive understanding of chemo-orientation in insects, we then challenged flying and walking male moths to track pheromone plumes with and without visual feedback. Neither walking nor flying moths performed as well as walking cockroaches when there was no visual information available. PMID:22116754

  1. Delineating the Neural Signatures of Tracking Spatial Position and Working Memory during Attentive Tracking

    PubMed Central

    Drew, Trafton; Horowitz, Todd S.; Wolfe, Jeremy M.; Vogel, Edward K.

    2015-01-01

In the attentive tracking task, observers track multiple objects as they move independently and unpredictably among visually identical distractors. Although a number of models of attentive tracking implicate visual working memory as the mechanism responsible for representing target locations, no study has ever directly compared the neural mechanisms of the two tasks. In the current set of experiments, we used electrophysiological recordings to delineate similarities and differences between the neural processing involved in working memory and attentive tracking. We found that the contralateral electrophysiological response was similarly sensitive to the number of attended items in both tasks, but that there was also a unique contralateral negativity related to the process of monitoring target position during tracking. This signal was absent during periods of tracking tasks when objects briefly stopped moving. These results provide evidence that, during attentive tracking, the process of tracking target locations elicits an electrophysiological response that is distinct and dissociable from neural measures of the number of items being attended. PMID:21228175

  2. A Fast MEANSHIFT Algorithm-Based Target Tracking System

    PubMed Central

    Sun, Jian

    2012-01-01

Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme is studied in this paper. A fast mean-shift-based target tracking scheme is designed and realized, which is robust to partial occlusion and changes in object appearance. Physical simulation shows that the image signal processing speed exceeds 50 frames/s. PMID:22969397
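    The core of any mean-shift tracker is an iterative shift of a search window toward the mode of a weight map (in practice, a histogram back-projection of the target's color model). The sketch below is a minimal, dependency-free illustration of that iteration, not the paper's implementation:

    ```python
    import numpy as np

    def mean_shift(weights, center, radius, iters=50, eps=0.5):
        """Iteratively move a circular window to the weighted centroid of
        the pixels it covers; converges to a local mode of the weight map."""
        ys, xs = np.mgrid[0:weights.shape[0], 0:weights.shape[1]]
        cy, cx = center
        for _ in range(iters):
            mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2
            w = weights * mask
            total = w.sum()
            if total == 0:
                break
            ny, nx = (ys * w).sum() / total, (xs * w).sum() / total
            done = np.hypot(ny - cy, nx - cx) < eps
            cy, cx = ny, nx
            if done:
                break
        return cy, cx

    # Toy back-projection: a Gaussian blob of weight centred at (30, 40).
    gy, gx = np.mgrid[0:64, 0:64]
    blob = np.exp(-((gy - 30) ** 2 + (gx - 40) ** 2) / 50.0)
    cy, cx = mean_shift(blob, center=(20, 20), radius=20)
    ```

    Starting from (20, 20), the window climbs to the blob's mode near (30, 40); in a real tracker this runs once per frame, seeded with the previous frame's result.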

  3. Semi-Supervised Tensor-Based Graph Embedding Learning and Its Application to Visual Discriminant Tracking.

    PubMed

    Hu, Weiming; Gao, Jin; Xing, Junliang; Zhang, Chao; Maybank, Stephen

    2017-01-01

An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a second-order tensor, which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices that are used to map the original tensor samples into the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning-based semi-supervised strategy to iteratively adjust the embedding space, into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.

  4. Magneto-optical nanoparticles for cyclic magnetomotive photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Arnal, Bastien; Yoon, Soon Joon; Li, Junwei; Gao, Xiaohu; O'Donnell, Matthew

    2018-05-01

Photoacoustic imaging is a highly promising tool to visualize molecular events with deep tissue penetration. Like most other modalities, however, its image contrast under in vivo conditions is far from optimal due to background signals from tissue. Using iron oxide-gold core-shell nanoparticles, we previously demonstrated that magnetomotive photoacoustic (mmPA) imaging can dramatically reduce the influence of background signals and produce high-contrast molecular images. Here we report two significant advances toward clinical translation of this technology. First, we introduce a new class of compact, uniform, magneto-optically coupled core-shell nanoparticles, prepared through localized copolymerization of polypyrrole (PPy) on an iron oxide nanoparticle surface. The resulting iron oxide-PPy nanoparticles solve the photo-instability and small-scale synthesis problems previously encountered with the gold coating approach, and extend the particles' large optical absorption coefficient beyond 1000 nm in wavelength. In parallel, we have developed a new generation of mmPA imaging featuring cyclic magnetic motion and ultrasound speckle tracking, with an image capture frame rate several hundred times faster than the photoacoustic speckle tracking method demonstrated previously. These advances enable robust elimination of artifacts caused by physiologic motion and the first in vivo application of mmPA technology for sensitive tumor imaging.

  5. An Autonomous GPS-Denied Unmanned Vehicle Platform Based on Binocular Vision for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.

    2018-04-01

Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work on designing and building an autonomous, vision-based, GPS-denied unmanned vehicle and developing ARFM (Adaptive Robust Feature Matching) based VO (Visual Odometry) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine, and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out, using outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has potential for future planetary exploration.
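    ARFM itself is the authors' contribution and is not reproduced here. As a generic stand-in for the robust-matching step in a VO pipeline, the classic nearest-neighbour ratio test keeps a stereo or frame-to-frame descriptor match only when its best distance is clearly smaller than the second best:

    ```python
    import numpy as np

    def ratio_test_match(desc_a, desc_b, ratio=0.8):
        """Match descriptors between two views; keep a match only if the
        best distance beats the second best by the given ratio (Lowe's
        ratio test). Returns (index_in_a, index_in_b) pairs."""
        matches = []
        for i, d in enumerate(desc_a):
            dists = np.linalg.norm(desc_b - d, axis=1)
            order = np.argsort(dists)
            best, second = order[0], order[1]
            if dists[best] < ratio * dists[second]:
                matches.append((i, int(best)))
        return matches

    # Toy descriptors: view B repeats view A's three descriptors with tiny
    # noise, plus one unrelated descriptor that should never be matched.
    rng = np.random.default_rng(1)
    desc_a = rng.normal(size=(3, 8))
    desc_b = np.vstack([desc_a + rng.normal(scale=0.01, size=desc_a.shape),
                        rng.normal(size=(1, 8))])
    matches = ratio_test_match(desc_a, desc_b)
    ```

    The surviving matches then feed triangulation and pose estimation; an adaptive scheme like ARFM would additionally tune the acceptance criteria to the scene.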

  6. Real-time visualization and analysis of airflow field by use of digital holography

    NASA Astrophysics Data System (ADS)

    Di, Jianglei; Wu, Bingjing; Chen, Xin; Liu, Junjiang; Wang, Jun; Zhao, Jianlin

    2013-04-01

The measurement and analysis of airflow fields is very important in fluid dynamics. For airflow, smoke particles can be added to visually observe turbulence phenomena using particle tracking technology, but the limited ability of smoke particles to follow high-speed airflow reduces the measurement accuracy. In recent years, with the advantages of non-contact, nondestructive, fast, and full-field measurement, digital holography has been widely applied in many fields, such as deformation and vibration analysis, particle characterization, and refractive index measurement. In this paper, we present a method to measure the airflow field by use of digital holography. A small wind tunnel model made of acrylic glass is built to control the velocity and direction of airflow. Samples of different shapes, such as an aircraft wing and a cylinder, are placed in the wind tunnel model to produce different forms of flow field. With a Mach-Zehnder interferometer setup, a series of digital holograms carrying the information of airflow field distributions in different states are recorded by a CCD camera, and the corresponding holographic images are numerically reconstructed from the holograms by computer. We can then conveniently obtain the velocity or pressure information of the airflow, deduced from the quantitative phase information of the holographic images, and visually display the airflow field and its evolution in the form of a movie. The theory and experimental results show that digital holography is a robust and feasible approach for real-time visualization and analysis of airflow fields.

  7. Attentive Tracking Disrupts Feature Binding in Visual Working Memory

    PubMed Central

    Fougnie, Daryl; Marois, René

    2009-01-01

    One of the most influential theories in visual cognition proposes that attention is necessary to bind different visual features into coherent object percepts (Treisman & Gelade, 1980). While considerable evidence supports a role for attention in perceptual feature binding, whether attention plays a similar function in visual working memory (VWM) remains controversial. To test the attentional requirements of VWM feature binding, here we gave participants an attention-demanding multiple object tracking task during the retention interval of a VWM task. Results show that the tracking task disrupted memory for color-shape conjunctions above and beyond any impairment to working memory for object features, and that this impairment was larger when the VWM stimuli were presented at different spatial locations. These results demonstrate that the role of visuospatial attention in feature binding is not unique to perception, but extends to the working memory of these perceptual representations as well. PMID:19609460

  8. Visual perception system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)

    2012-01-01

A robotic system includes a humanoid robot with robotic joints, each moveable using one or more actuators, and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.

  9. A robust H∞-tracking design for uncertain Takagi-Sugeno fuzzy systems with unknown premise variables using descriptor redundancy approach

    NASA Astrophysics Data System (ADS)

    Hassan Asemani, Mohammad; Johari Majd, Vahid

    2015-12-01

This paper addresses a robust H∞ fuzzy observer-based tracking design problem for uncertain Takagi-Sugeno fuzzy systems with external disturbances. To obtain a practical observer-based controller, the premise variables of the system are assumed not to be measurable in general, which leads to a more complex design process. The tracker is synthesised based on a fuzzy Lyapunov function approach and a non-parallel distributed compensation (non-PDC) scheme. Using the descriptor redundancy approach, the robust stability conditions are derived in the form of strict linear matrix inequalities (LMIs), even in the presence of simultaneous uncertainties in the system, input, and output matrices. Numerical simulations are provided to show the effectiveness of the proposed method.
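    As an illustrative (and generic, not the paper's exact) formulation, a robust H∞ tracking requirement bounds the weighted energy of the tracking error by the energy of the disturbance:

    ```latex
    % Generic H-infinity tracking performance index. Here e(t) = x(t) - x_r(t)
    % is the tracking error, w(t) the disturbance, Q > 0 a weighting matrix,
    % and gamma the prescribed attenuation level; the paper's actual LMI
    % conditions additionally encode its fuzzy Lyapunov function and non-PDC law.
    \int_0^\infty e^\top(t)\, Q\, e(t)\,\mathrm{d}t
    \;\le\;
    \gamma^2 \int_0^\infty w^\top(t)\, w(t)\,\mathrm{d}t
    ```

    The descriptor-redundancy step then recasts the closed-loop conditions so that this bound can be checked as a set of strict LMIs.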

  10. Penetrating transmission zeros in the design of robust servomechanism systems

    NASA Technical Reports Server (NTRS)

    Wang, S. H.; Davison, E. J.

    1981-01-01

    In the design of a robust servomechanism system, it is well known that the system cannot track a reference signal whose frequency coincides with the transmission zeros of the system. This paper proposes a new design method for overcoming this difficulty. The controller to be used employs a sampler and holding device with exponential decay. It is shown that the transmission zeros of the discretized system can be shifted by changing the rate of the exponential decay of the holding device. Thus, it is possible to design a robust controller for the discretized system to track any reference signal of given frequency, even if the given frequency coincides with the transmission zeros of the original continuous-time system.

  11. Automatic tracking of cells for video microscopy in patch clamp experiments

    PubMed Central

    2014-01-01

Background Visualisation of neurons labeled with fluorescent proteins or compounds generally requires exposure to intense light for a relatively long period of time, often leading to bleaching of the fluorescent probe and photodamage of the tissue. Here we created a technique to drastically shorten light exposure and improve the targeting of fluorescently labeled cells that is especially useful for patch-clamp recordings. We applied image tracking and mask overlay to reduce the time of fluorescence exposure and minimise mistakes when identifying neurons. Methods Neurons are first identified according to visual criteria (e.g. fluorescent protein expression, shape, viability, etc.), and a transmission microscopy image (Differential Interference Contrast, DIC, or Dodt contrast) containing the cell is used as a reference for the tracking algorithm. A fluorescence image can also be acquired later to be used as a mask (which can be overlaid on the target during live transmission video). As patch-clamp experiments require translating the microscope stage, we used pattern matching to track reference neurons in order to move the fluorescence mask to match the new position of the objective in relation to the sample. For the image processing we used the Open Source Computer Vision (OpenCV) library, including Speeded-Up Robust Features (SURF) for tracking cells. The dataset of images (n = 720) was analyzed under normal acquisition conditions and under the influence of noise (defocusing and brightness changes). Results We validated the method in dissociated neuronal cultures and fresh brain slices expressing Enhanced Yellow Fluorescent Protein (eYFP) or Tandem Dimer Tomato (tdTomato) proteins, which considerably decreased the exposure to fluorescence excitation, thereby minimising photodamage. We also show that the neuron tracking can be used in differential interference contrast or Dodt contrast microscopy.
Conclusion The techniques of digital image processing used in this work are an important addition to the set of microscopy tools used in modern electrophysiology, especially in experiments with neuron cultures and brain slices. PMID:24946774
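    SURF lives in OpenCV's non-free contrib module, so as a hedged, dependency-free stand-in for the pattern-matching step described above, the sketch below relocates a reference cell patch by normalized cross-correlation after a simulated stage translation (the image sizes and offsets here are invented for the example):

    ```python
    import numpy as np

    def track_patch(frame, template):
        """Locate `template` in `frame` by normalized cross-correlation and
        return the top-left corner of the best match; the resulting offset
        is what would be applied to the fluorescence mask overlay."""
        th, tw = template.shape
        t = (template - template.mean()) / (template.std() + 1e-9)
        best, best_score = (0, 0), -np.inf
        for y in range(frame.shape[0] - th + 1):
            for x in range(frame.shape[1] - tw + 1):
                win = frame[y:y + th, x:x + tw]
                w = (win - win.mean()) / (win.std() + 1e-9)
                score = (w * t).mean()
                if score > best_score:
                    best_score, best = score, (y, x)
        return best

    # Toy DIC image: the reference "cell" patch sits at row 5, column 7.
    rng = np.random.default_rng(2)
    frame = rng.normal(size=(40, 40))
    template = frame[5:15, 7:17].copy()
    ```

    A feature-based matcher such as SURF or ORB plays the same role but tolerates rotation and scale changes, which plain cross-correlation does not.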

  12. Automatic tracking of cells for video microscopy in patch clamp experiments.

    PubMed

    Peixoto, Helton M; Munguba, Hermany; Cruz, Rossana M S; Guerreiro, Ana M G; Leao, Richardson N

    2014-06-20

Visualisation of neurons labeled with fluorescent proteins or compounds generally requires exposure to intense light for a relatively long period of time, often leading to bleaching of the fluorescent probe and photodamage of the tissue. Here we created a technique to drastically shorten light exposure and improve the targeting of fluorescently labeled cells that is especially useful for patch-clamp recordings. We applied image tracking and mask overlay to reduce the time of fluorescence exposure and minimise mistakes when identifying neurons. Neurons are first identified according to visual criteria (e.g. fluorescent protein expression, shape, viability, etc.), and a transmission microscopy image (Differential Interference Contrast, DIC, or Dodt contrast) containing the cell is used as a reference for the tracking algorithm. A fluorescence image can also be acquired later to be used as a mask (which can be overlaid on the target during live transmission video). As patch-clamp experiments require translating the microscope stage, we used pattern matching to track reference neurons in order to move the fluorescence mask to match the new position of the objective in relation to the sample. For the image processing we used the Open Source Computer Vision (OpenCV) library, including Speeded-Up Robust Features (SURF) for tracking cells. The dataset of images (n = 720) was analyzed under normal acquisition conditions and under the influence of noise (defocusing and brightness changes). We validated the method in dissociated neuronal cultures and fresh brain slices expressing Enhanced Yellow Fluorescent Protein (eYFP) or Tandem Dimer Tomato (tdTomato) proteins, which considerably decreased the exposure to fluorescence excitation, thereby minimising photodamage. We also show that the neuron tracking can be used in differential interference contrast or Dodt contrast microscopy.
The techniques of digital image processing used in this work are an important addition to the set of microscopy tools used in modern electrophysiology, especially in experiments with neuron cultures and brain slices.

  13. Visual Processing of Faces in Individuals with Fragile X Syndrome: An Eye Tracking Study

    ERIC Educational Resources Information Center

    Farzin, Faraz; Rivera, Susan M.; Hessl, David

    2009-01-01

    Gaze avoidance is a hallmark behavioral feature of fragile X syndrome (FXS), but little is known about whether abnormalities in the visual processing of faces, including disrupted autonomic reactivity, may underlie this behavior. Eye tracking was used to record fixations and pupil diameter while adolescents and young adults with FXS and sex- and…

  14. The Influences of Static and Interactive Dynamic Facial Stimuli on Visual Strategies in Persons with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Several studies, using eye tracking methodology, suggest that different visual strategies in persons with autism spectrum conditions, compared with controls, are applied when viewing facial stimuli. Most eye tracking studies are, however, made in laboratory settings with either static (photos) or non-interactive dynamic stimuli, such as video…

  15. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

The transition of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Much work has already been done to enable this transition technically, but how the third dimension interacts with human viewers is not yet clear. It was previously found that any increased load on the visual system can create visual fatigue, as with prolonged TV watching, computer work, or video gaming. Watching S3D, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of a third dimension based on the characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content. This work shows the difference in the accumulation of visual fatigue, and in its assessment, for the two types of content. In order to perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and provided subjective evaluations. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye tracking were investigated with regard to their relation to visual fatigue.

  16. Visual speech influences speech perception immediately but not automatically.

    PubMed

    Mitterer, Holger; Reinisch, Eva

    2017-02-01

    Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.

  17. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system, that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high-speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using these attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game and, later, using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking its movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human, allowing the attention scheme to play the video game at higher speeds. Furthermore, in the application to the driving simulator, the attention scheme was able to control both the direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing for robotic attention processing.
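    The window-based motion detection and tracking described above can be sketched simply: compute the frame difference inside a local attention window and re-centre the window on the centroid of the changed pixels. This is an illustrative reconstruction of the idea, not the multi-window hardware's actual algorithm; window size and threshold are invented:

    ```python
    import numpy as np

    def update_window(prev, curr, center, half=8, thresh=30):
        """Re-centre a local attention window on detected motion: take the
        absolute frame difference inside the window and move the window to
        the centroid of pixels that changed more than `thresh`."""
        cy, cx = center
        y0, y1 = max(cy - half, 0), min(cy + half, prev.shape[0])
        x0, x1 = max(cx - half, 0), min(cx + half, prev.shape[1])
        diff = np.abs(curr[y0:y1, x0:x1].astype(int)
                      - prev[y0:y1, x0:x1].astype(int))
        ys, xs = np.nonzero(diff > thresh)
        if len(ys) == 0:
            return center                  # no motion: keep the window put
        return (y0 + int(ys.mean()), x0 + int(xs.mean()))

    # Toy 'pong ball': a bright 2x2 blob moves from (10, 10) to (13, 12).
    prev = np.zeros((32, 32), dtype=np.uint8)
    curr = np.zeros((32, 32), dtype=np.uint8)
    prev[10:12, 10:12] = 255
    curr[13:15, 12:14] = 255
    center = update_window(prev, curr, center=(10, 10))
    ```

    Because only the pixels inside the window are touched, this kind of local processing stays cheap enough to run at frame rate, which is the point of the multi-window approach.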

  18. Visualization and Tracking of Parallel CFD Simulations

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kremenetsky, Mark

    1995-01-01

    We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, is handled by CM/AVS. Partitioning of the visualization task between the CM-5 and the workstation can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.

  19. A Visual Cortical Network for Deriving Phonological Information from Intelligible Lip Movements.

    PubMed

    Hauswald, Anne; Lithari, Chrysa; Collignon, Olivier; Leonardelli, Elisa; Weisz, Nathan

    2018-05-07

    Successful lip-reading requires a mapping from visual to phonological information [1]. Recently, visual and motor cortices have been implicated in tracking lip movements (e.g., [2]). It remains unclear, however, whether visuo-phonological mapping occurs already at the level of the visual cortex, that is, whether this structure tracks the acoustic signal in a functionally relevant manner. To elucidate this, we investigated how the cortex tracks (i.e., entrains to) absent acoustic speech signals carried by silent lip movements. Crucially, we contrasted the entrainment to unheard forward (intelligible) and backward (unintelligible) acoustic speech. We observed that the visual cortex exhibited stronger entrainment to the unheard forward acoustic speech envelope than to the unheard backward acoustic speech envelope. Supporting the notion of a visuo-phonological mapping process, this forward-backward difference of occipital entrainment was not present for actually observed lip movements. Importantly, the respective occipital region received more top-down input, especially from left premotor, primary motor, and somatosensory regions and, to a lesser extent, also from posterior temporal cortex. Strikingly, across participants, the extent of top-down modulation of the visual cortex stemming from these regions partially correlated with the strength of entrainment to the absent acoustic forward speech envelope, but not to present forward lip movements. Our findings demonstrate that a distributed cortical network, including key dorsal stream auditory regions [3-5], influences how the visual cortex shows sensitivity to the intelligibility of speech while tracking silent lip movements. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. The Role of Visual Working Memory in Attentive Tracking of Unique Objects

    ERIC Educational Resources Information Center

    Makovski, Tal; Jiang, Yuhong V.

    2009-01-01

    When tracking moving objects in space, humans usually attend to the objects' spatial locations and update this information over time. To what extent do surface features assist attentive tracking? In this study we asked participants to track identical or uniquely colored objects. Tracking was enhanced when objects were unique in color. The benefit…

  1. Adaptive robust motion trajectory tracking control of pneumatic cylinders with LuGre model-based friction compensation

    NASA Astrophysics Data System (ADS)

    Meng, Deyuan; Tao, Guoliang; Liu, Hao; Zhu, Xiaocong

    2014-07-01

    Friction compensation is particularly important for motion trajectory tracking control of pneumatic cylinders at low speed movement. However, most of the existing model-based friction compensation schemes use simple classical models, which are not enough to address applications with high-accuracy position requirements. Furthermore, the friction force in the cylinder is time-varying, and there exist rather severe unmodelled dynamics and unknown disturbances in the pneumatic system. To deal with these problems effectively, an adaptive robust controller with LuGre model-based dynamic friction compensation is constructed. The proposed controller employs on-line recursive least squares estimation (RLSE) to reduce the extent of parametric uncertainties, and utilizes the sliding mode control method to attenuate the effects of parameter estimation errors, unmodelled dynamics and disturbances. In addition, in order to realize LuGre model-based friction compensation, a modified dual-observer structure for estimating the immeasurable friction internal state is developed. Therefore, a prescribed motion tracking transient performance and final tracking accuracy can be guaranteed. Since the system model uncertainties are unmatched, the recursive backstepping design technology is applied. In order to resolve the conflicts between the sliding mode control design and the adaptive control design, projection mapping is used to condition the RLSE algorithm so that the parameter estimates are kept within a known bounded convex set. Finally, the proposed controller is tested for tracking sinusoidal trajectories and a smooth square trajectory under different loads and sudden disturbances. The testing results demonstrate that the achievable performance of the proposed controller is excellent and much better than that reported in most comparable studies in the literature. In particular, when a 0.5 Hz sinusoidal trajectory is tracked, the maximum tracking error is 0.96 mm and the average tracking error is 0.45 mm.
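
    The projection-conditioned recursive least squares estimation that the abstract describes can be illustrated with a minimal numerical sketch. Box bounds stand in for the paper's general bounded convex set, and the forgetting factor and initial covariance below are arbitrary choices, not the paper's tuning:

```python
import numpy as np

def rls_projected(phis, ys, theta0, bounds, lam=0.99):
    """Recursive least squares with projection of the parameter estimates
    into a known bounded set (here a simple box), as used to condition RLSE.

    phis: regressor vectors, ys: measurements, bounds: (lo, hi) arrays.
    """
    n = len(theta0)
    theta = np.array(theta0, float)
    P = np.eye(n) * 1e3                      # large initial covariance
    lo, hi = bounds
    for phi, y in zip(phis, ys):
        phi = np.asarray(phi, float)
        k = P @ phi / (lam + phi @ P @ phi)  # gain
        theta = theta + k * (y - phi @ theta)
        theta = np.clip(theta, lo, hi)       # projection keeps estimate in set
        P = (P - np.outer(k, phi) @ P) / lam
    return theta
```

    The projection step is what lets the sliding-mode and adaptive designs coexist: the estimate can never wander outside the assumed bounded set, so the robust terms retain their guarantees.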

  2. An efficient sequential approach to tracking multiple objects through crowds for real-time intelligent CCTV systems.

    PubMed

    Li, Liyuan; Huang, Weimin; Gu, Irene Yu-Hua; Luo, Ruijiang; Tian, Qi

    2008-10-01

    Efficiency and robustness are the two most important issues for multiobject tracking algorithms in real-time intelligent video surveillance systems. We propose a novel 2.5-D approach to real-time multiobject tracking in crowds, which is formulated as a maximum a posteriori estimation problem and is approximated through an assignment step and a location step. Observing that the occluding object is usually less affected by the occluded objects, sequential solutions for the assignment and the location are derived. A novel dominant color histogram (DCH) is proposed as an efficient object model. The DCH can be regarded as a generalized color histogram, where dominant colors are selected based on a given distance measure. Compared with conventional color histograms, the DCH requires only a few color components (31 on average). Furthermore, our theoretical analysis and evaluation on real data have shown that DCHs are robust to illumination changes. Using the DCH, efficient implementations of the sequential solutions for the assignment and location steps are proposed. The assignment step includes the estimation of the depth order for the objects in a dispersing group, one-by-one assignment, and feature exclusion from the group representation. The location step includes the depth-order estimation for the objects in a new group, the two-phase mean-shift location, and the exclusion of tracked objects from the new position in the group. Multiobject tracking results and evaluation from public data sets are presented. Experiments on image sequences captured from crowded public environments have shown good tracking results, where about 90% of the objects have been successfully tracked with the correct identification numbers by the proposed method. Our results and evaluation indicate that the method is efficient and robust for tracking multiple objects (≥3) in complex occlusion for real-world surveillance scenarios.
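
    A minimal sketch of the dominant-color-histogram idea: quantize colors, then keep only the bins that cover most of the pixel mass. The bin count and coverage threshold here are invented placeholders, and the paper's distance-measure-based selection is richer than this greedy version:

```python
import numpy as np

def dominant_color_histogram(pixels, bins=16, coverage=0.9):
    """Build a compact histogram keeping only the dominant color bins.

    pixels: (N, 3) array-like of RGB values in [0, 256). Returns a dict
    mapping quantized bin index -> normalized weight, covering `coverage`
    of the pixels with as few bins as possible.
    """
    q = (np.asarray(pixels) // (256 // bins)).astype(int)       # quantize
    codes = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]    # bin index
    counts = np.bincount(codes, minlength=bins ** 3) / len(codes)
    order = np.argsort(counts)[::-1]                            # largest first
    dch, total = {}, 0.0
    for b in order:
        if total >= coverage or counts[b] == 0:
            break
        dch[int(b)] = float(counts[b])
        total += counts[b]
    return dch
```

    On a mostly single-colored target this collapses to a handful of entries, which is the point: matching such sparse histograms is far cheaper than matching full color histograms.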

  3. Nanoscale measurements of proton tracks using fluorescent nuclear track detectors

    PubMed Central

    Sawakuchi, Gabriel O.; Ferreira, Felisberto A.; McFadden, Conor H.; Hallacy, Timothy M.; Granville, Dal A.; Sahoo, Narayan; Akselrod, Mark S.

    2016-01-01

    Purpose: The authors describe a method in which fluorescent nuclear track detectors (FNTDs), novel track detectors with nanoscale spatial resolution, are used to determine the linear energy transfer (LET) of individual proton tracks from proton therapy beams by allowing visualization and 3D reconstruction of such tracks. Methods: FNTDs were exposed to proton therapy beams with nominal energies ranging from 100 to 250 MeV. Proton track images were then recorded by confocal microscopy of the FNTDs. Proton tracks in the FNTD images were fit by using a Gaussian function to extract fluorescence amplitudes. Histograms of fluorescence amplitudes were then compared with LET spectra. Results: The authors successfully used FNTDs to register individual proton tracks from high-energy proton therapy beams, allowing reconstruction of 3D images of proton tracks along with delta rays. The track amplitudes from FNTDs could be used to parameterize LET spectra, allowing the LET of individual proton tracks from therapeutic proton beams to be determined. Conclusions: FNTDs can be used to directly visualize proton tracks and their delta rays at the nanoscale level. Because the track intensities in the FNTDs correlate with LET, they could be used further to measure the LET of individual proton tracks. This method may be useful for measuring nanoscale radiation quantities and for measuring the LET of individual proton tracks in radiation biology experiments. PMID:27147359
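
    The Gaussian fitting step for extracting fluorescence amplitudes can be sketched for a 1-D intensity profile across a track. This assumes SciPy's `curve_fit` and a synthetic profile rather than the authors' confocal data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) + offset

def track_amplitude(profile):
    """Fit a Gaussian to a 1-D intensity profile across a track and return
    the fitted fluorescence amplitude (the quantity correlated with LET)."""
    x = np.arange(len(profile))
    p0 = [profile.max() - profile.min(), float(np.argmax(profile)),
          2.0, profile.min()]                      # crude initial guess
    popt, _ = curve_fit(gaussian, x, profile, p0=p0)
    return popt[0]
```

    A histogram of such amplitudes over many tracks is what the authors compare against LET spectra.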

  4. Nanoscale measurements of proton tracks using fluorescent nuclear track detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawakuchi, Gabriel O., E-mail: gsawakuchi@mdanderson.org; Sahoo, Narayan; Ferreira, Felisberto A.

    Purpose: The authors describe a method in which fluorescent nuclear track detectors (FNTDs), novel track detectors with nanoscale spatial resolution, are used to determine the linear energy transfer (LET) of individual proton tracks from proton therapy beams by allowing visualization and 3D reconstruction of such tracks. Methods: FNTDs were exposed to proton therapy beams with nominal energies ranging from 100 to 250 MeV. Proton track images were then recorded by confocal microscopy of the FNTDs. Proton tracks in the FNTD images were fit by using a Gaussian function to extract fluorescence amplitudes. Histograms of fluorescence amplitudes were then compared with LET spectra. Results: The authors successfully used FNTDs to register individual proton tracks from high-energy proton therapy beams, allowing reconstruction of 3D images of proton tracks along with delta rays. The track amplitudes from FNTDs could be used to parameterize LET spectra, allowing the LET of individual proton tracks from therapeutic proton beams to be determined. Conclusions: FNTDs can be used to directly visualize proton tracks and their delta rays at the nanoscale level. Because the track intensities in the FNTDs correlate with LET, they could be used further to measure the LET of individual proton tracks. This method may be useful for measuring nanoscale radiation quantities and for measuring the LET of individual proton tracks in radiation biology experiments.

  5. Adaptive Flight Control Design with Optimal Control Modification on an F-18 Aircraft Model

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Nguyen, Nhan T.; Griffin, Brian J.

    2010-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation is referred to as the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly; however, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient robustness. A damping term (ν) is added in the modification to increase damping as needed. Simulations were conducted on a damaged F-18 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) with both the standard baseline dynamic inversion controller and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model.
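
    A scalar toy version of model-reference adaptive control with a damping-style modification term on the adaptive law conveys the idea: a leakage proportional to the regressor energy and the estimate damps fast adaptation. The plant, gains, and the particular leakage form below are invented for illustration and do not reproduce the paper's exact optimal control modification:

```python
def simulate(T=20.0, dt=1e-3, gamma=10.0, nu=0.01):
    """Scalar MRAC with a damping (leakage) modification on the adaptive law.
    All numbers here are illustrative placeholders, not the paper's design."""
    a, b = 1.0, 1.0              # plant: dx = a*x + b*(u + th_true*x)
    am, bm = -2.0, 2.0           # stable reference model
    th_true = 2.0                # matched uncertainty, unknown to controller
    p = 0.25                     # Lyapunov solution of 2*am*p = -1
    kx, kr = (am - a) / b, bm / b
    x = xm = th = 0.0
    r = 1.0                      # constant reference command
    for _ in range(int(T / dt)):
        u = kx * x + kr * r - th * x       # nominal + adaptive cancellation
        e = x - xm
        # gradient adaptation plus a damping term scaled by nu:
        # large gamma adapts fast, nu-term keeps the estimate from ringing
        dth = gamma * (b * p * x * e - nu * b * p * x * x * th)
        x += (a * x + b * (u + th_true * x)) * dt
        xm += (am * xm + bm * r) * dt
        th += dth * dt
    return x - xm, th
```

    With the damping term active, a large adaptive gain still yields a small steady tracking error and a bounded, non-oscillatory estimate, which is the qualitative behavior the abstract claims.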

  6. Control algorithms for aerobraking in the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Ward, Donald T.; Shipley, Buford W., Jr.

    1991-01-01

    The Analytic Predictor Corrector (APC) and Energy Controller (EC) atmospheric guidance concepts were adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. Changes are made to the APC to improve its robustness to density variations. These changes include adoption of a new exit phase algorithm, an adaptive transition velocity to initiate the exit phase, refinement of the reference dynamic pressure calculation, and two improved density estimation techniques. The modified controller with the hybrid density estimation technique is called the Mars Hybrid Predictor Corrector (MHPC), while the modified controller with a polynomial density estimator is called the Mars Predictor Corrector (MPC). A Lyapunov Steepest Descent Controller (LSDC) is adapted to control the vehicle. The LSDC lacked robustness, so a Lyapunov tracking exit phase algorithm is developed to guide the vehicle along a reference trajectory. This algorithm, when using the hybrid density estimation technique to define the reference path, is called the Lyapunov Hybrid Tracking Controller (LHTC). With the polynomial density estimator used to define the reference trajectory, the algorithm is called the Lyapunov Tracking Controller (LTC). These four new controllers are tested using a six-degree-of-freedom computer simulation to evaluate their robustness. The MHPC, MPC, LHTC, and LTC show dramatic improvements in robustness over the APC and EC.

  7. A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences

    PubMed Central

    Zhu, Youding; Fujimura, Kikuo

    2010-01-01

    This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty to recover from tracking failure. Human body poses could be estimated through model fitting using dense correspondences between depth data and an articulated human model (local optimization method). Although it usually achieves a high accuracy due to dense correspondences, it may fail to recover from tracking failure. Alternately, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this method (key-point based method) is robust and recovers from tracking failure, its pose estimation accuracy depends solely on image-based localization accuracy of key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by methods based on key-points and local optimization. Experimental results are shown and performance comparison is presented to demonstrate the effectiveness of the proposed approach. PMID:22399933
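
    The core Bayesian operation behind integrating the key-point and local-optimization estimators can be sketched as precision-weighted Gaussian fusion. This is a one-dimensional simplification of the paper's framework, not its actual inference machinery:

```python
def fuse_estimates(mu1, var1, mu2, var2):
    """Precision-weighted fusion of two independent Gaussian estimates of
    the same quantity (e.g., a joint coordinate from a key-point detector
    and from model fitting). Lower-variance input dominates the result."""
    w1, w2 = 1.0 / var1, 1.0 / var2      # precisions
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)                # fused estimate is more certain
    return mu, var
```

    The appeal of such a combination is exactly the trade-off the abstract describes: when model fitting drifts, its variance grows and the robust key-point estimate takes over, and vice versa.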

  8. Robust online tracking via adaptive samples selection with saliency detection

    NASA Astrophysics Data System (ADS)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

    Online tracking has been shown to be successful in tracking previously unknown objects. However, two important factors lead to the drift problem in online tracking: one is how to select exactly labeled samples even when the target locations are inaccurate, and the other is how to handle confusors which have features similar to the target. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To deal with the problem of degrading the classifiers using misaligned samples, we introduce a saliency detection method into our tracking problem. Saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as the negative samples, we propose a reasonable selection criterion, in which both the saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.

  9. Robust Feedback Zoom Tracking for Digital Video Surveillance

    PubMed Central

    Zou, Tengyue; Tang, Xiaoqi; Song, Bao; Wang, Jin; Chen, Jihong

    2012-01-01

    Zoom tracking is an important function in video surveillance, particularly in traffic management and security monitoring. It involves keeping an object of interest in focus during the zoom operation. Zoom tracking is typically achieved by moving the zoom and focus motors in lenses following the so-called “trace curve”, which shows the in-focus motor positions versus the zoom motor positions for a specific object distance. The main task of a zoom tracking approach is to accurately estimate the trace curve for the specified object. Because a proportional integral derivative (PID) controller has historically been considered the best controller in the absence of knowledge of the underlying process, and because of its high-quality performance in motor control, in this paper we propose a novel feedback zoom tracking (FZT) approach based on geometric trace curve estimation and a PID feedback controller. The performance of this approach is compared with existing zoom tracking methods in digital video surveillance. The real-time implementation results obtained on an actual digital video platform indicate that the developed FZT approach not only solves the traditional one-to-many mapping problem without pre-training but also improves the robustness of tracking moving or switching objects, which is the key challenge in video surveillance. PMID:22969388
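
    A minimal discrete PID controller of the kind used to servo the focus motor toward the trace-curve prediction can be sketched as follows; gains and time step would be tuned per lens and are placeholders here:

```python
class PID:
    """Minimal discrete PID controller: proportional, integral, and
    derivative action on the setpoint error."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                 # accumulate error
        deriv = (err - self.prev_err) / self.dt        # error rate
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

    In the FZT setting, the setpoint would come from the estimated trace curve at the current zoom position and the measurement from the focus motor encoder, closing the loop that keeps the object in focus during zooming.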

  10. The effects of tDCS upon sustained visual attention are dependent on cognitive load.

    PubMed

    Roe, James M; Nesheim, Mathias; Mathiesen, Nina C; Moberget, Torgeir; Alnæs, Dag; Sneve, Markus H

    2016-01-08

    Transcranial Direct Current Stimulation (tDCS) modulates the excitability of neuronal responses and consequently can affect performance on a variety of cognitive tasks. However, the interaction between cognitive load and the effects of tDCS is currently not well understood. We recorded the performance accuracy of participants on a bilateral multiple object tracking task while they underwent bilateral stimulation assumed to enhance (anodal) and decrease (cathodal) neuronal excitability. Stimulation was applied to the posterior parietal cortex (PPC), a region inferred to be at the centre of an attentional tracking network that shows load-dependent activation. Thirty-four participants underwent three separate stimulation conditions across three days. Each subject received (1) left cathodal / right anodal PPC tDCS, (2) left anodal / right cathodal PPC tDCS, and (3) sham tDCS. The number of targets-to-be-tracked was also manipulated, giving a low (one target per visual field), medium (two targets per visual field) or high (three targets per visual field) tracking load condition. It was found that tracking performance at high attentional loads was significantly reduced in both stimulation conditions relative to sham, and this was apparent in both visual fields, regardless of the direction of polarity upon the brain's hemispheres. We interpret this as an interaction between cognitive load and tDCS, and suggest that tDCS may degrade attentional performance when cognitive networks become overtaxed and unable to compensate as a result. Systematically varying cognitive load may therefore be a fruitful direction to elucidate the effects of tDCS upon cognitive functions. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Perceptual training yields rapid improvements in visually impaired youth.

    PubMed

    Nyquist, Jeffrey B; Lappin, Joseph S; Zhang, Ruyuan; Tadin, Duje

    2016-11-30

    Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in the absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving the peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be underutilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggests that the well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events.

  12. UmUTracker: A versatile MATLAB program for automated particle tracking of 2D light microscopy or 3D digital holography data

    NASA Astrophysics Data System (ADS)

    Zhang, Hanqing; Stangner, Tim; Wiklund, Krister; Rodriguez, Alvaro; Andersson, Magnus

    2017-10-01

    We present a versatile and fast MATLAB program (UmUTracker) that automatically detects and tracks particles by analyzing video sequences acquired by either light microscopy or digital in-line holographic microscopy. Our program detects the 2D lateral positions of particles with an algorithm based on the isosceles triangle transform, and reconstructs their 3D axial positions by a fast implementation of the Rayleigh-Sommerfeld model using a radial intensity profile. To validate the accuracy and performance of our program, we first track the 2D position of polystyrene particles using bright field and digital holographic microscopy. Second, we determine the 3D particle position by analyzing synthetic and experimentally acquired holograms. Finally, to highlight the full program features, we profile the microfluidic flow in a 100 μm high flow chamber. This result agrees with computational fluid dynamic simulations. On a regular desktop computer UmUTracker can detect, analyze, and track multiple particles at 5 frames per second for a template size of 201 × 201 in a 1024 × 1024 image. To enhance usability and to make it easy to implement new functions, we used object-oriented programming. UmUTracker is suitable for studies related to particle dynamics, cell localization, colloids, and microfluidic flow measurement. Program files doi: http://dx.doi.org/10.17632/fkprs4s6xp.1. Licensing provisions: Creative Commons Attribution 4.0 (CC BY 4.0). Programming language: MATLAB. Nature of problem: 3D multi-particle tracking is a common technique in physics, chemistry and biology. However, in terms of accuracy, reliable particle tracking is a challenging task, since results depend on sample illumination, particle overlap, motion blur and noise from recording sensors. Additionally, computational performance is also an issue if, for example, a computationally expensive process is executed, such as axial particle position reconstruction from digital holographic microscopy data.
    Versatile, robust tracking programs that handle these concerns and provide powerful post-processing options are significantly limited. Solution method: UmUTracker is a multi-functional tool to extract particle positions from long video sequences acquired with either light microscopy or digital holographic microscopy. The program provides an easy-to-use graphical user interface (GUI) for both tracking and post-processing that does not require any programming skills to analyze data from particle tracking experiments. UmUTracker first conducts automatic 2D particle detection, even under noisy conditions, using a novel circle detector based on the isosceles triangle sampling technique with a multi-scale strategy. To reduce the computational load for 3D tracking, it uses an efficient implementation of the Rayleigh-Sommerfeld light propagation model. To analyze and visualize the data, an efficient data analysis step, which can for example show 4D flow visualization using 3D trajectories, is included. Additionally, UmUTracker is easy to modify with user-customized modules due to its object-oriented programming style. Additional comments: The program is obtainable from https://sourceforge.net/projects/umutracker/
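
    The Rayleigh-Sommerfeld propagation at the heart of the axial reconstruction can be sketched in its angular-spectrum form. This is written in Python rather than the program's MATLAB, with illustrative parameters, and is a generic textbook formulation rather than UmUTracker's optimized implementation:

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx):
    """Propagate a 2-D complex field by distance z using the
    angular-spectrum form of the Rayleigh-Sommerfeld diffraction integral.

    field: square complex array; dx: pixel pitch in the same units as z.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                     # spatial frequencies
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent part zeroed
    H = np.exp(1j * kz * z)                          # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

    Scanning z and looking for the sharpest reconstruction along each particle's radial intensity profile is, in outline, how a hologram yields the particle's axial position.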

  13. Expertise Differences in the Comprehension of Visualizations: A Meta-Analysis of Eye-Tracking Research in Professional Domains

    ERIC Educational Resources Information Center

    Gegenfurtner, Andreas; Lehtinen, Erno; Saljo, Roger

    2011-01-01

    This meta-analysis integrates 296 effect sizes reported in eye-tracking research on expertise differences in the comprehension of visualizations. Three theories were evaluated: Ericsson and Kintsch's ("Psychol Rev" 102:211-245, 1995) theory of long-term working memory, Haider and Frensch's ("J Exp Psychol Learn Mem Cognit" 25:172-190, 1999)…

  14. TrAVis to Enhance Online Tutoring and Learning Activities: Real-Time Visualization of Students Tracking Data

    ERIC Educational Resources Information Center

    May, Madeth; George, Sebastien; Prevot, Patrick

    2011-01-01

    Purpose: This paper presents a part of our research work that places an emphasis on Tracking Data Analysis and Visualization (TrAVis) tools, a web-based system designed to enhance online tutoring and learning activities supported by computer-mediated communication (CMC) tools. TrAVis is particularly dedicated to assisting both tutors and students…

  15. School Type Differences in Attainment of Developmental Goals in Students with Visual Impairment and Sighted Peers

    ERIC Educational Resources Information Center

    Pfeiffer, Jens P.; Pinquart, Martin; Munchow, Hannes

    2012-01-01

    The present study analyzed whether the perceived attainment of developmental goals differs by school type and between adolescents with visual impairments and sighted peers. We created a matched-pair design with 98 German students from the middle school track and 98 from the highest school track; half of the members of each group were visually…

  16. High precision tracking of a piezoelectric nano-manipulator with parameterized hysteresis compensation

    NASA Astrophysics Data System (ADS)

    Yan, Peng; Zhang, Yangming

    2018-06-01

    High performance scanning of nano-manipulators is widely deployed in various precision engineering applications such as SPM (scanning probe microscopy), where trajectory tracking of sophisticated reference signals is a challenging control problem. The situation is further complicated when the rate-dependent hysteresis of the piezoelectric actuators and the stress-stiffening-induced nonlinear stiffness of the flexure mechanism are considered. In this paper, a novel control framework is proposed to achieve high precision tracking of a piezoelectric nano-manipulator subjected to hysteresis and stiffness nonlinearities. An adaptive parameterized rate-dependent Prandtl-Ishlinskii model is constructed, and the corresponding adaptive inverse model based online compensation is derived. Meanwhile, a robust adaptive control architecture is further introduced to improve the tracking accuracy and robustness of the compensated system, where the parametric uncertainties of the nonlinear dynamics can be well eliminated by on-line estimation. Comparative experimental studies of the proposed control algorithm are conducted on a PZT-actuated nano-manipulating stage, where hysteresis modeling accuracy and excellent tracking performance are demonstrated in real-time implementations, with significant improvement over existing results.
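
    The Prandtl-Ishlinskii model is a weighted superposition of backlash (play) operators. A minimal rate-independent discrete sketch is below; the paper's model is rate-dependent and adaptively parameterized, which this deliberately omits:

```python
import numpy as np

def prandtl_ishlinskii(u, radii, weights):
    """Discrete Prandtl-Ishlinskii hysteresis model: a weighted sum of
    play operators with dead-band thresholds `radii`.

    u: input sequence; radii, weights: arrays of equal length.
    """
    states = np.zeros(len(radii))
    out = []
    for v in u:
        # play operator update: each state follows the input but only
        # once the input leaves the state's dead band of half-width r
        states = np.clip(states, v - radii, v + radii)
        out.append(float(np.dot(weights, states)))
    return out
```

    Note the memory effect in the test below: after the input rises to 2 and returns to 0, the output stays at 1 instead of retracing its path, which is exactly the hysteresis loop the inverse compensator must cancel.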

  17. New robust algorithm for tracking cells in videos of Drosophila morphogenesis based on finding an ideal path in segmented spatio-temporal cellular structures.

    PubMed

    Bellaïche, Yohanns; Bosveld, Floris; Graner, François; Mikula, Karol; Remesíková, Mariana; Smísek, Michal

    2011-01-01

    In this paper, we present a novel algorithm for tracking cells in a time-lapse confocal microscopy movie of a Drosophila epithelial tissue during pupal morphogenesis. We consider a 2D + time video as a 3D static image, where frames are stacked atop each other, and using a spatio-temporal segmentation algorithm we obtain information about spatio-temporal 3D tubes representing the evolutions of cells. The main idea for tracking is the use of two distance functions: the first computed from the cells in the initial frame, and the second from the segmented boundaries. We track the cells backwards in time. The first distance function attracts the subsequently constructed cell trajectories to the cells in the initial frame, and the second forces them to be close to the centerlines of the segmented tubular structures. This makes our tracking algorithm robust against noise and missing spatio-temporal boundaries. The approach can be generalized to 3D + time video analysis, where the spatio-temporal tubes are 4D objects.
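
    One backward-tracking step under the two-distance-function idea can be sketched as a greedy move that minimizes a weighted sum of the two maps. The 8-neighborhood, the weight, and the toy grid are invented for illustration; the paper constructs trajectories by a more principled path-finding through the segmented tubes:

```python
import numpy as np

def track_step(pos, d_init, d_bound, w=0.5):
    """One backward-tracking step: move to the 8-neighbor minimizing a
    weighted sum of two distance maps (distance to the initial-frame cells
    and distance to the segmented boundaries/centerlines)."""
    r, c = pos
    best, best_pos = np.inf, pos
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < d_init.shape[0] and 0 <= cc < d_init.shape[1]:
                cost = w * d_init[rr, cc] + (1 - w) * d_bound[rr, cc]
                if cost < best:
                    best, best_pos = cost, (rr, cc)
    return best_pos
```

    Iterating such steps pulls a trajectory simultaneously toward the initial-frame cell and along the tube centerline, which is what makes the scheme tolerant of locally missing boundaries.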

  18. Active eye-tracking improves LASIK results.

    PubMed

    Lee, Yuan-Chieh

    2007-06-01

    To study the advantage of active eye-tracking for photorefractive surgery. In a prospective, double-masked study, LASIK for myopia and myopic astigmatism was performed in 50 patients using the ALLEGRETTO WAVE version 1007. All patients received LASIK with full comprehension of the importance of fixation during the procedure. All surgical procedures were performed by a single surgeon. The eye-tracker was turned off in one group (n = 25) and kept on in the other group (n = 25). Preoperatively and 3 months postoperatively, patients underwent a standard ophthalmic examination, which included corneal topography. Of the patients treated with the eye-tracker off, all had uncorrected visual acuity (UCVA) of ≥20/40 and 64% had ≥20/20. Compared with the patients treated with the eye-tracker on, they had higher residual cylindrical astigmatism (P < .05). Those treated with the eye-tracker on achieved better UCVA and best spectacle-corrected visual acuity (P < .05). Spherical error and potential visual acuity (TMS-II) were not significantly different between the groups. The flying-spot system can achieve a fair result without active eye-tracking, but active eye-tracking helps improve the visual outcome and reduces postoperative cylindrical astigmatism.

  19. A Novel Loss Recovery and Tracking Scheme for Maneuvering Target in Hybrid WSNs.

    PubMed

    Qian, Hanwang; Fu, Pengcheng; Li, Baoqing; Liu, Jianpo; Yuan, Xiaobing

    2018-01-25

    Tracking a mobile target, which aims to monitor the intrusion of a specific target in a timely manner, is one of the most prominent applications of wireless sensor networks (WSNs). Traditional tracking methods in WSNs based only on static sensor nodes (SNs) have several critical problems. For example, to avoid losing the mobile target, many SNs must be active to track the target in all possible directions, resulting in excessive energy consumption. Additionally, when entering coverage holes in the monitoring area, the mobile target may be lost and its state unknown during that period. To tackle these problems, in this paper a few mobile sensor nodes (MNs), with their stronger capabilities and less constrained energy, are introduced to cooperate with the SNs to form a hybrid WSN. We then propose an effective target tracking scheme for hybrid WSNs that dynamically schedules the MNs and SNs. Moreover, a novel loss recovery mechanism is proposed to find the lost target and resume tracking with fewer SNs awakened. Furthermore, to improve the robustness and accuracy of the recovery mechanism, an adaptive unscented Kalman filter (AUKF) algorithm is introduced to dynamically adjust the process noise covariance. Simulation results demonstrate that our tracking scheme for a maneuvering target in hybrid WSNs not only tracks the target effectively, even if the target is lost, but also maintains excellent accuracy and robustness with fewer activated nodes.
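
    The adaptive step, adjusting the process noise covariance from the innovations, can be illustrated with a deliberately simplified scalar Kalman filter (a stand-in for the paper's adaptive *unscented* filter; the rescaling rule, gains, and maneuver scenario below are all illustrative assumptions, not the authors' formulas):

```python
import numpy as np

def adaptive_kf_1d(zs, q0=1e-3, r=0.5, alpha=0.3):
    """1-D constant-position Kalman filter with innovation-based
    adaptation of the process noise Q: when the squared innovation
    exceeds what the predicted covariance explains, Q is inflated so
    the filter re-converges quickly after a maneuver."""
    x, p, q = zs[0], 1.0, q0
    estimates = []
    for z in zs[1:]:
        # predict
        p = p + q
        # update
        k = p / (p + r)
        innov = z - x
        x = x + k * innov
        p = (1 - k) * p
        # adapt: move Q toward the innovation power not explained by P + R
        q = (1 - alpha) * q + alpha * max(innov**2 - (p + r), 0.0)
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(50), np.full(50, 5.0)])  # sudden maneuver
zs = truth + rng.normal(0, 0.5, truth.size)
est = adaptive_kf_1d(zs)
```

    With a fixed small Q the filter would lag badly after the jump; inflating Q from the innovations lets it re-lock within a few steps, which is the behaviour the AUKF exploits during loss recovery.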

  20. A Novel Loss Recovery and Tracking Scheme for Maneuvering Target in Hybrid WSNs

    PubMed Central

    Liu, Jianpo; Yuan, Xiaobing

    2018-01-01

    Tracking a mobile target, which aims to monitor the intrusion of a specific target in a timely manner, is one of the most prominent applications of wireless sensor networks (WSNs). Traditional tracking methods in WSNs based only on static sensor nodes (SNs) have several critical problems. For example, to avoid losing the mobile target, many SNs must be active to track the target in all possible directions, resulting in excessive energy consumption. Additionally, when entering coverage holes in the monitoring area, the mobile target may be lost and its state unknown during that period. To tackle these problems, in this paper a few mobile sensor nodes (MNs), with their stronger capabilities and less constrained energy, are introduced to cooperate with the SNs to form a hybrid WSN. We then propose an effective target tracking scheme for hybrid WSNs that dynamically schedules the MNs and SNs. Moreover, a novel loss recovery mechanism is proposed to find the lost target and resume tracking with fewer SNs awakened. Furthermore, to improve the robustness and accuracy of the recovery mechanism, an adaptive unscented Kalman filter (AUKF) algorithm is introduced to dynamically adjust the process noise covariance. Simulation results demonstrate that our tracking scheme for a maneuvering target in hybrid WSNs not only tracks the target effectively, even if the target is lost, but also maintains excellent accuracy and robustness with fewer activated nodes. PMID:29370103

  1. STAR: an integrated solution to management and visualization of sequencing data.

    PubMed

    Wang, Tao; Liu, Jie; Shen, Li; Tonti-Filippini, Julian; Zhu, Yun; Jia, Haiyang; Lister, Ryan; Whitaker, John W; Ecker, Joseph R; Millar, A Harvey; Ren, Bing; Wang, Wei

    2013-12-15

    Easy visualization of complex data features is a necessary step in studies of next-generation sequencing (NGS) data. We developed STAR, an integrated web application that enables online management, visualization and track-based analysis of NGS data. STAR is a multilayer web service system. On the client side, STAR leverages JavaScript, HTML5 Canvas and asynchronous communications to deliver a smoothly scrolling desktop-like graphical user interface with a suite of in-browser analysis tools that range from simple track configuration controls to sophisticated feature detection within datasets. On the server side, STAR supports private session state retention via an account management system and provides data management modules that enable collection, visualization and analysis of third-party sequencing data from the public domain, with thousands of tracks hosted to date. Overall, STAR represents a next-generation data exploration solution matched to the requirements of NGS data, enabling both intuitive visualization and dynamic analysis of data. The STAR browser system is freely available on the web at http://wanglab.ucsd.edu/star/browser and https://github.com/angell1117/STAR-genome-browser.

  2. Internal Model-Based Robust Tracking Control Design for the MEMS Electromagnetic Micromirror.

    PubMed

    Tan, Jiazheng; Sun, Weijie; Yeow, John T W

    2017-05-26

    The micromirror based on micro-electro-mechanical systems (MEMS) technology is widely employed in different areas, such as scanning, imaging and optical switching. This paper studies the MEMS electromagnetic micromirror for scanning or imaging applications. In these application scenarios, the micromirror is required to track a command sinusoidal signal, which can theoretically be cast as an output regulation problem. In this paper, based on the internal model principle, the output regulation problem is solved by designing a robust controller that forces the micromirror to track the command signal accurately. The proposed controller relies little on the accuracy of the model. Further, the proposed controller is implemented, and its effectiveness is examined by experiments. The experimental results demonstrate that the performance of the proposed controller is satisfactory.
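
    The internal model principle invoked here can be made concrete for a sinusoidal command (a textbook formulation; the paper's specific controller structure and gains are not given in the abstract): the reference is generated by a harmonic exosystem, so embedding a copy of that generator in the loop drives the tracking error to zero for any amplitude and phase.

```latex
% Exosystem generating the command r(t) = A\sin(\omega t + \varphi):
\ddot{r}(t) + \omega^{2}\, r(t) = 0
% Internal-model term embedding the generator poles at \pm j\omega
% (k_1, k_2 are stabilizing gains, illustrative here):
C_{\mathrm{im}}(s) = \frac{k_{1}\, s + k_{2}}{s^{2} + \omega^{2}}
% With the closed loop stabilized, the tracking error satisfies
e(t) = r(t) - y(t) \;\to\; 0 \quad \text{for every } A,\ \varphi .
```

    This is why the controller "relies little on the accuracy of the model": asymptotic tracking follows from the embedded generator rather than from exact plant knowledge.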

  3. Internal Model-Based Robust Tracking Control Design for the MEMS Electromagnetic Micromirror

    PubMed Central

    Tan, Jiazheng; Sun, Weijie; Yeow, John T. W.

    2017-01-01

    The micromirror based on micro-electro-mechanical systems (MEMS) technology is widely employed in different areas, such as scanning, imaging and optical switching. This paper studies the MEMS electromagnetic micromirror for scanning or imaging applications. In these application scenarios, the micromirror is required to track a command sinusoidal signal, which can theoretically be cast as an output regulation problem. In this paper, based on the internal model principle, the output regulation problem is solved by designing a robust controller that forces the micromirror to track the command signal accurately. The proposed controller relies little on the accuracy of the model. Further, the proposed controller is implemented, and its effectiveness is examined by experiments. The experimental results demonstrate that the performance of the proposed controller is satisfactory. PMID:28587105

  4. Electrophysiological evidence for right frontal lobe dominance in spatial visuomotor learning.

    PubMed

    Lang, W; Lang, M; Kornhuber, A; Kornhuber, H H

    1986-02-01

    Slow negative potential shifts were recorded together with the error made in motor performance while two different groups of 14 students tracked visual stimuli with their right hand. Various visuomotor tasks were compared. A tracking task (T), in which subjects had to track the stimulus directly, showed no decrease of error in motor performance during the experiment. In a distorted tracking task (DT) a continuous horizontal distortion of the visual feedback had to be compensated. The additional demands of this task required visuomotor learning. Another learning condition was a mirrored-tracking task (horizontally inverted tracking, hIT), i.e., an elementary function, such as the concept of exchanging left and right, was interposed between perception and action. In addition, subjects performed a no-tracking control task (NT) in which they started the visual stimulus without tracking it. A slow negative potential shift was associated with the visuomotor performance (TP: tracking potential). In the learning tasks (DT and hIT) this negativity was significantly enhanced over the anterior midline, and in hIT frontally and precentrally over both hemispheres. Comparing hIT and T for every subject, the enhancement of the tracking potential in hIT was correlated with the success in motor learning in frontomedial and bilaterally in frontolateral recordings (r = 0.81-0.88). However, comparing DT and T, such a correlation was found only in frontomedial and right frontolateral electrodes (r = 0.5-0.61), but not at the left frontolateral electrode. These experiments are consistent with previous findings and give further neurophysiological evidence for frontal lobe activity in visuomotor learning. The hemispheric asymmetry is discussed with respect to hemispheric specialization (right frontal lobe dominance in spatial visuomotor learning).

  5. Mathematical modelling of animate and intentional motion.

    PubMed Central

    Rittscher, Jens; Blake, Andrew; Hoogs, Anthony; Stein, Gees

    2003-01-01

    Our aim is to enable a machine to observe and interpret the behaviour of others. Mathematical models are employed to describe certain biological motions. The main challenge is to design models that are both tractable and meaningful. In the first part we will describe how computer vision techniques, in particular visual tracking, can be applied to recognize a small vocabulary of human actions in a constrained scenario. Mainly the problems of viewpoint and scale invariance need to be overcome to formalize a general framework. Hence the second part of the article is devoted to the question whether a particular human action should be captured in a single complex model or whether it is more promising to make extensive use of semantic knowledge and a collection of low-level models that encode certain motion primitives. Scene context plays a crucial role if we intend to give a higher-level interpretation rather than a low-level physical description of the observed motion. A semantic knowledge base is used to establish the scene context. This approach consists of three main components: visual analysis, the mapping from vision to language and the search of the semantic database. A small number of robust visual detectors is used to generate a higher-level description of the scene. The approach together with a number of results is presented in the third part of this article. PMID:12689374

  6. Live-cell CRISPR imaging in plants reveals dynamic telomere movements.

    PubMed

    Dreissig, Steven; Schiml, Simon; Schindele, Patrick; Weiss, Oda; Rutten, Twan; Schubert, Veit; Gladilin, Evgeny; Mette, Michael F; Puchta, Holger; Houben, Andreas

    2017-08-01

    Elucidating the spatiotemporal organization of the genome inside the nucleus is imperative to our understanding of the regulation of genes and non-coding sequences during development and environmental changes. Emerging techniques of chromatin imaging promise to bridge the long-standing gap between sequencing studies, which reveal genomic information, and imaging studies that provide spatial and temporal information of defined genomic regions. Here, we demonstrate such an imaging technique based on two orthologues of the bacterial clustered regularly interspaced short palindromic repeats (CRISPR)-CRISPR associated protein 9 (Cas9). By fusing eGFP/mRuby2 to catalytically inactive versions of Streptococcus pyogenes and Staphylococcus aureus Cas9, we show robust visualization of telomere repeats in live leaf cells of Nicotiana benthamiana. By tracking the dynamics of telomeres visualized by CRISPR-dCas9, we reveal dynamic telomere movements of up to 2 μm over 30 min during interphase. Furthermore, we show that CRISPR-dCas9 can be combined with fluorescence-labelled proteins to visualize DNA-protein interactions in vivo. By simultaneously using two dCas9 orthologues, we pave the way for the imaging of multiple genomic loci in live plant cells. CRISPR imaging bears the potential to significantly improve our understanding of the dynamics of chromosomes in live plant cells. © 2017 The Authors The Plant Journal published by John Wiley & Sons Ltd and Society for Experimental Biology.

  7. Variable Neural Adaptive Robust Control: A Switched System Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lian, Jianming; Hu, Jianghai; Zak, Stanislaw H.

    2015-05-01

    Variable neural adaptive robust control strategies are proposed for the output tracking control of a class of multi-input multi-output uncertain systems. The controllers incorporate a variable-structure radial basis function (RBF) network as the self-organizing approximator for unknown system dynamics. The variable-structure RBF network solves the problem of structure determination associated with fixed-structure RBF networks. It can determine the network structure on-line dynamically by adding or removing radial basis functions according to the tracking performance. The structure variation is taken into account in the stability analysis of the closed-loop system using a switched system approach with the aid of the piecewise quadratic Lyapunov function. The performance of the proposed variable neural adaptive robust controllers is illustrated with simulations.
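
    The on-line structure adaptation can be sketched in miniature: grow a new radial basis function when the error at an input is large and no existing centre is nearby, otherwise adjust the existing weights. The thresholds, the LMS weight update, and the 1-D toy task below are illustrative assumptions, not the paper's rules (pruning is omitted for brevity):

```python
import numpy as np

class GrowingRBF:
    """Minimal variable-structure RBF approximator: units are added
    on-line based on the current approximation error."""
    def __init__(self, width=0.2, err_tol=0.05, dist_tol=0.15, lr=0.2):
        self.centers, self.weights = [], []
        self.width, self.err_tol = width, err_tol
        self.dist_tol, self.lr = dist_tol, lr

    def predict(self, x):
        return sum(w * np.exp(-((x - c) ** 2) / self.width ** 2)
                   for c, w in zip(self.centers, self.weights))

    def update(self, x, y):
        err = y - self.predict(x)
        near = any(abs(x - c) < self.dist_tol for c in self.centers)
        if abs(err) > self.err_tol and not near:
            # grow: place a new unit at the input, sized to cancel the error
            self.centers.append(x)
            self.weights.append(err)
        else:
            # otherwise take an LMS gradient step on the existing weights
            for i, c in enumerate(self.centers):
                phi = np.exp(-((x - c) ** 2) / self.width ** 2)
                self.weights[i] += self.lr * err * phi

net = GrowingRBF()
xs = np.linspace(0, 1, 50)
for _ in range(30):            # repeated sweeps over the toy function
    for x in xs:
        net.update(x, np.sin(2 * np.pi * x))
```

    The distance threshold bounds the network size (no two centres closer than `dist_tol`), which is what makes the structure variation tractable for the switched-system stability analysis.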

  8. Atrioventricular junction (AVJ) motion tracking: a software tool with ITK/VTK/Qt.

    PubMed

    Pengdong Xiao; Shuang Leng; Xiaodan Zhao; Hua Zou; Ru San Tan; Wong, Philip; Liang Zhong

    2016-08-01

    The quantitative measurement of atrioventricular junction (AVJ) motion is an important index of ventricular function over one cardiac cycle, including systole and diastole. In this paper, a software tool that can conduct AVJ motion tracking from cardiovascular magnetic resonance (CMR) images is presented, built with the Insight Segmentation and Registration Toolkit (ITK), the Visualization Toolkit (VTK) and Qt. The software tool is written in C++ using the Visual Studio Community 2013 integrated development environment (IDE), containing both an editor and a Microsoft compiler. The software package has been successfully implemented. From the software engineering practice, it is concluded that ITK, VTK, and Qt are very handy software systems for implementing automatic image analysis functions for CMR images, such as quantitative measurement of motion by visual tracking.

  9. Robust tracking control of an IPMC actuator using nonsingular terminal sliding mode

    NASA Astrophysics Data System (ADS)

    Khawwaf, Jasim; Zheng, Jinchuan; Lu, Renquan; Al-Ghanimi, Ali; Kazem, Bahaa I.; Man, Zhihong

    2017-09-01

    Ionic polymer metal composite (IPMC) is a highly innovative material that has recently gained attention in many fields such as medical, biomimetic, and micro/nano underwater applications. The main characteristic of IPMC lies in its ability to achieve a large deflection under a fairly low driving voltage. Moreover, its agility, light weight, noiseless operation and flexibility render it well suited to certain specific applications. Like other smart materials, such as piezoelectric ceramics, IPMC can be used in actuators or sensors. In this paper, we study the application of IPMC as an actuator for underwater use. The goal is to develop a robust feedback controller for the IPMC actuator to track a desired reference whilst dealing with the uncertainties due to the inherent actuator nonlinearity, external disturbance or variations of the working environment. To this end, we first present a nominal model of the IPMC actuator through experimental identification. Next, a nonsingular terminal sliding mode controller is proposed. Lastly, experimental studies are conducted to verify the tracking accuracy and robustness of the designed controller.
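
    A nonsingular terminal sliding surface of the kind named here is commonly written as follows (the standard Feng-Yu-Man form; the paper's particular gains are not stated in the abstract). Placing the fractional power on the velocity error rather than the position error avoids the singularity of conventional terminal sliding mode control:

```latex
% Nonsingular terminal sliding surface on the tracking error e:
s = e + \frac{1}{\beta}\,\dot{e}^{\,p/q},
\qquad \beta > 0,\quad p,\ q \ \text{odd integers},\quad 1 < \tfrac{p}{q} < 2 .
% Once the surface s = 0 is reached, the error dynamics become
\dot{e} = -\beta^{\,q/p}\, e^{\,q/p},
% which converges to e = 0 in the finite settling time
t_{s} = \frac{p}{\beta^{\,q/p}\,(p - q)}\,\bigl|e(0)\bigr|^{\,1 - q/p}.
```

    Finite-time convergence on the surface is what gives the controller its robustness to the actuator nonlinearity and environmental disturbances mentioned above.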

  10. Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.

    PubMed

    Souto, David; Kerzel, Dirk

    2013-02-06

    Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects with rotational and translational motion congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. Relative to a condition without a rotational component, we essentially obtained benefits of congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired by a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.

  11. Dissociable Frontal Controls during Visible and Memory-guided Eye-Tracking of Moving Targets

    PubMed Central

    Ding, Jinhong; Powell, David; Jiang, Yang

    2009-01-01

    When tracking visible or occluded moving targets, several frontal regions including the frontal eye fields (FEF), dorsolateral prefrontal cortex (DLPFC), and anterior cingulate cortex (ACC) are involved in smooth pursuit eye movements (SPEM). To investigate how these areas play different roles in predicting future locations of moving targets, twelve healthy college students participated in a smooth pursuit task with visible and occluded targets. Their eye movements and brain responses measured by event-related functional MRI were recorded simultaneously. Our results show that different visual cues resulted in time discrepancies between physical and estimated pursuit time only when the moving dot was occluded. Velocity gain was higher during the visible phase than during the occlusion phase. We found bilateral FEF activation associated with eye movements whether moving targets were visible or occluded. However, the DLPFC and ACC showed increased activity when tracking and predicting locations of occluded moving targets, and were suppressed during smooth pursuit of visible targets. When visual cues were increasingly available, less activation in the DLPFC and the ACC was observed. Additionally, there was a significant hemisphere effect in DLPFC, where the right DLPFC showed significantly increased responses over the left when pursuing occluded moving targets. Correlation results revealed that DLPFC, the right DLPFC in particular, communicates more with FEF during tracking of occluded moving targets (from memory). The ACC modulates FEF more during tracking of visible targets (likely related to visual attention). Our results suggest that DLPFC and ACC modulate FEF and cortical networks differentially during visible and memory-guided eye tracking of moving targets. PMID:19434603

  12. Ciliary muscle contraction force and trapezius muscle activity during manual tracking of a moving visual target.

    PubMed

    Domkin, Dmitry; Forsman, Mikael; Richter, Hans O

    2016-06-01

    Previous studies have shown an association between visual demands during near work and increased activity of the trapezius muscle. Those studies were conducted under stationary postural conditions with fixed gaze and artificial visual load. The present study investigated the relationship between ciliary muscle contraction force and trapezius muscle activity across individuals during performance of a natural dynamic motor task under free gaze conditions. Participants (N=11) tracked a moving visual target with a digital pen on a computer screen. Tracking performance, eye refraction and trapezius muscle activity were measured continuously. Ciliary muscle contraction force was computed from the eye's accommodative response. There was a significant Pearson correlation between ciliary muscle contraction force and trapezius muscle activity on the tracking side (0.78, p<0.01) and the passive side (0.64, p<0.05). The study supports the hypothesis that high visual demands, leading to increased ciliary muscle contraction during continuous eye-hand coordination, may increase trapezius muscle tension and thus contribute to the development of musculoskeletal complaints in the neck-shoulder area. Further experimental studies are required to clarify whether the relationship holds within each individual or represents a general personal trait whereby individuals with a higher accommodative response tend to have higher trapezius muscle activity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Model-Based Analysis of Flow-Mediated Dilation and Intima-Media Thickness

    PubMed Central

    Bartoli, G.; Menegaz, G.; Lisi, M.; Di Stolfo, G.; Dragoni, S.; Gori, T.

    2008-01-01

    We present an end-to-end system for the automatic measurement of flow-mediated dilation (FMD) and intima-media thickness (IMT) for the assessment of arterial function. The video sequences are acquired from a B-mode echographic scanner. A spline model (deformable template) is fitted to the data to detect the artery boundaries and track them throughout the video sequence. A priori knowledge about the image features and content is exploited. Preprocessing is performed to improve both the visual quality of video frames for visual inspection and the performance of the segmentation algorithm, without affecting the accuracy of the measurements. The system allows real-time processing as well as a high level of interactivity with the user. This is achieved through a graphical user interface (GUI) enabling the cardiologist to supervise the whole process and, if necessary, reset the contour extraction at any point in time. The system was validated, and the accuracy, reproducibility, and repeatability of the measurements were assessed with extensive in vivo experiments. Jointly with its user friendliness, low cost, and robustness, this makes the system suitable for both research and daily clinical use. PMID:19360110

  14. Near field ice detection using infrared based optical imaging technology

    NASA Astrophysics Data System (ADS)

    Abdel-Moati, Hazem; Morris, Jonathan; Zeng, Yousheng; Corie, Martin Wesley; Yanni, Victor Garas

    2018-02-01

    If not detected and characterized, icebergs can potentially pose a hazard to oil and gas exploration, development and production operations in arctic environments as well as commercial shipping channels. In general, very large bergs are tracked and predicted using models or satellite imagery. Small and medium bergs are detectable using conventional marine radar. As icebergs decay they shed bergy bits and growlers, which are much smaller and more difficult to detect. Their low profile above the water surface, in addition to occasional relatively high seas, makes them invisible to conventional marine radar. Visual inspection is the most common method used to detect bergy bits and growlers, but the effectiveness of visual inspections is reduced by operator fatigue and low light conditions. The potential hazard from bergy bits and growlers is further increased by short detection range (<1 km). As such, there is a need for robust and autonomous near-field detection of such smaller icebergs. This paper presents a review of iceberg detection technology and explores applications for infrared imagers in the field. Preliminary experiments are performed and recommendations are made for future work, including a proposed imager design which would be suited for near field ice detection.

  15. Design and development of a robust ATP subsystem for the Altair UAV-to-Ground lasercomm 2.5 Gbps demonstration

    NASA Technical Reports Server (NTRS)

    Ortiz, G. G.; Lee, S.; Monacos, S.; Wright, M.; Biswas, A.

    2003-01-01

    A robust acquisition, tracking and pointing (ATP) subsystem is being developed for the 2.5 Gigabit per second (Gbps) Unmanned-Aerial-Vehicle (UAV) to ground free-space optical communications link project.

  16. Human motion tracking by temporal-spatial local gaussian process experts.

    PubMed

    Zhao, Xu; Fu, Yun; Liu, Yuncai

    2011-04-01

    Human pose estimation via motion tracking systems can be considered a regression problem within a discriminative framework. It is always a challenging task to model the mapping from observation space to state space because of the high-dimensional characteristic of the multimodal conditional distribution. In order to build the mapping, existing techniques usually involve a large set of training samples in the learning process but are limited in their capability to deal with multimodality. We propose, in this work, a novel online sparse Gaussian Process (GP) regression model to recover 3-D human motion in monocular videos. In particular, we exploit the fact that for a given test input, its output is mainly determined by the training samples residing in its local neighborhood, defined in the unified input-output space. This leads to a local mixture of GP experts, each of which dominates a mapping behavior with a covariance function adapted to its local region. To handle the multimodality, we combine both temporal and spatial information and thereby obtain two categories of local experts. The temporal and spatial experts are integrated into a seamless hybrid system, which is automatically self-initialized and robust for visual tracking of nonlinear human motion. Learning and inference are extremely efficient as all the local experts are defined online within very small neighborhoods. Extensive experiments on two real-world databases, HumanEva and PEAR, demonstrate the effectiveness of our proposed model, which significantly improves on the performance of existing models.
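
    The core mechanism, predicting from a small local neighbourhood with its own GP, can be sketched as follows (squared-exponential kernel; for simplicity the neighbourhood is found in input space, whereas the paper defines it in the joint input-output space, and all hyperparameters are illustrative):

```python
import numpy as np

def local_gp_predict(X, y, x_star, k=10, ell=0.5, sf=1.0, noise=1e-2):
    """GP prediction using only the k nearest training samples,
    the essence of a local GP expert."""
    idx = np.argsort(np.abs(X - x_star))[:k]       # local neighbourhood
    Xn, yn = X[idx], y[idx]

    def kern(a, b):
        # squared-exponential covariance
        return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

    K = kern(Xn, Xn) + noise * np.eye(k)           # regularized Gram matrix
    k_star = kern(np.array([x_star]), Xn)[0]
    alpha = np.linalg.solve(K, yn)
    return k_star @ alpha                          # posterior mean

rng = np.random.default_rng(1)
X = rng.uniform(0, 6, 200)
y = np.sin(X) + rng.normal(0, 0.05, X.size)
pred = local_gp_predict(X, y, 1.5)
```

    Because each expert solves only a k-by-k system instead of an N-by-N one, online learning and inference stay cheap, which is what the abstract means by "extremely efficient".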

  17. 49 CFR 213.365 - Visual inspections.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... made on foot or by riding over the track in a vehicle at a speed that allows the person making the... over track crossings and turnouts, otherwise, the inspection vehicle speed shall be at the sole... unobstructed by any cause and that the second track is not centered more than 30 feet from the track upon which...

  18. 49 CFR 213.365 - Visual inspections.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... made on foot or by riding over the track in a vehicle at a speed that allows the person making the... over track crossings and turnouts, otherwise, the inspection vehicle speed shall be at the sole... unobstructed by any cause and that the second track is not centered more than 30 feet from the track upon which...

  19. RATT: RFID Assisted Tracking Tile. Preliminary results.

    PubMed

    Quinones, Dario R; Cuevas, Aaron; Cambra, Javier; Canals, Santiago; Moratal, David

    2017-07-01

    Behavior is one of the most important aspects of animal life. This behavior depends on the link between animals, their nervous systems and their environment. In order to study the behavior of laboratory animals several tools are needed, but a tracking tool is essential to perform a thorough behavioral study. Currently, several visual tracking tools are available. However, they have some drawbacks. For instance, when an animal is inside a cave, or is close to other animals, the tracking cameras cannot always detect the location or movement of the animal. This paper presents RFID Assisted Tracking Tile (RATT), a tracking system based on passive Radio Frequency Identification (RFID) technology in the high frequency band according to ISO/IEC 15693. The RATT system is composed of electronic tiles that have nine active RFID antennas attached; in addition, it contains several overlapping passive coils to improve the magnetic field characteristics. Using several tiles, a large surface can be built on which the animals can move, allowing identification and tracking of their movements. This system, which could also be combined with a visual tracking system, paves the way for complete behavioral studies.

  20. Multi-Complementary Model for Long-Term Tracking

    PubMed Central

    Zhang, Deng; Zhang, Junchang; Xia, Chenyang

    2018-01-01

    In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusion, background clutter, motion blur, low-illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines a correlation filter model and a color model, greatly improving tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency than traditional target detection algorithms. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmark 2013 (OTB-13) and Object Tracking Benchmark 2015 (OTB-15) datasets. On the OTB-13 benchmark, our algorithm improves on the classic LCT (Long-term Correlation Tracking) algorithm by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE and SRE, respectively. On the OTB-15 benchmark, compared with the LCT algorithm, our algorithm achieves 10.4%, 12.5%, and 16.1% improvement on the success plots of OPE, TRE, and SRE, respectively. At the same time, owing to the high computational efficiency of the color and object detection models, which use efficient data structures, and the speed advantage of correlation filters, our algorithm still achieves good tracking speed. PMID:29425170
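
    The complementary-model idea, each model voting with its own response map and the votes merged convexly, can be sketched as follows (the three-way weighting mirrors the Staple-style merge described above; the weights and the toy Gaussian response maps are illustrative assumptions):

```python
import numpy as np

def fuse_responses(resp_cf, resp_color, resp_contour, weights=(0.5, 0.3, 0.2)):
    """Merge per-pixel response maps from complementary models by a
    convex combination and return the fused map and its peak location."""
    w1, w2, w3 = weights
    fused = w1 * resp_cf + w2 * resp_color + w3 * resp_contour
    peak = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, peak

# toy maps: each model votes with a Gaussian bump;
# the correlation filter and contour models agree on (10, 12),
# while the color model is distracted by clutter at (20, 25)
yy, xx = np.mgrid[0:32, 0:32]
bump = lambda cy, cx: np.exp(-((yy - cy)**2 + (xx - cx)**2) / 20.0)
fused, peak = fuse_responses(bump(10, 12), bump(20, 25), bump(10, 12))
```

    Because two of the three models agree, the fused peak stays on target even though one model is fooled, which is the robustness benefit the abstract claims for the combination.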

  1. Experience-dependent plasticity from eye opening enables lasting, visual cortex-dependent enhancement of motion vision.

    PubMed

    Prusky, Glen T; Silver, Byron D; Tschetter, Wayne W; Alam, Nazia M; Douglas, Robert M

    2008-09-24

    Developmentally regulated plasticity of vision has generally been associated with "sensitive" or "critical" periods in juvenile life, wherein visual deprivation leads to loss of visual function. Here we report an enabling form of visual plasticity that commences in infant rats from eye opening, in which daily threshold testing of optokinetic tracking, amid otherwise normal visual experience, stimulates enduring, visual cortex-dependent enhancement (>60%) of the spatial frequency threshold for tracking. The perceptual ability to use spatial frequency in discriminating between moving visual stimuli is also improved by the testing experience. The capacity for inducing enhancement is transitory and effectively limited to infancy; however, enhanced responses are not consolidated and maintained unless in-kind testing experience continues uninterrupted into juvenile life. The data show that selective visual experience from infancy can alone enable visual function. They also indicate that plasticity associated with visual deprivation may not be the only cause of developmental visual dysfunction, because we found that experientially inducing enhancement in late infancy, without subsequent reinforcement of the experience in early juvenile life, can lead to enduring loss of function.

  2. Long-range eye tracking: A feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayaweera, S.K.; Lu, Shin-yee

    1994-08-24

    The design considerations for a long-range Purkinje-effect-based video tracking system using current technology are presented. Past work, current experiments, and future directions are thoroughly discussed, with an emphasis on digital signal processing techniques and obstacles. It has been determined that while a robust, efficient, long-range, and non-invasive eye tracking system will be difficult to develop, such a project is indeed feasible.

  3. MR techniques for guiding high-intensity focused ultrasound (HIFU) treatments.

    PubMed

    Kuroda, Kagayaki

    2018-02-01

    To make full use of the ability of magnetic resonance (MR) to guide high-intensity focused ultrasound (HIFU) treatment, effort has been made to improve techniques for thermometry, motion tracking, and sound beam visualization. For monitoring rapid temperature elevation with proton resonance frequency (PRF) shift, data acquisition and processing can be accelerated with parallel imaging and/or sparse sampling in conjunction with appropriate signal processing methods. Thermometry should be robust against tissue motion, motion-induced magnetic field variation, and susceptibility change. Thus, multibaseline, referenceless, or hybrid techniques have become important. In cases with adipose or bony tissues, for which PRF shift cannot be used, thermometry with relaxation times or signal intensity may be utilized. Motion tracking is crucial not only for thermometry but also for targeting the focus of an ultrasound in moving organs such as the liver, kidney, or heart. Various techniques for motion tracking, such as those based on an anatomical image atlas with optical-flow displacement detection, a navigator echo to seize the diaphragm position, and/or rapid imaging to track vessel positions, have been proposed. Techniques for avoiding the ribcage and near-field heating have also been examined. MR acoustic radiation force imaging (MR-ARFI) is an alternative to thermometry that can identify the location and shape of the focal spot and sound beam path. This technique could be useful for treating heterogeneous tissue regions or performing transcranial therapy. All of these developments, which will be discussed further in this review, expand the applicability of HIFU treatments to a variety of clinical targets while maintaining safety and precision. Level of Evidence: 2. Technical Efficacy: Stage 4. J. Magn. Reson. Imaging 2018;47:316-331. © 2017 International Society for Magnetic Resonance in Medicine.

  4. Associated reactions during a visual pursuit position tracking task in hemiplegic and quadriplegic cerebral palsy.

    PubMed

    Chiu, Hsiu-Ching; Halaki, Mark; O'Dwyer, Nicholas

    2013-04-30

    Most previous studies of associated reactions (ARs) in people with cerebral palsy have used observation scales, such as recording the degree of movement through observation. A sensitive quantitative method, however, can detect ARs that are not readily visible. The aim of this study was to provide quantitative measures of ARs during a visual pursuit position tracking task. Twenty-three participants with hemiplegia (H) (mean +/- SD: 21y 8m +/- 11y 10m), twelve with quadriplegia (Q) (21y 5m +/- 10y 3m) and twenty-two with normal development (N) (21y 2m +/- 10y 10m) participated in the study. An upper limb visual pursuit tracking task was used to study ARs. The participants were required to follow a moving target with a response cursor via elbow flexion and extension movements. The occurrence of ARs was quantified by the overall coherence between the movements of the tracking and non-tracking limbs, and the amount of movement due to ARs was quantified by the amplitude of movement of the non-tracking limb. The amplitude of movement of the non-tracking limb indicated that the amount of ARs was larger in the Q group than in the H and N groups, with no significant differences between the H and N groups. The amplitude of movement of the non-tracking limb was larger during non-dominant than dominant tracking in all three groups. Some movements in the non-tracking limb were correlated with the tracking limb (correlated ARs) and some were not (uncorrelated ARs). The correlated ARs comprised less than 40% of the total ARs for all three groups. Correlated ARs were negatively associated with clinical evaluations, but the uncorrelated ARs were not. The correlated and uncorrelated ARs appear to have different relationships with clinical evaluations, implying that the effect of ARs on upper limb activities could vary.
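The "overall coherence" measure between tracking and non-tracking limb movements can be sketched as a Welch-averaged magnitude-squared coherence. The segment length and Hann window below are generic signal-processing choices, not parameters from the study.

```python
import numpy as np

def ms_coherence(x, y, nperseg=256):
    """Welch-averaged magnitude-squared coherence between two signals.

    Splits both signals into half-overlapping Hann-windowed segments,
    averages the auto- and cross-spectra across segments, and returns
    the coherence per frequency bin (values in [0, 1]).
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    step = nperseg // 2
    win = np.hanning(nperseg)
    pxx = pyy = 0.0
    pxy = 0.0 + 0.0j
    for s in range(0, len(x) - nperseg + 1, step):
        fx = np.fft.rfft(win * x[s:s + nperseg])
        fy = np.fft.rfft(win * y[s:s + nperseg])
        pxx = pxx + (fx * np.conj(fx)).real
        pyy = pyy + (fy * np.conj(fy)).real
        pxy = pxy + fx * np.conj(fy)
    return np.abs(pxy) ** 2 / (pxx * pyy)
```

Coherence near 1 across the tracked frequency band would indicate correlated ARs; movements of the non-tracking limb with low coherence correspond to the uncorrelated ARs described above.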

  5. Eye-Tracking Provides a Sensitive Measure of Exploration Deficits After Acute Right MCA Stroke

    PubMed Central

    Delazer, Margarete; Sojer, Martin; Ellmerer, Philipp; Boehme, Christian; Benke, Thomas

    2018-01-01

    This eye-tracking study aimed to assess spatial biases in visual exploration in patients after acute right MCA (middle cerebral artery) stroke. Patients affected by unilateral neglect show less functional recovery and experience severe difficulties in everyday life. Thus, accurate diagnosis is essential, and specific treatment is required. Early assessment is of high importance as rehabilitative interventions are more effective when applied soon after stroke. Previous research has shown that deficits may be overlooked when classical paper-and-pencil tasks are used for diagnosis. Conversely, eye-tracking allows direct monitoring of visual exploration patterns. We hypothesized that the analysis of eye-tracking provides more sensitive measures of spatial exploration deficits after right middle cerebral artery stroke. Twenty-two patients with right MCA stroke (median 5 days after stroke) and 28 healthy controls were included. Lesions were confirmed by MRI/CCT. Groups performed comparably in the Mini-Mental State Examination (patients and controls: median 29) and in a screening of executive functions. Eleven patients scored at ceiling in neglect screening tasks, and 11 showed minimal to severe signs of unilateral visual neglect. An overlap plot based on MRI and CCT imaging showed lesions in the temporo-parieto-frontal cortex, basal ganglia, and adjacent white matter tracts. Visual exploration was evaluated in two eye-tracking tasks, one assessing free visual exploration of photographs, the other visual search using symbols and letters. An index of fixation asymmetries proved to be a sensitive measure of spatial exploration deficits. Both patient groups showed a marked exploration bias to the right when looking at complex photographs. A single case analysis confirmed that most of the patients who showed no neglect in screening tasks nevertheless performed outside the range of controls in free exploration.
The analysis of patients scoring at ceiling in neglect screening tasks is of special interest, as possible deficits may otherwise be overlooked and thus remain untreated. Our findings are in line with other studies suggesting considerable limitations of laboratory screening procedures in fully capturing the occurrence of neglect symptoms. Future investigations are needed to explore the predictive value of the eye-tracking index and its validity in everyday situations.

  6. Registration using natural features for augmented reality systems.

    PubMed

    Yuan, M L; Ong, S K; Nee, A Y C

    2006-01-01

    Registration is one of the most difficult problems in augmented reality (AR) systems. In this paper, a simple registration method using natural features based on the projective reconstruction technique is proposed. This method consists of two steps: embedding and rendering. Embedding involves specifying four points to build the world coordinate system on which a virtual object will be superimposed. In rendering, the Kanade-Lucas-Tomasi (KLT) feature tracker is used to track the natural feature correspondences in the live video. The natural features that have been tracked are used to estimate the corresponding projective matrix in the image sequence. Next, the projective reconstruction technique is used to transfer the four specified points to compute the registration matrix for augmentation. This paper also proposes a robust method for estimating the projective matrix, where the natural features that have been tracked are normalized (translation and scaling) and used as the input data. The estimated projective matrix is used as an initial estimate for a nonlinear optimization method that minimizes the actual residual errors based on the Levenberg-Marquardt (LM) minimization method, thus making the results more robust and stable. The proposed registration method has three major advantages: 1) It is simple, as no predefined fiducials or markers are used for registration for either indoor or outdoor AR applications. 2) It is robust, because it remains effective as long as at least six natural features are tracked during the entire augmentation, and the existence of the corresponding projective matrices in the live video is guaranteed. Meanwhile, the robust method to estimate the projective matrix can obtain stable results even when there are some outliers during the tracking process. 3) Virtual objects can still be superimposed on the specified areas, even if some parts of the areas are occluded during the entire process.
Some indoor and outdoor experiments have been conducted to validate the performance of this proposed method.
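The normalization (translation and scaling) of tracked features before estimating a projective matrix follows the standard normalized-DLT recipe. The sketch below shows that step for the planar (homography) case, with the Levenberg-Marquardt refinement mentioned in the abstract omitted for brevity; it is a textbook illustration, not the paper's exact formulation.

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: center points and scale so that the mean
    distance from the origin is sqrt(2). Returns homogeneous points and
    the similarity transform applied."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def dlt_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (n >= 4 point
    pairs) with the normalized direct linear transform."""
    sn, Ts = normalize(np.asarray(src, float))
    dn, Td = normalize(np.asarray(dst, float))
    rows = []
    for (x, y, _), (u, v, _) in zip(sn, dn):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    Hn = vt[-1].reshape(3, 3)          # null vector = stacked homography
    H = np.linalg.inv(Td) @ Hn @ Ts    # undo both normalizations
    return H / H[2, 2]
```

In the paper's pipeline, an estimate like this would then seed the LM minimization of the actual reprojection residuals.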

  7. Using cluster analysis to organize and explore regional GPS velocities

    USGS Publications Warehouse

    Simpson, Robert W.; Thatcher, Wayne; Savage, James C.

    2012-01-01

    Cluster analysis offers a simple visual exploratory tool for the initial investigation of regional Global Positioning System (GPS) velocity observations, which are providing increasingly precise mappings of actively deforming continental lithosphere. The deformation fields from dense regional GPS networks can often be concisely described in terms of relatively coherent blocks bounded by active faults, although the choice of blocks, their number and size, can be subjective and is often guided by the distribution of known faults. To illustrate our method, we apply cluster analysis to GPS velocities from the San Francisco Bay Region, California, to search for spatially coherent patterns of deformation, including evidence of block-like behavior. The clustering process identifies four robust groupings of velocities that we identify with four crustal blocks. Although the analysis uses no prior geologic information other than the GPS velocities, the cluster/block boundaries track three major faults, both locked and creeping.
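The clustering step can be sketched with a minimal k-means over (east, north) velocity components. The authors' actual clustering procedure may differ in detail; the deterministic seeding and function name here are illustrative.

```python
import numpy as np

def cluster_velocities(vel, k, iters=100):
    """Minimal Lloyd's k-means over 2-D (east, north) GPS velocity vectors.

    Centers are seeded deterministically: the first station, then the
    station farthest from the centers chosen so far. Stations sharing a
    label move coherently, which is the block-like signature of interest.
    """
    vel = np.asarray(vel, float)
    centers = [vel[0]]
    for _ in range(1, k):
        d2 = np.min(((vel[:, None, :] - np.asarray(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(vel[int(d2.argmax())])
    centers = np.asarray(centers)
    for _ in range(iters):
        d2 = ((vel[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = vel[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers
```

The cluster-mean velocities then serve as candidate block motions, and the spatial footprint of each label can be compared against mapped fault traces.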

  8. Rotation of endosomes demonstrates coordination of molecular motors during axonal transport.

    PubMed

    Kaplan, Luke; Ierokomos, Athena; Chowdary, Praveen; Bryant, Zev; Cui, Bianxiao

    2018-03-01

    Long-distance axonal transport is critical to the maintenance and function of neurons. Robust transport is ensured by the coordinated activities of multiple molecular motors acting in a team. Conventional live-cell imaging techniques used in axonal transport studies detect this activity by visualizing the translational dynamics of a cargo. However, translational measurements are insensitive to torques induced by motor activities. By using gold nanorods and multichannel polarization microscopy, we simultaneously measure the rotational and translational dynamics for thousands of axonally transported endosomes. We find that the rotational dynamics of an endosome provide complementary information regarding molecular motor activities to the conventionally tracked translational dynamics. Rotational dynamics correlate with translational dynamics, particularly in cases of increased rotation after switches between kinesin- and dynein-mediated transport. Furthermore, unambiguous measurement of nanorod angle shows that endosome-contained nanorods align with the orientation of microtubules, suggesting a direct mechanical linkage between the ligand-receptor complex and the microtubule motors.

  9. Visual Target Tracking on the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Kim, Won; Biesiadecki, Jeffrey; Ali, Khaled

    2008-01-01

    Visual target tracking (VTT) software has been incorporated into Release 9.2 of the Mars Exploration Rover (MER) flight software, now running aboard the rovers Spirit and Opportunity. In the VTT operation (see figure), the rover is driven in short steps between stops and, at each stop, still images are acquired by actively aimed navigation cameras (navcams) on a mast on the rover (see artistic rendition). The VTT software processes the digitized navcam images so as to track a target reliably and to make it possible to approach the target accurately to within a few centimeters over a 10-m traverse.

  10. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    PubMed

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. 
Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
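A velocity-threshold (I-VT style) classifier of the kind the abstract describes can be sketched as follows. The 30 deg/s saccade and 5 deg/s pursuit thresholds are typical textbook values, not the authors' computed thresholds, and the geometry helper merely illustrates why target depth matters in the transverse plane.

```python
import numpy as np

def gaze_angles_deg(eye_xyz, gaze_xy, plane_z):
    """Horizontal gaze angle (deg) toward points on a transverse plane.

    eye_xyz: (3,) eye position; gaze_xy: (n, 2) gaze points on the plane
    at height plane_z. Depth varies with target position, so the angle
    uses the true 3-D offset rather than assuming a constant-depth plane.
    """
    dx = gaze_xy[:, 0] - eye_xyz[0]
    dy = gaze_xy[:, 1] - eye_xyz[1]
    dz = plane_z - eye_xyz[2]
    return np.degrees(np.arctan2(dx, np.hypot(dy, dz)))

def classify_ivt(angles_deg, fs, saccade_thresh=30.0, pursuit_thresh=5.0):
    """Velocity-threshold labeling of gaze samples as fixation, smooth
    pursuit, or saccade, from an angular trace sampled at fs Hz."""
    vel = np.abs(np.gradient(angles_deg)) * fs      # deg/s
    return np.where(vel >= saccade_thresh, "saccade",
                    np.where(vel >= pursuit_thresh, "pursuit", "fixation"))
```

Onsets and offsets of each event then fall out of the label transitions along the trace.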

  11. Evaluation of tactual displays for flight control

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Tanner, R. B.; Triggs, T. J.

    1973-01-01

    Manual tracking experiments were conducted to determine the suitability of tactual displays for presenting flight-control information in multitask situations. Although tracking error scores are considerably greater than scores obtained with a continuous visual display, preliminary results indicate that inter-task interference effects are substantially less with the tactual display in situations that impose high visual scanning workloads. The single-task performance degradation found with the tactual display appears to be a result of the coding scheme rather than the use of the tactual sensory mode per se. Analysis with the state-variable pilot/vehicle model shows that reliable predictions of tracking errors can be obtained for wide-band tracking systems once the pilot-related model parameters have been adjusted to reflect the pilot-display interaction.

  12. Robust Short-Lag Spatial Coherence Imaging.

    PubMed

    Nair, Arun Asokan; Tran, Trac Duy; Bell, Muyinatu A Lediju

    2018-03-01

    Short-lag spatial coherence (SLSC) imaging displays the spatial coherence between backscattered ultrasound echoes instead of their signal amplitudes and is more robust to noise and clutter artifacts when compared with traditional delay-and-sum (DAS) B-mode imaging. However, SLSC imaging does not consider the content of images formed with different lags, and thus does not exploit the differences in tissue texture at each short-lag value. Our proposed method improves SLSC imaging by weighting the addition of lag values (i.e., M-weighting) and by applying robust principal component analysis (RPCA) to search for a low-dimensional subspace for projecting coherence images created with different lag values. The RPCA-based projections are considered to be denoised versions of the originals that are then weighted and added across lags to yield a final robust SLSC (R-SLSC) image. Our approach was tested on simulation, phantom, and in vivo liver data. Relative to DAS B-mode images, the mean contrast, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) improvements with R-SLSC images are 21.22 dB, 2.54, and 2.36, respectively, when averaged over simulated, phantom, and in vivo data and over all lags considered, which corresponds to mean improvements of 96.4%, 121.2%, and 120.5%, respectively. When compared with SLSC images, the corresponding mean improvements with R-SLSC images were 7.38 dB, 1.52, and 1.30, respectively (i.e., mean improvements of 14.5%, 50.5%, and 43.2%, respectively). Results show great promise for smoothing out the tissue texture of SLSC images and enhancing anechoic or hypoechoic target visibility at higher lag values, which could be useful in clinical tasks such as breast cyst visualization, liver vessel tracking, and obese patient imaging.
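The lag-weighting and low-rank projection can be sketched with a truncated SVD standing in for the paper's robust PCA (a deliberate simplification), and with uniform weights as a placeholder for the learned M-weighting.

```python
import numpy as np

def lowrank_project(stack, rank=1):
    """Project a stack of lag images onto a rank-`rank` subspace.

    `stack` has shape (n_lags, H, W); each image becomes one column of a
    (H*W, n_lags) matrix whose SVD is truncated. Truncated SVD is used
    here as a simple stand-in for the paper's RPCA step.
    """
    n, h, w = stack.shape
    m = stack.reshape(n, h * w).T.astype(float)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    s[rank:] = 0.0
    return (u * s) @ vt

def rslsc(stack, weights, rank=1):
    """Weighted sum over lags of the low-rank-denoised coherence images."""
    den = lowrank_project(stack, rank).T.reshape(stack.shape)
    w = np.asarray(weights, float)
    return np.tensordot(w / w.sum(), den, axes=1)
```

The projection suppresses sparse clutter that is inconsistent across lags, and the weighted sum then plays the role of the final R-SLSC image.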

  13. Dye-enhanced visualization of rat whiskers for behavioral studies.

    PubMed

    Rigosa, Jacopo; Lucantonio, Alessandro; Noselli, Giovanni; Fassihi, Arash; Zorzin, Erik; Manzino, Fabrizio; Pulecchi, Francesca; Diamond, Mathew E

    2017-06-14

    Visualization and tracking of the facial whiskers is required in an increasing number of rodent studies. Although many approaches have been employed, only high-speed videography has proven adequate for measuring whisker motion and deformation during interaction with an object. However, whisker visualization and tracking is challenging for multiple reasons, primary among them the low contrast of the whisker against its background. Here, we demonstrate a fluorescent dye method suitable for visualization of one or more rat whiskers. The process makes the dyed whisker(s) easily visible against a dark background. The coloring does not influence the behavioral performance of rats trained on a vibrissal vibrotactile discrimination task, nor does it affect the whiskers' mechanical properties.

  14. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    NASA Astrophysics Data System (ADS)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e.,, motion capturing) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulates above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  15. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos.

    PubMed

    Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it is extracted from the frame difference using the Sauvola local adaptive thresholding algorithm. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features can be used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes that are loaded into the multi-processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target in appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods.
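The temporal-saliency step, a frame difference binarized by a Sauvola local threshold, can be sketched in numpy as follows. The window size `w`, sensitivity `k`, and dynamic range `r` are typical Sauvola defaults, not values from the paper.

```python
import numpy as np

def _window_sums(padded, out_h, out_w, w):
    """Sum over every w x w window, computed with an integral image."""
    s = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    s[1:, 1:] = padded.cumsum(0).cumsum(1)
    return (s[w:w + out_h, w:w + out_w] - s[:out_h, w:w + out_w]
            - s[w:w + out_h, :out_w] + s[:out_h, :out_w])

def temporal_saliency(frame_prev, frame_cur, w=15, k=0.3, r=128.0):
    """Frame-difference map binarized with the Sauvola local threshold
    T = m * (1 + k * (s / r - 1)), where m and s are the local mean and
    standard deviation of the difference image over a w x w window."""
    diff = np.abs(frame_cur.astype(float) - frame_prev.astype(float))
    pad = w // 2
    p = np.pad(diff, pad, mode="edge")
    n = float(w * w)
    m = _window_sums(p, *diff.shape, w) / n
    var = np.clip(_window_sums(p ** 2, *diff.shape, w) / n - m ** 2, 0.0, None)
    t = m * (1.0 + k * (np.sqrt(var) / r - 1.0))
    return diff > t
```

The resulting binary mask marks candidate moving regions, within which the spatial-saliency features described above would then be computed.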

  16. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos

    PubMed Central

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it is extracted from the frame difference using the Sauvola local adaptive thresholding algorithm. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features can be used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes that are loaded into the multi-processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target in appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods. PMID:29438421

  17. Endocardial left ventricle feature tracking and reconstruction from tri-plane trans-esophageal echocardiography data

    NASA Astrophysics Data System (ADS)

    Dangi, Shusil; Ben-Zikri, Yehuda K.; Cahill, Nathan; Schwarz, Karl Q.; Linte, Cristian A.

    2015-03-01

    Two-dimensional (2D) ultrasound (US) has been the clinical standard for over two decades for monitoring and assessing cardiac function and providing support via intra-operative visualization and guidance for minimally invasive cardiac interventions. Developments in three-dimensional (3D) image acquisition and transducer design and technology have revolutionized echocardiography imaging enabling both real-time 3D trans-esophageal and intra-cardiac image acquisition. However, in most cases the clinicians do not access the entire 3D image volume when analyzing the data, rather they focus on several key views that render the cardiac anatomy of interest during the US imaging exam. This approach enables image acquisition at a much higher spatial and temporal resolution. Two such common approaches are the bi-plane and tri-plane data acquisition protocols; as their name states, the former comprises two orthogonal image views, while the latter depicts the cardiac anatomy based on three co-axially intersecting views spaced at 60° to one another. Since cardiac anatomy is continuously changing, the intra-operative anatomy depicted using real-time US imaging also needs to be updated by tracking the key features of interest and endocardial left ventricle (LV) boundaries. Therefore, rapid automatic feature tracking in US images is critical for three reasons: 1) to perform cardiac function assessment; 2) to identify location of surgical targets for accurate tool to target navigation and on-target instrument positioning; and 3) to enable pre- to intra-op image registration as a means to fuse pre-op CT or MR images used during planning with intra-operative images for enhanced guidance. In this paper we utilize monogenic filtering, graph-cut based segmentation and robust spline smoothing in a combined work flow to process the acquired tri-plane TEE time series US images and demonstrate robust and accurate tracking of the LV endocardial features.
We reconstruct the endocardial LV geometry using the tri-plane contours and spline interpolation, and assess the accuracy of the proposed work flow against gold-standard results from the GE Echopac PC clinical software according to quantitative clinical LV characterization parameters, such as the length, circumference, area and volume. Our proposed combined work flow leads to consistent, rapid and automated identification of the LV endocardium, suitable for intra-operative applications and "on-the-fly" computer-assisted assessment of ejection fraction for cardiac function monitoring.
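The contour reconstruction and the clinical parameters (area, circumference) can be illustrated with a simplified sketch that uses periodic linear resampling of radius versus angle in place of the paper's spline smoothing and interpolation.

```python
import numpy as np

def resample_closed_contour(points, n=360):
    """Resample sparse boundary points (e.g. from tri-plane contours)
    into a dense closed contour by periodic linear interpolation of
    radius versus polar angle about the centroid."""
    pts = np.asarray(points, float)
    c = pts.mean(axis=0)
    ang = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])
    order = np.argsort(ang)
    ang, rad = ang[order], np.linalg.norm(pts[order] - c, axis=1)
    # wrap one sample at each end so the interpolation is periodic
    ang = np.concatenate([[ang[-1] - 2 * np.pi], ang, [ang[0] + 2 * np.pi]])
    rad = np.concatenate([[rad[-1]], rad, [rad[0]]])
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
    r = np.interp(theta, ang, rad)
    return c + np.column_stack([r * np.cos(theta), r * np.sin(theta)])

def area_perimeter(contour):
    """Shoelace area and polygonal perimeter of a closed contour."""
    x, y = contour[:, 0], contour[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    seg = np.diff(np.vstack([contour, contour[:1]]), axis=0)
    return area, np.linalg.norm(seg, axis=1).sum()
```

In the tri-plane setting, the six boundary samples per short-axis level (two per imaging plane) would feed a resampling step like this before the slice areas are summed into an LV volume estimate.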

  18. Detection of Ballast Damage by In-Situ Vibration Measurement of Sleepers

    NASA Astrophysics Data System (ADS)

    Lam, H. F.; Wong, M. T.; Keefe, R. M.

    2010-05-01

    Ballasted track is one of the most important elements of railway transportation systems worldwide. Owing to its importance in railway safety, many monitoring and evaluation methods have been developed. Current railway track monitoring systems are comprehensive, fast and efficient in testing railway track level and alignment, rail gauge, rail corrugation, etc. However, the monitoring of ballast condition still relies heavily on visual inspection and core tests. Although extensive research has been carried out on non-destructive methods for ballast condition evaluation, a commonly accepted and cost-effective method is still in demand. In Hong Kong practice, if abnormal train vibration is reported by the train operator or passengers, permanent way inspectors locate the problem area by track geometry measurement. It must be pointed out that visual inspection can only identify ballast damage on the track surface; track geometry deficiencies and rail twists can be detected using a track gauge. Ballast damage under the sleeper loading area and the ballast shoulder, which are the main factors affecting track stability and ride quality, is extremely difficult if not impossible to detect by visual inspection. The core test is destructive, expensive, time consuming and may be disruptive to traffic. A fast real-time ballast damage detection method that can be implemented by permanent way inspectors with simple equipment would certainly provide valuable information for engineers in assessing the safety and riding quality of ballasted track systems. The main objective of this paper is to study the feasibility of using the vibration characteristics of sleepers to quantify the ballast condition under the sleepers, and so to explore the possibility of developing a handy method for detecting ballast damage based on the measured vibration of sleepers.
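The premise, that a sleeper's vibration characteristics reflect the ballast condition beneath it, can be illustrated by extracting the dominant frequency of a measured vibration record. This is a generic signal-processing sketch, not the authors' identification procedure; a shift in this frequency between inspections would merely flag a possible change in support stiffness for closer investigation.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Dominant frequency (Hz) of a vibration record, taken as the peak
    of a Hann-windowed amplitude spectrum after removing the mean."""
    x = np.asarray(signal, float)
    x = (x - x.mean()) * np.hanning(len(x))
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spec)]
```

With a hammer or train-induced excitation, a record of a few seconds sampled at a couple of kHz already resolves sleeper modes to within a fraction of a hertz.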

  19. Vehicle Guidance and Control Along Circular Trajectories

    DTIC Science & Technology

    1992-09-01

    the line of sight, while Chism [2] studied a cross track error based control law. Hawkinson [3] extended the results to the multiple input case when...Thesis, Naval Postgraduate School, Monterey, California, June. 2. Chism, S., (1990) "Robust path tracking of autonomous underwater vehicles using sliding

  20. Insect cyborgs: a new frontier in flight control systems

    NASA Astrophysics Data System (ADS)

    Reissman, Timothy; Crawford, Jackie H.; Garcia, Ephrahim

    2007-04-01

    The development of a micro-UAV via a cybernetic organism, primarily the Manduca sexta moth, is presented. An observer that gathers output data on the moth's system response is provided by means of an image-following system. Visual tracking was implemented to gather the required information about the time history of the moth's six degrees of freedom. This was performed with three cameras tracking a white line used as a marker on the moth's thorax, chosen to maximize contrast between the moth and the marker. Evaluation of the implemented six-degree-of-freedom visual tracking system finds precision better than 0.1 mm within three standard deviations and accuracy on the order of 1 mm. Acoustic and visual response systems are presented to lay the groundwork for creating a stochastic catalog of the organisms' responses to varied stimuli.

  1. Fixed-base simulator study of the effect of time delays in visual cues on pilot tracking performance

    NASA Technical Reports Server (NTRS)

    Queijo, M. J.; Riley, D. R.

    1975-01-01

    Factors were examined which determine the amount of time delay acceptable in the visual feedback loop in flight simulators. Acceptable time delays are defined as delays which significantly affect neither the results nor the manner in which the subject 'flies' the simulator. The subject tracked a target aircraft as it oscillated sinusoidally in a vertical plane only. The pursuing aircraft was permitted five degrees of freedom. Time delays from 0.047 to 0.297 second were inserted in the visual feedback loop. A side task was employed to keep the workload constant and to ensure that the pilot was fully occupied during the experiment. Tracking results were obtained for 17 aircraft configurations having different longitudinal short-period characteristics. Results show a positive correlation between improved handling qualities and a longer acceptable time delay.

  2. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for our augmented reality visualization system for laparoscopic surgery.

  3. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  4. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for our augmented reality visualization system for laparoscopic surgery.
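    The TRE statistic quoted in these records is, in essence, a mean Euclidean distance between registered point positions and their ground-truth counterparts. A minimal numpy sketch of that statistic is below; the fiducial coordinates are invented, and this is not rdCalib's or OpenCV's internal computation:

```python
import numpy as np

def target_registration_error(mapped_pts, true_pts):
    """Mean Euclidean distance (same units as the input) between
    corresponding 3-D points after registration -- the TRE statistic."""
    mapped_pts = np.asarray(mapped_pts, dtype=float)
    true_pts = np.asarray(true_pts, dtype=float)
    return float(np.linalg.norm(mapped_pts - true_pts, axis=1).mean())

# Hypothetical fiducial positions in mm, each off by 1 mm after registration.
true_pts = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 5]])
mapped = true_pts + np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [-1, 0, 0]])
print(target_registration_error(mapped, true_pts))  # 1.0
```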

  5. Intermittently-visual Tracking Experiments Reveal the Roles of Error-correction and Predictive Mechanisms in the Human Visual-motor Control System

    NASA Astrophysics Data System (ADS)

    Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji

    A prediction mechanism is necessary in human visual-motor control to compensate for the delay of the sensory-motor system. In a previous study, “proactive control” was discussed as one example of the predictive function of human beings, in which the motion of the hand preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit is segmented into target-visible regions and target-invisible regions. The main results of this research are as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli is shortened by more than 10%. This shortening of the period of the rhythm in the brain accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by the environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.

  6. Perceptual training yields rapid improvements in visually impaired youth

    PubMed Central

    Nyquist, Jeffrey B.; Lappin, Joseph S.; Zhang, Ruyuan; Tadin, Duje

    2016-01-01

    Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in the absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be underutilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggests that the well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events. PMID:27901026

  7. Development of a new time domain-based algorithm for train detection and axle counting

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Meli, E.; Pugi, L.

    2015-12-01

    This paper presents an innovative train detection algorithm, able to perform train localisation and, at the same time, to estimate train speed, the crossing times at a fixed point of the track and the axle count. The proposed solution uses the same approach to evaluate all these quantities, starting from the knowledge of generic track inputs directly measured on the track (for example, the vertical forces on the sleepers, the rail deformation and the rail stress). More particularly, all the inputs are processed through cross-correlation operations to extract the required information in terms of speed, crossing time instants and axle count. This approach has the advantage of being simpler and less invasive than standard ones (it requires less equipment) and represents a more reliable and robust solution against numerical noise because it exploits the whole shape of the input signal and not only the peak values. A suitable and accurate multibody model of a railway vehicle and flexible track has also been developed by the authors to test the algorithm when experimental data are not available and, in general, under any operating conditions (fundamental for verifying the algorithm's accuracy and robustness). The railway vehicle chosen as benchmark is the Manchester Wagon, modelled in the Adams VI-Rail environment. The physical model of the flexible track has been implemented in the Matlab and Comsol Multiphysics environments. A simulation campaign has been performed to verify the performance and the robustness of the proposed algorithm, and the results are quite promising. The research has been carried out in cooperation with Ansaldo STS and ECM Spa.
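    The core idea — extracting a travel time between two measurement points from the peak of a cross-correlation — can be sketched generically as follows. The synthetic pulse signals and sensor spacing are invented for illustration; the paper correlates real track inputs such as sleeper forces or rail stress:

```python
import numpy as np

def estimate_speed(sig_a, sig_b, sensor_spacing_m, fs):
    """Estimate train speed from two track sensors a known distance apart.
    The lag maximising the cross-correlation of the two signals gives the
    travel time between the sensors."""
    sig_a = np.asarray(sig_a, dtype=float) - np.mean(sig_a)
    sig_b = np.asarray(sig_b, dtype=float) - np.mean(sig_b)
    xcorr = np.correlate(sig_b, sig_a, mode="full")
    lag_samples = np.argmax(xcorr) - (len(sig_a) - 1)
    travel_time = lag_samples / fs
    return sensor_spacing_m / travel_time

# Synthetic test: the same axle pulse seen 0.5 s later at a sensor 10 m away.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
pulse_a = np.exp(-((t - 0.5) ** 2) / (2 * 0.01 ** 2))   # axle passes sensor A
pulse_b = np.exp(-((t - 1.0) ** 2) / (2 * 0.01 ** 2))   # same axle at sensor B
print(estimate_speed(pulse_a, pulse_b, sensor_spacing_m=10.0, fs=fs))  # 20.0 m/s
```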

  8. Robust model reference adaptive output feedback tracking for uncertain linear systems with actuator fault based on reinforced dead-zone modification.

    PubMed

    Bagherpoor, H M; Salmasi, Farzad R

    2015-07-01

    In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed for the SISO and MIMO cases, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters ceases inside the dead-zone region, which results in tracking error while preserving system stability. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can ensure that all the signals of the closed-loop system are bounded under faulty conditions. Finally, the validity and performance of the new schemes are illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
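    A minimal numerical sketch of the dead-zone idea on a first-order SISO plant is given below. This is not the paper's controller: the adaptation laws are the textbook Lyapunov-based ones, the error-proportional term inside the dead zone is a simplified stand-in for the "reinforced" modification (not the paper's lemmas), and every gain is invented for the demo:

```python
# Plant x' = a*x + b*u with a, b treated as unknown by the controller;
# reference model xm' = -am*xm + am*r. Adaptation freezes inside the
# dead zone; there, the control signal gets an extra error-based term.
a, b = 1.0, 2.0
am = 3.0
gamma, dz, k_dz = 5.0, 0.02, 2.0   # adaptation gain, dead-zone width, reinforcement
dt, T = 1e-3, 20.0

x, xm = 0.0, 0.0
kx, kr = 0.0, 0.0                  # adaptive feedback / feedforward gains
for step in range(int(T / dt)):
    r = 1.0 if (step * dt) % 4.0 < 2.0 else -1.0   # square-wave reference
    e = x - xm                                     # tracking error
    u = kx * x + kr * r
    if abs(e) >= dz:               # adapt only outside the dead zone
        kx -= gamma * e * x * dt
        kr -= gamma * e * r * dt
    else:                          # reinforce the control signal inside it
        u -= k_dz * e
    x += (a * x + b * u) * dt      # Euler integration of plant
    xm += (-am * xm + am * r) * dt # and of reference model

print(abs(x - xm))                 # tracking error is small after adaptation
```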

  9. Robust pedestrian detection and tracking from a moving vehicle

    NASA Astrophysics Data System (ADS)

    Tuong, Nguyen Xuan; Müller, Thomas; Knoll, Alois

    2011-01-01

    In this paper, we address the problem of multi-person detection, tracking and distance estimation in a complex scenario using multiple cameras. Specifically, we are interested in a vision system for supporting the driver in avoiding any unwanted collision with pedestrians. We propose an approach using Histograms of Oriented Gradients (HOG) to detect pedestrians in static images and a particle filter as a robust tracking technique to follow targets from frame to frame. Because a full depth map requires expensive computation, we extract depth information of targets using the Direct Linear Transformation (DLT) to reconstruct the 3D coordinates of corresponding points found by running Speeded Up Robust Features (SURF) on two input images. Using the particle filter, the proposed tracker can efficiently handle target occlusions in a simple background environment. However, to achieve reliable performance in complex scenarios with frequent target occlusions and complex cluttered backgrounds, results from the detection module are integrated to create feedback and recover the tracker from tracking failures due to the complexity of the environment and the variability of the target appearance model. The proposed approach is evaluated on different data sets, both in a simple background scenario and in a cluttered background environment. The results show that, by integrating detector and tracker, reliable and stable performance is possible even if occlusions occur frequently in a highly complex environment. A vision-based collision avoidance system for an intelligent car can thus be achieved.
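    The particle-filter backbone of such a tracker can be sketched in one dimension as a bootstrap filter with a random-walk motion model. The detection data below are synthetic, and this sketch omits the HOG detector, the SURF/DLT depth estimation and the detector feedback described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=500, motion_std=1.0, obs_std=2.0):
    """Minimal bootstrap particle filter for a 1-D position track:
    predict with a random-walk motion model, weight the particles by the
    Gaussian likelihood of each detection, then resample."""
    particles = np.full(n_particles, observations[0], dtype=float)
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, motion_std, n_particles)   # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)     # weight
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)         # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

# Hypothetical pedestrian x-positions observed through noisy detections.
true_pos = np.linspace(0, 50, 60)
detections = true_pos + rng.normal(0, 2.0, true_pos.size)
est = particle_filter(detections)
print(np.abs(est - true_pos).mean())    # smoothed tracking error
```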

  10. Self-adaptive robot training of stroke survivors for continuous tracking movements.

    PubMed

    Vergaro, Elena; Casadio, Maura; Squeri, Valentina; Giannoni, Psiche; Morasso, Pietro; Sanguineti, Vittorio

    2010-03-15

    Although robot therapy is progressively becoming an accepted method of treatment for stroke survivors, few studies have investigated how to adapt the robot/subject interaction forces in an automatic way. This paper is a feasibility study of a novel self-adaptive robot controller to be applied to continuous tracking movements. The haptic robot Braccio di Ferro is used for a tracking task. The proposed control architecture is based on three main modules: 1) a force field generator that combines a nonlinear attractive field and a viscous field; 2) a performance evaluation module; 3) an adaptive controller. The first module operates in continuous time; the other two operate intermittently and are triggered at the end of the current block of trials. The controller progressively decreases the gain of the force field within a session, but operates nonmonotonically between sessions: it remembers the minimum gain achieved in a session and propagates it to the next one, which starts with a block whose gain is greater than the previous one. The initial assistance gains are chosen according to a minimal-assistance strategy. The scheme can also be applied with closed eyes in order to enhance the role of proprioception in learning and control. The preliminary results with a small group of patients (10 chronic hemiplegic subjects) show that the scheme is robust and promotes a statistically significant improvement in performance indicators as well as a recalibration of the visual and proprioceptive channels. The results confirm that the minimally assistive, self-adaptive strategy is well tolerated by severely impaired subjects and is beneficial also for less severe patients. The experiments provide detailed information about the stability and robustness of the adaptive controller of robot assistance that could be quite relevant for the design of future large-scale controlled clinical trials. Moreover, the study suggests that including continuous movements in the training repertoire is acceptable even for rather severely impaired subjects, and confirms the stabilizing effect of alternating vision/no-vision trials already found in previous studies.
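    The between-session gain logic described above (decrease the gain within a session, remember the minimum achieved, restart the next session slightly above it) can be sketched as follows; the decay rate, margin and block counts are invented for illustration and are not the study's parameters:

```python
def next_session_start(min_gain_achieved, margin=0.2, floor=0.0):
    """Initial gain for the next session: previous minimum plus a margin."""
    return max(min_gain_achieved * (1.0 + margin), floor)

def run_session(start_gain, n_blocks=5, decay=0.8):
    """Within a session the assistance gain decreases block by block."""
    gains = [start_gain * decay ** k for k in range(n_blocks)]
    return gains, min(gains)

gain = 1.0                        # minimal-assistance starting point
for session in range(3):
    gains, min_gain = run_session(gain)
    gain = next_session_start(min_gain)
    print(session, round(min_gain, 3), round(gain, 3))
```

Across sessions the minimum gain keeps falling while each restart stays just above it, which is the nonmonotonic behaviour the abstract describes.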

  11. Three-Dimensional Visualization of Particle Tracks.

    ERIC Educational Resources Information Center

    Julian, Glenn M.

    1993-01-01

    Suggests ways to bring home to the introductory physics student some of the excitement of recent discoveries in particle physics. Describes particle detectors and encourages the use of the Standard Model along with real images of particle tracks to determine three-dimensional views of tracks. (MVL)

  12. Filtering Based Adaptive Visual Odometry Sensor Framework Robust to Blurred Images

    PubMed Central

    Zhao, Haiying; Liu, Yong; Xie, Xiaojia; Liao, Yiyi; Liu, Xixi

    2016-01-01

    Visual odometry (VO) estimation from blurred images is a challenging problem in practical robot applications, because blurred images severely reduce the estimation accuracy of the VO. In this paper, we address the problem of visual odometry estimation from blurred images and present an adaptive visual odometry estimation framework that is robust to them. Our approach employs an objective measure of images, named the small image gradient distribution (SIGD), to evaluate the blurring degree of an image; an adaptive blurred-image classification algorithm is then proposed to recognize blurred images; finally, we propose an anti-blurred key-frame selection algorithm to make the VO robust to blurred images. We also carried out comparative experiments to evaluate the performance of VO algorithms with our anti-blur framework under various blurred images. The experimental results show that our approach achieves superior performance compared to state-of-the-art methods under blurred-image conditions, while not adding significant computational cost to the original VO algorithms. PMID:27399704
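    The exact SIGD definition is not given in the abstract, but the underlying cue — blurred images concentrate their gradient-magnitude distribution near zero — can be illustrated generically. The threshold and the synthetic images below are invented, and this is not the paper's SIGD measure:

```python
import numpy as np

def small_gradient_fraction(img, thresh=0.1):
    """Fraction of pixels whose gradient magnitude falls below `thresh`.
    A larger fraction suggests a blurrier image (generic cue only)."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return float((np.hypot(gx, gy) < thresh).mean())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))                      # high-frequency content
box = np.ones(7) / 7.0                            # 7-tap box blur kernel
blurred = np.apply_along_axis(lambda r: np.convolve(r, box, "same"), 1, sharp)
blurred = np.apply_along_axis(lambda c: np.convolve(c, box, "same"), 0, blurred)
print(small_gradient_fraction(sharp) < small_gradient_fraction(blurred))  # True
```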

  13. Attentional enhancement during multiple-object tracking.

    PubMed

    Drew, Trafton; McCollough, Andrew W; Horowitz, Todd S; Vogel, Edward K

    2009-04-01

    What is the role of attention in multiple-object tracking? Does attention enhance target representations, suppress distractor representations, or both? It is difficult to ask this question in a purely behavioral paradigm without altering the very attentional allocation one is trying to measure. In the present study, we used event-related potentials to examine the early visual evoked responses to task-irrelevant probes without requiring an additional detection task. Subjects tracked two targets among four moving distractors and four stationary distractors. Brief probes were flashed on targets, moving distractors, stationary distractors, or empty space. We obtained a significant enhancement of the visually evoked P1 and N1 components (approximately 100-150 msec) for probes on targets, relative to distractors. Furthermore, good trackers showed larger differences between target and distractor probes than did poor trackers. These results provide evidence of early attentional enhancement of tracked target items and also provide a novel approach to measuring attentional allocation during tracking.

  14. The effect of attention loading on the inhibition of choice reaction time to visual motion by concurrent rotary motion

    NASA Technical Reports Server (NTRS)

    Looper, M.

    1976-01-01

    This study investigates the influence of attention loading on the established intersensory effects of passive bodily rotation on choice reaction time (RT) to visual motion. Subjects sat at the center of rotation in an enclosed rotating chamber and observed an oscilloscope displaying, in the center, a tracking display and, 10 deg left of center, an RT line. Three tracking tasks and a no-tracking control condition were presented to all subjects in combination with the RT task, which occurred with and without concurrent cab rotations. Choice RT to line motions was inhibited (probability less than .001) both when there was simultaneous vestibular stimulation and when there was a tracking task; response latencies lengthened progressively with increased similarity between the RT and tracking tasks. However, the attention conditions did not affect the intersensory effect; the significance of this for the nature of the sensory interaction is discussed.

  15. Novel Methods for Analysing Bacterial Tracks Reveal Persistence in Rhodobacter sphaeroides

    PubMed Central

    Rosser, Gabriel; Fletcher, Alexander G.; Wilkinson, David A.; de Beyer, Jennifer A.; Yates, Christian A.; Armitage, Judith P.; Maini, Philip K.; Baker, Ruth E.

    2013-01-01

    Tracking bacteria using video microscopy is a powerful experimental approach to probe their motile behaviour. The trajectories obtained contain much information relating to the complex patterns of bacterial motility. However, methods for the quantitative analysis of such data are limited. Most swimming bacteria move in approximately straight lines, interspersed with random reorientation phases. It is therefore necessary to segment observed tracks into swimming and reorientation phases to extract useful statistics. We present novel robust analysis tools to discern these two phases in tracks. Our methods comprise a simple and effective protocol for removing spurious tracks from tracking datasets, followed by analysis based on a two-state hidden Markov model, taking advantage of the availability of mutant strains that exhibit swimming-only or reorientating-only motion to generate an empirical prior distribution. Using simulated tracks with varying levels of added noise, we validate our methods and compare them with an existing heuristic method. To our knowledge this is the first example of a systematic assessment of analysis methods in this field. The new methods are substantially more robust to noise and introduce less systematic bias than the heuristic method. We apply our methods to tracks obtained from the bacterial species Rhodobacter sphaeroides and Escherichia coli. Our results demonstrate that R. sphaeroides exhibits persistence over the course of a tumbling event, which is a novel result with important implications in the study of this and similar species. PMID:24204227
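    The decoding step of such a two-state hidden Markov model — assigning each frame of a track to a swimming or reorientation state — can be sketched with a standard Viterbi algorithm. The Gaussian speed emissions and all parameters below are invented for illustration; the paper additionally uses mutant-strain data to build an empirical prior:

```python
import numpy as np

def viterbi_two_state(log_lik, log_trans, log_init):
    """Most likely state sequence for one track.
    log_lik:   (T, 2) per-frame log-likelihood of each state
    log_trans: (2, 2) log transition matrix; log_init: (2,) log priors."""
    T = log_lik.shape[0]
    score = np.zeros((T, 2))
    back = np.zeros((T, 2), dtype=int)
    score[0] = log_init + log_lik[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_trans      # (from-state, to-state)
        back[t] = np.argmax(cand, axis=0)
        score[t] = np.max(cand, axis=0) + log_lik[t]
    states = np.zeros(T, dtype=int)
    states[-1] = np.argmax(score[-1])
    for t in range(T - 2, -1, -1):                    # backtrack
        states[t] = back[t + 1, states[t + 1]]
    return states

# Hypothetical emissions: speed is high while swimming (state 0),
# low while reorienting (state 1); model both with Gaussians.
rng = np.random.default_rng(2)
true_states = np.array([0] * 30 + [1] * 10 + [0] * 30)
speed = np.where(true_states == 0, 20.0, 5.0) + rng.normal(0, 2.0, true_states.size)
means, std = np.array([20.0, 5.0]), 2.0
log_lik = -0.5 * ((speed[:, None] - means) / std) ** 2
log_trans = np.log(np.array([[0.95, 0.05], [0.10, 0.90]]))
log_init = np.log(np.array([0.5, 0.5]))
decoded = viterbi_two_state(log_lik, log_trans, log_init)
print((decoded == true_states).mean())   # close to 1.0
```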

  16. What triggers catch-up saccades during visual tracking?

    PubMed

    de Brouwer, Sophie; Yuksel, Demet; Blohm, Gunnar; Missal, Marcus; Lefèvre, Philippe

    2002-03-01

    When tracking moving visual stimuli, primates orient their visual axis by combining two kinds of eye movements, smooth pursuit and saccades, that have very different dynamics. Yet, the mechanisms that govern the decision to switch from one type of eye movement to the other are still poorly understood, even though they could bring a significant contribution to the understanding of how the CNS combines different kinds of control strategies to achieve a common motor and sensory goal. In this study, we investigated the oculomotor responses to a large range of different combinations of position error and velocity error during visual tracking of moving stimuli in humans. We found that the oculomotor system uses a prediction of the time at which the eye trajectory will cross the target, defined as the "eye crossing time" (T(XE)). The eye crossing time, which depends on both position error and velocity error, is the criterion used to switch between smooth and saccadic pursuit, i.e., to trigger catch-up saccades. On average, for T(XE) between 40 and 180 ms, no saccade is triggered and target tracking remains purely smooth. Conversely, when T(XE) becomes smaller than 40 ms or larger than 180 ms, a saccade is triggered after a short latency (around 125 ms).
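    The switching rule reported above can be written down directly. The sign convention for position and velocity error below is an assumption, since the abstract defines T(XE) only qualitatively:

```python
def catch_up_saccade(position_error_deg, velocity_error_deg_s,
                     lower_ms=40.0, upper_ms=180.0):
    """Decide between smooth pursuit and a catch-up saccade from the
    eye-crossing time T_XE = -PE/VE: the predicted time until the eye
    trajectory crosses the target (sign convention assumed here)."""
    if velocity_error_deg_s == 0:
        return "saccade"                  # the eye never catches the target
    t_xe_ms = -position_error_deg / velocity_error_deg_s * 1000.0
    if lower_ms <= t_xe_ms <= upper_ms:
        return "smooth"                   # pursuit closes the gap on its own
    return "saccade"

print(catch_up_saccade(-1.0, 10.0))   # T_XE = 100 ms -> "smooth"
print(catch_up_saccade(-1.0, 2.0))    # T_XE = 500 ms -> "saccade"
```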

  17. An integrated approach to endoscopic instrument tracking for augmented reality applications in surgical simulation training.

    PubMed

    Loukas, Constantinos; Lahanas, Vasileios; Georgiou, Evangelos

    2013-12-01

    Despite the popular use of virtual and physical reality simulators in laparoscopic training, the educational potential of augmented reality (AR) has not received much attention. A major challenge is the robust tracking and three-dimensional (3D) pose estimation of the endoscopic instrument, which are essential for achieving interaction with the virtual world and for realistic rendering when the virtual scene is occluded by the instrument. In this paper we propose a method that addresses these issues, based solely on visual information obtained from the endoscopic camera. Two different tracking algorithms are combined for estimating the 3D pose of the surgical instrument with respect to the camera. The first tracker creates an adaptive model of a colour strip attached to the distal part of the tool (close to the tip). The second algorithm tracks the endoscopic shaft, using a combined Hough-Kalman approach. The 3D pose is estimated with perspective geometry, using appropriate measurements extracted by the two trackers. The method has been validated on several complex image sequences for its tracking efficiency, pose estimation accuracy and applicability in AR-based training. Using a standard endoscopic camera, the absolute average error of the tip position was 2.5 mm for working distances commonly found in laparoscopic training. The average error of the instrument's angle with respect to the camera plane was approximately 2°. The results are also supplemented by video segments of laparoscopic training tasks performed in a physical and an AR environment. The experiments yielded promising results regarding the potential of applying AR technologies for laparoscopic skills training, based on a computer vision framework. The issue of occlusion handling was adequately addressed. The estimated trajectory of the instruments may also be used for surgical gesture interpretation and assessment. Copyright © 2013 John Wiley & Sons, Ltd.

  18. STAR: an integrated solution to management and visualization of sequencing data

    PubMed Central

    Wang, Tao; Liu, Jie; Shen, Li; Tonti-Filippini, Julian; Zhu, Yun; Jia, Haiyang; Lister, Ryan; Whitaker, John W.; Ecker, Joseph R.; Millar, A. Harvey; Ren, Bing; Wang, Wei

    2013-01-01

    Motivation: Easy visualization of complex data features is a necessary step in conducting studies on next-generation sequencing (NGS) data. We developed STAR, an integrated web application that enables online management, visualization and track-based analysis of NGS data. Results: STAR is a multilayer web service system. On the client side, STAR leverages JavaScript, HTML5 Canvas and asynchronous communications to deliver a smoothly scrolling desktop-like graphical user interface with a suite of in-browser analysis tools that range from simple track configuration controls to sophisticated feature detection within datasets. On the server side, STAR supports private session state retention via an account management system and provides data management modules that enable collection, visualization and analysis of third-party sequencing data from the public domain, with thousands of tracks hosted to date. Overall, STAR represents a next-generation data exploration solution matched to the requirements of NGS data, enabling both intuitive visualization and dynamic analysis of data. Availability and implementation: The STAR browser system is freely available on the web at http://wanglab.ucsd.edu/star/browser and https://github.com/angell1117/STAR-genome-browser. Contact: wei-wang@ucsd.edu PMID:24078702

  19. Visualization of spatial-temporal data based on 3D virtual scene

    NASA Astrophysics Data System (ADS)

    Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang

    2009-10-01

    The main purpose of this paper is to realize three-dimensional dynamic visualization of spatial-temporal data in a three-dimensional virtual scene, using three-dimensional visualization technology combined with GIS, so that people's abilities to cognize time and space are enhanced and improved through dynamic symbol design and interactive expression. Using particle systems, three-dimensional simulation, virtual reality and other visual means, we can simulate the situations produced by changes in the spatial location and property information of geographical entities over time, explore and analyze their movement and transformation rules through interaction, and replay history or forecast the future. The main research objects of this paper are vehicle tracks and typhoon paths and their spatial-temporal data; through three-dimensional dynamic simulation of these tracks, we realize timely monitoring of trends and replay of historical tracks. Visualization techniques for spatial-temporal data in a three-dimensional virtual scene provide an excellent cognitive instrument for spatial-temporal information: they not only clearly show changes and developments in the situation, but can also be used for the prediction and deduction of future developments and changes.

  20. Quantifying Novice and Expert Differences in Visual Diagnostic Reasoning in Veterinary Pathology Using Eye-Tracking Technology.

    PubMed

    Warren, Amy L; Donnon, Tyrone L; Wagg, Catherine R; Priest, Heather; Fernandez, Nicole J

    2018-01-18

Visual diagnostic reasoning is the cognitive process by which pathologists reach a diagnosis based on visual stimuli (cytologic, histopathologic, or gross imagery). Currently, there is little to no literature examining visual reasoning in veterinary pathology. The objective of the study was to use eye tracking to establish baseline quantitative and qualitative differences between the visual reasoning processes of novice and expert veterinary pathologists viewing cytology specimens. Novice and expert participants were each shown 10 cytology images and asked to formulate a diagnosis while wearing eye-tracking equipment (10 slides) and while concurrently verbalizing their thought processes using the think-aloud protocol (5 slides). Compared to novices, experts demonstrated significantly higher diagnostic accuracy (p<.017), shorter time to diagnosis (p<.017), and a higher percentage of time spent viewing areas of diagnostic interest (p<.017). Experts elicited more key diagnostic features in the think-aloud protocol and had more efficient patterns of eye movement. These findings suggest that experts' fast time to diagnosis, efficient eye-movement patterns, and preference for viewing areas of interest support system 1 (pattern-recognition) reasoning and script-inductive knowledge structures, with system 2 (analytic) reasoning used to verify the diagnosis.

  1. Confidence-Based Data Association and Discriminative Deep Appearance Learning for Robust Online Multi-Object Tracking.

    PubMed

    Bae, Seung-Hwan; Yoon, Kuk-Jin

    2018-03-01

Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to the moment. It remains a difficult problem in complex scenes because of the large ambiguity in associating multiple objects in consecutive frames and the low discriminability between object appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since conventional appearance learning methods do not provide a rich representation that can distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning to improve appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-art batch and online tracking methods, and demonstrate the effectiveness and usefulness of the proposed methods for online multi-object tracking.

  2. Robust Neural Sliding Mode Control of Robot Manipulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen Tran Hiep; Pham Thuong Cat

    2009-03-05

    This paper proposes a robust neural sliding mode control method for robot tracking problem to overcome the noises and large uncertainties in robot dynamics. The Lyapunov direct method has been used to prove the stability of the overall system. Simulation results are given to illustrate the applicability of the proposed method.

  3. How Revisions to Mathematical Visuals Affect Cognition: Evidence from Eye Tracking

    ERIC Educational Resources Information Center

    Clinton, Virginia; Cooper, Jennifer L.; Michaelis, Joseph; Alibali, Martha W.; Nathan, Mitchell J.

    2017-01-01

    Mathematics curricula are frequently rich with visuals, but these visuals are often not designed for optimal use of students' limited cognitive resources. The authors of this study revised the visuals in a mathematics lesson based on instructional design principles. The purpose of this study is to examine the effects of these revised visuals on…

  4. Eye tracking measures of uncertainty during perceptual decision making.

    PubMed

    Brunyé, Tad T; Gardony, Aaron L

    2017-10-01

Perceptual decision making involves gathering and interpreting sensory information to effectively categorize the world and inform behavior; for instance, a radiologist distinguishing the presence versus absence of a tumor, or a luggage screener categorizing objects as threatening or non-threatening. In many cases, sensory information is not sufficient to reliably disambiguate the nature of a stimulus, and the resulting decisions are made under conditions of uncertainty. The present study asked whether several oculomotor metrics might prove sensitive to transient states of uncertainty during perceptual decision making. Participants viewed images with varying visual clarity and were asked to categorize them as faces or houses and rate the certainty of their decisions, while we used eye tracking to monitor fixations, saccades, blinks, and pupil diameter. Results demonstrated that decision certainty influenced several oculomotor variables, including fixation frequency and duration; the frequency, peak velocity, and amplitude of saccades; and phasic pupil diameter. Whereas most measures tended to change linearly with decision certainty, pupil diameter revealed more nuanced and dynamic information about the time course of perceptual decision making. Together, the results demonstrate robust alterations in eye movement behavior as a function of decision certainty and attention demands, and suggest that monitoring oculomotor variables during applied task performance may prove valuable for identifying and remediating transient states of uncertainty. Published by Elsevier B.V.

  5. Constraints on Multiple Object Tracking in Williams Syndrome: How Atypical Development Can Inform Theories of Visual Processing

    ERIC Educational Resources Information Center

    Ferrara, Katrina; Hoffman, James E.; O'Hearn, Kirsten; Landau, Barbara

    2016-01-01

    The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical…

  6. Bird Radar Validation in the Field by Time-Referencing Line-Transect Surveys

    PubMed Central

    Dokter, Adriaan M.; Baptist, Martin J.; Ens, Bruno J.; Krijgsveld, Karen L.; van Loon, E. Emiel

    2013-01-01

Track-while-scan bird radars are widely used in ornithological studies, but often the precise detection capabilities of these systems are unknown. Quantification of radar performance is essential to avoid observational biases, which requires practical methods for validating a radar's detection capability in specific field settings. In this study a method to quantify the detection capability of a bird radar is presented, as well as a demonstration of this method in a case study. By time-referencing line-transect surveys, visually identified birds were automatically linked to individual radar tracks using their transect crossing time. Detection probabilities were determined as the fraction of the total set of visual observations that could be linked to radar tracks. To avoid ambiguities in assigning radar tracks to visual observations, the observer's accuracy in determining a bird's transect crossing time was taken into account. The accuracy was determined by examining the effect of a time lag applied to the visual observations on the number of matches found with radar tracks. Effects of flight altitude, distance, surface substrate and species size on the detection probability by the radar were quantified in a marine intertidal study area. Detection probability varied strongly with all these factors, as well as with species-specific flight behaviour. The effective detection range for single birds flying at low altitude for an X-band marine-radar-based system was estimated at ∼1.5 km. Within this range the fraction of individual flying birds that were detected by the radar was 0.50±0.06, with a detection bias towards higher flight altitudes, larger birds and high-tide situations. Besides radar validation, which we consider essential when quantification of bird numbers is important, our method of linking radar tracks to ground-truthed field observations can facilitate species-specific studies using surveillance radars. The methodology may prove equally useful for optimising tracking algorithms. PMID:24066103
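
A minimal sketch of the time-referencing idea described above: each visually observed transect crossing is matched to the nearest still-unmatched radar-track crossing within a tolerance window, and the detection probability is the matched fraction. The times and the tolerance below are illustrative, not values from the study.

```python
# Hypothetical illustration of matching visual transect-crossing times to
# radar-track crossing times; all numbers here are made up for the example.

def detection_probability(visual_times, radar_times, tolerance_s=5.0):
    """Fraction of visual observations matched to a radar track crossing."""
    unmatched_radar = sorted(radar_times)
    matched = 0
    for t in sorted(visual_times):
        # find the closest still-unmatched radar crossing within tolerance
        best = None
        for r in unmatched_radar:
            if abs(r - t) <= tolerance_s and (best is None or abs(r - t) < abs(best - t)):
                best = r
        if best is not None:
            matched += 1
            unmatched_radar.remove(best)  # one radar track per observation
    return matched / len(visual_times) if visual_times else 0.0

# two of the four visual crossings fall within 5 s of a radar crossing
p = detection_probability([10.0, 30.0, 55.0, 80.0], [11.5, 54.0, 120.0])  # → 0.5
```

Applying a systematic time lag to `visual_times` before matching, as the study does to gauge observer timing accuracy, only requires shifting the input list.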

  8. Visual servoing in medical robotics: a survey. Part I: endoscopic and direct vision imaging - techniques and applications.

    PubMed

    Azizian, Mahdi; Khoshnam, Mahta; Najmaei, Nima; Patel, Rajni V

    2014-09-01

    Intra-operative imaging is widely used to provide visual feedback to a clinician when he/she performs a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of the visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. Detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Object tracking based on harmony search: comparative study

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; He, Xiao-Hai; Luo, Dai-Sheng; Yu, Yan-Mei

    2012-10-01

Visual tracking can be treated as an optimization problem. A new meta-heuristic optimization algorithm, Harmony Search (HS), was first applied to visual tracking by Fourie et al. As the authors point out, many questions remain open for ongoing research. Our work continues Fourie's study, adopting four prominent improved variants of HS into the tracking system: Improved Harmony Search (IHS), Global-best Harmony Search (GHS), Self-adaptive Harmony Search (SHS) and Differential Harmony Search (DHS). Their performance is tested and analyzed on multiple challenging video sequences. Experimental results show that IHS performs best, with DHS ranking second among the four improved trackers when the iteration number is small; however, the differences between all four diminish gradually as the number of iterations increases.
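
For readers unfamiliar with Harmony Search, the following is a generic sketch of the basic HS loop, not the implementation of Fourie et al. or any of the four variants; the toy objective stands in for an appearance-similarity score evaluated at a candidate target position, and all parameter values are illustrative.

```python
import random

# Generic Harmony Search sketch (minimisation). In a tracking setting the
# fitness would score candidate target positions; here it is a toy function.

def harmony_search(fitness, bounds, hms=10, hmcr=0.9, par=0.3, bw=1.0,
                   iters=200, seed=0):
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [fitness(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # pick value from memory
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:              # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                                   # random improvisation
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        s = fitness(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                       # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# toy objective: squared distance to a "true" target location at (30, 40)
best, score = harmony_search(lambda p: (p[0] - 30) ** 2 + (p[1] - 40) ** 2,
                             [(0, 100), (0, 100)])
```

The improved variants mentioned in the abstract mainly differ in how `hmcr`, `par` and `bw` are chosen or adapted during the run.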

  10. Visual Target Tracking on the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Biesiadecki, Jeffrey J.; Ali, Khaled S.

    2008-01-01

    Visual Target Tracking (VTT) has been implemented in the new Mars Exploration Rover (MER) Flight Software (FSW) R9.2 release, which is now running on both Spirit and Opportunity rovers. Applying the normalized cross-correlation (NCC) algorithm with template image magnification and roll compensation on MER Navcam images, VTT tracks the target and enables the rover to approach the target within a few cm over a 10 m traverse. Each VTT update takes 1/2 to 1 minute on the rovers, 2-3 times faster than one Visual Odometry (Visodom) update. VTT is a key element to achieve a target approach and instrument placement over a 10-m run in a single sol in contrast to the original baseline of 3 sols. VTT has been integrated into the MER FSW so that it can operate with any combination of blind driving, Autonomous Navigation (Autonav) with hazard avoidance, and Visodom. VTT can either guide the rover towards the target or simply image the target as the rover drives by. Three recent VTT operational checkouts on Opportunity were all successful, tracking the selected target reliably within a few pixels.
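
The normalized cross-correlation at the core of VTT can be sketched as follows. This is an illustrative brute-force implementation on synthetic data; the MER-specific refinements (template magnification, roll compensation) are omitted.

```python
import random

# Illustrative NCC template matching: slide a template over a search image
# and keep the window with the highest normalized cross-correlation score.

def ncc(patch, template):
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    den = (sum((p - mp) ** 2 for p in patch)
           * sum((t - mt) ** 2 for t in template)) ** 0.5
    return num / den if den > 0 else 0.0

def window(img, r, c, h, w):
    """Flatten the h-by-w window of img whose top-left corner is (r, c)."""
    return [img[rr][cc] for rr in range(r, r + h) for cc in range(c, c + w)]

def track(img, template, th, tw):
    """Return ((row, col), score) of the best-matching window."""
    best, best_rc = -2.0, (0, 0)
    for r in range(len(img) - th + 1):
        for c in range(len(img[0]) - tw + 1):
            s = ncc(window(img, r, c, th, tw), template)
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc, best

rng = random.Random(0)
img = [[rng.random() for _ in range(30)] for _ in range(30)]
tmpl = window(img, 8, 17, 6, 6)          # plant the template at (8, 17)
loc, score = track(img, tmpl, 6, 6)      # → ((8, 17), ~1.0)
```

NCC is bounded by 1 (Cauchy-Schwarz), so an exact copy of the template scores highest; in practice the flight software must also cope with viewpoint and illumination changes between updates.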

  11. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to assist operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to play back, display, and process video flows efficiently for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and IP networking. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology, under control of a TCP-based command network (e.g. for bandwidth occupation control). We report here some results and show the potential of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study: indoor surveillance.

  12. Aerial video mosaicking using binary feature tracking

    NASA Astrophysics Data System (ADS)

    Minnehan, Breton; Savakis, Andreas

    2015-05-01

    Unmanned Aerial Vehicles are becoming an increasingly attractive platform for many applications, as their cost decreases and their capabilities increase. Creating detailed maps from aerial data requires fast and accurate video mosaicking methods. Traditional mosaicking techniques rely on inter-frame homography estimations that are cascaded through the video sequence. Computationally expensive keypoint matching algorithms are often used to determine the correspondence of keypoints between frames. This paper presents a video mosaicking method that uses an object tracking approach for matching keypoints between frames to improve both efficiency and robustness. The proposed tracking method matches local binary descriptors between frames and leverages the spatial locality of the keypoints to simplify the matching process. Our method is robust to cascaded errors by determining the homography between each frame and the ground plane rather than the prior frame. The frame-to-ground homography is calculated based on the relationship of each point's image coordinates and its estimated location on the ground plane. Robustness to moving objects is integrated into the homography estimation step through detecting anomalies in the motion of keypoints and eliminating the influence of outliers. The resulting mosaics are of high accuracy and can be computed in real time.
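
The matching step described above can be sketched as follows; the descriptor width, search radius and distance threshold are hypothetical, and real systems would use descriptors such as BRIEF or ORB rather than small integers.

```python
# Sketch of binary-descriptor matching with spatial locality: compare
# descriptors by Hamming distance, but only against keypoints lying near
# the keypoint's previous position (the tracking assumption).

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_keypoints(prev, curr, radius=20.0, max_dist=10):
    """prev/curr: lists of ((x, y), descriptor). Returns (i, j) index pairs."""
    matches = []
    for i, ((px, py), pd) in enumerate(prev):
        best_j, best_d = None, max_dist + 1
        for j, ((cx, cy), cd) in enumerate(curr):
            if (cx - px) ** 2 + (cy - py) ** 2 > radius ** 2:
                continue                     # outside the locality window
            d = hamming(pd, cd)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
    return matches

prev = [((10.0, 10.0), 0b10110110), ((50.0, 50.0), 0b00001111)]
curr = [((12.0, 11.0), 0b10110111),   # nearby, one bit flipped: matches
        ((90.0, 90.0), 0b00001111)]   # identical descriptor but too far
pairs = match_keypoints(prev, curr)   # → [(0, 0)]
```

Restricting candidates to the locality window is what lets the method skip the expensive all-pairs descriptor comparison of traditional keypoint matching.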

  13. Fast and Accurate Cell Tracking by a Novel Optical-Digital Hybrid Method

    NASA Astrophysics Data System (ADS)

    Torres-Cisneros, M.; Aviña-Cervantes, J. G.; Pérez-Careta, E.; Ambriz-Colín, F.; Tinoco, Verónica; Ibarra-Manzano, O. G.; Plascencia-Mora, H.; Aguilera-Gómez, E.; Ibarra-Manzano, M. A.; Guzman-Cabrera, R.; Debeir, Olivier; Sánchez-Mondragón, J. J.

    2013-09-01

An innovative methodology to detect and track cells in microscope images enhanced by optical cross-correlation techniques is proposed in this paper. In order to increase the tracking sensitivity, image pre-processing has been implemented as a morphological operator on the microscope image. Results show that the pre-processing allows for additional frames of cell tracking, thereby increasing robustness. The proposed methodology can be used in analyzing different problems such as mitosis, cell collisions, and cell overlapping, ultimately aiding the identification and treatment of illnesses and malignancies.

  14. To speak or not to speak - A multiple resource perspective

    NASA Technical Reports Server (NTRS)

    Tsang, P. S.; Hartzell, E. J.; Rothschild, R. A.

    1985-01-01

    The desirability of employing speech response in a dynamic dual task situation was discussed from a multiple resource perspective. A secondary task technique was employed to examine the time-sharing performance of five dual tasks with various degrees of resource overlap according to the structure-specific resource model of Wickens (1980). The primary task was a visual/manual tracking task which required spatial processing. The secondary task was either another tracking task or a spatial transformation task with one of four input (visual or auditory) and output (manual or speech) configurations. The results show that the dual task performance was best when the primary tracking task was paired with the visual/speech transformation task. This finding was explained by an interaction of the stimulus-central processing-response compatibility of the transformation task and the degree of resource competition between the time-shared tasks. Implications on the utility of speech response were discussed.

  15. User-assisted video segmentation system for visual communication

    NASA Astrophysics Data System (ADS)

    Wu, Zhengping; Chen, Chun

    2002-01-01

Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility in the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, followed by a point insertion process that provides the feature points for the next frame's tracking.

  16. Universal Ontology: Attentive Tracking of Objects and Substances across Languages and over Development

    ERIC Educational Resources Information Center

    Cacchione, Trix; Indino, Marcello; Fujita, Kazuo; Itakura, Shoji; Matsuno, Toyomi; Schaub, Simone; Amici, Federica

    2014-01-01

    Previous research has demonstrated that adults are successful at visually tracking rigidly moving items, but experience great difficulties when tracking substance-like "pouring" items. Using a comparative approach, we investigated whether the presence/absence of the grammatical count-mass distinction influences adults and children's…

  17. Covert enaction at work: Recording the continuous movements of visuospatial attention to visible or imagined targets by means of Steady-State Visual Evoked Potentials (SSVEPs).

    PubMed

    Gregori Grgič, Regina; Calore, Enrico; de'Sperati, Claudio

    2016-01-01

    Whereas overt visuospatial attention is customarily measured with eye tracking, covert attention is assessed by various methods. Here we exploited Steady-State Visual Evoked Potentials (SSVEPs) - the oscillatory responses of the visual cortex to incoming flickering stimuli - to record the movements of covert visuospatial attention in a way operatively similar to eye tracking (attention tracking), which allowed us to compare motion observation and motion extrapolation with and without eye movements. Observers fixated a central dot and covertly tracked a target oscillating horizontally and sinusoidally. In the background, the left and the right halves of the screen flickered at two different frequencies, generating two SSVEPs in occipital regions whose size varied reciprocally as observers attended to the moving target. The two signals were combined into a single quantity that was modulated at the target frequency in a quasi-sinusoidal way, often clearly visible in single trials. The modulation continued almost unchanged when the target was switched off and observers mentally extrapolated its motion in imagery, and also when observers pointed their finger at the moving target during covert tracking, or imagined doing so. The amplitude of modulation during covert tracking was ∼25-30% of that measured when observers followed the target with their eyes. We used 4 electrodes in parieto-occipital areas, but similar results were achieved with a single electrode in Oz. In a second experiment we tested ramp and step motion. During overt tracking, SSVEPs were remarkably accurate, showing both saccadic-like and smooth pursuit-like modulations of cortical responsiveness, although during covert tracking the modulation deteriorated. Covert tracking was better with sinusoidal motion than ramp motion, and better with moving targets than stationary ones. 
The clear modulation of cortical responsiveness recorded during both overt and covert tracking, identical for motion observation and motion extrapolation, suggests including covert attention movements in enactive theories of mental imagery. Copyright © 2015 Elsevier Ltd. All rights reserved.
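
One plausible reading of "combined into a single quantity" is a normalized left/right contrast of the two SSVEP amplitudes, which swings with the attended side; the exact combination used in the study is not specified in the abstract, so the formula below is an assumption for illustration.

```python
# Hypothetical attention index from two SSVEP amplitudes: positive when the
# left-hemifield flicker response dominates, negative when the right does.

def attention_index(amp_left, amp_right):
    total = amp_left + amp_right
    return (amp_left - amp_right) / total if total > 0 else 0.0

print(attention_index(3.0, 1.0))   # attending left  → 0.5
print(attention_index(1.0, 3.0))   # attending right → -0.5
```

Sampled over time, such an index would trace the quasi-sinusoidal modulation the authors report as the observer covertly tracks the oscillating target.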

  18. STED microscopy visualizes energy deposition of single ions in a solid-state detector beyond diffraction limit

    NASA Astrophysics Data System (ADS)

    Niklas, M.; Henrich, M.; Jäkel, O.; Engelhardt, J.; Abdollahi, A.; Greilich, S.

    2017-05-01

Fluorescent nuclear track detectors (FNTDs) allow for visualization of single-particle traversal in clinical ion beams. The point spread function of the confocal readout has so far hindered a more detailed characterization of the track spots, the ion's characteristic signature left in the FNTD. Here we report on the readout of the FNTD by optical nanoscopy, namely stimulated emission depletion microscopy. For the first time, it was possible to visualize the track spots of carbon ions and protons beyond the diffraction limit of conventional light microscopy, with a resolving power of approximately 80 nm (confocal: 320 nm). A clear discrimination of the spatial width, defined by the full width at half maximum of track spots from particles (protons and carbon ions) with a linear energy transfer (LET) ranging from approximately 2 to 1016 keV µm-1, was possible. Results suggest that the width depends on LET but not on particle charge within the uncertainties. Discrimination of particle type by width thus does not seem possible (as with confocal microscopy). The increased resolution, however, could allow for refined determination of the cross-sectional area facing substantial energy deposition. This work could pave the way towards the development of optical nanoscopy-based analysis of radiation-induced cellular response using cell-fluorescent ion track hybrid detectors.

  19. On analyzing colour constancy approach for improving SURF detector performance

    NASA Astrophysics Data System (ADS)

    Zulkiey, Mohd Asyraf; Zaki, Wan Mimi Diyana Wan; Hussain, Aini; Mustafa, Mohd. Marzuki

    2012-04-01

A robust key point detector plays a crucial role in obtaining good tracking features. The main challenge in outdoor tracking is illumination change due to various causes such as weather fluctuation and occlusion. This paper approaches the illumination change problem by transforming the input image through a colour constancy algorithm before applying the SURF detector. The masked grey world approach is chosen because of its ability to perform well under local as well as global illumination change. Every image is transformed to imitate the canonical illuminant, and a Gaussian distribution is used to model the global change. The simulation results show that the average number of detected key points increased by 69.92%. Moreover, the improvement cases far outweigh the degradation cases, with an average improvement of 215.23%. The approach is suitable for tracking applications where sudden illumination changes occur frequently and robust key point detection is needed.

  20. Reconfigurable Flight Control Design using a Robust Servo LQR and Radial Basis Function Neural Networks

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    2005-01-01

This viewgraph presentation reviews the use of a Robust Servo Linear Quadratic Regulator (LQR) and a Radial Basis Function (RBF) Neural Network in reconfigurable flight control designs that adapt to an aircraft part failure. The method uses a robust LQR servomechanism design with model reference adaptive control and RBF neural networks. During the failure, the LQR servomechanism behaved well, and using the neural networks improved the tracking.

  1. Robust optimization based energy dispatch in smart grids considering demand uncertainty

    NASA Astrophysics Data System (ADS)

    Nassourou, M.; Puig, V.; Blesa, J.

    2017-01-01

In this study we discuss the application of robust optimization to the problem of economic energy dispatch in smart grids. Robust-optimization-based MPC strategies for tackling uncertain load demands are developed. Unexpected additive disturbances are modelled by defining an affine dependence between the control inputs and the uncertain load demands. The developed strategies were applied to a hybrid power system connected to an electrical power grid. Furthermore, to demonstrate the superiority of standard Economic MPC over MPC tracking, a comparison (e.g., average daily cost) between standard MPC tracking, standard Economic MPC, and the integration of both in one-layer and two-layer approaches was carried out. The goal of this research is to design a controller based on Economic MPC strategies that tackles uncertainties in order to minimise economic costs and guarantee service reliability of the system.

  2. Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex

    PubMed Central

    Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272

  3. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are a high detection accuracy and an adequate processing speed. However, the combination between the two requirements in real world applications is very challenging due to the fact that more accurate tracking tasks often require longer processing times, while quicker responses for the tracking system are more prone to errors, therefore a trade-off between accuracy and speed, and vice versa is required. This paper aims to achieve the two requirements together by implementing an accurate and time efficient tracking system. In this paper, an eye-to-hand visual system that has the ability to automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions, the colour feature of the target was carefully selected by using a new colour selection process, the process relies on the use of a colour segmentation method (Delta E) with the CHT algorithm for finding the proper colour of the tracked target, the target was attached to the six degree of freedom (DOF) robot end-effector that performs a pick-and-place task. A cooperation of two Eye-to Hand cameras with their image Averaging filters are used for obtaining clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system, the techniques is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed for the calculation of the parameters for camera to robotic transformation matrix. 
The results show the applicability of the proposed approach, which tracks the moving robot with an overall tracking error of 0.25 mm; the CRCHT technique also proved effective, saving up to 60% of the overall time required for image processing. PMID:28067860
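
    To illustrate the voting principle behind the CHT (not the paper's enhanced variant, which adds Delta E colour segmentation and the CRCHT search window), the sketch below implements a minimal fixed-radius accumulator: every edge pixel votes for all centres that could have produced it. All names and parameters here are illustrative.

```python
import numpy as np

def hough_circle_centers(edge_img, radius, n_angles=64):
    """Vote for circle centres of one known radius (a minimal, fixed-radius CHT)."""
    h, w = edge_img.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for y, x in zip(*np.nonzero(edge_img)):
        # Candidate centres lie on a circle of the same radius around the edge pixel.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # unbuffered accumulation of votes
    return acc

# Synthetic edge image: a circle of radius 10 centred at (24, 30).
img = np.zeros((50, 60), dtype=np.uint8)
t = np.linspace(0.0, 2.0 * np.pi, 200)
img[np.round(24 + 10 * np.sin(t)).astype(int),
    np.round(30 + 10 * np.cos(t)).astype(int)] = 1
acc = hough_circle_centers(img, radius=10)
cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
```

    The accumulator peak lands within a pixel or two of the true centre; restricting the votes to a small region of interest around the previous detection is the intuition behind the CRCHT speed-up described above.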

  4. Track and vertex reconstruction: From classical to adaptive methods

    NASA Astrophysics Data System (ADS)

    Strandlie, Are; Frühwirth, Rudolf

    2010-04-01

    This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.

  5. Extended state observer based robust adaptive control on SE(3) for coupled spacecraft tracking maneuver with actuator saturation and misalignment

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqiao; Ye, Dong; Sun, Zhaowei; Liu, Chuang

    2018-02-01

    This paper presents a robust adaptive controller integrated with an extended state observer (ESO) to solve the coupled spacecraft tracking maneuver problem in the presence of model uncertainties, external disturbances, actuator uncertainties including magnitude deviation and misalignment, and even actuator saturation. More specifically, employing the exponential coordinates on the Lie group SE(3) to describe configuration tracking errors, the coupled six-degrees-of-freedom (6-DOF) dynamics are developed for spacecraft relative motion, in which a generic fully actuated thruster distribution is considered and the lumped disturbances are reconstructed using an anti-windup technique. Then, a novel ESO, developed via the second-order sliding mode (SOSM) technique with linear correction terms added to improve performance, is first designed to estimate the disturbances in finite time. Based on the estimated information, an adaptive fast terminal sliding mode (AFTSM) controller is developed to guarantee almost global asymptotic stability of the resulting closed-loop system, such that the trajectory can be tracked with all the aforementioned drawbacks addressed simultaneously. Finally, the effectiveness of the controller is illustrated through numerical examples.

  6. Reading the Rover Tracks

    NASA Image and Video Library

    2012-08-29

    The straight lines in Curiosity's zigzag track marks are Morse code for JPL. The footprint is an important reference mark that the rover can use to drive more precisely via a system called visual odometry.

  7. The First Rapid Assessment of Avoidable Blindness (RAAB) in Thailand

    PubMed Central

    Isipradit, Saichin; Sirimaharaj, Maytinee; Charukamnoetkanok, Puwat; Thonginnetra, Oraorn; Wongsawad, Warapat; Sathornsumetee, Busaba; Somboonthanakij, Sudawadee; Soomsawasdi, Piriya; Jitawatanarat, Umapond; Taweebanjongsin, Wongsiri; Arayangkoon, Eakkachai; Arame, Punyawee; Kobkoonthon, Chinsuchee; Pangputhipong, Pannet

    2014-01-01

    Background: The majority of vision loss is preventable or treatable. Population surveys are crucial for planning, implementing, and monitoring policies and interventions to eliminate avoidable blindness and visual impairment. This is the first rapid assessment of avoidable blindness (RAAB) study in Thailand. Methods: A cross-sectional study of a Thai population aged 50 years or over assessed the prevalence and causes of blindness and visual impairment. Using the Thailand National Census 2010 as the sampling frame, a stratified four-stage cluster sampling based on probability proportional to size was conducted in 176 enumeration areas from 11 provinces. Participants received a comprehensive eye examination by ophthalmologists. Results: The age- and sex-adjusted prevalences of blindness (presenting visual acuity (VA) <20/400), severe visual impairment (VA <20/200 but ≥20/400), and moderate visual impairment (VA <20/70 but ≥20/200) were 0.6% (95% CI: 0.5–0.8), 1.3% (95% CI: 1.0–1.6), and 12.6% (95% CI: 10.8–14.5), respectively. There was no significant difference among the four regions of Thailand. Cataract was the main cause of vision loss, accounting for 69.7% of blindness. Cataract surgical coverage in persons was 95.1% at the cut-off VA of 20/400. Refractive errors, diabetic retinopathy, glaucoma, and corneal opacities were responsible for 6.0%, 5.1%, 4.0%, and 2.0% of blindness, respectively. Conclusion: Thailand is on track to achieve the goals of VISION 2020, but there is still much room for improvement. Policy refinements and innovative interventions are recommended to alleviate blindness and visual impairment, especially regarding the backlog of blinding cataract; the management of non-communicable, chronic, age-related eye diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy; the prevention of childhood blindness; and the establishment of a robust eye health information system. PMID:25502762

  8. Fast Compressive Tracking.

    PubMed

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there is not a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter drift problems; as a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity of the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
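
    The two core ingredients described above, a very sparse random measurement matrix and a naive Bayes score in the compressed domain, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the matrix distribution follows the standard very-sparse-projection recipe, and the function names and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_measurement_matrix(k, n, s=3.0):
    """Very sparse random projection: entries are +sqrt(s), 0, -sqrt(s)
    with probabilities 1/(2s), 1-1/s, 1/(2s), so most entries are zero."""
    return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                      size=(k, n), p=[1/(2*s), 1 - 1/s, 1/(2*s)])

def naive_bayes_score(v, mu_pos, sig_pos, mu_neg, sig_neg):
    """Sum of per-dimension Gaussian log-likelihood ratios (target vs background)."""
    lp = -0.5 * ((v - mu_pos) / sig_pos)**2 - np.log(sig_pos)
    ln = -0.5 * ((v - mu_neg) / sig_neg)**2 - np.log(sig_neg)
    return float(np.sum(lp - ln))

# Compress a 10,000-dimensional feature vector down to 50 numbers.
R = sparse_measurement_matrix(50, 10_000)
x = rng.normal(size=10_000)
v = R @ x
norm_ratio = (v @ v) / (R.shape[0] * (x @ x))   # ~1: the projection preserves structure
# Score a candidate whose compressed features match the positive (target) model.
score = naive_bayes_score(v, v, np.ones(50), -v, np.ones(50))
```

    Because each row of R has only a handful of nonzero entries, the compressed features can be computed with a few additions per row, which is what makes the tracker real-time.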

  9. Design and implementation of a remote UAV-based mobile health monitoring system

    NASA Astrophysics Data System (ADS)

    Li, Songwei; Wan, Yan; Fu, Shengli; Liu, Mushuang; Wu, H. Felix

    2017-04-01

    Unmanned aerial vehicles (UAVs) play increasing roles in structural health monitoring. With growing mobility in modern Internet-of-Things (IoT) applications, the health monitoring of mobile structures is an emerging application. In this paper, we develop a UAV-carried vision-based monitoring system that allows a UAV to continuously track and monitor a mobile infrastructure and transmit the monitoring information back in real time from a remote location. The monitoring system uses a simple UAV-mounted camera and requires only a single feature located on the mobile infrastructure for target detection and tracking. This computation-effective single-feature tracking solution improves on existing vision-based leader-follower tracking systems, which either perform poorly because they rely on a single feature, or perform better only at the cost of using multiple features. In addition, a UAV-carried aerial networking infrastructure using directional antennas enables robust real-time transmission of monitoring video streams over a long distance. Automatic heading control is used to self-align the headings of the directional antennas, enabling robust communication while the platforms are in motion. Compared to existing omnidirectional communication systems, the directional communication solution significantly increases the operating range of remote monitoring systems. In this paper, we develop an integrated modeling framework for the camera and mobile platforms, design the tracking algorithm, develop a testbed of UAVs and mobile platforms, and evaluate system performance through both simulation studies and field tests.
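
    The antenna self-alignment step reduces to pointing each directional antenna along the bearing to its peer. The sketch below shows only that geometry, assuming GPS positions for both platforms; it is not the authors' controller, and the function names are invented.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 toward point 2,
    in degrees clockwise from north."""
    la1, la2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(la2)
    x = math.cos(la1) * math.sin(la2) - math.sin(la1) * math.cos(la2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def heading_error_deg(current, target):
    """Smallest signed rotation (degrees) taking `current` onto `target`."""
    return (target - current + 180.0) % 360.0 - 180.0

# Point the antenna at a peer due east of us, starting from heading 350 degrees:
cmd = heading_error_deg(350.0, bearing_deg(0.0, 0.0, 0.0, 1.0))  # rotate +100 degrees
```

    Feeding the signed error into a simple rotation controller keeps the two directional antennas aligned as both platforms move.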

  10. Robust iterative learning control for multi-phase batch processes: an average dwell-time method with 2D convergence indexes

    NASA Astrophysics Data System (ADS)

    Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong

    2018-01-01

    In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme combining iterative learning control with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method, integrated with the related switching conditions, to give sufficient conditions ensuring stable operation of the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control ensuring that the steady-state tracking error converges rapidly. An application to an injection molding process demonstrates the effectiveness and superiority of the proposed strategy.
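
    Although the paper's 2D-FM switched-system machinery is far more general, the batch-to-batch learning idea at its core can be illustrated with a scalar P-type ILC loop: the same reference is tracked repeatedly, and each batch's tracking error corrects the next batch's input profile. The plant, learning gain, and horizon below are invented for the demonstration.

```python
import numpy as np

def run_batch(u, a=0.7, b=1.0, x0=0.0):
    """Simulate one batch of the toy plant x[t+1] = a*x[t] + b*u[t]; return outputs."""
    x, y = x0, []
    for ut in u:
        x = a * x + b * ut
        y.append(x)
    return np.array(y)

T = 20
y_ref = np.ones(T)                     # desired trajectory, identical every batch
u = np.zeros(T)                        # input profile for the first batch
err0 = np.max(np.abs(y_ref - run_batch(u)))
for k in range(30):                    # batch-to-batch learning iterations
    e = y_ref - run_batch(u)           # tracking error of batch k
    u = u + 0.5 * e                    # P-type ILC update: u_{k+1} = u_k + L*e_k
final_err = np.max(np.abs(y_ref - run_batch(u)))
```

    With learning gain L = 0.5 and direct input-to-output gain b = 1, the batch-to-batch error contraction factor is bounded away from 1, so the tracking error decays geometrically across batches; the feedback term in the paper's compound controller additionally shapes the within-batch transient.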

  11. WE-AB-303-08: Direct Lung Tumor Tracking Using Short Imaging Arcs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shieh, C; Huang, C; Keall, P

    2015-06-15

    Purpose: Most current tumor tracking technologies rely on implanted markers, which suffer from the potential toxicity of marker placement and mis-targeting due to marker migration. Several markerless tracking methods have been proposed; these are either indirect methods or have difficulties tracking lung tumors in most clinical cases due to overlapping anatomies in 2D projection images. We propose a direct lung tumor tracking algorithm robust to overlapping anatomies using short imaging arcs. Methods: The proposed algorithm tracks the tumor based on kV projections acquired within the latest six-degree imaging arc. To account for respiratory motion, an external motion surrogate is used to select projections of the same phase within the latest arc. For each arc, the pre-treatment 4D cone-beam CT (CBCT) with tumor contours is used to estimate and remove the contribution to the integral attenuation from surrounding anatomies. The position of the tumor model extracted from the 4D CBCT of the same phase is then optimized to match the processed projections using the conjugate gradient method. The algorithm was retrospectively validated on two kV scans of a lung cancer patient with implanted fiducial markers. This patient was selected because the tumor is attached to the mediastinum, representing a challenging case for markerless tracking methods. The tracking results were converted to expected marker positions and compared with marker trajectories obtained via direct marker segmentation (ground truth). Results: The root-mean-squared errors of tracking were 0.8 mm and 0.9 mm in the superior-inferior direction for the two scans. Tracking error was below 2 mm and 3 mm for 90% and 98% of the time, respectively. Conclusions: A direct lung tumor tracking algorithm robust to overlapping anatomies was proposed and validated on two scans of a lung cancer patient. Sub-millimeter tracking accuracy was observed, indicating the potential of this algorithm for real-time guidance applications.

  12. Scientific Visualization of Radio Astronomy Data using Gesture Interaction

    NASA Astrophysics Data System (ADS)

    Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.

    2015-09-01

    MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multidimensional data is a well-explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated in the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration, which would in turn encourage certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and a multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A heuristic evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations, although it is limited by users needing to remember the required gestures. In comparison, touch-based gesture navigation is typically more familiar to users, as these gestures were engineered from standard multi-touch actions. Future work will include a complete usability test to evaluate and compare the different interaction modalities across the different visualization tasks.

  13. Measuring advertising effectiveness in Travel 2.0 websites through eye-tracking technology.

    PubMed

    Muñoz-Leiva, Francisco; Hernández-Méndez, Janet; Gómez-Carmona, Diego

    2018-03-06

    The advent of Web 2.0 is changing tourists' behaviors, prompting them to take on a more active role in preparing their travel plans. It is also leading tourism companies to have to adapt their marketing strategies to different online social media. The present study analyzes advertising effectiveness in social media in terms of customers' visual attention and self-reported memory (recall). Data were collected through a within-subjects and between-groups design based on eye-tracking technology, followed by a self-administered questionnaire. Participants were instructed to visit three Travel 2.0 websites (T2W), including a hotel's blog, social network profile (Facebook), and virtual community profile (Tripadvisor). Overall, the results revealed greater advertising effectiveness in the case of the hotel social network; and visual attention measures based on eye-tracking data differed from measures of self-reported recall. Visual attention to the ad banner was paid at a low level of awareness, which explains why the associations with the ad did not activate its subsequent recall. The paper offers a pioneering attempt in the application of eye-tracking technology, and examines the possible impact of visual marketing stimuli on user T2W-related behavior. The practical implications identified in this research, along with its limitations and future research opportunities, are of interest both for further theoretical development and practical application. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Infrared dim and small target detecting and tracking method inspired by Human Visual System

    NASA Astrophysics Data System (ADS)

    Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian

    2014-01-01

    Detecting and tracking dim, small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and precise infrared imaging guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, the HVS involves at least three mechanisms: the contrast mechanism, visual attention and eye movement. However, most existing algorithms simulate only one of these mechanisms, which leads to a number of drawbacks. This paper proposes a novel method that combines all three HVS mechanisms. First, a group of Difference of Gaussians (DOG) filters, which simulate the contrast mechanism, are used to filter the input image. Second, visual attention, simulated by a Gaussian window, is applied at a point near the target (named the attention point) to further enhance the dim small target. Finally, a Proportional-Integral-Derivative (PID) algorithm is introduced, for the first time, to predict the attention point in the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim and small targets.
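
    Two of the three mechanisms, DOG filtering for contrast and a PID predictor for the attention point, can be sketched in a few lines; the Gaussian attention window would simply weight the DOG response around the predicted point. Filter scales, PID gains, and the toy image below are illustrative, not the paper's settings.

```python
import numpy as np

def gaussian_kernel(sigma):
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur with same-size output."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(np.convolve, 1, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 0, out, k, mode='same')

def dog(img, s1=1.0, s2=3.0):
    """Difference of Gaussians: a band-pass filter that enhances small bright blobs."""
    return blur(img, s1) - blur(img, s2)

class PID:
    """PID predictor for one coordinate of the attention point."""
    def __init__(self, kp=0.8, ki=0.05, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i = self.prev = 0.0
    def step(self, err):
        self.i += err
        d = err - self.prev
        self.prev = err
        return self.kp * err + self.ki * self.i + self.kd * d

# Demo: a single dim pixel target at (12, 17) on a flat background.
img = np.zeros((32, 32))
img[12, 17] = 1.0
resp = dog(img)                     # a Gaussian attention window could weight this
py, px = np.unravel_index(np.argmax(resp), resp.shape)
pid = PID()
u1 = pid.step(1.0)                  # first correction for a unit prediction error
```

    The DOG response peaks exactly on the small blob while suppressing smooth background structure; running one PID instance per image coordinate yields the predicted attention point for the next frame.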

  15. The artificial retina for track reconstruction at the LHC crossing rate

    NASA Astrophysics Data System (ADS)

    Abba, A.; Bedeschi, F.; Citterio, M.; Caponio, F.; Cusimano, A.; Geraci, A.; Marino, P.; Morello, M. J.; Neri, N.; Punzi, G.; Piucci, A.; Ristori, L.; Spinella, F.; Stracka, S.; Tonelli, D.

    2016-04-01

    We present the results of an R&D study for a specialized processor capable of precisely reconstructing events with hundreds of charged-particle tracks in pixel and silicon strip detectors at 40 MHz, thus suitable for processing LHC events at the full crossing frequency. For this purpose we design and test a massively parallel pattern-recognition algorithm, inspired by the current understanding of the mechanisms adopted by the primary visual cortex of mammals in the early stages of visual-information processing. The detailed geometry and charged-particle activity of a large tracking detector are simulated and used to assess the performance of the artificial retina algorithm. We find that high-quality tracking in large detectors is possible with sub-microsecond latencies when the algorithm is implemented in modern, high-speed, high-bandwidth FPGA devices.

  16. Extracting, Tracking, and Visualizing Magnetic Flux Vortices in 3D Complex-Valued Superconductor Simulation Data.

    PubMed

    Guo, Hanqi; Phillips, Carolyn L; Peterka, Tom; Karpeyev, Dmitry; Glatz, Andreas

    2016-01-01

    We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible.
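
    In the Ginzburg-Landau setting, a vortex core can be located by the phase winding of the complex order parameter around each grid plaquette. The sketch below shows only that extraction step for a structured grid (the connected-graph tracking described above is omitted), using an analytic +1 vortex as test data; all names are illustrative.

```python
import numpy as np

def winding(loop_vals):
    """Phase winding (in units of 2*pi) of complex order-parameter samples
    taken counter-clockwise around a closed loop."""
    ph = np.angle(loop_vals)
    d = np.diff(np.concatenate([ph, ph[:1]]))
    d = (d + np.pi) % (2 * np.pi) - np.pi      # wrap each phase step into (-pi, pi]
    return int(round(d.sum() / (2 * np.pi)))

# A +1 vortex at (cx, cy): psi = (x - cx) + i*(y - cy). Scan every 2x2 grid
# plaquette; only the one enclosing the core has nonzero winding.
cx = cy = 5.5
X, Y = np.meshgrid(np.arange(12), np.arange(12), indexing='ij')
psi = (X - cx) + 1j * (Y - cy)
cores = [(i, j) for i in range(11) for j in range(11)
         if winding(np.array([psi[i, j], psi[i + 1, j],
                              psi[i + 1, j + 1], psi[i, j + 1]])) != 0]
```

    Because the winding is quantized, this detection is exact up to the grid resolution, which is why the extraction can be "as accurate as the data discretization"; linking cores across plaquettes and time steps then yields the vortex lines and event diagrams.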

  17. Mars @ ASDC

    NASA Astrophysics Data System (ADS)

    Carraro, Francesco

    "Mars @ ASDC" is a project born with the goal of using the new web technologies to assist researches involved in the study of Mars. This project employs Mars map and javascript APIs provided by Google to visualize data acquired by space missions on the planet. So far, visualization of tracks acquired by MARSIS and regions observed by VIRTIS-Rosetta has been implemented. The main reason for the creation of this kind of tool is the difficulty in handling hundreds or thousands of acquisitions, like the ones from MARSIS, and the consequent difficulty in finding observations related to a particular region. This led to the development of a tool which allows to search for acquisitions either by defining the region of interest through a set of geometrical parameters or by manually selecting the region on the map through a few mouse clicks The system allows the visualization of tracks (acquired by MARSIS) or regions (acquired by VIRTIS-Rosetta) which intersect the user defined region. MARSIS tracks can be visualized both in Mercator and polar projections while the regions observed by VIRTIS can presently be visualized only in Mercator projection. The Mercator projection is the standard map provided by Google. The polar projections are provided by NASA and have been developed to be used in combination with APIs provided by Google The whole project has been developed following the "open source" philosophy: the client-side code which handles the functioning of the web page is written in javascript; the server-side code which executes the searches for tracks or regions is written in PHP and the DB which undergoes the system is MySQL.

  18. An Improved Strong Tracking Cubature Kalman Filter for GPS/INS Integrated Navigation Systems.

    PubMed

    Feng, Kaiqiang; Li, Jie; Zhang, Xi; Zhang, Xiaoming; Shen, Chong; Cao, Huiliang; Yang, Yanyu; Liu, Jun

    2018-06-12

    The cubature Kalman filter (CKF) is widely used in GPS/INS integrated navigation systems. However, its accuracy may decline, and the filter may even diverge, in the presence of process uncertainties. To solve this problem, a new algorithm named the improved strong tracking seventh-degree spherical simplex-radial cubature Kalman filter (IST-7thSSRCKF) is proposed in this paper. In the proposed algorithm, the effect of process uncertainty is mitigated using an improved strong tracking Kalman filter technique, in which a hypothesis-testing method is adopted to identify the process uncertainty, and the prior state estimate covariance in the CKF is modified online according to changes in vehicle dynamics. In addition, a new seventh-degree spherical simplex-radial rule is employed to further improve the estimation accuracy of the strong tracking cubature Kalman filter. In this way, the proposed algorithm combines the high accuracy of the 7thSSRCKF with the strong tracking filter's robustness against process uncertainties. A GPS/INS integrated navigation problem with significant dynamic model errors is used to validate the performance of the proposed IST-7thSSRCKF. Results demonstrate that the improved strong tracking cubature Kalman filter achieves higher accuracy than the existing CKF and ST-CKF, and is more robust for GPS/INS integrated navigation.
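
    The fading-factor idea behind strong tracking filters can be shown in one dimension: inflate the predicted covariance whenever the smoothed innovation statistics exceed what the model predicts, so the filter re-weights new measurements after a maneuver. The sketch below follows the classic Zhou-Frank scalar form, not the authors' seventh-degree cubature rule or their hypothesis test; all numbers are illustrative.

```python
def st_kf_step(x, P, z, F=1.0, H=1.0, Q=0.01, R=0.1, rho=0.95, V=None):
    """One scalar Kalman step with a strong-tracking fading factor lam >= 1."""
    x_pred = F * x
    innov = z - H * x_pred
    # Recursively smoothed innovation covariance (Zhou-Frank style).
    V = innov**2 if V is None else (rho * V + innov**2) / (1.0 + rho)
    N = V - H * Q * H - R                 # excess innovation power
    M = H * F * P * F * H                 # power explained by the prior
    lam = max(1.0, N / M) if M > 0 else 1.0
    P_pred = lam * F * P * F + Q          # lam > 1 inflates the prediction
    S = H * P_pred * H + R
    K = P_pred * H / S
    x = x_pred + K * innov
    P = (1.0 - K * H) * P_pred
    return x, P, V, lam

# Noise-free demo: the measurement jumps from 0 to 5 at t = 5.
x, P, V = 0.0, 1.0, None
lams = []
for z in [0.0] * 5 + [5.0] * 3:
    x, P, V, lam = st_kf_step(x, P, z, V=V)
    lams.append(lam)
```

    During steady tracking lam stays at 1 and the filter behaves as an ordinary Kalman filter; at the jump lam grows large, the gain approaches 1, and the estimate locks onto the new level within a step or two.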

  19. Adaptive Tracking Control for Robots With an Interneural Computing Scheme.

    PubMed

    Tsai, Feng-Sheng; Hsu, Sheng-Yi; Shih, Mau-Hsiang

    2018-04-01

    Adaptive tracking control of mobile robots requires the ability to follow a trajectory generated by a moving target. The conventional analysis of adaptive tracking uses energy minimization to study the convergence and robustness of the tracking error when the mobile robot follows a desired trajectory. However, in the case that the moving target generates trajectories with uncertainties, a common Lyapunov-like function for energy minimization may be extremely difficult to determine. Here, to solve the adaptive tracking problem with uncertainties, we wish to implement an interneural computing scheme in the design of a mobile robot for behavior-based navigation. The behavior-based navigation adopts an adaptive plan of behavior patterns learning from the uncertainties of the environment. The characteristic feature of the interneural computing scheme is the use of neural path pruning with rewards and punishment interacting with the environment. On this basis, the mobile robot can be exploited to change its coupling weights in paths of neural connections systematically, which can then inhibit or enhance the effect of flow elimination in the dynamics of the evolutionary neural network. Such dynamical flow translation ultimately leads to robust sensory-to-motor transformations adapting to the uncertainties of the environment. A simulation result shows that the mobile robot with the interneural computing scheme can perform fault-tolerant behavior of tracking by maintaining suitable behavior patterns at high frequency levels.

  20. Dual-Tasking Alleviated Sleep Deprivation Disruption in Visuomotor Tracking: An fMRI Study

    ERIC Educational Resources Information Center

    Gazes, Yunglin; Rakitin, Brian C.; Steffener, Jason; Habeck, Christian; Butterfield, Brady; Basner, Robert C.; Ghez, Claude; Stern, Yaakov

    2012-01-01

    Effects of dual-responding on tracking performance after 49-h of sleep deprivation (SD) were evaluated behaviorally and with functional magnetic resonance imaging (fMRI). Continuous visuomotor tracking was performed simultaneously with an intermittent color-matching visual detection task in which a pair of color-matched stimuli constituted a…

  1. Prior Knowledge and Online Inquiry-Based Science Reading: Evidence from Eye Tracking

    ERIC Educational Resources Information Center

    Ho, Hsin Ning Jessie; Tsai, Meng-Jung; Wang, Ching-Yeh; Tsai, Chin-Chung

    2014-01-01

    This study employed eye-tracking technology to examine how students with different levels of prior knowledge process text and data diagrams when reading a web-based scientific report. Students' visual behaviors were tracked and recorded when they read a report demonstrating the relationship between the greenhouse effect and global climate…

  2. Use of Cognitive and Metacognitive Strategies in Online Search: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Zhou, Mingming; Ren, Jing

    2016-01-01

    This study used eye-tracking technology to track students' eye movements while searching information on the web. The research question guiding this study was "Do students with different search performance levels have different visual attention distributions while searching information online? If yes, what are the patterns for high and low…

  3. High-performance object tracking and fixation with an online neural estimator.

    PubMed

    Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian

    2007-02-01

    Vision-based target tracking and fixation, keeping objects that move in three dimensions in view, is important for many tasks in fields including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. Accordingly, this paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimum sensory information. The procedure allows the designer to take the physical (Lagrangian dynamics) properties of the vision system into account in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system's dynamics into account, yet needs no knowledge of dynamic nonlinearities or joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee uniform ultimate boundedness of the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation in the presence of severe target motion changes.

  4. Shared processing in multiple object tracking and visual working memory in the absence of response order and task order confounds

    PubMed Central

    Howe, Piers D. L.

    2017-01-01

    To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources. PMID:28410383

  5. Shared processing in multiple object tracking and visual working memory in the absence of response order and task order confounds.

    PubMed

    Lapierre, Mark D; Cropper, Simon J; Howe, Piers D L

    2017-01-01

    To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources.

  6. A Scalable Distributed Approach to Mobile Robot Vision

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.

    1997-01-01

    This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).

  7. Oculometric Assessment of Dynamic Visual Processing

    NASA Technical Reports Server (NTRS)

    Liston, Dorion Bryce; Stone, Lee

    2014-01-01

    Eye movements are the most frequent (3 per second), shortest-latency (150-250 ms), and biomechanically simplest (1 joint, no inertial complexities) voluntary motor behavior in primates, providing a model system to assess sensorimotor disturbances arising from trauma, fatigue, aging, or disease states (e.g., Diefendorf and Dodge, 1908). We developed a 15-minute behavioral tracking protocol consisting of randomized step-ramp radial target motion to assess several aspects of the behavioral response to dynamic visual motion, including pursuit initiation, steady-state tracking, direction-tuning, and speed-tuning thresholds. This set of oculomotor metrics provides valid and reliable measures of dynamic visual performance (Stone and Krauzlis, 2003; Krukowski and Stone, 2005; Stone et al, 2009; Liston and Stone, 2014), and may prove to be a useful assessment tool for functional impairments of dynamic visual processing.

  8. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    PubMed Central

    Qin, Lei; Snoussi, Hichem; Abdallah, Fahed

    2014-01-01

    We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in the case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences. PMID:24865883
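    The region covariance descriptor at the core of this record can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the feature pool used here (pixel coordinates, intensity, gradient magnitudes) is an assumption chosen for clarity:

```python
import numpy as np

def region_covariance(patch):
    """Region covariance descriptor of a grayscale patch.

    Each pixel contributes a feature vector [x, y, I, |Ix|, |Iy|];
    the descriptor is the covariance matrix of these vectors over
    the region. The feature choice is illustrative only.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]                  # pixel coordinates
    iy, ix = np.gradient(patch.astype(float))    # first derivatives
    feats = np.stack([xs.ravel().astype(float),
                      ys.ravel().astype(float),
                      patch.ravel().astype(float),
                      np.abs(ix).ravel(),
                      np.abs(iy).ravel()], axis=0)
    return np.cov(feats)  # 5x5 symmetric positive semi-definite matrix

patch = np.random.default_rng(0).random((16, 16))
C = region_covariance(patch)
```

    Because the descriptor lives on the manifold of positive semi-definite matrices, comparing two such descriptors in a tracker typically requires a manifold-aware distance rather than a plain Euclidean one.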

  9. Method Matters: Systematic Effects of Testing Procedure on Visual Working Memory Sensitivity

    ERIC Educational Resources Information Center

    Makovski, Tal; Watson, Leah M.; Koutstaal, Wilma; Jiang, Yuhong V.

    2010-01-01

    Visual working memory (WM) is traditionally considered a robust form of visual representation that survives changes in object motion, observer's position, and other visual transients. This article presents data that are inconsistent with the traditional view. We show that memory sensitivity is dramatically influenced by small variations in the…

  10. A robust H∞ control-based hierarchical mode transition control system for plug-in hybrid electric vehicle

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Jiao, Xiaohong; Li, Liang; Zhang, Yuanbo; Chen, Zheng

    2018-01-01

    To realize a fast and smooth operating mode transition from electric driving mode to engine-on driving mode, this paper presents a novel robust hierarchical mode transition control method for a plug-in hybrid electric bus (PHEB) with a pre-transmission parallel hybrid powertrain. Firstly, the mode transition process is divided into five stages to clearly describe the powertrain dynamics. Based on the dynamics models of the powertrain and the clutch actuating mechanism, a hierarchical control structure with robust H∞ controllers in both the upper and lower layers is proposed. In the upper layer, the demanded clutch torque is calculated by a robust H∞ controller that accounts for the clutch engaging time and the vehicle jerk, while in the lower layer a robust tracking controller with L2-gain performs accurate position tracking control, especially when parameter uncertainties and external disturbances occur in the clutch actuating mechanism. Simulation and hardware-in-the-loop (HIL) tests are carried out in a typical driving condition of the PHEB. Results show that the proposed hierarchical control approach achieves good control performance: the mode transition time is greatly reduced with acceptable jerk. Meanwhile, the designed control system shows clear robustness to parameter uncertainties and disturbances. The proposed approach may therefore offer a theoretical reference for an actual vehicle controller.

  11. Tracking single particle rotation: Probing dynamics in four dimensions

    DOE PAGES

    Anthony, Stephen Michael; Yu, Yan

    2015-04-29

    Direct visualization and tracking of small particles at high spatial and temporal resolution provides a powerful approach to probing complex dynamics and interactions in chemical and biological processes. Analysis of the rotational dynamics of particles adds a new dimension of information that is otherwise impossible to obtain with conventional 3-D particle tracking. In this review, we survey recent advances in single-particle rotational tracking, with highlights on the rotational tracking of optically anisotropic Janus particles. Furthermore, the strengths and weaknesses of the various particle tracking methods and their applications are discussed.

  12. Visualization of frequency-modulated electric field based on photonic frequency tracking in asynchronous electro-optic measurement system

    NASA Astrophysics Data System (ADS)

    Hisatake, Shintaro; Yamaguchi, Koki; Uchida, Hirohisa; Tojyo, Makoto; Oikawa, Yoichi; Miyaji, Kunio; Nagatsuma, Tadao

    2018-04-01

    We propose a new asynchronous measurement system to visualize the amplitude and phase distribution of a frequency-modulated electromagnetic wave. The system consists of three parts: a nonpolarimetric electro-optic frequency down-conversion part, a phase-noise-canceling part, and a frequency-tracking part. The photonic local oscillator signal generated by electro-optic phase modulation is controlled to track the frequency of the radio frequency (RF) signal to significantly enhance the measurable RF bandwidth. We demonstrate amplitude and phase measurement of a quasi-millimeter-wave frequency-modulated continuous-wave signal (24 GHz ± 80 MHz with a 2.5 ms period) as a proof-of-concept experiment.

  13. Map display design

    NASA Technical Reports Server (NTRS)

    Aretz, Anthony J.

    1990-01-01

    This paper presents a cognitive model of a pilot's navigation task and describes an experiment comparing a visual momentum map display to the traditional track-up and north-up approaches. The data show that the advantage of a track-up map is its congruence with the ego-centered forward view; however, the development of survey knowledge is hindered by the inconsistency of the rotating display. The stable alignment of a north-up map aids the acquisition of survey knowledge, but there is a cost associated with the mental rotation of the display to a track-up alignment for ego-centered tasks. The results also show that visual momentum can be used to reduce the mental rotation costs of a north-up display.

  14. Robot tracking system improvements and visual calibration of orbiter position for radiator inspection

    NASA Technical Reports Server (NTRS)

    Tonkay, Gregory

    1990-01-01

    The following separate topics are addressed: (1) improving a robotic tracking system; and (2) providing insights into orbiter position calibration for radiator inspection. The objective of the tracking system project was to provide the capability to track moving targets more accurately by adjusting parameters in the control system and implementing a predictive algorithm. A computer model was developed to emulate the tracking system. Using this model as a test bed, a self-tuning algorithm was developed to tune the system gains. The model yielded important findings concerning factors that affect the gains. The self-tuning algorithms will provide the concepts to write a program to automatically tune the gains in the real system. The section concerning orbiter position calibration provides a comparison to previous work that had been performed for plant growth. It provided the conceptualized routines required to visually determine the orbiter position and orientation. Furthermore, it identified the types of information which are required to flow between the robot controller and the vision system.

  15. Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study

    ERIC Educational Resources Information Center

    Bulf, Hermann; Valenza, Eloisa

    2013-01-01

    Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…

  16. Accurate segmentation framework for the left ventricle wall from cardiac cine MRI

    NASA Astrophysics Data System (ADS)

    Sliman, H.; Khalifa, F.; Elnakib, A.; Soliman, A.; Beache, G. M.; Gimel'farb, G.; Emam, A.; Elmaghraby, A.; El-Baz, A.

    2013-10-01

    We propose a novel, fast, robust, bi-directional coupled parametric deformable model to segment the left ventricle (LV) wall borders using first- and second-order visual appearance features. These features are embedded in a new stochastic external force that preserves the topology of the LV wall while tracking the evolution of the parametric deformable model's control points. To accurately estimate the marginal density at each deformable model control point, the empirical marginal grey-level distributions (first-order appearance) inside and outside the boundary of the deformable model are modeled with adaptive linear combinations of discrete Gaussians (LCDG). The second-order visual appearance of the LV wall is accurately modeled with a new rotationally invariant second-order Markov-Gibbs random field (MGRF). We tested the proposed segmentation approach on 15 data sets from 6 infarction patients using the Dice similarity coefficient (DSC) and the average distance (AD) between the ground truth and automated segmentation contours. Our approach achieves a mean DSC of 0.926±0.022 and AD of 2.16±0.60, compared with two other level set methods that achieve 0.904±0.033 and 0.885±0.02 for DSC, and 2.86±1.35 and 5.72±4.70 for AD, respectively.
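    The Dice similarity coefficient used to evaluate these segmentations is straightforward to compute; a minimal sketch for binary masks, with the edge-case convention (both masks empty → DSC = 1) chosen here for illustration:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:          # both masks empty: define DSC = 1
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True        # 16-pixel ground-truth square
seg = np.zeros((8, 8), dtype=bool)
seg[3:7, 2:6] = True       # 16-pixel segmentation, shifted by one row
score = dice(gt, seg)      # 2*12 / (16+16) = 0.75
```

    A DSC of 1 indicates perfect overlap, 0 indicates none, which is why values around 0.93 in the abstract indicate close agreement with the ground-truth contours.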

  17. Photogrammetric point cloud compression for tactical networks

    NASA Astrophysics Data System (ADS)

    Madison, Andrew C.; Massaro, Richard D.; Wayant, Clayton D.; Anderson, John E.; Smith, Clint B.

    2017-05-01

    We report progress toward the development of a compression schema suitable for use in the Army's Common Operating Environment (COE) tactical network. The COE facilitates the dissemination of information across all Warfighter echelons through the establishment of data standards and networking methods that coordinate the readout and control of a multitude of sensors in a common operating environment. When integrated with a robust geospatial mapping functionality, the COE enables force tracking, remote surveillance, and heightened situational awareness to Soldiers at the tactical level. Our work establishes a point cloud compression algorithm through image-based deconstruction and photogrammetric reconstruction of three-dimensional (3D) data that is suitable for dissemination within the COE. An open source visualization toolkit was used to deconstruct 3D point cloud models based on ground mobile light detection and ranging (LiDAR) into a series of images and associated metadata that can be easily transmitted on a tactical network. Stereo photogrammetric reconstruction is then conducted on the received image stream to reveal the transmitted 3D model. The reported method boasts nominal compression ratios typically on the order of 250 while retaining tactical information and accurate georegistration. Our work advances the scope of persistent intelligence, surveillance, and reconnaissance through the development of 3D visualization and data compression techniques relevant to the tactical operations environment.

  18. Zebrafish response to a robotic replica in three dimensions

    PubMed Central

    Ruberto, Tommaso; Mwaffo, Violet; Singh, Sukhgewanpreet; Neri, Daniele

    2016-01-01

    As zebrafish emerge as a species of choice for the investigation of biological processes, a number of experimental protocols are being developed to study their social behaviour. While live stimuli may elicit varying response in focal subjects owing to idiosyncrasies, tiredness and circadian rhythms, video stimuli suffer from the absence of physical input and rely only on two-dimensional projections. Robotics has been recently proposed as an alternative approach to generate physical, customizable, effective and consistent stimuli for behavioural phenotyping. Here, we contribute to this field of investigation through a novel four-degree-of-freedom robotics-based platform to manoeuvre a biologically inspired three-dimensionally printed replica. The platform enables three-dimensional motions as well as body oscillations to mimic zebrafish locomotion. In a series of experiments, we demonstrate the differential role of the visual stimuli associated with the biologically inspired replica and its three-dimensional motion. Three-dimensional tracking and information-theoretic tools are complemented to quantify the interaction between zebrafish and the robotic stimulus. Live subjects displayed a robust attraction towards the moving replica, and such attraction was lost when controlling for its visual appearance or motion. This effort is expected to aid zebrafish behavioural phenotyping, by offering a novel approach to generate physical stimuli moving in three dimensions. PMID:27853566

  19. A novel method for automated tracking and quantification of adult zebrafish behaviour during anxiety.

    PubMed

    Nema, Shubham; Hasan, Whidul; Bhargava, Anamika; Bhargava, Yogesh

    2016-09-15

    Behavioural neuroscience relies on software-driven methods for behavioural assessment, but the field lacks cost-effective, robust, open-source software for behavioural analysis. Here we propose a novel method, which we call ZebraTrack. It includes a cost-effective imaging setup for distraction-free behavioural acquisition, automated tracking using the open-source ImageJ software, and a workflow for the extraction of behavioural endpoints. Our ImageJ algorithm gives users control at key steps while maintaining automated tracking without requiring the installation of external plugins. We validated this method by testing novelty-induced anxiety behaviour in adult zebrafish. Our results, in agreement with established findings, showed that during state anxiety, zebrafish showed reduced distance travelled, increased thigmotaxis, and more freezing events. Furthermore, we propose a method to represent both the spatial and temporal distribution of choice-based behaviour, which is currently not possible with simple videograms. The ZebraTrack method is simple and economical, yet robust enough to give results comparable with those obtained from costly proprietary software such as Ethovision XT. We have developed and validated a novel cost-effective method for behavioural analysis of adult zebrafish using open-source ImageJ software. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Feature point based 3D tracking of multiple fish from multi-view images

    PubMed Central

    Qian, Zhi-Ming

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly. PMID:28665966
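    The matching step for non-occluded feature points across consecutive frames can be illustrated with a simple greedy nearest-neighbour association. This is a sketch only; the paper's actual matching, occlusion handling, and multi-view integration are more involved, and the distance gate here is an assumed parameter:

```python
import numpy as np

def greedy_match(prev_pts, curr_pts, max_dist=5.0):
    """Greedily associate each previous feature point with its nearest
    unclaimed point in the current frame, within a distance gate.
    Returns a list of (prev_index, curr_index) pairs."""
    # Pairwise Euclidean distances: shape (n_prev, n_curr)
    d = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=2)
    pairs, used = [], set()
    for i in np.argsort(d.min(axis=1)):      # most confident points first
        for j in np.argsort(d[i]):           # nearest candidates first
            if int(j) not in used and d[i, j] <= max_dist:
                pairs.append((int(i), int(j)))
                used.add(int(j))
                break
    return pairs

prev_pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
curr_pts = np.array([[0.5, 0.2], [10.2, -0.1]])
matches = greedy_match(prev_pts, curr_pts)   # third point has no match in gate
```

    Points left unmatched (here the third previous point) would be candidates for the occluded-point handling that the method treats separately.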

  1. Feature point based 3D tracking of multiple fish from multi-view images.

    PubMed

    Qian, Zhi-Ming; Chen, Yan Qiu

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly.

  2. Embodied learning of a generative neural model for biological motion perception and inference

    PubMed Central

    Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V.

    2015-01-01

    Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons. PMID:26217215

  3. Parsing heterogeneity in autism spectrum disorders: visual scanning of dynamic social scenes in school-aged children.

    PubMed

    Rice, Katherine; Moriuchi, Jennifer M; Jones, Warren; Klin, Ami

    2012-03-01

    To examine patterns of variability in social visual engagement and their relationship to standardized measures of social disability in a heterogeneous sample of school-aged children with autism spectrum disorders (ASD). Eye-tracking measures of visual fixation during free-viewing of dynamic social scenes were obtained for 109 children with ASD (mean age, 10.2 ± 3.2 years), 37 of whom were matched with 26 typically-developing (TD) children (mean age, 9.5 ± 2.2 years) on gender, age, and IQ. The smaller subset allowed between-group comparisons, whereas the larger group was used for within-group examinations of ASD heterogeneity. Between-group comparisons revealed significantly attenuated orientation to socially salient aspects of the scenes, with the largest effect size (Cohen's d = 1.5) obtained for reduced fixation on faces. Within-group analyses revealed a robust association between higher fixation on the inanimate environment and greater social disability. However, the associations between fixation on the eyes and mouth and social adaptation varied greatly, even reversing, when comparing different cognitive profile subgroups. Although patterns of social visual engagement with naturalistic social stimuli are profoundly altered in children with ASD, the social adaptivity of these behaviors varies for different groups of children. This variation likely represents different patterns of adaptation and maladaptation that should be traced longitudinally to the first years of life, before complex interactions between early predispositions and compensatory learning take place. We propose that variability in these early mechanisms of socialization may serve as proximal behavioral manifestations of genetic vulnerabilities. Copyright © 2012 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  4. Embodied learning of a generative neural model for biological motion perception and inference.

    PubMed

    Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V

    2015-01-01

    Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.

  5. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    NASA Astrophysics Data System (ADS)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the human visual system, based on a so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, utilizes the logarithmical image visualization technique coupled with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database, and the ATT database are used for accuracy and efficiency testing in computer simulation. The extensive computer simulations demonstrate the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.

  6. Using Eye Tracking to Explore Consumers' Visual Behavior According to Their Shopping Motivation in Mobile Environments.

    PubMed

    Hwang, Yoon Min; Lee, Kun Chang

    2017-07-01

    Despite a strong shift to mobile shopping trends, many in-depth questions about mobile shoppers' visual behaviors in mobile shopping environments remain unaddressed. This study aims to answer two challenging research questions (RQs): (a) how much does shopping motivation like goal orientation and recreation influence mobile shoppers' visual behavior toward displays of shopping information on a mobile shopping screen and (b) how much of mobile shoppers' visual behavior influences their purchase intention for the products displayed on a mobile shopping screen? An eye-tracking approach is adopted to answer the RQs empirically. The experimental results showed that goal-oriented shoppers paid closer attention to products' information areas to meet their shopping goals. Their purchase intention was positively influenced by their visual attention to the two areas of interest such as product information and consumer opinions. In contrast, recreational shoppers tended to visually fixate on the promotion area, which positively influences their purchase intention. The results contribute to understanding mobile shoppers' visual behaviors and shopping intentions from the perspective of mindset theory.

  7. Your Child's Vision

    MedlinePlus

    ... 3½, kids should have eye health screenings and visual acuity tests (tests that measure sharpness of vision) ... eye rubbing extreme light sensitivity poor focusing poor visual tracking (following an object) abnormal alignment or movement ...

  8. Global Positioning System Synchronized Active Light Autonomous Docking System

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor); Bell, Joseph L. (Inventor)

    1996-01-01

    A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprising at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.

  9. Global Positioning System Synchronized Active Light Autonomous Docking System

    NASA Technical Reports Server (NTRS)

    Howard, Richard (Inventor)

    1994-01-01

    A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprises at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.

  10. Through Their Eyes: Tracking the Gaze of Students in a Geology Field Course

    ERIC Educational Resources Information Center

    Maltese, Adam V.; Balliet, Russell N.; Riggs, Eric M.

    2013-01-01

    The focus of this research was to investigate how students learn to do fieldwork through observation. This study addressed the following questions: (1) Can mobile eye-tracking devices provide a robust source of data to investigate the observations and workflow of novice students while participating in a field exercise? If so, what are the…

  11. Visual attention on a respiratory function monitor during simulated neonatal resuscitation: an eye-tracking study.

    PubMed

    Katz, Trixie A; Weinberg, Danielle D; Fishman, Claire E; Nadkarni, Vinay; Tremoulet, Patrice; Te Pas, Arjan B; Sarcevic, Aleksandra; Foglia, Elizabeth E

    2018-06-14

A respiratory function monitor (RFM) may improve positive pressure ventilation (PPV) technique, but many providers do not use RFM data appropriately during delivery room resuscitation. We sought to use eye-tracking technology to identify RFM parameters that neonatal providers view most commonly during simulated PPV. Mixed methods study. Neonatal providers performed RFM-guided PPV on a neonatal manikin while wearing eye-tracking glasses to quantify visual attention on displayed RFM parameters (ie, exhaled tidal volume, flow, leak). Participants subsequently provided qualitative feedback on the eye-tracking glasses. Level 3 academic neonatal intensive care unit. Twenty neonatal resuscitation providers. Visual attention: overall gaze sample percentage; total gaze duration, visit count and average visit duration for each displayed RFM parameter. Qualitative feedback: willingness to wear eye-tracking glasses during clinical resuscitation. Twenty providers participated in this study. The mean gaze sample captured was 93% (SD 4%). Exhaled tidal volume waveform was the RFM parameter with the highest total gaze duration (median 23%, IQR 13-51%), highest visit count (median 5.17 per 10 s, IQR 2.82-6.16) and longest visit duration (median 0.48 s, IQR 0.38-0.81 s). All participants were willing to wear the glasses during clinical resuscitation. Wearable eye-tracking technology is a feasible way to identify gaze fixation on the RFM display and is well accepted by providers. Neonatal providers look at exhaled tidal volume more than any other RFM parameter. Future applications of eye-tracking technology include use during clinical resuscitation.
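    The visual-attention metrics named in this study (total gaze duration per area of interest, visit count per 10 s, average visit duration) can be computed directly from a sequence of fixation visits. A minimal sketch, with invented area-of-interest names and visit data for illustration:

    ```python
    # Hedged sketch: per-AOI attention metrics from a list of fixation visits.
    # The visit data and AOI names below are made up for illustration.

    from collections import defaultdict

    def attention_metrics(visits, recording_seconds):
        """visits: list of (aoi_name, visit_duration_seconds) in temporal order."""
        stats = defaultdict(lambda: {"total": 0.0, "count": 0})
        for aoi, dur in visits:
            stats[aoi]["total"] += dur
            stats[aoi]["count"] += 1
        out = {}
        for aoi, s in stats.items():
            out[aoi] = {
                "gaze_pct": 100.0 * s["total"] / recording_seconds,   # % of recording
                "visits_per_10s": 10.0 * s["count"] / recording_seconds,
                "avg_visit_s": s["total"] / s["count"],
            }
        return out

    visits = [("tidal_volume", 0.5), ("flow", 0.3), ("tidal_volume", 0.7),
              ("leak", 0.2), ("tidal_volume", 0.4)]
    m = attention_metrics(visits, recording_seconds=20.0)
    ```

    Here `tidal_volume` accumulates 1.6 s over three visits, so its average visit duration is about 0.53 s and its visit rate is 1.5 per 10 s.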

  12. Hybrid tracking and control system for computer-aided retinal surgery

    NASA Astrophysics Data System (ADS)

    Ferguson, R. D.; Wright, Cameron H. G.; Rylander, Henry G., III; Welch, Ashley J.; Barrett, Steven F.

    1996-05-01

    We describe initial experimental results of a new hybrid digital and analog design for retinal tracking and laser beam control. Initial results demonstrate tracking rates which exceed the equivalent of 50 degrees per second in the eye, with automatic lesion pattern creation and robust loss of lock detection. Robotically assisted laser surgery to treat conditions such as diabetic retinopathy, macular degeneration, and retinal tears can now be realized under clinical conditions with requisite safety using standard video hardware and inexpensive optical components.

  13. [Visual representation of natural scenes in flicker changes].

    PubMed

    Nakashima, Ryoichi; Yokosawa, Kazuhiko

    2010-08-01

    Coherence theory in scene perception (Rensink, 2002) assumes the retention of volatile object representations on which attention is not focused. On the other hand, visual memory theory in scene perception (Hollingworth & Henderson, 2002) assumes that robust object representations are retained. In this study, we hypothesized that the difference between these two theories is derived from the difference of the experimental tasks that they are based on. In order to verify this hypothesis, we examined the properties of visual representation by using a change detection and memory task in a flicker paradigm. We measured the representations when participants were instructed to search for a change in a scene, and compared them with the intentional memory representations. The visual representations were retained in visual long-term memory even in the flicker paradigm, and were as robust as the intentional memory representations. However, the results indicate that the representations are unavailable for explicitly localizing a scene change, but are available for answering the recognition test. This suggests that coherence theory and visual memory theory are compatible.

  14. Laser-based pedestrian tracking in outdoor environments by multiple mobile robots.

    PubMed

    Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko

    2012-10-29

This paper presents an outdoor laser-based pedestrian tracking system that uses a group of mobile robots located near one another. Each robot detects pedestrians in its own laser scan image using an occupancy-grid-based method and tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN) data association. The tracking data are broadcast among the robots through intercommunication and combined using the covariance intersection (CI) method. For pedestrian tracking, each robot estimates its own pose using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Because all the robots share tracking data under this cooperative method, an individual robot can recognize pedestrians that are invisible to itself but visible to another robot. Simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. The tracking system functions in a decentralized manner without any central server, and therefore provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures.
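    The CI fusion step referenced above combines two tracks of the same pedestrian from different robots without knowing their cross-correlation. A minimal NumPy sketch (the grid search over the weight and the example tracks are invented for illustration, not the paper's implementation):

    ```python
    # Covariance intersection: fuse two estimates (x1, P1), (x2, P2) of the
    # same state whose cross-correlation is unknown. The weight w is chosen
    # here by minimizing the trace of the fused covariance on a grid.

    import numpy as np

    def covariance_intersection(x1, P1, x2, P2, n_grid=99):
        """Return the CI-fused estimate (x, P)."""
        best = None
        for w in np.linspace(0.01, 0.99, n_grid):
            info = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
            P = np.linalg.inv(info)
            if best is None or np.trace(P) < best[0]:
                best = (np.trace(P), w, P)
        _, w, P = best
        x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
        return x, P

    # Robot A is confident in x, robot B in y; CI exploits both.
    x1 = np.array([2.0, 1.0]); P1 = np.diag([0.5, 2.0])   # robot A's track
    x2 = np.array([2.2, 0.9]); P2 = np.diag([2.0, 0.5])   # robot B's track
    x, P = covariance_intersection(x1, P1, x2, P2)
    ```

    The fused covariance trace is smaller than either input's, while the CI form guarantees the result remains consistent for any unknown correlation between the two tracks.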

  15. A Robust Random Forest-Based Approach for Heart Rate Monitoring Using Photoplethysmography Signal Contaminated by Intense Motion Artifacts.

    PubMed

    Ye, Yalan; He, Wenwen; Cheng, Yunfei; Huang, Wenxia; Zhang, Zhilin

    2017-02-16

The estimation of heart rate (HR) with wearable devices is of interest in fitness applications. Photoplethysmography (PPG) is a promising, low-cost way to estimate HR, but the signal is easily corrupted by motion artifacts (MA). In this work, a robust two-stage approach based on random forests is proposed for accurately estimating HR from PPG signals contaminated by intense motion artifacts. Stage 1 is a hybrid method that removes MA effectively at low computational complexity: two MA removal algorithms are combined by an accurate binary decision algorithm that decides whether or not to apply the second MA removal algorithm. Stage 2 is a random forest-based spectral peak-tracking algorithm that locates the spectral peak corresponding to HR, formulating spectral peak tracking as a pattern classification problem. Experiments on the 22-subject PPG dataset used in the 2015 IEEE Signal Processing Cup showed that the proposed approach achieved an average absolute error of 1.65 beats per minute (BPM). Compared with state-of-the-art approaches, the proposed approach is more accurate and more robust to intense motion artifacts, indicating its potential use in wearable sensors for health monitoring and fitness tracking.
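    To make the spectral peak-tracking idea concrete, here is a deliberately simplified stand-in: the paper's trained random-forest classifier is replaced by a transparent heuristic that scores candidate spectral peaks by amplitude and closeness to the previous HR estimate. The signal, sampling rate, and score are synthetic and invented for illustration only:

    ```python
    # Simplified peak-tracking sketch (NOT the paper's random forest): score
    # each spectral bin in the physiological band by amplitude weighted by
    # proximity to the previous HR estimate, and pick the best-scoring bin.

    import numpy as np

    def track_hr(ppg, fs, prev_bpm, search_bw=15.0):
        """Estimate HR (BPM) from one PPG window, preferring peaks near prev_bpm."""
        spectrum = np.abs(np.fft.rfft(ppg * np.hanning(len(ppg))))
        freqs_bpm = np.fft.rfftfreq(len(ppg), d=1.0 / fs) * 60.0
        band = (freqs_bpm >= 40.0) & (freqs_bpm <= 180.0)  # 40-180 BPM
        best_score, best_bpm = -np.inf, prev_bpm
        for f, a in zip(freqs_bpm[band], spectrum[band]):
            score = a * np.exp(-((f - prev_bpm) / search_bw) ** 2)
            if score > best_score:
                best_score, best_bpm = score, f
        return best_bpm

    fs = 25.0                            # 25 Hz wearable PPG, 8 s window
    t = np.arange(0, 8, 1 / fs)
    # HR tone at 75 BPM (1.25 Hz) plus a strong motion tone at 135 BPM (2.25 Hz)
    ppg = np.sin(2 * np.pi * 1.25 * t) + 0.8 * np.sin(2 * np.pi * 2.25 * t)
    bpm = track_hr(ppg, fs, prev_bpm=72.0)
    ```

    With a previous estimate of 72 BPM, the proximity weighting suppresses the motion tone at 135 BPM and locks onto the 75 BPM cardiac peak; the paper's random forest plays the same role with learned, rather than hand-crafted, peak features.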

  16. Intelligent nonsingular terminal sliding-mode control using MIMO Elman neural network for piezo-flexural nanopositioning stage.

    PubMed

    Lin, Faa-Jeng; Lee, Shih-Yang; Chou, Po-Huan

    2012-12-01

The objective of this study is to develop an intelligent nonsingular terminal sliding-mode control (INTSMC) system using an Elman neural network (ENN) for the three-dimensional motion control of a piezo-flexural nanopositioning stage (PFNS). First, the dynamic model of the PFNS is derived in detail. Then, to achieve robust, accurate trajectory-tracking performance, a nonsingular terminal sliding-mode control (NTSMC) system is proposed for tracking the reference contours. The nonsingular formulation effectively improves the steady-state response of the control system. Moreover, to relax the requirement of known uncertainty bounds and to discard the switching function in NTSMC, an INTSMC system using a multi-input multi-output (MIMO) ENN estimator is proposed to improve the control performance and robustness of the PFNS. The ENN estimator estimates online the hysteresis phenomenon and the lumped uncertainty, including the system parameters and external disturbance of the PFNS. Furthermore, the adaptive learning algorithms for online training of the ENN parameters are derived using the Lyapunov stability theorem. In addition, two robust compensators are proposed to compensate for the minimum reconstructed errors in INTSMC. Finally, experimental results for the tracking of various contours demonstrate the validity of the proposed INTSMC system for the PFNS.
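    The nonsingular terminal sliding-mode idea can be illustrated on a much simpler plant than the PFNS. This is a minimal sketch, not the paper's controller: a disturbed double integrator tracks a sinusoidal reference under the NTSM surface s = e + (1/β)·sign(ė)·|ė|^(p/q) with odd p > q > 0; the plant, gains, and disturbance are all invented for this toy example:

    ```python
    # Toy NTSMC of a double integrator x'' = u + d tracking x_d(t) = sin(t).
    # Surface s = e + (1/beta)*sign(de)*|de|^(p/q); the equivalent control
    # avoids the singular |de|^(p/q - 2) term of classical terminal SMC.

    import math

    p, q, beta = 5.0, 3.0, 1.0
    K, phi = 1.0, 0.01          # switching gain > disturbance bound; boundary layer
    dt, T = 0.001, 10.0

    x, dx = 1.0, 0.0            # plant state, starting 1.0 off the reference
    t = 0.0
    while t < T:
        xd, ddxd = math.sin(t), -math.sin(t)                 # reference and its accel.
        e, de = x - xd, dx - math.cos(t)
        s = e + (1.0 / beta) * math.copysign(abs(de) ** (p / q), de)
        sat = max(-1.0, min(1.0, s / phi))                   # smoothed switching
        # Equivalent control + robust switching term
        u = ddxd - (beta * q / p) * math.copysign(abs(de) ** (2.0 - p / q), de) - K * sat
        d = 0.2 * math.sin(t)                                # bounded matched disturbance
        dx += (u + d) * dt
        x += dx * dt
        t += dt

    final_error = abs(x - math.sin(t))
    ```

    Despite the disturbance, the tracking error converges to a small boundary-layer residual; the paper's contribution is replacing the switching gain K and the required uncertainty bound with an online ENN estimate.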

  17. Visual motor response of crewmen during a simulated 90 day space mission as measured by the critical task battery

    NASA Technical Reports Server (NTRS)

    Allen, R. W.; Jex, H. R.

    1972-01-01

In order to test various components of a regenerative life support system and to obtain data on the physiological and psychological effects of long-duration confinement in a space station atmosphere, four carefully screened young men were sealed in a space station simulator for 90 days. A tracking test battery was administered during the experiment. The battery included a clinical test (critical instability task) related to the subject's dynamic time delay, and a conventional steady tracking task during which dynamic response (describing functions) and performance measures were obtained. Good correlation was noted between the clinical critical instability scores and more detailed tracking parameters such as dynamic time delay and gain-crossover frequency. The comprehensive database on human operator tracking behavior obtained in this study demonstrates that sophisticated visual-motor response properties can be efficiently and reliably measured over extended periods of time.

  18. Real-time visual tracking of less textured three-dimensional objects on mobile platforms

    NASA Astrophysics Data System (ADS)

    Seo, Byung-Kuk; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2012-12-01

    Natural feature-based approaches are still challenging for mobile applications (e.g., mobile augmented reality), because they are feasible only in limited environments such as highly textured and planar scenes/objects, and they need powerful mobile hardware for fast and reliable tracking. In many cases where conventional approaches are not effective, three-dimensional (3-D) knowledge of target scenes would be beneficial. We present a well-established framework for real-time visual tracking of less textured 3-D objects on mobile platforms. Our framework is based on model-based tracking that efficiently exploits partially known 3-D scene knowledge such as object models and a background's distinctive geometric or photometric knowledge. Moreover, we elaborate on implementation in order to make it suitable for real-time vision processing on mobile hardware. The performance of the framework is tested and evaluated on recent commercially available smartphones, and its feasibility is shown by real-time demonstrations.
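    At the heart of model-based tracking like the framework above is a pose-scoring step: project known 3-D model points through the camera model and measure how far they land from their detected image locations. A hedged sketch with invented model points, intrinsics, and poses:

    ```python
    # Pinhole projection and reprojection error, the quantity a model-based
    # tracker minimizes over camera pose. All numbers are illustrative.

    import numpy as np

    def project(points_3d, R, t, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
        """Project Nx3 world points into pixels with pose (R, t)."""
        cam = points_3d @ R.T + t            # world -> camera frame
        u = fx * cam[:, 0] / cam[:, 2] + cx
        v = fy * cam[:, 1] / cam[:, 2] + cy
        return np.stack([u, v], axis=1)

    def reprojection_error(points_3d, observed_px, R, t):
        """RMS pixel distance between projected model and observed points."""
        diff = project(points_3d, R, t) - observed_px
        return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

    # Object model points (metres) and observations rendered from the true pose
    model = np.array([[0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 3]], dtype=float)
    R_true, t_true = np.eye(3), np.array([0.0, 0.0, 1.0])
    observed = project(model, R_true, t_true)

    err_true = reprojection_error(model, observed, R_true, t_true)     # ~0 px
    err_off = reprojection_error(model, observed,
                                 np.eye(3), np.array([0.05, 0.0, 1.0]))  # shifted pose
    ```

    A 5 cm translation error at roughly 3 m depth already costs several pixels of reprojection error, which is why even partial 3-D scene knowledge makes tracking on less textured objects tractable.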

  19. Vibrational spectroscopy and imaging for concurrent cellular trafficking of co-localized doxorubicin and deuterated phospholipid vesicles

    NASA Astrophysics Data System (ADS)

    Misra, S. K.; Mukherjee, P.; Ohoka, A.; Schwartz-Duval, A. S.; Tiwari, S.; Bhargava, R.; Pan, D.

    2016-01-01

Simultaneous tracking of nanoparticles and encapsulated payload is of great importance and visualizing their activity is arduous. Here we use vibrational spectroscopy to study the in vitro tracking of co-localized lipid nanoparticles and encapsulated drug employing a model system derived from doxorubicin-encapsulated deuterated phospholipid (dodecyl phosphocholine-d38) single tailed phospholipid vesicles. Electronic supplementary information (ESI) available: Raman and confocal images of the Deuto-DOX-NPs in cells, materials and details of methods. See DOI: 10.1039/c5nr07975f

  20. Performance evaluation of a kinesthetic-tactual display

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.; Dunn, R. S.

    1982-01-01

    Simulator studies demonstrated the feasibility of using kinesthetic-tactual (KT) displays for providing collective and cyclic command information, and suggested that KT displays may increase pilot workload capability. A dual-axis laboratory tracking task suggested that beyond reduction in visual scanning, there may be additional sensory or cognitive benefits to the use of multiple sensory modalities. Single-axis laboratory tracking tasks revealed performance with a quickened KT display to be equivalent to performance with a quickened visual display for a low frequency sum-of-sinewaves input. In contrast, an unquickened KT display was inferior to an unquickened visual display. Full scale simulator studies and/or inflight testing are recommended to determine the generality of these results.