Sample records for human motion tracking

  1. Survey of Motion Tracking Methods Based on Inertial Sensors: A Focus on Upper Limb Human Motion

    PubMed Central

    Filippeschi, Alessandro; Schmitz, Norbert; Miezal, Markus; Bleser, Gabriele; Ruffaldi, Emanuele; Stricker, Didier

    2017-01-01

    Motion tracking based on commercial inertial measurement units (IMUs) has been widely studied in recent years as it is a cost-effective enabling technology for applications in which motion tracking based on optical technologies is unsuitable. This measurement method has a high impact in human performance assessment and human-robot interaction. IMU motion tracking systems are self-contained and wearable, allowing for long-lasting tracking of the user's motion in situated environments. After a survey of IMU-based human tracking, five techniques for motion reconstruction were selected and compared on the task of reconstructing a human arm motion. IMU-based estimation was compared against the Vicon marker-based motion tracking system, taken as ground truth. Results show that all but one of the selected models perform similarly (about 35 mm average position estimation error). PMID:28587178
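
    As a purely illustrative sketch (not the authors' code), the roughly 35 mm figure above corresponds to an average Euclidean error between an IMU-reconstructed trajectory and the Vicon ground truth. The arrays below are hypothetical and assumed to be time-synchronised and expressed in the same reference frame, in millimetres.

      # Illustrative sketch: mean position error between an IMU-reconstructed
      # trajectory and a Vicon ground-truth trajectory, both (N, 3) arrays in mm.
      import numpy as np

      def mean_position_error(p_imu, p_vicon):
          """Average Euclidean distance between corresponding samples (mm)."""
          errors = np.linalg.norm(p_imu - p_vicon, axis=1)
          return errors.mean(), errors.std()

      # Example with synthetic data: a noisy estimate around the ground truth.
      rng = np.random.default_rng(0)
      p_vicon = rng.uniform(-500, 500, size=(1000, 3))
      p_imu = p_vicon + rng.normal(0.0, 20.0, size=p_vicon.shape)
      mean_err, std_err = mean_position_error(p_imu, p_vicon)
      print(f"mean error = {mean_err:.1f} mm, std = {std_err:.1f} mm")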

  2. Optimal Configuration of Human Motion Tracking Systems: A Systems Engineering Approach

    NASA Technical Reports Server (NTRS)

    Henderson, Steve

    2005-01-01

    Human motion tracking systems represent a crucial technology in the area of modeling and simulation. These systems, which allow engineers to capture human motion for study or replication in virtual environments, have broad applications in several research disciplines, including human engineering, robotics, and psychology. These systems are based on several sensing paradigms, including electromagnetic, infrared, and visual recognition. Each of these paradigms requires specialized environments and hardware configurations to optimize the performance of the human motion tracking system. Ideally, these systems are used in a laboratory or other facility that was designed to accommodate the particular sensing technology. For example, electromagnetic systems are highly vulnerable to interference from metallic objects and should be used in a specialized lab free of metal components.

  3. Dynamical simulation priors for human motion tracking.

    PubMed

    Vondrak, Marek; Sigal, Leonid; Jenkins, Odest Chadwicke

    2013-01-01

    We propose a simulation-based dynamical motion prior for tracking human motion from video in presence of physical ground-person interactions. Most tracking approaches to date have focused on efficient inference algorithms and/or learning of prior kinematic motion models; however, few can explicitly account for the physical plausibility of recovered motion. Here, we aim to recover physically plausible motion of a single articulated human subject. Toward this end, we propose a full-body 3D physical simulation-based prior that explicitly incorporates a model of human dynamics into the Bayesian filtering framework. We consider the motion of the subject to be generated by a feedback “control loop” in which Newtonian physics approximates the rigid-body motion dynamics of the human and the environment through the application and integration of interaction forces, motor forces, and gravity. Interaction forces prevent physically impossible hypotheses, enable more appropriate reactions to the environment (e.g., ground contacts), and are produced from detected human-environment collisions. Motor forces actuate the body, ensure that proposed pose transitions are physically feasible, and are generated using a motion controller. For efficient inference in the resulting high-dimensional state space, we utilize an exemplar-based control strategy that reduces the effective search space of motor forces. As a result, we are able to recover physically plausible motion of human subjects from monocular and multiview video. We show, both quantitatively and qualitatively, that our approach performs favorably with respect to Bayesian filtering methods with standard motion priors.
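
    The filtering loop described above can be pictured, in heavily simplified form, as a particle filter whose proposal step runs a physics simulator. The sketch below is generic: simulate_dynamics and image_likelihood are hypothetical placeholders, not the paper's components.

      # Minimal sketch of Bayesian filtering with a simulation-based motion prior.
      # `simulate_dynamics` would step a rigid-body simulator (motor + interaction
      # forces); `image_likelihood` would score a pose hypothesis against the frame.
      import numpy as np

      def particle_filter_step(particles, weights, frame,
                               simulate_dynamics, image_likelihood, rng):
          # 1. Propagate each pose hypothesis through the physics-based prior.
          particles = np.stack([simulate_dynamics(p, rng) for p in particles])
          # 2. Re-weight hypotheses by how well they explain the observed image.
          weights = np.array([image_likelihood(p, frame) for p in particles])
          weights /= weights.sum()
          # 3. Resample to concentrate particles on plausible, physically valid poses.
          idx = rng.choice(len(particles), size=len(particles), p=weights)
          return particles[idx], np.full(len(particles), 1.0 / len(particles))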

  4. Human-like object tracking and gaze estimation with PKD android

    PubMed Central

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2018-01-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193

  5. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  6. Hybrid Orientation Based Human Limbs Motion Tracking Method

    PubMed Central

    Glonek, Grzegorz; Wojciechowski, Adam

    2017-01-01

    One of the key technologies behind human–machine interaction and human motion diagnosis is limb motion tracking. To make limb tracking efficient, it must be able to estimate a precise and unambiguous position of each tracked human joint and the resulting body part pose. In recent years, body pose estimation has become very popular and broadly available to home users because of easy access to cheap tracking devices. Their robustness can be improved by fusing data from different tracking modes. The paper defines a novel approach—orientation-based data fusion—instead of the position-based approach dominating the literature, for two classes of tracking devices: depth sensors (i.e., Microsoft Kinect) and inertial measurement units (IMU). A detailed analysis of their working characteristics made it possible to elaborate a new method that fuses limb orientation data from both devices more precisely and compensates for their imprecision. The paper presents a series of experiments that verified the method's accuracy. This novel approach outperformed the precision of position-based joint tracking, the method dominating in the literature, by up to 18%. PMID:29232832
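
    A minimal illustration of the orientation-based fusion idea (not the authors' method) is to blend the per-segment orientations reported by the depth sensor and the IMU as unit quaternions; the fixed weight below stands in for the adaptive weighting a real scheme would use.

      # Blend a limb-segment orientation from a depth sensor with one from an IMU,
      # both given as unit quaternions (w, x, y, z). Illustrative sketch only.
      import numpy as np

      def fuse_orientations(q_depth, q_imu, w_imu=0.7):
          q_depth = np.asarray(q_depth, dtype=float)
          q_imu = np.asarray(q_imu, dtype=float)
          # Keep the two quaternions in the same hemisphere before blending.
          if np.dot(q_depth, q_imu) < 0.0:
              q_imu = -q_imu
          # Normalised linear interpolation; adequate when the estimates are close.
          q = (1.0 - w_imu) * q_depth + w_imu * q_imu
          return q / np.linalg.norm(q)

      q = fuse_orientations([1, 0, 0, 0], [0.996, 0.087, 0, 0])  # ~10 deg about x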

  7. Security Applications Of Computer Motion Detection

    NASA Astrophysics Data System (ADS)

    Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry

    1987-05-01

    An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, mean and median tests and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
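
    The first stage described above, simple frame differencing, can be sketched in a few lines of OpenCV; this is a generic illustration, not the original 1987 system.

      # Simple frame-differencing motion detector (illustrative sketch).
      import cv2

      cap = cv2.VideoCapture(0)          # any video source; 0 is a placeholder
      ok, prev = cap.read()
      prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          diff = cv2.absdiff(gray, prev)                 # per-pixel frame difference
          _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
          mask = cv2.dilate(mask, None, iterations=2)    # close small gaps
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          # Keep only sufficiently large moving regions as candidate targets.
          movers = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
          prev = gray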

  8. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
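
    The core aggregation idea, treating motion sequences as transitions between discrete pose states and merging sequences by shared transitions, can be sketched as simple transition counting; MotionFlow itself builds an interactive tree visualization on top of this kind of aggregate.

      # Toy sketch of sequence aggregation by pose-state transitions.
      from collections import Counter

      def aggregate_transitions(sequences):
          """sequences: iterable of lists of pose-cluster labels, e.g. ['A','B','C']."""
          counts = Counter()
          for seq in sequences:
              counts.update(zip(seq, seq[1:]))       # consecutive pose-state pairs
          return counts

      counts = aggregate_transitions([["A", "B", "C"], ["A", "B", "B", "D"]])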

  9. Visual servoing for a US-guided therapeutic HIFU system by coagulated lesion tracking: a phantom study.

    PubMed

    Seo, Joonho; Koizumi, Norihiro; Funamoto, Takakazu; Sugita, Naohiko; Yoshinaka, Kiyoshi; Nomiya, Akira; Homma, Yukio; Matsumoto, Yoichiro; Mitsuishi, Mamoru

    2011-06-01

    Applying ultrasound (US)-guided high-intensity focused ultrasound (HIFU) therapy to kidney tumours is currently very difficult, due to the unclearly observed tumour area and the renal motion induced by human respiration. In this research, we propose new methods by which to track the indistinct tumour area and to compensate for the respiratory tumour motion during US-guided HIFU treatment. For tracking indistinct tumour areas, we detect the US speckle change created by HIFU irradiation. In other words, HIFU thermal ablation can coagulate tissue in the tumour area, and an intraoperatively created coagulated lesion (CL) is used as a spatial landmark for US visual tracking. Specifically, the Condensation algorithm was applied for robust, real-time CL speckle-pattern tracking in the sequence of US images. Moreover, biplanar US imaging was used to locate the three-dimensional position of the CL, and a three-actuator system drives the end-effector to compensate for the motion. Finally, we tested the proposed method by using a newly devised phantom model that enables both visual tracking and a thermal response to HIFU irradiation. In the experiment, after generation of the CL in the phantom kidney, the end-effector successfully synchronized with the phantom motion, which was modelled on captured motion data for the human kidney. The accuracy of the motion compensation was evaluated by the error between the end-effector and the respiratory motion, the RMS error of which was approximately 2 mm. This research shows that a HIFU-induced CL provides a very good landmark for target motion tracking. By using the CL tracking method, target motion compensation can be realized in the US-guided robotic HIFU system. Copyright © 2011 John Wiley & Sons, Ltd.

  10. Whole-Body Human Inverse Dynamics with Distributed Micro-Accelerometers, Gyros and Force Sensing †

    PubMed Central

    Latella, Claudia; Kuppuswamy, Naveen; Romano, Francesco; Traversaro, Silvio; Nori, Francesco

    2016-01-01

    Human motion tracking is a powerful tool used in a large range of applications that require human movement analysis. Although it is a well-established technique, its main limitation is the lack of real-time estimation of kinetic information such as forces and torques during the motion capture. In this paper, we present a novel approach to soft wearable human force tracking for the simultaneous estimation of whole-body forces along with the motion. The early stage of our framework encompasses traditional passive marker-based methods, inertial and contact force sensor modalities, and harnesses a probabilistic computational technique for estimating dynamic quantities, originally proposed in the domain of humanoid robot control. We present an experimental analysis of subjects performing a two degrees-of-freedom bowing task, and we estimate the motion and kinetic quantities. The results demonstrate the validity of the proposed method. We discuss the possible use of this technique in the design of a novel soft wearable force tracking device and its potential applications. PMID:27213394

  11. Man-in-the-loop study of filtering in airborne head tracking tasks

    NASA Technical Reports Server (NTRS)

    Lifshitz, S.; Merhav, S. J.

    1992-01-01

    A human-factors study is conducted of problems due to vibrations during the use of a helmet-mounted display (HMD) in tracking tasks whose major factors are target motion and head vibration. A method is proposed for improving aiming accuracy in such tracking tasks on the basis of (1) head-motion measurement and (2) the shifting of the reticle in the HMD in ways that inhibit much of the involuntary apparent motion of the reticle, relative to the target, and the nonvoluntary motion of the teleoperated device. The HMD inherently furnishes the visual feedback required by this scheme.

  12. An analysis of the precision and reliability of the leap motion sensor and its suitability for static and dynamic tracking.

    PubMed

    Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka

    2014-02-21

    We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot currently be used as a professional tracking system.
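
    The static-precision figure quoted above (standard deviation below 0.5 mm) is the kind of statistic obtained from repeated samples at each reference location; the following is a hypothetical sketch, not the authors' analysis code.

      # Per-location static precision from repeated 3D samples.
      # `samples` is a hypothetical dict mapping each reference location to an
      # (N, 3) array of measured positions in mm.
      import numpy as np

      def static_precision(samples):
          stats = {}
          for loc, pts in samples.items():
              centroid = pts.mean(axis=0)
              spread = np.linalg.norm(pts - centroid, axis=1)  # radial deviation, mm
              stats[loc] = spread.std()
          return stats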

  13. An Analysis of the Precision and Reliability of the Leap Motion Sensor and Its Suitability for Static and Dynamic Tracking

    PubMed Central

    Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka

    2014-01-01

    We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot currently be used as a professional tracking system. PMID:24566635

  14. Ocular tracking responses to background motion gated by feature-based attention.

    PubMed

    Souto, David; Kerzel, Dirk

    2014-09-01

    Involuntary ocular tracking responses to background motion offer a window on the dynamics of motion computations. In contrast to spatial attention, we know little about the role of feature-based attention in determining this ocular response. To probe feature-based effects of background motion on involuntary eye movements, we presented human observers with a balanced background perturbation. Two clouds of dots moved in opposite vertical directions while observers tracked a target moving in horizontal direction. Additionally, they had to discriminate a change in the direction of motion (±10° from vertical) of one of the clouds. A vertical ocular following response occurred in response to the motion of the attended cloud. When motion selection was based on motion direction and color of the dots, the peak velocity of the tracking response was 30% of the tracking response elicited in a single task with only one direction of background motion. In two other experiments, we tested the effect of the perturbation when motion selection was based on color, by having motion direction vary unpredictably, or on motion direction alone. Although the gain of pursuit in the horizontal direction was significantly reduced in all experiments, indicating a trade-off between perceptual and oculomotor tasks, ocular responses to perturbations were only observed when selection was based on both motion direction and color. It appears that selection by motion direction can only be effective for driving ocular tracking when the relevant elements can be segregated before motion onset. Copyright © 2014 the American Physiological Society.

  15. Detection and tracking of human targets in indoor and urban environments using through-the-wall radar sensors

    NASA Astrophysics Data System (ADS)

    Radzicki, Vincent R.; Boutte, David; Taylor, Paul; Lee, Hua

    2017-05-01

    Radar-based detection of human targets behind walls or in dense urban environments is an important technical challenge with many practical applications in security, defense, and disaster recovery. Radar reflections from a human can be orders of magnitude weaker than those from objects encountered in urban settings such as walls, cars, or possibly rubble after a disaster. Furthermore, these objects can act as secondary reflectors and produce multipath returns from a person. To mitigate these issues, processing of radar return data needs to be optimized for recognizing human motion features such as walking, running, or breathing. This paper presents a theoretical analysis of the modulation effects human motion has on the radar waveform and how high levels of multipath can distort these motion effects. From this analysis, an algorithm is designed and optimized for tracking human motion in heavily cluttered environments. The tracking results are used as the fundamental detection/classification tool to discriminate human targets from others by identifying human motion traits such as predictable walking patterns and periodicity in breathing rates. The theoretical formulations are tested against simulation and measured data collected using a low power, portable see-through-the-wall radar system that could be practically deployed in real-world scenarios. Lastly, the performance of the algorithm is evaluated in a series of experiments where both a single person and multiple people are moving in an indoor, cluttered environment.
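
    One standard tool for exposing human motion traits such as gait and breathing in radar returns is a Doppler spectrogram of the slow-time signal from a range bin; the sketch below is generic and is not the algorithm developed in the paper.

      # Generic Doppler spectrogram of the complex slow-time signal from one range bin.
      import numpy as np
      from scipy.signal import spectrogram

      def doppler_spectrogram(slow_time_signal, prf):
          """slow_time_signal: complex samples from one range bin; prf: pulse rate (Hz)."""
          f, t, S = spectrogram(slow_time_signal, fs=prf, nperseg=128, noverlap=96,
                                return_onesided=False)
          # Centre zero Doppler for display.
          return np.fft.fftshift(f), t, np.fft.fftshift(np.abs(S), axes=0)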

  16. Hybrid markerless tracking of complex articulated motion in golf swings.

    PubMed

    Fung, Sim Kwoh; Sundaraj, Kenneth; Ahamed, Nizam Uddin; Kiang, Lam Chee; Nadarajah, Sivadev; Sahayadhas, Arun; Ali, Md Asraf; Islam, Md Anamul; Palaniappan, Rajkumar

    2014-04-01

    Sports video tracking is a research topic that has attracted increasing attention due to its high commercial potential. A number of sports, including tennis, soccer, gymnastics, running, golf, badminton and cricket, have been used to showcase novel ideas in sports motion tracking. The main challenge associated with this research concerns the extraction of a highly complex articulated motion from a video scene. Our research focuses on the development of a markerless human motion tracking system that tracks the major body parts of an athlete directly from a sports broadcast video. We propose a hybrid tracking method, which consists of a combination of three algorithms (pyramidal Lucas-Kanade optical flow (LK), normalised correlation-based template matching and background subtraction), to track the golfer's head, body, hands, shoulders, knees and feet during a full swing. We then match, track and map the results onto a 2D articulated human stick model to represent the pose of the golfer over time. Our work was tested using two video broadcasts of a golfer, and we obtained satisfactory results. The current outcomes of this research can play an important role in enhancing the performance of a golfer, can provide sports medicine practitioners with technically sound guidance on movements, and should help to diminish the risk of golfing injuries. Copyright © 2013 Elsevier Ltd. All rights reserved.
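
    Of the three combined algorithms, the pyramidal Lucas-Kanade component can be sketched directly with OpenCV; template matching and background subtraction, which the authors add for robustness, are omitted from this illustrative fragment.

      # Pyramidal Lucas-Kanade optical flow propagating chosen body-part points
      # (head, hands, knees, ...) from frame to frame. Sketch only.
      import cv2
      import numpy as np

      lk_params = dict(winSize=(21, 21), maxLevel=3,
                       criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

      def track_points(prev_gray, gray, points):
          """points: (N, 1, 2) float32 pixel coordinates of the tracked body parts."""
          new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None,
                                                        **lk_params)
          return new_pts, status.ravel().astype(bool)   # keep successfully tracked points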

  17. Remote Safety Monitoring for Elderly Persons Based on Omni-Vision Analysis

    PubMed Central

    Xiang, Yun; Tang, Yi-ping; Ma, Bao-qing; Yan, Hang-chen; Jiang, Jun; Tian, Xu-yuan

    2015-01-01

    Remote monitoring services for elderly persons are important as the aged populations in most developed countries continue growing. To monitor the safety and health of the elderly population, we propose a novel omni-directional vision sensor based system, which can detect and track object motion, recognize human posture, and analyze human behavior automatically. In this work, we have made the following contributions: (1) we develop a remote safety monitoring system which can provide real-time and automatic health care for elderly persons, and (2) we design a novel motion history/energy image based algorithm for moving object tracking. Our system can accurately and efficiently collect, analyze, and transfer elderly activity information and provide health care in real time. Experimental results show that our technique can improve data analysis efficiency by 58.5% for object tracking. Moreover, for the human posture recognition application, the success rate reaches 98.6% on average. PMID:25978761
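
    The motion history/energy idea mentioned above can be illustrated with a minimal NumPy update rule; this is a generic sketch, not the authors' omni-vision pipeline.

      # Minimal motion-history-image (MHI) update: newly moving pixels receive the
      # current timestamp, and motion older than `duration` frames is forgotten.
      import numpy as np

      def update_mhi(mhi, prev_gray, gray, timestamp, duration=30, diff_thresh=25):
          motion = np.abs(gray.astype(np.int16) - prev_gray.astype(np.int16)) > diff_thresh
          mhi[motion] = timestamp                      # stamp pixels that just moved
          mhi[mhi < timestamp - duration] = 0          # decay old motion
          return mhi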

  18. Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.

    PubMed

    Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong

    2016-08-01

    The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.

  19. Intermittently-visual Tracking Experiments Reveal the Roles of Error-correction and Predictive Mechanisms in the Human Visual-motor Control System

    NASA Astrophysics Data System (ADS)

    Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji

    Prediction mechanisms are necessary for human visual motion control to compensate for delays in the sensory-motor system. In a previous study, “proactive control” was discussed as one example of the predictive function of human beings, in which the motion of the hands preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit is segmented into target-visible regions and target-invisible regions. The main results of this research were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli was shortened by more than 10%. This shortening of the period of the rhythm in the brain accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by the environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.

  20. Sensing human hand motions for controlling dexterous robots

    NASA Technical Reports Server (NTRS)

    Marcus, Beth A.; Churchill, Philip J.; Little, Arthur D.

    1988-01-01

    The Dexterous Hand Master (DHM) system is designed to control dexterous robot hands such as the UTAH/MIT and Stanford/JPL hands. It is the first commercially available device which makes it possible to accurately and comfortably track the complex motion of the human finger joints. The DHM is adaptable to a wide variety of human hand sizes and shapes, throughout their full range of motion.

  1. Accounting for direction and speed of eye motion in planning visually guided manual tracking.

    PubMed

    Leclercq, Guillaume; Blohm, Gunnar; Lefèvre, Philippe

    2013-10-01

    Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account for the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.
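
    The geometry underlying this result can be stated compactly: the spatial (head-centred) target velocity is the retinal velocity plus the eye velocity, and the reported 75-102% compensation can be read as the gain applied to the eye-velocity term. The numbers in the sketch below are illustrative only.

      # Worked example of retinal-plus-eye-velocity composition (made-up numbers).
      import numpy as np

      v_retinal = np.array([8.0, -3.0])   # deg/s, target motion on the retina
      v_eye = np.array([0.0, 10.0])       # deg/s, smooth-pursuit eye velocity
      gain = 0.9                          # fraction of eye velocity accounted for
      v_plan = v_retinal + gain * v_eye   # direction used to plan the arm movement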

  2. A low cost real-time motion tracking approach using webcam technology.

    PubMed

    Krishnan, Chandramouli; Washabaugh, Edward P; Seetharaman, Yogesh

    2015-02-05

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject's limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. Copyright © 2014 Elsevier Ltd. All rights reserved.
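
    The paper implements its tracker in LabVIEW Vision Assistant; a rough Python/OpenCV analogue of the same idea, thresholding a brightly coloured marker and taking its centroid each frame as the 2D kinematic signal, could look like the sketch below. The HSV bounds are hypothetical and would need tuning to the actual marker.

      # Rough analogue of webcam marker tracking: threshold in HSV, return centroid.
      import cv2
      import numpy as np

      LOWER, UPPER = np.array([40, 80, 80]), np.array([80, 255, 255])  # green-ish marker

      def marker_centroid(frame_bgr):
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, LOWER, UPPER)
          m = cv2.moments(mask)
          if m["m00"] == 0:
              return None                        # marker not visible this frame
          return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # (x, y) in pixels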

  3. A low cost real-time motion tracking approach using webcam technology

    PubMed Central

    Krishnan, Chandramouli; Washabaugh, Edward P.; Seetharaman, Yogesh

    2014-01-01

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject’s limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. PMID:25555306

  4. Feature tracking for automated volume of interest stabilization on 4D-OCT images

    NASA Astrophysics Data System (ADS)

    Laves, Max-Heinrich; Schoob, Andreas; Kahrs, Lüder A.; Pfeiffer, Tom; Huber, Robert; Ortmaier, Tobias

    2017-03-01

    A common representation of volumetric medical image data is the triplanar view (TV), in which the surgeon manually selects slices showing the anatomical structure of interest. In addition to common medical imaging such as MRI or computed tomography, recent advances in the field of optical coherence tomography (OCT) have enabled live processing and volumetric rendering of four-dimensional images of the human body. Because the region of interest undergoes motion, it is challenging for the surgeon to simultaneously keep track of an object by continuously adjusting the TV to the desired slices. To select these slices in subsequent frames automatically, it is necessary to track movements of the volume of interest (VOI). This has not yet been addressed for 4D-OCT images. Therefore, this paper evaluates motion tracking by applying state-of-the-art tracking schemes on maximum intensity projections (MIP) of 4D-OCT images. The estimated VOI location is used to conveniently show corresponding slices and to improve the MIPs by calculating thin-slab MIPs. Tracking performance is evaluated on an in-vivo sequence of human skin, captured at 26 volumes per second. Among the investigated tracking schemes, our recently presented tracking scheme for soft tissue motion provides the highest accuracy, with an error of under 2.2 voxels for the first 80 volumes. Object tracking on 4D-OCT images enables its use for sub-epithelial tracking of microvessels for image guidance.
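
    The thin-slab MIP mentioned above projects only a few slices around the tracked VOI depth rather than the whole volume; a minimal sketch with a hypothetical (Z, Y, X) array follows.

      # Thin-slab maximum intensity projection around a tracked VOI depth.
      import numpy as np

      def thin_slab_mip(volume, z_voi, half_thickness=5, axis=0):
          z0 = max(0, z_voi - half_thickness)
          z1 = min(volume.shape[axis], z_voi + half_thickness + 1)
          slab = np.take(volume, np.arange(z0, z1), axis=axis)
          return slab.max(axis=axis)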

  5. Modeling human tracking error in several different anti-tank systems

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1981-01-01

    An optimal control model for generating time histories of human tracking errors in antitank systems is outlined. Monte Carlo simulations of human operator responses for three Army antitank systems are compared. System/manipulator dependent data comparisons reflecting human operator limitations in perceiving displayed quantities and executing intended control motions are presented. Motor noise parameters are also discussed.

  6. U.S. Marine Corps Training Modeling and Simulation Master Plan

    DTIC Science & Technology

    2007-01-18

    is needed that is not restricted by line of sight (LOS) and is transportable/deployable. • The LVC-TE must have the ability to have Human Anatomy Motion-Tracking and Display. HEAT: HMMWV Egress Assistance Trainer. HLA: …

  7. Contrast and assimilation in motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2007-09-01

    The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.

  8. KSC-08pd1901

    NASA Image and Video Library

    2008-07-02

    CAPE CANAVERAL, Fla. – Professor Peter Voci, NYIT MOCAP (Motion Capture) team director, (left) hands a component of the Orion Crew Module mockup to one of three technicians inside the mockup. The technicians wear motion capture suits. The motion tracking aims to improve efficiency of assembly processes and identify potential ergonomic risks for technicians assembling the mockup. The work is being performed in United Space Alliance's Human Engineering Modeling and Performance Lab in the RLV Hangar at NASA's Kennedy Space Center. Part of NASA's Constellation Program, the Orion spacecraft will return humans to the moon and prepare for future voyages to Mars and other destinations in our solar system.

  9. Freestanding Triboelectric Nanogenerator Enables Noncontact Motion-Tracking and Positioning.

    PubMed

    Guo, Huijuan; Jia, Xueting; Liu, Lue; Cao, Xia; Wang, Ning; Wang, Zhong Lin

    2018-04-24

    Recent development of interactive motion-tracking and positioning technologies is attracting increasing interest in many areas, such as wearable electronics, intelligent electronics, and the Internet of Things. For example, so-called somatosensory technology can give users a strong sense of immersion and realism through consistent interaction with a game. Here, we report a noncontact self-powered positioning and motion-tracking system based on a freestanding triboelectric nanogenerator (TENG). The TENG was fabricated with a nanoengineered surface in the contact-separation mode, using a freely moving human body (hands or feet) as the trigger. The poly(tetrafluoroethylene) (PTFE) array based interactive interface can give an output of 222 V from casual human motions. Different from previous works, this device also responds to a small action at heights of 0.01-0.11 m above the device, with a sensitivity of about 315 V·m⁻¹, so that noncontact mechanical sensing is possible. Such a distinctive noncontact sensing feature promotes a wide range of potential applications in smart interaction systems.

  10. Methods for motion correction evaluation using 18F-FDG human brain scans on a high-resolution PET scanner.

    PubMed

    Keller, Sune H; Sibomana, Merence; Olesen, Oline V; Svarer, Claus; Holm, Søren; Andersen, Flemming L; Højgaard, Liselotte

    2012-03-01

    Many authors have reported the importance of motion correction (MC) for PET. Patient motion during scanning disturbs kinetic analysis and degrades resolution. In addition, using misaligned transmission for attenuation and scatter correction may produce regional quantification bias in the reconstructed emission images. The purpose of this work was the development of quality control (QC) methods for MC procedures based on external motion tracking (EMT) for human scanning using an optical motion tracking system. Two scans with minor motion and 5 with major motion (as reported by the optical motion tracking system) were selected from (18)F-FDG scans acquired on a PET scanner. The motion was measured as the maximum displacement of the markers attached to the subject's head and was considered to be major if larger than 4 mm and minor if less than 2 mm. After allowing a 40- to 60-min uptake time after tracer injection, we acquired a 6-min transmission scan, followed by a 40-min emission list-mode scan. Each emission list-mode dataset was divided into 8 frames of 5 min. The reconstructed time-framed images were aligned to a selected reference frame using either EMT or the AIR (automated image registration) software. The following 3 QC methods were used to evaluate the EMT and AIR MC: a method using the ratio between 2 regions of interest with gray matter voxels (GM) and white matter voxels (WM), called GM/WM; mutual information; and cross correlation. The results of the 3 QC methods were in agreement with one another and with a visual subjective inspection of the image data. Before MC, the QC method measures varied significantly in scans with major motion and displayed limited variations on scans with minor motion. The variation was significantly reduced and measures improved after MC with AIR, whereas EMT MC performed less well. The 3 presented QC methods produced similar results and are useful for evaluating tracer-independent external-tracking motion-correction methods for human brain scans.
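
    Two of the three QC measures named above, normalised cross correlation and histogram-based mutual information, can be sketched as follows; this is illustrative NumPy, not the study's implementation.

      # QC metrics between a reference frame and a motion-corrected frame
      # (NumPy arrays of voxel values).
      import numpy as np

      def cross_correlation(ref, frame):
          a, b = ref.ravel(), frame.ravel()
          a = (a - a.mean()) / a.std()
          b = (b - b.mean()) / b.std()
          return float(np.mean(a * b))

      def mutual_information(ref, frame, bins=64):
          joint, _, _ = np.histogram2d(ref.ravel(), frame.ravel(), bins=bins)
          p = joint / joint.sum()
          px, py = p.sum(axis=1), p.sum(axis=0)
          nz = p > 0
          return float(np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])))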

  11. Markerless human motion tracking using hierarchical multi-swarm cooperative particle swarm optimization.

    PubMed

    Saini, Sanjay; Zakaria, Nordin; Rambli, Dayang Rohaya Awang; Sulaiman, Suziah

    2015-01-01

    The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-view video sequences has led to a number of solutions based on metaheuristics, the most recent form of which is Particle Swarm Optimization (PSO). However, the classical PSO suffers from premature convergence and is easily trapped in local optima, significantly affecting the tracking accuracy. To overcome these drawbacks, we have developed a method for the problem based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem where the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both the silhouette and edge likelihoods are used in the fitness function. Experiments using the Brown and HumanEva-II datasets demonstrated that H-MCPSO performs better than two leading alternative approaches: the Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support the claims.
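
    H-MCPSO builds its hierarchical, multi-swarm cooperative scheme on top of the basic PSO loop; the plain single-swarm version of that loop, minimising a pose-fitness function, is sketched below with a hypothetical fitness placeholder that would compare projected silhouettes/edges with the observed images.

      # Plain single-swarm PSO over a 34-dimensional pose vector (sketch only).
      import numpy as np

      def pso(fitness, dim=34, n_particles=50, iters=100,
              w=0.72, c1=1.49, c2=1.49, rng=None):
          if rng is None:
              rng = np.random.default_rng()
          x = rng.uniform(-1, 1, (n_particles, dim))        # particle positions (poses)
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
          g = pbest[pbest_f.argmin()].copy()                # global best pose
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = x + v
              f = np.array([fitness(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              g = pbest[pbest_f.argmin()].copy()
          return g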

  12. Motion-compensated compressed sensing for dynamic contrast-enhanced MRI using regional spatiotemporal sparsity and region tracking: Block LOw-rank Sparsity with Motion-guidance (BLOSM)

    PubMed Central

    Chen, Xiao; Salerno, Michael; Yang, Yang; Epstein, Frederick H.

    2014-01-01

    Purpose: Dynamic contrast-enhanced MRI of the heart is well-suited for acceleration with compressed sensing (CS) due to its spatiotemporal sparsity; however, respiratory motion can degrade sparsity and lead to image artifacts. We sought to develop a motion-compensated CS method for this application. Methods: A new method, Block LOw-rank Sparsity with Motion-guidance (BLOSM), was developed to accelerate first-pass cardiac MRI, even in the presence of respiratory motion. This method divides the images into regions, tracks the regions through time, and applies matrix low-rank sparsity to the tracked regions. BLOSM was evaluated using computer simulations and first-pass cardiac datasets from human subjects. Using rate-4 acceleration, BLOSM was compared to other CS methods such as k-t SLR that employs matrix low-rank sparsity applied to the whole image dataset, with and without motion tracking, and to k-t FOCUSS with motion estimation and compensation that employs spatial and temporal-frequency sparsity. Results: BLOSM was qualitatively shown to reduce respiratory artifact compared to other methods. Quantitatively, using root mean squared error and the structural similarity index, BLOSM was superior to other methods. Conclusion: BLOSM, which exploits regional low-rank structure and uses region tracking for motion compensation, provides improved image quality for CS-accelerated first-pass cardiac MRI. PMID:24243528

  13. A sensor fusion method for tracking vertical velocity and height based on inertial and barometric altimeter measurements.

    PubMed

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2014-07-24

    A sensor fusion method was developed for vertical channel stabilization by fusing inertial measurements from an Inertial Measurement Unit (IMU) and pressure altitude measurements from a barometric altimeter integrated in the same device (baro-IMU). An Extended Kalman Filter (EKF) estimated the quaternion from the sensor frame to the navigation frame; the sensed specific force was rotated into the navigation frame and compensated for gravity, yielding the vertical linear acceleration; finally, a complementary filter driven by the vertical linear acceleration and the measured pressure altitude produced estimates of height and vertical velocity. A method was also developed to condition the measured pressure altitude using a whitening filter, which helped to remove the short-term correlation due to environment-dependent pressure changes from raw pressure altitude. The sensor fusion method was implemented to work on-line using data from a wireless baro-IMU and tested for the capability of tracking low-frequency small-amplitude vertical human-like motions that can be critical for stand-alone inertial sensor measurements. Validation tests were performed in different experimental conditions, namely no motion, free-fall motion, forced circular motion and squatting. Accurate on-line tracking of height and vertical velocity was achieved, giving confidence to the use of the sensor fusion method for tracking typical vertical human motions: velocity Root Mean Square Error (RMSE) was in the range 0.04-0.24 m/s; height RMSE was in the range 5-68 cm, with statistically significant performance gains when the whitening filter was used by the sensor fusion method to track relatively high-frequency vertical motions.
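
    The final fusion stage described above can be pictured as a first-order complementary filter that integrates gravity-compensated vertical acceleration and corrects the resulting drift with barometric altitude; the gains and variable names below are illustrative, not the authors' implementation.

      # One step of a complementary filter for height h and vertical velocity v.
      def complementary_step(h, v, acc_z, h_baro, dt, k_h=1.0, k_v=0.5):
          # Predict with the inertial channel (gravity-compensated vertical accel).
          v += acc_z * dt
          h += v * dt
          # Correct with the slow but drift-free barometric channel.
          err = h_baro - h
          h += k_h * err * dt
          v += k_v * err * dt
          return h, v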

  14. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is increasingly established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  15. Evaluation of tracking accuracy of the CyberKnife system using a webcam and printed calibrated grid.

    PubMed

    Sumida, Iori; Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshikazu; Miyamoto, Youichi; Yamazaki, Hideya; Mabuchi, Nobuhisa; Tsuda, Eimei; Ogawa, Kazuhiko

    2016-03-08

    Tracking accuracy for the CyberKnife's Synchrony system is commonly evaluated using a film-based verification method. We have evaluated a verification system that uses a webcam and a printed calibrated grid to verify tracking accuracy over three different motion patterns. A box with an attached printed calibrated grid and four fiducial markers was attached to the motion phantom. A target marker was positioned at the grid's center. The box was set up using the other three markers. Target tracking accuracy was evaluated under three conditions: 1) stationary; 2) sinusoidal motion with different amplitudes of 5, 10, 15, and 20 mm for the same cycle of 4 s and different cycles of 2, 4, 6, and 8 s with the same amplitude of 15 mm; and 3) irregular breathing patterns in six human volunteers breathing normally. Infrared markers were placed on the volunteers' abdomens, and their trajectories were used to simulate the target motion. All tests were performed with one-dimensional motion in craniocaudal direction. The webcam captured the grid's motion and a laser beam was used to simulate the CyberKnife's beam. Tracking error was defined as the difference between the grid's center and the laser beam. With a stationary target, mean tracking error was measured at 0.4 mm. For sinusoidal motion, tracking error was less than 2 mm for any amplitude and breathing cycle. For the volunteers' breathing patterns, the mean tracking error range was 0.78-1.67 mm. Therefore, accurate lesion targeting requires individual quality assurance for each patient.

  16. Manifolds for pose tracking from monocular video

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).

  17. Violating instructed human agency: An fMRI study on ocular tracking of biological and nonbiological motion stimuli.

    PubMed

    Gertz, Hanna; Hilger, Maximilian; Hegele, Mathias; Fiehler, Katja

    2016-09-01

    Previous studies have shown that beliefs about the human origin of a stimulus are capable of modulating the coupling of perception and action. Such beliefs can be based on top-down recognition of the identity of an actor or bottom-up observation of the behavior of the stimulus. Instructed human agency has been shown to lead to superior tracking performance of a moving dot as compared to instructed computer agency, especially when the dot followed a biological velocity profile and thus matched the predicted movement, whereas a violation of instructed human agency by a nonbiological dot motion impaired oculomotor tracking (Zwickel et al., 2012). This suggests that the instructed agency biases the selection of predictive models on the movement trajectory of the dot motion. The aim of the present fMRI study was to examine the neural correlates of top-down and bottom-up modulations of perception-action couplings by manipulating the instructed agency (human action vs. computer-generated action) and the observable behavior of the stimulus (biological vs. nonbiological velocity profile). To this end, participants performed an oculomotor tracking task in an MRI environment. Oculomotor tracking activated areas of the eye movement network. A right-hemisphere occipito-temporal cluster comprising the motion-sensitive area V5 showed a preference for the biological as compared to the nonbiological velocity profile. Importantly, a mismatch between instructed human agency and a nonbiological velocity profile primarily activated medial-frontal areas comprising the frontal pole, the paracingulate gyrus, and the anterior cingulate gyrus, as well as the cerebellum and the supplementary eye field as part of the eye movement network. This mismatch effect was specific to the instructed human agency and did not occur in conditions with a mismatch between instructed computer agency and a biological velocity profile. Our results support the hypothesis that humans activate a specific predictive model for biological movements based on their own motor expertise. A violation of this predictive model causes costs as the movement needs to be corrected in accordance with incoming (nonbiological) sensory information. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Temporal dynamics of 2D motion integration for ocular following in macaque monkeys.

    PubMed

    Barthélemy, Fréderic V; Fleuriet, Jérome; Masson, Guillaume S

    2010-03-01

    Several recent studies have shown that extracting pattern motion direction is a dynamical process in which edge motion is first extracted and pattern-related information is encoded with a small time lag by MT neurons. A similar dynamics was found for human reflexive or voluntary tracking. Here, we bring an essential, but still missing, piece of information by documenting macaque ocular following responses to gratings, unikinetic plaids, and barber-poles. We found that ocular tracking was always initiated first in the grating motion direction with ultra-short latencies (approximately 55 ms). A second component was driven only 10-15 ms later, rotating tracking toward the pattern motion direction. At the end of the open-loop period, tracking direction was aligned with the pattern motion direction (plaids) or the average of the line-ending motion directions (barber-poles). We characterized the dependency of each component on contrast. Both the timing and direction of ocular following were quantitatively very consistent with the dynamics of neuronal responses reported by others. Overall, we found a remarkable consistency between neuronal dynamics and monkey behavior, advocating for a direct link between the neuronal solution of the aperture problem and primate perception and action.

  19. Nearly automatic motion capture system for tracking octopus arm movements in 3D space.

    PubMed

    Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar

    2009-08-30

    Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects.

  20. Technical Note: A respiratory monitoring and processing system based on computer vision: prototype and proof of principle

    PubMed Central

    Atallah, Vincent; Escarmant, Patrick; Vinh‐Hung, Vincent

    2016-01-01

    Monitoring and controlling respiratory motion is a challenge for the accuracy and safety of therapeutic irradiation of thoracic tumors. Various commercial systems based on the monitoring of internal or external surrogates have been developed but remain costly. In this article we describe and validate Madibreast, an in‐house‐made respiratory monitoring and processing device based on optical tracking of external markers. We designed an optical apparatus to ensure real‐time submillimetric image resolution at 4 m. Using OpenCv libraries, we optically tracked high‐contrast markers set on patients' breasts. Validation of spatial and time accuracy was performed on a mechanical phantom and on human breast. Madibreast was able to track motion of markers up to a 5 cm/s speed, at a frame rate of 30 fps, with submillimetric accuracy on mechanical phantom and human breasts. Latency was below 100 ms. Concomitant monitoring of three different locations on the breast showed discrepancies in axial motion up to 4 mm for deep‐breathing patterns. This low‐cost, computer‐vision system for real‐time motion monitoring of the irradiation of breast cancer patients showed submillimetric accuracy and acceptable latency. It allowed the authors to highlight differences in surface motion that may be correlated to tumor motion. PACS number(s): 87.55.km PMID:27685116

  1. Technical Note: A respiratory monitoring and processing system based on computer vision: prototype and proof of principle.

    PubMed

    Leduc, Nicolas; Atallah, Vincent; Escarmant, Patrick; Vinh-Hung, Vincent

    2016-09-08

    Monitoring and controlling respiratory motion is a challenge for the accuracy and safety of therapeutic irradiation of thoracic tumors. Various commercial systems based on the monitoring of internal or external surrogates have been developed but remain costly. In this article we describe and validate Madibreast, an in-house-made respiratory monitoring and processing device based on optical tracking of external markers. We designed an optical apparatus to ensure real-time submillimetric image resolution at 4 m. Using OpenCv libraries, we optically tracked high-contrast markers set on patients' breasts. Validation of spatial and time accuracy was performed on a mechanical phantom and on human breast. Madibreast was able to track motion of markers up to a 5 cm/s speed, at a frame rate of 30 fps, with submillimetric accuracy on mechanical phantom and human breasts. Latency was below 100 ms. Concomitant monitoring of three different locations on the breast showed discrepancies in axial motion up to 4 mm for deep-breathing patterns. This low-cost, computer-vision system for real-time motion monitoring of the irradiation of breast cancer patients showed submillimetric accuracy and acceptable latency. It allowed the authors to highlight differences in surface motion that may be correlated to tumor motion. © 2016 The Authors.
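
    The abstract reports optical tracking of high-contrast markers with OpenCV but gives no implementation detail. Purely as an illustration of that general approach (not the authors' code), a threshold-and-centroid marker tracker might look like the sketch below; the camera index, threshold and minimum blob area are assumed values, and the OpenCV 4.x return signature of findContours is assumed.

```python
import cv2


def track_bright_markers(camera_index=0, threshold=200, min_area=20):
    """Toy threshold-and-centroid tracker for high-contrast markers.

    Prints one (x, y) centroid per detected marker for every frame.
    Camera index, threshold and minimum blob area are assumed values.
    """
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Keep only very bright pixels (the high-contrast markers).
        _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue
            m = cv2.moments(c)
            if m["m00"] > 0:
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        print(centroids)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
            break
    cap.release()
```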

  2. Experimental verification of a two-dimensional respiratory motion compensation system with ultrasound tracking technique in radiation therapy.

    PubMed

    Ting, Lai-Lei; Chuang, Ho-Chiao; Liao, Ai-Ho; Kuo, Chia-Chun; Yu, Hsiao-Wei; Zhou, Yi-Liang; Tien, Der-Chi; Jeng, Shiu-Chen; Chiou, Jeng-Fong

    2018-05-01

    This study proposed a respiratory motion compensation system (RMCS) combined with an ultrasound image tracking algorithm (UITA) to compensate for respiration-induced tumor motion during radiotherapy, and to address the problem of inaccurate radiation dose delivery caused by respiratory movement. This study used an ultrasound imaging system to monitor respiratory movements, combined with the proposed UITA and RMCS, for tracking and compensation of the respiratory motion. Respiratory motion compensation was performed using prerecorded human respiratory motion signals and also sinusoidal signals. A linear accelerator was used to deliver radiation doses to GAFchromic EBT3 dosimetry film, and the conformity index (CI), root-mean-square error, compensation rate (CR), and planning target volume (PTV) were used to evaluate the tracking and compensation performance of the proposed system. Human respiratory pattern signals were captured using the UITA and compensated by the RMCS, which yielded CR values of 34-78%. In addition, the maximum coronal area of the PTV ranged from 85.53 mm² to 351.11 mm² (uncompensated), which was reduced to a range of 17.72 mm² to 66.17 mm² after compensation, an area reduction ratio of up to 90%. In real-time monitoring of the respiration compensation state, the CI values for the 85% and 90% isodose areas increased to 0.7 and 0.68, respectively. The proposed UITA and RMCS can reduce the movement of the tracked target relative to the LINAC in radiation therapy, thereby reducing the required size of the PTV margin and increasing the effect of the radiation dose received by the treatment target. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  3. A Sensor Fusion Method for Tracking Vertical Velocity and Height Based on Inertial and Barometric Altimeter Measurements

    PubMed Central

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2014-01-01

    A sensor fusion method was developed for vertical channel stabilization by fusing inertial measurements from an Inertial Measurement Unit (IMU) and pressure altitude measurements from a barometric altimeter integrated in the same device (baro-IMU). An Extended Kalman Filter (EKF) estimated the quaternion from the sensor frame to the navigation frame; the sensed specific force was rotated into the navigation frame and compensated for gravity, yielding the vertical linear acceleration; finally, a complementary filter driven by the vertical linear acceleration and the measured pressure altitude produced estimates of height and vertical velocity. A method was also developed to condition the measured pressure altitude using a whitening filter, which helped to remove the short-term correlation due to environment-dependent pressure changes from raw pressure altitude. The sensor fusion method was implemented to work on-line using data from a wireless baro-IMU and tested for the capability of tracking low-frequency small-amplitude vertical human-like motions that can be critical for stand-alone inertial sensor measurements. Validation tests were performed in different experimental conditions, namely no motion, free-fall motion, forced circular motion and squatting. Accurate on-line tracking of height and vertical velocity was achieved, giving confidence to the use of the sensor fusion method for tracking typical vertical human motions: velocity Root Mean Square Error (RMSE) was in the range 0.04–0.24 m/s; height RMSE was in the range 5–68 cm, with statistically significant performance gains when the whitening filter was used by the sensor fusion method to track relatively high-frequency vertical motions. PMID:25061835
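
    As an illustration of the final fusion stage described above, the sketch below shows a generic second-order complementary filter that blends gravity-compensated vertical acceleration with barometric altitude; the feedback gain and the critically damped gain choice are assumptions, not the paper's tuning, and the whitening filter is not reproduced.

```python
import numpy as np


def complementary_height_filter(acc_up, baro_alt, dt, k=0.5):
    """Fuse vertical acceleration (m/s^2, gravity removed) with barometric
    altitude (m) to estimate height and vertical velocity.

    Generic complementary-filter sketch, not the paper's exact filter;
    the feedback gain k (1/s) is an illustrative assumption.
    """
    h = baro_alt[0]  # height estimate, initialised from the barometer
    v = 0.0          # vertical velocity estimate
    heights, velocities = [], []
    for a, z in zip(acc_up, baro_alt):
        # Propagate with the inertial (high-frequency) channel.
        v += a * dt
        h += v * dt
        # Correct slowly towards the barometric altitude (low-frequency trust).
        err = z - h
        h += k * dt * err
        v += (k ** 2 / 4.0) * dt * err  # velocity correction, critically damped choice
        heights.append(h)
        velocities.append(v)
    return np.array(heights), np.array(velocities)
```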

  4. Evaluation of tracking accuracy of the CyberKnife system using a webcam and printed calibrated grid

    PubMed Central

    Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshikazu; Miyamoto, Youichi; Yamazaki, Hideya; Mabuchi, Nobuhisa; Tsuda, Eimei; Ogawa, Kazuhiko

    2016-01-01

    Tracking accuracy for the CyberKnife's Synchrony system is commonly evaluated using a film‐based verification method. We have evaluated a verification system that uses a webcam and a printed calibrated grid to verify tracking accuracy over three different motion patterns. A box with an attached printed calibrated grid and four fiducial markers was attached to the motion phantom. A target marker was positioned at the grid's center. The box was set up using the other three markers. Target tracking accuracy was evaluated under three conditions: 1) stationary; 2) sinusoidal motion with different amplitudes of 5, 10, 15, and 20 mm for the same cycle of 4 s and different cycles of 2, 4, 6, and 8 s with the same amplitude of 15 mm; and 3) irregular breathing patterns in six human volunteers breathing normally. Infrared markers were placed on the volunteers’ abdomens, and their trajectories were used to simulate the target motion. All tests were performed with one‐dimensional motion in craniocaudal direction. The webcam captured the grid's motion and a laser beam was used to simulate the CyberKnife's beam. Tracking error was defined as the difference between the grid's center and the laser beam. With a stationary target, mean tracking error was measured at 0.4 mm. For sinusoidal motion, tracking error was less than 2 mm for any amplitude and breathing cycle. For the volunteers’ breathing patterns, the mean tracking error range was 0.78‐1.67 mm. Therefore, accurate lesion targeting requires individual quality assurance for each patient. PACS number(s): 87.55.D‐, 87.55.km, 87.55.Qr, 87.56.Fc PMID:27074474

  5. Direction of Perceived Motion and Eye Movements Show Similar Biases for Asymmetrically Windowed Moving Plaids

    NASA Technical Reports Server (NTRS)

    Beutter, B. R.; Mulligan, J. B.; Stone, L. S.; Hargens, Alan R. (Technical Monitor)

    1995-01-01

    We have shown that moving a plaid in an asymmetric window biases the perceived direction of motion (Beutter, Mulligan & Stone, ARVO 1994). We now explore whether these biased motion signals might also drive the smooth eye-movement response by comparing the perceived and tracked directions. The human smooth oculomotor response to moving plaids appears to be driven by the perceived rather than the veridical direction of motion. This suggests that human motion perception and smooth eye movements share underlying neural motion-processing substrates as has already been shown to be true for monkeys.

  6. Training industrial robots with gesture recognition techniques

    NASA Astrophysics Data System (ADS)

    Piane, Jennifer; Raicu, Daniela; Furst, Jacob

    2013-01-01

    In this paper we propose to use gesture recognition approaches to track a human hand in 3D space and, without the use of special clothing or markers, accurately generate code for training an industrial robot to perform the same motion. The proposed hand tracking component includes three methods: a color-thresholding model, naïve Bayes analysis and a Support Vector Machine (SVM) to detect the human hand. Next, it performs stereo matching on the region where the hand was detected to find relative 3D coordinates. The list of coordinates returned is expectedly noisy due to the way the human hand can alter its apparent shape while moving, the inconsistencies in human motion, and detection failures in the cluttered environment. Therefore, the system analyzes the list of coordinates to determine a path for the robot to move, smoothing the data to reduce noise and looking for significant points used to determine the path the robot will ultimately take. The proposed system was applied to pairs of videos recording the motion of a human hand in a 'real' environment to move the end-effector of a SCARA robot along the same path as the hand of the person in the video. The correctness of the robot motion was assessed by observers, who indicated that the motion of the robot appeared to match the motion of the hand in the video.

  7. 1200130

    NASA Image and Video Library

    2012-03-19

    Peter Ma, EV74, wears a suit covered with spherical reflectors that enable his motions to be tracked by the motion capture system. The human model in red on the screen in the background represents the system-generated image of Peter's position.

  8. Human motion tracking by temporal-spatial local gaussian process experts.

    PubMed

    Zhao, Xu; Fu, Yun; Liu, Yuncai

    2011-04-01

    Human pose estimation via motion tracking systems can be considered as a regression problem within a discriminative framework. It is always a challenging task to model the mapping from observation space to state space because of the high-dimensional characteristic in the multimodal conditional distribution. In order to build the mapping, existing techniques usually involve a large set of training samples in the learning process, yet remain limited in their capability to deal with multimodality. We propose, in this work, a novel online sparse Gaussian Process (GP) regression model to recover 3-D human motion in monocular videos. In particular, we exploit the fact that for a given test input, its output is mainly determined by the training samples potentially residing in its local neighborhood, defined in the unified input-output space. This leads to a local mixture GP experts system composed of different local GP experts, each of which dominates a mapping behavior with a specific covariance function adapted to a local region. To handle the multimodality, we combine both temporal and spatial information, thereby obtaining two categories of local experts. The temporal and spatial experts are integrated into a seamless hybrid system, which is automatically self-initialized and robust for visual tracking of nonlinear human motion. Learning and inference are extremely efficient as all the local experts are defined online within very small neighborhoods. Extensive experiments on two real-world databases, HumanEva and PEAR, demonstrate the effectiveness of our proposed model, which significantly improves upon the performance of existing models.
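
    The core idea of a local GP expert, predicting from only the nearest training samples, can be sketched as follows; the RBF kernel, its hyperparameters and the neighbourhood size are illustrative assumptions, and the paper's temporal/spatial expert mixture is not reproduced.

```python
import numpy as np


def rbf_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / length_scale ** 2)


def local_gp_predict(x_star, X_train, Y_train, k=25, noise_var=1e-2):
    """Predict a pose vector for observation x_star from its k nearest
    training samples only, in the spirit of a single local GP expert.

    Hyperparameters and neighbourhood size are illustrative assumptions.
    """
    k = min(k, len(X_train))
    # Pick the local neighbourhood in input space.
    dist = np.linalg.norm(X_train - x_star, axis=1)
    idx = np.argsort(dist)[:k]
    Xn, Yn = X_train[idx], Y_train[idx]
    # Standard GP posterior mean computed from the local samples.
    K = rbf_kernel(Xn, Xn) + noise_var * np.eye(k)
    k_star = rbf_kernel(x_star[None, :], Xn)          # shape (1, k)
    return (k_star @ np.linalg.solve(K, Yn)).ravel()  # predicted pose vector
```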

  9. Object motion computation for the initiation of smooth pursuit eye movements in humans.

    PubMed

    Wallace, Julian M; Stone, Leland S; Masson, Guillaume S

    2005-04-01

    Pursuing an object with smooth eye movements requires an accurate estimate of its two-dimensional (2D) trajectory. This 2D motion computation requires that different local motion measurements are extracted and combined to recover the global object-motion direction and speed. Several combination rules have been proposed such as vector averaging (VA), intersection of constraints (IOC), or 2D feature tracking (2DFT). To examine this computation, we investigated the time course of smooth pursuit eye movements driven by simple objects of different shapes. For type II diamond (where the direction of true object motion is dramatically different from the vector average of the 1-dimensional edge motions, i.e., VA not equal IOC = 2DFT), the ocular tracking is initiated in the vector average direction. Over a period of less than 300 ms, the eye-tracking direction converges on the true object motion. The reduction of the tracking error starts before the closing of the oculomotor loop. For type I diamonds (where the direction of true object motion is identical to the vector average direction, i.e., VA = IOC = 2DFT), there is no such bias. We quantified this effect by calculating the direction error between responses to types I and II and measuring its maximum value and time constant. At low contrast and high speeds, the initial bias in tracking direction is larger and takes longer to converge onto the actual object-motion direction. This effect is attenuated with the introduction of more 2D information to the extent that it was totally obliterated with a texture-filled type II diamond. These results suggest a flexible 2D computation for motion integration, which combines all available one-dimensional (edge) and 2D (feature) motion information to refine the estimate of object-motion direction over time.
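
    The difference between the vector-average (VA) and intersection-of-constraints (IOC) rules mentioned above can be made concrete with a few lines of code: given the normal direction and normal speed of each edge, VA averages the component velocity vectors, while IOC solves the constraint equations for the single 2D velocity consistent with all edges. The example configuration below is hypothetical.

```python
import numpy as np


def pattern_motion_estimates(normals, speeds):
    """Two classic rules for combining 1D (edge) motions into 2D motion.

    normals : (m, 2) unit vectors normal to each edge
    speeds  : (m,) speed of each edge measured along its normal
    Returns (vector_average, ioc) velocity estimates.
    """
    normals = np.asarray(normals, dtype=float)
    speeds = np.asarray(speeds, dtype=float)
    # Vector average: mean of the component velocity vectors.
    va = (speeds[:, None] * normals).mean(axis=0)
    # Intersection of constraints: the velocity v with v . n_i = s_i for
    # every edge, solved here in the least-squares sense.
    ioc, *_ = np.linalg.lstsq(normals, speeds, rcond=None)
    return va, ioc


# Hypothetical type II configuration: both edge normals lie on the same
# side of the true pattern motion (1, 0), so VA points well off-axis
# while IOC recovers the true velocity.
angles = np.radians([20.0, 70.0])
n = np.column_stack([np.cos(angles), np.sin(angles)])
s = n @ np.array([1.0, 0.0])
print(pattern_motion_estimates(n, s))
```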

  10. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its capability for hand motion tracking. The evaluation was assessed in terms of position accuracy of the tracking trajectory in the x, y and z directions in the camera space and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study demonstrated the suitability of the range camera combined with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
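
    For reference, the mean-shift step evaluated in this study amounts to repeatedly moving a search window to the centroid of a per-pixel weight map; a minimal sketch is given below, with the window size, iteration count and the weight map itself left as assumptions.

```python
import numpy as np


def mean_shift_track(weight_map, start, window=25, n_iter=20, tol=0.5):
    """Shift a square search window to the local centroid of a weight map.

    weight_map : 2D array of per-pixel likelihoods that a pixel belongs to
                 the hand (e.g. from a depth/amplitude segmentation).
    start      : (row, col) initial window centre from the previous frame.

    Textbook mean-shift step, shown only to illustrate the idea; window
    size and iteration count are assumptions.
    """
    r, c = map(float, start)
    h = window // 2
    rows, cols = weight_map.shape
    for _ in range(n_iter):
        r0, r1 = int(max(r - h, 0)), int(min(r + h + 1, rows))
        c0, c1 = int(max(c - h, 0)), int(min(c + h + 1, cols))
        patch = weight_map[r0:r1, c0:c1]
        total = patch.sum()
        if total <= 0:
            break
        rr, cc = np.mgrid[r0:r1, c0:c1]
        new_r = (rr * patch).sum() / total
        new_c = (cc * patch).sum() / total
        converged = np.hypot(new_r - r, new_c - c) < tol
        r, c = new_r, new_c
        if converged:
            break
    return r, c
```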

  11. A framework for activity detection in wide-area motion imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, Reid B; Ruggiero, Christy E; Morrison, Jack D

    2009-01-01

    Wide-area persistent imaging systems are becoming increasingly cost effective, and large areas of the earth can now be imaged at relatively high frame rates (1-2 fps). The efficient exploitation of the large geo-spatial-temporal datasets produced by these systems poses significant technical challenges for image and video analysis and data mining. In recent years significant progress has been made on stabilization, moving object detection and tracking, and automated systems now generate hundreds to thousands of vehicle tracks from raw data with little human intervention. However, the tracking performance at this scale is unreliable and the average track length is much smaller than the average vehicle route. This is a limiting factor for applications which depend heavily on track identity, i.e., tracking vehicles from their points of origin to their final destination. In this paper we propose and investigate a framework for wide-area motion imagery (WAMI) exploitation that minimizes the dependence on track identity. In its current form this framework takes noisy, incomplete moving object detection tracks as input, and produces a small set of activities (e.g., multi-vehicle meetings) as output. The framework can be used to focus and direct human users and additional computation, and suggests a path towards high-level content extraction by learning from the human-in-the-loop.

  12. Comparison of method using phase-sensitive motion estimator with speckle tracking method and application to measurement of arterial wall motion

    NASA Astrophysics Data System (ADS)

    Miyajo, Akira; Hasegawa, Hideyuki

    2018-07-01

    At present, the speckle tracking method is widely used as a two- or three-dimensional (2D or 3D) motion estimator for the measurement of cardiovascular dynamics. However, this method requires high-level interpolation of a function, which evaluates the similarity between ultrasonic echo signals in two frames, to estimate a subsample small displacement in high-frame-rate ultrasound, which results in a high computational cost. To overcome this problem, a 2D motion estimator using the 2D Fourier transform, which does not require any interpolation process, was proposed by our group. In this study, we compared the accuracies of the speckle tracking method and our method using a 2D motion estimator, and applied the proposed method to the measurement of motion of a human carotid arterial wall. The bias error and standard deviation in the lateral velocity estimates obtained by the proposed method were 0.048 and 0.282 mm/s, respectively, which were significantly better than those (‑0.366 and 1.169 mm/s) obtained by the speckle tracking method. The calculation time of the proposed phase-sensitive method was 97% shorter than the speckle tracking method. Furthermore, the in vivo experimental results showed that a characteristic change in velocity around the carotid bifurcation could be detected by the proposed method.
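
    The phase-sensitive idea, estimating displacement from spectral phase rather than by interpolating a similarity surface, can be illustrated with standard phase correlation; the sketch below is generic and is not the authors' 2D estimator.

```python
import numpy as np


def phase_correlation_shift(frame1, frame2):
    """Integer-pixel displacement of frame2 relative to frame1, taken from
    the phase of the cross-power spectrum (no interpolation of a
    similarity surface is required for this step).

    A generic phase-correlation sketch, not the authors' estimator.
    """
    F1 = np.fft.fft2(frame1)
    F2 = np.fft.fft2(frame2)
    cross_power = np.conj(F1) * F2
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices above N/2 wrap around to negative shifts.
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))
```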

  13. Model-based extended quaternion Kalman filter to inertial orientation tracking of arbitrary kinematic chains.

    PubMed

    Szczęsna, Agnieszka; Pruszowski, Przemysław

    2016-01-01

    Inertial orientation tracking is still an area of active research, especially in the context of outdoor, real-time, human motion capture. Existing systems either propose loosely coupled tracking approaches, where each segment is considered independently and the resulting drawbacks are accepted, or tightly coupled solutions that are limited to a fixed chain with few segments. Such solutions have no flexibility to change the skeleton structure, are dedicated to a specific set of joints, and have high computational complexity. This paper describes the proposal of a new model-based extended quaternion Kalman filter that allows for estimation of orientation based on outputs from the inertial measurement unit sensors. The filter considers interdependencies resulting from the construction of the kinematic chain so that the orientation estimation is more accurate. The proposed solution is a universal filter that does not predetermine the degrees of freedom at the connections between segments of the model. For validation, the motion of a three-segment pendulum captured by an optical motion capture system was used. The next step in the research will be to use this method for inertial motion capture with a human skeleton model.
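
    Only the prediction side of such a filter is easy to show compactly: the sketch below integrates a gyroscope reading into an orientation quaternion, which is the kinematic core that any quaternion Kalman filter builds on. The measurement update and the kinematic-chain constraints of the paper's filter are omitted.

```python
import numpy as np


def quaternion_gyro_update(q, omega, dt):
    """One prediction step of an orientation filter: integrate a body-frame
    angular rate omega (rad/s) into the orientation quaternion q = [w, x, y, z].
    """
    q = np.asarray(q, dtype=float)
    rate = np.linalg.norm(omega)
    angle = rate * dt
    if angle < 1e-12:
        return q
    axis = np.asarray(omega, dtype=float) / rate
    dq = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
    # Hamilton product q (x) dq: rotate by dq expressed in the body frame.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = dq
    out = np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])
    return out / np.linalg.norm(out)
```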

  14. The Vestibular System and Human Dynamic Space Orientation

    NASA Technical Reports Server (NTRS)

    Meiry, J. L.

    1966-01-01

    The motion sensors of the vestibular system are studied to determine their role in human dynamic space orientation and manual vehicle control. The investigation yielded control models for the sensors, descriptions of the subsystems for eye stabilization, and demonstrations of the effects of motion cues on closed-loop manual control. Experiments on the abilities of subjects to perceive a variety of linear motions provided data on the dynamic characteristics of the otoliths, the linear motion sensors. Angular acceleration threshold measurements supplemented knowledge of the semicircular canals, the angular motion sensors. Mathematical models are presented to describe the known control characteristics of the vestibular sensors, relating subjective perception of motion to objective motion of a vehicle. The vestibular system, the neck rotation proprioceptors and the visual system form part of the control system which maintains the eye stationary relative to a target or a reference. The contribution of each of these systems was identified through experiments involving head and body rotations about a vertical axis. Compensatory eye movements in response to neck rotation were demonstrated and their dynamic characteristics described by a lag-lead model. The eye motions attributable to neck rotations and vestibular stimulation obey superposition when both systems are active. Human operator compensatory tracking is investigated in a simple vehicle orientation control system with stable and unstable controlled elements. Control of vehicle orientation to a reference is simulated in three modes: visual, motion and combined. Motion cues sensed by the vestibular system and through tactile sensation enable the operator to generate more lead compensation than in fixed-base simulation with only visual input. The tracking performance of the human in an unstable control system near the limits of controllability is shown to depend heavily upon the rate information provided by the vestibular sensors.

  15. Real time eye tracking using Kalman extended spatio-temporal context learning

    NASA Astrophysics Data System (ADS)

    Munir, Farzeen; Minhas, Fayyaz ul Amir Asfar; Jalil, Abdul; Jeon, Moongu

    2017-06-01

    Real time eye tracking has numerous applications in human computer interaction such as a mouse cursor control in a computer system. It is useful for persons with muscular or motion impairments. However, tracking the movement of the eye is complicated by occlusion due to blinking, head movement, screen glare, rapid eye movements, etc. In this work, we present the algorithmic and construction details of a real time eye tracking system. Our proposed system is an extension of Spatio-Temporal context learning through Kalman Filtering. Spatio-Temporal Context Learning offers state of the art accuracy in general object tracking but its performance suffers due to object occlusion. Addition of the Kalman filter allows the proposed method to model the dynamics of the motion of the eye and provide robust eye tracking in cases of occlusion. We demonstrate the effectiveness of this tracking technique by controlling the computer cursor in real time by eye movements.
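
    A minimal constant-velocity Kalman filter of the kind typically wrapped around a visual tracker is sketched below; the state layout, frame rate and noise levels are assumptions, and the spatio-temporal context model itself is not shown.

```python
import numpy as np


class ConstantVelocityKalman:
    """Minimal constant-velocity Kalman filter for a 2D eye-centre position.

    State is [x, y, vx, vy]; the measurement is the detected eye position,
    which may be missing during blinks or occlusion. The frame interval and
    noise levels are illustrative assumptions.
    """

    def __init__(self, dt=1.0 / 30.0, q=1.0, r=4.0):
        self.F = np.eye(4)
        self.F[0, 2] = dt
        self.F[1, 3] = dt
        self.H = np.zeros((2, 4))
        self.H[0, 0] = 1.0
        self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)
        self.x = np.zeros(4)
        self.P = 100.0 * np.eye(4)

    def step(self, measurement=None):
        # Predict with the constant-velocity model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update only when a detection is available.
        if measurement is not None:
            z = np.asarray(measurement, dtype=float)
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (z - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]  # filtered eye position
```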

  16. Binocular eye movement control and motion perception: what is being tracked?

    PubMed

    van der Steen, Johannes; Dits, Joyce

    2012-10-19

    We investigated under what conditions humans can make independent slow phase eye movements. The ability to make independent movements of the two eyes generally is attributed to a few specialized lateral-eyed animal species, for example chameleons. In our study, we showed that humans also can move the eyes in different directions. To maintain binocular retinal correspondence, independent slow phase movements of each eye are produced. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal directions. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The importance of binocular fusion for the independence of the movements of the two eyes was investigated with anti-correlated stimuli. The global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion and resulted in a conjugate oblique motion of the eyes. We propose that the ability to make independent slow phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information and independent slow phase eye movements of each eye are produced during binocular tracking.

  17. Binocular Eye Movement Control and Motion Perception: What Is Being Tracked?

    PubMed Central

    van der Steen, Johannes; Dits, Joyce

    2012-01-01

    Purpose. We investigated under what conditions humans can make independent slow phase eye movements. The ability to make independent movements of the two eyes generally is attributed to a few specialized lateral-eyed animal species, for example chameleons. In our study, we showed that humans also can move the eyes in different directions. To maintain binocular retinal correspondence, independent slow phase movements of each eye are produced. Methods. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal directions. Results. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The importance of binocular fusion for the independence of the movements of the two eyes was investigated with anti-correlated stimuli. The global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion and resulted in a conjugate oblique motion of the eyes. Conclusions. We propose that the ability to make independent slow phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information and independent slow phase eye movements of each eye are produced during binocular tracking. PMID:22997286

  18. Robustness of external/internal correlation models for real-time tumor tracking to breathing motion variations

    NASA Astrophysics Data System (ADS)

    Seregni, M.; Cerveri, P.; Riboldi, M.; Pella, A.; Baroni, G.

    2012-11-01

    In radiotherapy, organ motion mitigation by means of dynamic tumor tracking requires continuous information about the internal tumor position, which can be estimated relying on external/internal correlation models as a function of external surface surrogates. In this work, we propose a validation of a time-independent artificial neural network-based tumor tracking method in the presence of changes in the breathing pattern, evaluating the performance on two datasets. First, simulated breathing motion traces were specifically generated to include gradually increasing respiratory irregularities. Then, seven publicly available human liver motion traces were analyzed for the assessment of tracking accuracy, whose sensitivity with respect to the structural parameters of the model was also investigated. Results on simulated data showed that the proposed method was not affected by hysteretic target trajectories and was able to cope with different respiratory irregularities, such as baseline drift and internal/external phase shift. The analysis of the liver motion traces reported an average RMS error equal to 1.10 mm, with five out of seven cases below 1 mm. In conclusion, this validation study proved that the proposed method is able to deal with respiratory irregularities both in controlled and real conditions.

  19. Effects of Visual Propioceptive Cue Conflicts on Human Tracking Performance

    DTIC Science & Technology

    1977-06-01

    To maintain adequate performance it was necessary for the subjects to disregard sensations of motion. The results revealed that the conditions ... where no motion cues are provided or when motion cues are inappropriate to actual flight conditions. The latter (i.e., inappropriate motion) has ...

  20. Human pose tracking from monocular video by traversing an image motion mapped body pose manifold

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2010-01-01

    Tracking human pose from monocular video sequences is a challenging problem due to the large number of independent parameters affecting image appearance and nonlinear relationships between generating parameters and the resultant images. Unlike the current practice of fitting interpolation functions to point correspondences between underlying pose parameters and image appearance, we exploit the relationship between pose parameters and image motion flow vectors in a physically meaningful way. Change in image appearance due to pose change is realized as navigating a low dimensional submanifold of the infinite dimensional Lie group of diffeomorphisms of the two dimensional sphere S2. For small changes in pose, image motion flow vectors lie on the tangent space of the submanifold. Any observed image motion flow vector field is decomposed into the basis motion vector flow fields on the tangent space and combination weights are used to update corresponding pose changes in the different dimensions of the pose parameter space. Image motion flow vectors are largely invariant to style changes in experiments with synthetic and real data where the subjects exhibit variation in appearance and clothing. The experiments demonstrate the robustness of our method (within +/-4° of ground truth) to style variance.

  1. A biplanar X-ray approach for studying the 3D dynamics of human track formation.

    PubMed

    Hatala, Kevin G; Perry, David A; Gatesy, Stephen M

    2018-05-09

    Recent discoveries have made hominin tracks an increasingly prevalent component of the human fossil record, and these data have the capacity to inform long-standing debates regarding the biomechanics of hominin locomotion. However, there is currently no consensus on how to decipher biomechanical variables from hominin tracks. These debates can be linked to our generally limited understanding of the complex interactions between anatomy, motion, and substrate that give rise to track morphology. These interactions are difficult to study because direct visualization of the track formation process is impeded by foot and substrate opacity. To address these obstacles, we developed biplanar X-ray and computer animation methods, derived from X-ray Reconstruction of Moving Morphology (XROMM), to analyze the 3D dynamics of three human subjects' feet as they walked across four substrates (three deformable muds and rigid composite panel). By imaging and reconstructing 3D positions of external markers, we quantified the 3D dynamics at the foot-substrate interface. Foot shape, specifically heel and medial longitudinal arch deformation, was significantly affected by substrate rigidity. In deformable muds, we found that depths measured across tracks did not directly reflect the motions of the corresponding regions of the foot, and that track outlines were not perfectly representative of foot size. These results highlight the complex, dynamic nature of track formation, and the experimental methods presented here offer a promising avenue for developing and refining methods for accurately inferring foot anatomy and gait biomechanics from fossil hominin tracks. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Human body contour data based activity recognition.

    PubMed

    Myagmarbayar, Nergui; Yuki, Yoshida; Imamoglu, Nevrez; Gonzalez, Jose; Otake, Mihoko; Yu, Wenwei

    2013-01-01

    This research work aims to develop autonomous bio-monitoring mobile robots, which are capable of tracking and measuring patients' motions, recognizing the patients' behavior based on observation data, and calling for medical personnel in emergency situations in a home environment. The robots to be developed will bring cost-effective, safe and easier at-home rehabilitation to most motor-function impaired patients (MIPs). In our previous research, a full framework was established towards this research goal. In this research, we aimed at improving the human activity recognition by using contour data of the tracked human subject, extracted from the depth images, as the signal source, instead of the lower limb joint angle data used in the previous research, which are more likely to be affected by the motion of the robot and human subjects. Several geometric parameters, such as the ratio of height to width of the tracked human subject and the distance (in pixels) between the centroid points of the upper and lower parts of the human body, were calculated from the contour data and used as the features for the activity recognition. A Hidden Markov Model (HMM) is employed to classify different human activities from the features. Experimental results showed that human activity recognition could be achieved with a high recognition rate.
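
    The decoding step of such an HMM classifier can be sketched with a standard Viterbi pass over per-frame feature likelihoods; the emission model and trained parameters are assumed to exist and are not those of the paper.

```python
import numpy as np


def viterbi(log_start, log_trans, log_emis):
    """Most likely activity-state sequence for one observation sequence.

    log_start : (S,)   log initial state probabilities
    log_trans : (S, S) log transition probabilities (prev -> current)
    log_emis  : (T, S) log likelihood of each frame's contour features
                under each activity state (assumed precomputed, e.g. from
                per-state Gaussians over the geometric features)
    """
    T, S = log_emis.shape
    delta = np.empty((T, S))
    backptr = np.zeros((T, S), dtype=int)
    delta[0] = log_start + log_emis[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans  # (S, S): prev -> current
        backptr[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emis[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = backptr[t + 1, path[t + 1]]
    return path  # activity label index per frame
```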

  3. Alcohol and disorientation-related responses. III, Effects of alcohol ingestion on tracking performance during angular acceleration.

    DOT National Transportation Integrated Search

    1971-04-01

    Most studies of the effects of alcohol on human performance involve static (absence of motion) situations. However, the addition of motion, involved in such activities as piloting an aircraft, might well produce impairments not usually obtained in st...

  4. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    With the development of 3D virtual reality, motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications. Virtual human characters in digital animation and game applications have typically been controlled by interface devices such as mice, joysticks and MIDI sliders, which cannot make a virtual character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors and link the captured data to a 3D game character in real time. The prototype setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  5. Auto-tracking system for human lumbar motion analysis.

    PubMed

    Sui, Fuge; Zhang, Da; Lam, Shing Chun Benny; Zhao, Lifeng; Wang, Dongjun; Bi, Zhenggang; Hu, Yong

    2011-01-01

    Previous lumbar motion analyses suggest the usefulness of quantitatively characterizing spine motion. However, the application of such measurements is still limited by the lack of user-friendly automatic spine motion analysis systems. This paper describes an automatic analysis system to measure lumbar spine disorders that consists of a spine motion guidance device, an X-ray imaging modality to acquire digitized video fluoroscopy (DVF) sequences, and an automated tracking module with a graphical user interface (GUI). DVF sequences of the lumbar spine are recorded during flexion-extension under a guidance device. The automatic tracking software, which utilizes a particle filter, locates the vertebra-of-interest in every frame of the sequence, and the tracking result is displayed on the GUI. Kinematic parameters are also extracted from the tracking results for motion analysis. We observed that, in a bone model test, the maximum fiducial error was 3.7%, and the maximum repeatability error in translation and rotation was 1.2% and 2.6%, respectively. In our simulated DVF sequence study, automatic tracking was not successful when the noise intensity was greater than 0.50. In a noisy situation, the maximal difference was 1.3 mm in translation and 1° in the rotation angle. The errors were calculated in translation (fiducial error: 2.4%, repeatability error: 0.5%) and in the rotation angle (fiducial error: 1.0%, repeatability error: 0.7%). However, the automatic tracking software could successfully track simulated sequences contaminated by noise at a density ≤ 0.5 with very high accuracy, providing good reliability and robustness. Ten healthy subjects and two lumbar spondylolisthesis patients were enrolled in a clinical trial. Measurement with automatic tracking of DVF provided information not seen in conventional X-ray imaging. These results suggest the potential of the proposed system for clinical applications.
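
    For orientation, the bootstrap particle-filter loop that this kind of tracker relies on is sketched below; the random-walk motion model, particle count and the image-likelihood function are placeholders, not the paper's models.

```python
import numpy as np


def particle_filter_track(frames, init_pos, likelihood, n_particles=500,
                          motion_std=3.0, rng=None):
    """Generic bootstrap particle filter over a 2D vertebra position.

    frames     : iterable of fluoroscopy images
    init_pos   : (x, y) initial vertebra position
    likelihood : function (frame, particles) -> (N,) non-negative weights,
                 e.g. a template-matching score (assumed, not specified here)
    """
    rng = np.random.default_rng() if rng is None else rng
    particles = np.tile(np.asarray(init_pos, dtype=float), (n_particles, 1))
    estimates = []
    for frame in frames:
        # Predict: random-walk motion model.
        particles += rng.normal(0.0, motion_std, particles.shape)
        # Weight each hypothesis by how well it matches the image.
        w = np.clip(likelihood(frame, particles), 1e-12, None)
        w /= w.sum()
        # Weighted-mean estimate, then resample proportionally to the weights.
        estimates.append(particles.T @ w)
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)  # one (x, y) estimate per frame
```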

  6. KSC-08pd1902

    NASA Image and Video Library

    2008-07-02

    CAPE CANAVERAL, Fla. – A United Space Alliance technician (right) hands off a component of the Orion Crew Module mockup to one of the other technicians inside the mockup. The technicians wear motion capture suits. The motion tracking aims to improve efficiency of assembly processes and identify potential ergonomic risks for technicians assembling the mockup, which was created and built at the New York Institute of Technology by a team led by Prof. Peter Voci, MFA Director at the College of Arts and Sciences. The work is being performed in United Space Alliance's Human Engineering Modeling and Performance Lab in the RLV Hangar at NASA's Kennedy Space Center. Part of NASA's Constellation Program, the Orion spacecraft will return humans to the moon and prepare for future voyages to Mars and other destinations in our solar system.

  7. Multi-object tracking of human spermatozoa

    NASA Astrophysics Data System (ADS)

    Sørensen, Lauge; Østergaard, Jakob; Johansen, Peter; de Bruijne, Marleen

    2008-03-01

    We propose a system for tracking of human spermatozoa in phase-contrast microscopy image sequences. One of the main aims of a computer-aided sperm analysis (CASA) system is to automatically assess sperm quality based on spermatozoa motility variables. In our case, the problem of assessing sperm quality is cast as a multi-object tracking problem, where the objects being tracked are the spermatozoa. The system combines a particle filter and Kalman filters for robust motion estimation of the spermatozoa tracks. Further, the combinatorial aspect of assigning observations to labels in the particle filter is formulated as a linear assignment problem solved using the Hungarian algorithm on a rectangular cost matrix, making the algorithm capable of handling missing or spurious observations. The costs are calculated using hidden Markov models that express the plausibility of an observation being the next position in the track history of the particle labels. Observations are extracted using a scale-space blob detector utilizing the fact that the spermatozoa appear as bright blobs in a phase-contrast microscope. The output of the system is the complete motion track of each of the spermatozoa. Based on these tracks, different CASA motility variables can be computed, for example curvilinear velocity or straight-line velocity. The performance of the system is tested on three different phase-contrast image sequences of varying complexity, both by visual inspection of the estimated spermatozoa tracks and by measuring the mean squared error (MSE) between the estimated spermatozoa tracks and manually annotated tracks, showing good agreement.
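
    The assignment step described above, matching filter predictions to detections via a rectangular cost matrix, can be sketched with SciPy's Hungarian-algorithm solver; here a plain Euclidean-distance cost and a gating threshold stand in for the paper's HMM-based plausibility scores.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def assign_observations(predicted, detected, max_cost=30.0):
    """Match predicted spermatozoa positions to new blob detections.

    predicted : (n, 2) positions predicted by the per-track filters
    detected  : (m, 2) detections in the current frame (m may differ from n,
                giving a rectangular cost matrix)

    Pairs whose cost exceeds max_cost are treated as unmatched, which covers
    missing or spurious observations.
    """
    predicted = np.asarray(predicted, dtype=float)
    detected = np.asarray(detected, dtype=float)
    cost = np.linalg.norm(predicted[:, None, :] - detected[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # handles rectangular matrices
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
    unmatched_tracks = sorted(set(range(len(predicted))) - {r for r, _ in matches})
    unmatched_detections = sorted(set(range(len(detected))) - {c for _, c in matches})
    return matches, unmatched_tracks, unmatched_detections
```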

  8. Motion cue effects on human pilot dynamics in manual control

    NASA Technical Reports Server (NTRS)

    Washizu, K.; Tanaka, K.; Endo, S.; Itoko, T.

    1977-01-01

    Two experiments were conducted to study the effects of motion cues on human pilots during tracking tasks. The moving-base simulator of the National Aerospace Laboratory was employed as the motion cue device, and the attitude director indicator or the projected visual field was employed as the visual cue device. The chosen controlled elements were second-order unstable systems. It was confirmed that with the aid of motion cues the pilot workload was lessened and consequently the human controllability limits were enlarged. In order to clarify the mechanism of these effects, the describing functions of the human pilots were identified by means of spectral and time-domain analyses. The results of these analyses suggest that the sensory system for motion cues can effectively yield derivative information about the signal, which is consistent with existing knowledge in the physiological literature.

  9. Advanced robot locomotion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neely, Jason C.; Sturgis, Beverly Rainwater; Byrne, Raymond Harry

    This report contains the results of a research effort on advanced robot locomotion. The majority of this work focuses on walking robots. Walking robot applications range from delivery of special payloads to unique locations that require human-like locomotion, to exoskeleton human-assistance applications. A walking robot could step over obstacles and move through narrow openings that a wheeled or tracked vehicle could not overcome. It could pick up and manipulate objects in ways that a standard robot gripper could not. Most importantly, a walking robot would be able to rapidly perform these tasks through an intuitive user interface that mimics natural human motion. The largest obstacle arises in emulating the stability and balance control naturally present in humans but needed for bipedal locomotion in a robot. A tracked robot is bulky and limited, but a wide wheel base assures passive stability. Human bipedal motion is so common that it is taken for granted, but bipedal motion requires active balance and stability control, for which the analysis is non-trivial. This report contains an extensive literature study on the state of the art of legged robotics, and it additionally provides the analysis, simulation, and hardware verification of two variants of a prototype leg design.

  10. Real-time stylistic prediction for whole-body human motions.

    PubMed

    Matsubara, Takamitsu; Hyon, Sang-Ho; Morimoto, Jun

    2012-01-01

    The ability to predict human motion is crucial in several contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm was derived to estimate both state variables and style parameter of the model from non-stationary unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, it takes less than 15 ms for both adaptation and prediction at each observation. Our real-time stylistic prediction was evaluated for human walking, running, and jumping behaviors. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Contrast, contours and the confusion effect in dazzle camouflage.

    PubMed

    Hogan, Benedict G; Scott-Samuel, Nicholas E; Cuthill, Innes C

    2016-07-01

    'Motion dazzle camouflage' is the name for the putative effects of highly conspicuous, often repetitive or complex, patterns on parameters important in prey capture, such as the perception of speed, direction and identity. Research into motion dazzle camouflage is increasing our understanding of the interactions between visual tracking, the confusion effect and defensive coloration. However, there is a paucity of research into the effects of contrast on motion dazzle camouflage: is maximal contrast a prerequisite for effectiveness? If not, this has important implications for our recognition of the phenotype and understanding of the function and mechanisms of potential motion dazzle camouflage patterns. Here we tested human participants' ability to track one moving target among many identical distractors with surface patterns designed to test the influence of these factors. In line with previous evidence, we found that targets with stripes parallel to the object direction of motion were hardest to track. However, reduction in contrast did not significantly influence this result. This finding may bring into question the utility of current definitions of motion dazzle camouflage, and means that some animal patterns, such as aposematic or mimetic stripes, may have previously unrecognized multiple functions.

  12. Gaze-contingent control for minimally invasive robotic surgery.

    PubMed

    Mylonas, George P; Darzi, Ara; Yang, Guang Zhong

    2006-09-01

    Recovering tissue depth and deformation during robotically assisted minimally invasive procedures is an important step towards motion compensation, stabilization and co-registration with preoperative data. This work demonstrates that eye gaze derived from binocular eye tracking can be effectively used to recover 3D motion and deformation of the soft tissue. A binocular eye-tracking device was integrated into the stereoscopic surgical console. After calibration, the 3D fixation point of the participating subjects could be accurately resolved in real time. A CT-scanned phantom heart model was used to demonstrate the accuracy of gaze-contingent depth extraction and motion stabilization of the soft tissue. The dynamic response of the oculomotor system was assessed with the proposed framework by using autoregressive modeling techniques. In vivo data were also used to perform gaze-contingent decoupling of cardiac and respiratory motion. Depth reconstruction, deformation tracking, and motion stabilization of the soft tissue were possible with binocular eye tracking. The dynamic response of the oculomotor system was able to cope with frequencies likely to occur under most routine minimally invasive surgical operations. The proposed framework presents a novel approach towards the tight integration of a human and a surgical robot where interaction in response to sensing is required to be under the control of the operating surgeon.

  13. Spatio-Temporal Constrained Human Trajectory Generation from the PIR Motion Detector Sensor Network Data: A Geometric Algebra Approach

    PubMed Central

    Yu, Zhaoyuan; Yuan, Linwang; Luo, Wen; Feng, Linyao; Lv, Guonian

    2015-01-01

    Passive infrared (PIR) motion detectors, which can support long-term continuous observation, are widely used for human motion analysis. Extracting all possible trajectories from the PIR sensor networks is important. Because the PIR sensor does not log location and individual information, none of the existing methods can generate all possible human motion trajectories that satisfy various spatio-temporal constraints from the sensor activation log data. In this paper, a geometric algebra (GA)-based approach is developed to generate all possible human trajectories from the PIR sensor network data. First, the geographical network, the sensor activation response sequences, and the human motion are represented as algebraic elements using GA. The human motion status of each sensor activation is labeled using GA-based trajectory tracking. Then, a matrix multiplication approach is developed to dynamically generate the human trajectories according to the sensor activation log and the spatio-temporal constraints. The method is tested with the MERL motion database. Experiments show that our method can flexibly extract the major statistical pattern of the human motion. Compared with direct statistical analysis and the tracklet graph method, our method can effectively extract all possible trajectories of the human motion, which makes it more accurate. Our method is also likely to provide a new way to filter other passive sensor log data in sensor networks. PMID:26729123

  14. Spatio-Temporal Constrained Human Trajectory Generation from the PIR Motion Detector Sensor Network Data: A Geometric Algebra Approach.

    PubMed

    Yu, Zhaoyuan; Yuan, Linwang; Luo, Wen; Feng, Linyao; Lv, Guonian

    2015-12-30

    Passive infrared (PIR) motion detectors, which can support long-term continuous observation, are widely used for human motion analysis. Extracting all possible trajectories from the PIR sensor networks is important. Because the PIR sensor does not log location and individual information, none of the existing methods can generate all possible human motion trajectories that satisfy various spatio-temporal constraints from the sensor activation log data. In this paper, a geometric algebra (GA)-based approach is developed to generate all possible human trajectories from the PIR sensor network data. First, the geographical network, the sensor activation response sequences, and the human motion are represented as algebraic elements using GA. The human motion status of each sensor activation is labeled using GA-based trajectory tracking. Then, a matrix multiplication approach is developed to dynamically generate the human trajectories according to the sensor activation log and the spatio-temporal constraints. The method is tested with the MERL motion database. Experiments show that our method can flexibly extract the major statistical pattern of the human motion. Compared with direct statistical analysis and the tracklet graph method, our method can effectively extract all possible trajectories of the human motion, which makes it more accurate. Our method is also likely to provide a new way to filter other passive sensor log data in sensor networks.
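
    Setting the geometric algebra machinery aside, the underlying matrix-multiplication idea, that powers of a graph's adjacency matrix count the walks compatible with a path-length constraint, can be sketched as follows; the sensor graph and the constraint are toy assumptions, not the MERL network.

```python
import numpy as np
from itertools import product


def paths_of_length(adjacency, start, end, length):
    """Enumerate walks of a given length between two sensor nodes.

    adjacency : (n, n) 0/1 matrix of the sensor/corridor graph.
    The matrix power adjacency**length counts walks of that length; the
    explicit enumeration below recovers the walks themselves.
    """
    n = adjacency.shape[0]
    count = np.linalg.matrix_power(adjacency, length)[start, end]
    walks = []
    for middle in product(range(n), repeat=length - 1):
        nodes = (start, *middle, end)
        if all(adjacency[a, b] for a, b in zip(nodes, nodes[1:])):
            walks.append(nodes)
    assert len(walks) == count  # enumeration agrees with the matrix power
    return walks


# Toy corridor graph with four PIR sensor nodes (hypothetical layout).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]])
print(paths_of_length(A, start=0, end=3, length=3))
```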

  15. Principal components of wrist circumduction from electromagnetic surgical tracking.

    PubMed

    Rasquinha, Brian J; Rainbow, Michael J; Zec, Michelle L; Pichora, David R; Ellis, Randy E

    2017-02-01

    An electromagnetic (EM) surgical tracking system was used for a functionally calibrated kinematic analysis of wrist motion. Circumduction motions were tested for differences in subject gender and for differences in the sense of the circumduction as clockwise or counter-clockwise motion. Twenty subjects were instrumented for EM tracking. Flexion-extension motion was used to identify the functional axis. Subjects performed unconstrained wrist circumduction in a clockwise and counter-clockwise sense. Data were decomposed into orthogonal flexion-extension motions and radial-ulnar deviation motions. PCA was used to concisely represent motions. Nonparametric Wilcoxon tests were used to distinguish the groups. Flexion-extension motions were projected onto a direction axis with a root-mean-square error of [Formula: see text]. Using the first three principal components, there was no statistically significant difference in gender (all [Formula: see text]). For motion sense, radial-ulnar deviation distinguished the sense of circumduction in the first principal component ([Formula: see text]) and in the third principal component ([Formula: see text]); flexion-extension distinguished the sense in the second principal component ([Formula: see text]). The clockwise sense of circumduction could be distinguished by a multifactorial combination of components; there were no gender differences in this small population. These data constitute a baseline for normal wrist circumduction. The multifactorial PCA findings suggest that a higher-dimensional method, such as manifold analysis, may be a more concise way of representing circumduction in human joints.
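
    A generic version of the PCA decomposition described above, applied to circumduction waveforms resampled to a common length, is sketched below; the data layout is an assumption, not the study's processing pipeline.

```python
import numpy as np


def principal_components(motion_matrix):
    """PCA of circumduction trials via the SVD.

    motion_matrix : (n_trials, n_samples) array, e.g. the flexion-extension
                    angle of each trial resampled to a common length.

    Returns (components, scores, explained_variance_ratio).
    """
    X = motion_matrix - motion_matrix.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt                      # principal motion waveforms
    scores = U * s                       # per-trial weights on each component
    explained = s ** 2 / np.sum(s ** 2)  # fraction of variance per component
    return components, scores, explained
```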

  16. Rapid Antibiotic Susceptibility Testing of Uropathogenic E. coli by Tracking Submicron Scale Motion of Single Bacterial Cells.

    PubMed

    Syal, Karan; Shen, Simon; Yang, Yunze; Wang, Shaopeng; Haydel, Shelley E; Tao, Nongjian

    2017-08-25

    To combat antibiotic resistance, a rapid antibiotic susceptibility testing (AST) technology that can identify resistant infections at disease onset is required. Current clinical AST technologies take 1-3 days, which is often too slow for accurate treatment. Here we demonstrate a rapid AST method by tracking sub-μm scale bacterial motion with an optical imaging and tracking technique. We apply the method to clinically relevant bacterial pathogens, Escherichia coli O157: H7 and uropathogenic E. coli (UPEC) loosely tethered to a glass surface. By analyzing dose-dependent sub-μm motion changes in a population of bacterial cells, we obtain the minimum bactericidal concentration within 2 h using human urine samples spiked with UPEC. We validate the AST method using the standard culture-based AST methods. In addition to population studies, the method allows single cell analysis, which can identify subpopulations of resistance strains within a sample.

  17. The Effectiveness of Simulator Motion in the Transfer of Performance on a Tracking Task Is Influenced by Vision and Motion Disturbance Cues.

    PubMed

    Grundy, John G; Nazar, Stefan; O'Malley, Shannon; Mohrenshildt, Martin V; Shedden, Judith M

    2016-06-01

    To examine the importance of platform motion to the transfer of performance in motion simulators. The importance of platform motion in simulators for pilot training is strongly debated. We hypothesized that the type of motion (e.g., disturbance) contributes significantly to performance differences. Participants used a joystick to perform a target tracking task in a pod on top of a MOOG Stewart motion platform. Five conditions compared training without motion, with correlated motion, with disturbance motion, with disturbance motion isolated to the visual display, and with both correlated and disturbance motion. The test condition involved the full motion model with both correlated and disturbance motion. We analyzed speed and accuracy across training and test as well as strategic differences in joystick control. Training with disturbance cues produced critical behavioral differences compared to training without disturbance; motion itself was less important. Incorporation of disturbance cues is a potentially important source of variance between studies that do or do not show a benefit of motion platforms in the transfer of performance in simulators. Potential applications of this research include the assessment of the importance of motion platforms in flight simulators, with a focus on the efficacy of incorporating disturbance cues during training. © 2016, Human Factors and Ergonomics Society.

  18. Motion tracing system for ultrasound guided HIFU

    NASA Astrophysics Data System (ADS)

    Xiao, Xu; Jiang, Tingyi; Corner, George; Huang, Zhihong

    2017-03-01

    One main limitation of HIFU treatment is the respiration-induced motion of abdominal organs such as the liver and kidney. This study set up a tracking model that mainly comprises a target-carrying box and a motion-driving balloon. A real-time B-mode ultrasound guidance method suitable for tracking abdominal organ motion in 2D was established and tested. For the setup, phantoms mimicking moving organs were carefully prepared from agar surrounding round-shaped egg white, which served as the target of focused ultrasound ablation. The phantoms and animal tissues were driven reciprocally along the main axial direction of the ultrasound imaging probe, with slight motion perpendicular to the axial direction. The moving speed and range could be adjusted by controlling the inflation and deflation speed and volume of the balloon, which was driven by a medical ventilator. A 6-DOF robotic arm was used to position the focused ultrasound transducer. The overall system was designed to simulate the actual movement caused by human respiration. HIFU ablation experiments using phantoms and animal organs were conducted to test the tracking performance. Ultrasound strain elastography was used afterwards to assess the efficiency of the tracking algorithms and system. In the moving state, the axial size of the lesion (perpendicular to the movement direction) was on average 4 mm, about one third larger than the lesion obtained when the target was not moving. This demonstrates the possibility of developing a low-cost real-time method of tracking organ motion during HIFU treatment of the liver or kidney.
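
    The abstract does not detail the 2D tracking algorithm itself; as one plausible building block, the following hedged sketch tracks a target region across B-mode frames with normalised cross-correlation template matching in OpenCV. The synthetic frames and the initial region of interest are assumptions.

```python
# Hedged sketch: template tracking of a moving target across image frames.
import cv2
import numpy as np

def track_target(frames, roi):
    """frames: iterable of grayscale images; roi: (x, y, w, h) in frame 0."""
    x, y, w, h = roi
    template = frames[0][y:y + h, x:x + w].astype(np.float32)
    positions = [(x, y)]
    for frame in frames[1:]:
        res = cv2.matchTemplate(frame.astype(np.float32), template,
                                cv2.TM_CCORR_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)   # best-matching top-left corner
        positions.append(max_loc)
    return positions

# Example with synthetic frames: a bright blob drifting along the axial axis.
frames = []
for k in range(10):
    img = np.zeros((200, 200), np.float32)
    cv2.circle(img, (100, 60 + 5 * k), 15, 1.0, -1)
    frames.append(img)
print(track_target(frames, roi=(85, 45, 30, 30)))
```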

  19. Human silhouette matching based on moment invariants

    NASA Astrophysics Data System (ADS)

    Sun, Yong-Chao; Qiu, Xian-Jie; Xia, Shi-Hong; Wang, Zhao-Qi

    2005-07-01

    This paper applies silhouette matching based on moment invariants to infer human motion parameters from video sequences captured by a single monocular uncalibrated camera. Currently, there are two ways of tracking human motion: marker-based and markerless. A hybrid framework is introduced here to recover the motion in the input video. A standard 3D motion database is built up in advance using a marker-based technique. Given a video sequence, human silhouettes are extracted along with the camera viewpoint information, which is used to project the standard 3D motion database into 2D. The video recovery problem is thus formulated as a matching problem: finding the body pose in the standard 2D library that is most similar to the one in the video image. The framework is applied to trampoline sport, where complicated human motion parameters can be obtained from single-camera video sequences, and numerous experiments demonstrate that this approach is feasible for monocular video-based 3D motion reconstruction.
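
    A minimal sketch of the general matching idea (not the authors' system): compare a query silhouette against a small pose library using Hu moment invariants, which are invariant to translation, scale, and rotation. The binary masks below are synthetic placeholders for real silhouettes.

```python
# Hedged sketch: silhouette matching with Hu moment invariants (OpenCV).
import cv2
import numpy as np

def hu_signature(mask):
    """Log-scaled Hu moment invariants of a binary silhouette."""
    m = cv2.moments(mask.astype(np.uint8), binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def best_match(query, library):
    """Index of the library silhouette with the closest Hu signature."""
    q = hu_signature(query)
    dists = [np.linalg.norm(q - hu_signature(s)) for s in library]
    return int(np.argmin(dists))

# Synthetic example: a rotated ellipse should match the ellipse, not the box.
query = cv2.ellipse(np.zeros((100, 100), np.uint8), (50, 50), (30, 15), 20, 0, 360, 255, -1)
lib = [
    cv2.ellipse(np.zeros((100, 100), np.uint8), (50, 50), (30, 15), 0, 0, 360, 255, -1),
    cv2.rectangle(np.zeros((100, 100), np.uint8), (20, 35), (80, 65), 255, -1),
]
print("best match:", best_match(query, lib))  # expected: 0 (rotation-invariant)
```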

  20. Application of a novel Kalman filter based block matching method to ultrasound images for hand tendon displacement estimation.

    PubMed

    Lai, Ting-Yu; Chen, Hsiao-I; Shih, Cho-Chiang; Kuo, Li-Chieh; Hsu, Hsiu-Yun; Huang, Chih-Chung

    2016-01-01

    Information about tendon displacement is important for allowing clinicians not only to quantify preoperative tendon injuries but also to identify any adhesive scarring between the tendon and adjacent tissue. The Fisher-Tippett (FT) similarity measure has recently been shown to be more accurate than the Laplacian sum of absolute differences (SAD) and Gaussian sum of squared differences (SSD) similarity measures for tracking tendon displacement in ultrasound B-mode images. However, all of these similarity measures can easily be influenced by the quality of the ultrasound image, particularly its signal-to-noise ratio. Ultrasound images of injured hands are unfortunately often of poor quality due to the presence of adhesive scars. The present study investigated a novel Kalman-filter scheme for overcoming this problem. Three state-of-the-art tracking methods (FT, SAD, and SSD) were used to track the displacements of phantom and cadaver tendons, while FT was used to track human tendons. These three tracking methods were combined individually with the proposed Kalman-filter (K1) scheme and another Kalman-filter scheme used in a previous study to optimize the displacement trajectories of the phantom and cadaver tendons. The motion of the human extensor digitorum communis tendon was measured in the present study using the FT-K1 scheme. The experimental results indicated that SSD exhibited better accuracy in the phantom experiments, whereas FT exhibited better performance for tracking real tendon motion in the cadaver experiments. All three tracking methods were influenced by the signal-to-noise ratio of the images. On the other hand, the K1 scheme was able to optimize the tracking trajectory of displacement in all experiments, even at locations with poor image quality. The human experimental data indicated that the normal tendons were displaced more than the injured tendons, and that the mobility of the injured tendon was restored after appropriate rehabilitation sessions. The obtained results show the potential of the proposed FT-K1 method for clinical applications such as evaluating the tendon injury level after metacarpal fractures and assessing the recovery of an injured tendon during rehabilitation.
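
    The paper's K1 scheme is not reproduced here; the following hedged sketch shows only the generic idea of smoothing a noisy 1-D displacement trajectory from a block-matching tracker with a constant-velocity Kalman filter. The noise levels and the synthetic trajectory are assumptions.

```python
# Hedged sketch: constant-velocity Kalman filter over noisy displacement data.
import numpy as np

def kalman_smooth(z, dt=1.0, q=1e-3, r=0.5):
    """z: noisy displacement measurements; returns filtered displacements."""
    F = np.array([[1, dt], [0, 1]])          # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # we only measure position
    Q = q * np.eye(2)                         # process noise covariance
    R = np.array([[r]])                       # measurement noise covariance
    x = np.array([z[0], 0.0])
    P = np.eye(2)
    out = []
    for zk in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([zk]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

t = np.linspace(0, 1, 100)
true = 8 * np.sin(np.pi * t)                          # mm, synthetic tendon excursion
noisy = true + np.random.default_rng(1).normal(0, 0.7, t.size)
print(np.round(kalman_smooth(noisy)[:5], 2))
```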

  1. Triboelectrification based motion sensor for human-machine interfacing.

    PubMed

    Yang, Weiqing; Chen, Jun; Wen, Xiaonan; Jing, Qingshen; Yang, Jin; Su, Yuanjie; Zhu, Guang; Wu, Wenzuo; Wang, Zhong Lin

    2014-05-28

    We present triboelectrification-based, flexible, reusable, and skin-friendly dry biopotential electrode arrays as motion sensors for tracking muscle motion and human-machine interfacing (HMI). The independently addressable, self-powered sensor arrays have been utilized to record the electric output signals as a map that accurately identifies the degrees of freedom as well as the directions and magnitudes of muscle motions. A fast Fourier transform (FFT) technique was employed to analyse the frequency spectra of the obtained electric signals and thus to determine the motion angular velocities. Moreover, the motion sensor arrays produced a short-circuit current density up to 10.71 mA/m², and an open-circuit voltage as high as 42.6 V with a remarkable signal-to-noise ratio up to 1000, which enables the devices to accurately record and translate the motions of human joints, such as the elbow, knee, heel, and even fingers, making them well suited to HMI applications.
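
    A minimal sketch of the FFT step described above: estimate the dominant frequency of a periodic joint-motion signal and convert it to an angular velocity. The sampling rate and the synthetic signal are assumptions.

```python
# Hedged sketch: dominant frequency -> angular velocity via the FFT.
import numpy as np

fs = 1000.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 1.5 * t)          # synthetic electrode output, 1.5 Hz motion

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
f_dominant = freqs[np.argmax(spectrum)]        # Hz
omega = 2 * np.pi * f_dominant                 # angular velocity (rad/s)
print(f"dominant frequency: {f_dominant:.2f} Hz, angular velocity: {omega:.2f} rad/s")
```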

  2. Feature-based respiratory motion tracking in native fluoroscopic sequences for dynamic roadmaps during minimally invasive procedures in the thorax and abdomen

    NASA Astrophysics Data System (ADS)

    Wagner, Martin G.; Laeseke, Paul F.; Schubert, Tilman; Slagowski, Jordan M.; Speidel, Michael A.; Mistretta, Charles A.

    2017-03-01

    Fluoroscopic image guidance for minimally invasive procedures in the thorax and abdomen suffers from respiratory and cardiac motion, which can cause severe subtraction artifacts and inaccurate image guidance. This work proposes novel techniques for respiratory motion tracking in native fluoroscopic images as well as a model-based estimation of vessel deformation. This would allow compensation for respiratory motion during the procedure and therefore simplify the workflow for minimally invasive procedures such as liver embolization. The method first establishes dynamic motion models for both the contrast-enhanced vasculature and curvilinear background features based on a native (non-contrast) and a contrast-enhanced image sequence acquired prior to device manipulation, under free-breathing conditions. The model of vascular motion is generated by applying the diffeomorphic demons algorithm to an automatic segmentation of the subtraction sequence. The model of curvilinear background features is based on feature tracking in the native sequence. The two models establish the relationship between the respiratory state, which is inferred from curvilinear background features, and the vascular morphology during that same respiratory state. During subsequent fluoroscopy, curvilinear feature detection is applied to determine the appropriate vessel mask to display. The result is a dynamic motion-compensated vessel mask superimposed on the fluoroscopic image. Quantitative evaluation of the proposed methods was performed using a digital 4D CT phantom (XCAT), which provides realistic human anatomy including sophisticated respiratory and cardiac motion models. Four groups of datasets were generated, where different parameters (cycle length, maximum diaphragm motion and maximum chest expansion) were modified within each image sequence. Each group contains 4 datasets consisting of the initial native and contrast-enhanced sequences as well as a sequence in which the respiratory motion is tracked. The respiratory motion tracking error was between 1.00% and 1.09%. The estimated dynamic vessel masks yielded a Sørensen-Dice coefficient between 0.94 and 0.96. Finally, the accuracy of the vessel contours was measured in terms of the 99th percentile of the error, which ranged between 0.64 and 0.96 mm. The presented results show that the approach is feasible for respiratory motion tracking and compensation and could therefore considerably improve the workflow of minimally invasive procedures in the thorax and abdomen.
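
    For reference, the Sørensen-Dice coefficient quoted above can be computed from two binary vessel masks as follows; the masks in this sketch are synthetic placeholders.

```python
# Hedged sketch: Sørensen-Dice coefficient between two binary masks.
import numpy as np

def dice(mask_a, mask_b):
    """Sørensen-Dice coefficient of two boolean masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

ref = np.zeros((64, 64), bool); ref[10:40, 20:30] = True   # reference vessel mask
est = np.zeros((64, 64), bool); est[12:42, 20:30] = True   # estimated vessel mask
print(f"Dice = {dice(est, ref):.3f}")
```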

  3. Observation and analysis of high-speed human motion with frequent occlusion in a large area

    NASA Astrophysics Data System (ADS)

    Wang, Yuru; Liu, Jiafeng; Liu, Guojun; Tang, Xianglong; Liu, Peng

    2009-12-01

    The use of computer vision technology to collect and analyze statistics during sports matches or training sessions is expected to provide valuable information for improving tactics. However, the measurements published in the literature so far are either too unreliable to be used in training planning or unsuitable for studying high-speed motion over a large area with frequent occlusions. A sports annotation system is introduced in this paper for tracking high-speed non-rigid human motion over a large playing area with the aid of a moving camera, taking short track speed skating competitions as an example. The proposed system is composed of two sub-systems: precise camera motion compensation and accurate motion acquisition. In the video registration step, a distinctive invariant point feature detector (probability density grads detector) and a global-parallax-based matching point filter are used to provide reliable and robust matching across a large range of affine distortion and illumination change. In the motion acquisition step, the human body is divided into two key regions, and a joint color model constrained by the relationship between the two regions is combined with a Markov chain Monte Carlo based joint particle filter. Several field tests are performed to assess measurement errors, including comparisons with popular algorithms. The system obtains position data on a 30 m × 60 m rink with root-mean-square error better than 0.3975 m, and velocity and acceleration data with absolute errors better than 1.2579 m/s and 0.1494 m/s², respectively.

  4. The 14th Annual Conference on Manual Control. [digital simulation of human operator dynamics

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Human operator dynamics during actual manual control or while monitoring the automatic control systems involved in air-to-air tracking, automobile driving, the operation of undersea vehicles, and remote handling are examined. Optimal control models and the use of mathematical theory in representing human behavior in complex man-machine system tasks are discussed with emphasis on eye/head tracking and scanning; perception and attention allocation; decision making; and motion simulation and effects.

  5. Real-Time External Respiratory Motion Measuring Technique Using an RGB-D Camera and Principal Component Analysis †

    PubMed Central

    Wijenayake, Udaya; Park, Soon-Yong

    2017-01-01

    Accurate tracking and modeling of internal and external respiratory motion in the thoracic and abdominal regions of the human body is a highly discussed topic in external beam radiotherapy. Errors in target/normal tissue delineation and dose calculation, and an increase in the amount of healthy tissue exposed to high radiation doses, are some of the undesired problems caused by inaccurate tracking of the respiratory motion. Many related works have been introduced for respiratory motion modeling, but a majority of them depend heavily on radiography/fluoroscopy imaging, wearable markers or surgical node implantation techniques. In this article, we propose a new respiratory motion tracking approach that exploits the advantages of an RGB-D camera. First, we create a patient-specific respiratory motion model using principal component analysis (PCA), removing the spatial and temporal noise of the input depth data. Then, this model is utilized for real-time external respiratory motion measurement with high accuracy. Additionally, we introduce a marker-based depth frame registration technique that limits the measuring area to an anatomically consistent region, which helps to handle patient movements during the treatment. We achieved a correlation of 0.97 compared with a spirometer and an average error of 0.53 mm when considering a laser line scanning result as the ground truth. As future work, we will use this accurate measurement of external respiratory motion to generate a correlated motion model that describes the movements of internal tumors. PMID:28792468
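
    A rough sketch of a PCA-style respiratory model along the lines described above (not the authors' implementation): stack depth frames of the chest region as vectors, keep the leading principal component, and project new frames onto it to obtain a denoised breathing amplitude. The depth data here are synthetic.

```python
# Hedged sketch: PCA basis from depth frames -> respiratory amplitude signal.
import numpy as np

rng = np.random.default_rng(0)
n_frames, h, w = 120, 16, 16
t = np.linspace(0, 4 * np.pi, n_frames)
breathing = 5.0 * np.sin(t)                          # mm chest displacement
frames = breathing[:, None] * np.ones((1, h * w)) + rng.normal(0, 2.0, (n_frames, h * w))

mean = frames.mean(axis=0)
U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)
k = 1                                                # keep the dominant breathing mode
model = Vt[:k]                                       # patient-specific motion basis

def measure(frame):
    """Project a new depth frame onto the PCA basis -> respiratory amplitude."""
    return float((frame - mean) @ model.T)

print(round(measure(frames[10]), 2), round(measure(frames[40]), 2))
```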

  6. Control of joint motion simulators for biomechanical research

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.

    1992-01-01

    The authors present a hierarchical adaptive algorithm for controlling upper extremity human joint motion simulators. A joint motion simulator is a computer-controlled, electromechanical system which permits the application of forces to the tendons of a human cadaver specimen in such a way that the cadaver joint under study achieves a desired motion in a physiologic manner. The proposed control scheme does not require knowledge of the cadaver specimen dynamic model, and solves online the indeterminate problem that arises because human joints typically possess more actuators than degrees of freedom. Computer simulation results are given for an elbow/forearm system and a wrist/hand system under hierarchical control. The results demonstrate that any desired normal joint motion can be accurately tracked with the proposed algorithm. These simulation results indicate that the controller resolved the redundancy of the indeterminate problem in a physiologic manner, and show that the control scheme was robust to parameter uncertainty and to sensor noise.

  7. Using model order tests to determine sensory inputs in a motion study

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1977-01-01

    In the study of motion effects on tracking performance, a problem of interest is the determination of what sensory inputs a human uses in controlling the tracking task. In the approach presented here, a simple canonical model (PID, i.e., a proportional-integral-derivative structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives, and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters that have the greatest effect on significantly reducing the loss function are identified. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
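
    A minimal sketch of the model-order idea, using assumed synthetic data: model the operator's output as a PID-style combination of the error signal, its integral, and its derivative, and compare the residual loss of nested model orders to infer which inputs were actually used. Here the simulated "operator" uses only the P and D terms, so adding the integral term should not reduce the loss much.

```python
# Hedged sketch: nested least-squares fits as a model-order / sensory-input test.
import numpy as np

rng = np.random.default_rng(2)
dt = 0.01
t = np.arange(0, 20, dt)
e = np.sin(0.5 * t) + 0.3 * rng.standard_normal(t.size)   # tracking error signal
e_int = np.cumsum(e) * dt
e_dot = np.gradient(e, dt)
u = 2.0 * e + 0.8 * e_dot + 0.1 * rng.standard_normal(t.size)   # operator output

def loss(columns):
    """Residual sum of squares of a least-squares fit u ~ columns."""
    A = np.column_stack(columns)
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    return float(np.sum((u - A @ coef) ** 2))

print("P only    :", round(loss([e]), 1))
print("P + I     :", round(loss([e, e_int]), 1))
print("P + D     :", round(loss([e, e_dot]), 1))
print("P + I + D :", round(loss([e, e_int, e_dot]), 1))
```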

  8. A marker-free system for the analysis of movement disabilities.

    PubMed

    Legrand, L; Marzani, F; Dusserre, L

    1998-01-01

    A major step toward improving the treatment of disabled persons may be achieved by using motion analysis equipment. We are developing such a system. It allows the analysis of planar human motion (e.g. gait) without marker tracking. The system is composed of one fixed camera which acquires an image sequence of a human in motion. The processing is then divided into two steps: first, a large number of pixels belonging to the boundaries of the human body are extracted at each acquisition time; second, a two-dimensional model of the human body, based on tapered superquadrics, is successively matched with the sets of pixels previously extracted, using a specific fuzzy clustering process. Moreover, an optical flow procedure gives a prediction of the model location at each acquisition time from its location at the previous time. Finally, we present some results of this process applied to a leg in motion.

  9. Inertial Motion Tracking for Inserting Humans into a Networked Synthetic Environment

    DTIC Science & Technology

    2007-08-31

    tracking methods. One method requires markers on the tracked human body, and the other method does not use markers. OPTOTRAK from Northern Digital Inc. is a...of using multicasting protocols. Unfortunately, most routers on the Internet are not configured for multicasting. A technique called tunneling is...used to overcome this problem. Tunneling is a software solution that runs on the end point routers/computers and allows multicast packets to traverse

  10. Human Centered Hardware Modeling and Collaboration

    NASA Technical Reports Server (NTRS)

    Stambolian Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate on engineering designs among NASA Centers and customers, including hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware utilizing high resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  11. Tracking scanning laser ophthalmoscope (TSLO)

    NASA Astrophysics Data System (ADS)

    Hammer, Daniel X.; Ferguson, R. Daniel; Magill, John C.; White, Michael A.; Elsner, Ann E.; Webb, Robert H.

    2003-07-01

    The effectiveness of image stabilization with a retinal tracker in a multi-function, compact scanning laser ophthalmoscope (TSLO) was demonstrated in initial human subject tests. The retinal tracking system uses a confocal reflectometer with a closed loop optical servo system to lock onto features in the fundus. The system is modular to allow configuration for many research and clinical applications, including hyperspectral imaging, multifocal electroretinography (MFERG), perimetry, quantification of macular and photo-pigmentation, imaging of neovascularization and other subretinal structures (drusen, hyper-, and hypo-pigmentation), and endogenous fluorescence imaging. Optical hardware features include dual wavelength imaging and detection, integrated monochromator, higher-order motion control, and a stimulus source. The system software consists of a real-time feedback control algorithm and a user interface. Software enhancements include automatic bias correction, asymmetric feature tracking, image averaging, automatic track re-lock, and acquisition and logging of uncompressed images and video files. Normal adult subjects were tested without mydriasis to optimize the tracking instrumentation and to characterize imaging performance. The retinal tracking system achieves a bandwidth of greater than 1 kHz, which permits tracking at rates that greatly exceed the maximum rate of motion of the human eye. The TSLO stabilized images in all test subjects during ordinary saccades up to 500 deg/sec with an inter-frame accuracy better than 0.05 deg. Feature lock was maintained for minutes despite subject eye blinking. Successful frame averaging allowed image acquisition with decreased noise in low-light applications. The retinal tracking system significantly enhances the imaging capabilities of the scanning laser ophthalmoscope.

  12. Real-time marker-free motion capture system using blob feature analysis

    NASA Astrophysics Data System (ADS)

    Park, Chang-Joon; Kim, Sung-Eun; Kim, Hong-Seok; Lee, In-Ho

    2005-02-01

    This paper presents a real-time marker-free motion capture system that can reconstruct 3-dimensional human motions. The virtual character of the proposed system mimics the motion of an actor in real time. The system captures human motions using three synchronized CCD cameras and detects the root and end-effectors of an actor, such as the head, hands, and feet, by exploiting blob feature analysis. Then, the 3-dimensional positions of the end-effectors are reconstructed and tracked using a Kalman filter. Finally, the positions of the intermediate joints are reconstructed using an anatomically constrained inverse kinematics algorithm. The proposed system was implemented under general lighting conditions, and we confirmed that it could stably reconstruct the motions of many people wearing various clothes in real time.

  13. A stochastic approach to noise modeling for barometric altimeters.

    PubMed

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2013-11-18

    The question of whether barometric altimeters can be applied to accurately track human motions is still debated, since their measurement performance is rather poor due to either coarse resolution or drift problems. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture some statistical properties of the barometric altimeter noise. The barometric altimeter noise is decomposed into three components with different physical origins and properties: a deterministic time-varying mean, mainly correlated with global environmental changes, whose effects are prominent in long-time motion tracking; a first-order Gauss-Markov (GM) random process, mainly accounting for short-term, local environmental changes, whose effects are prominent in short-time motion tracking; and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive moving-average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise stationary GM component and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving average filters, used alone or in combination with whitening filters learnt from the ARMA model parameters, are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking of small-amplitude, low-frequency motions.
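
    A hedged sketch of the noise decomposition described above: simulate a first-order Gauss-Markov process plus uncorrelated white noise as barometric height noise and apply an M-point moving-average filter. The correlation time, noise levels, and window length are illustrative assumptions.

```python
# Hedged sketch: first-order Gauss-Markov + white noise, then M-point averaging.
import numpy as np

rng = np.random.default_rng(3)
fs, T = 25.0, 120.0                        # sample rate (Hz) and duration (s), assumed
n = int(fs * T)
tau, sigma_gm, sigma_w = 30.0, 0.4, 0.25   # GM correlation time (s), std devs (m)

# First-order Gauss-Markov: x[k+1] = a*x[k] + w[k]
a = np.exp(-1.0 / (tau * fs))
gm = np.zeros(n)
for k in range(1, n):
    gm[k] = a * gm[k - 1] + sigma_gm * np.sqrt(1 - a**2) * rng.standard_normal()
noise = gm + sigma_w * rng.standard_normal(n)     # GM + uncorrelated component

# M-point moving average used for short-time height tracking.
M = 25
smoothed = np.convolve(noise, np.ones(M) / M, mode="same")
print("raw std: %.3f m, smoothed std: %.3f m" % (noise.std(), smoothed.std()))
```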

  14. Human Kinematics of Cochlear Implant Surgery: An Investigation of Insertion Micro-Motions and Speed Limitations.

    PubMed

    Kesler, Kyle; Dillon, Neal P; Fichera, Loris; Labadie, Robert F

    2017-09-01

    Objectives Document human motions associated with cochlear implant electrode insertion at different speeds and determine the lower limit of continuous insertion speed by a human. Study Design Observational. Setting Academic medical center. Subjects and Methods Cochlear implant forceps were coupled to a frame containing reflective fiducials, which enabled optical tracking of the forceps' tip position in real time. Otolaryngologists (n = 14) performed mock electrode insertions at different speeds based on recommendations from the literature: "fast" (96 mm/min), "stable" (as slow as possible without stopping), and "slow" (15 mm/min). For each insertion, the following metrics were calculated from the tracked position data: percentage of time at prescribed speed, percentage of time the surgeon stopped moving forward, and number of direction reversals (ie, going from forward to backward motion). Results Fast insertion trials resulted in better adherence to the prescribed speed (45.4% of the overall time), no motion interruptions, and no reversals, as compared with slow insertions (18.6% of time at prescribed speed, 15.7% stopped time, and an average of 18.6 reversals per trial). These differences were statistically significant for all metrics ( P < .01). The metrics for the fast and stable insertions were comparable; however, stable insertions were performed 44% slower on average. The mean stable insertion speed was 52 ± 19.3 mm/min. Conclusion Results indicate that continuous insertion of a cochlear implant electrode at 15 mm/min is not feasible for human operators. The lower limit of continuous forward insertion is 52 mm/min on average. Guidelines on manual insertion kinematics should consider this practical limit of human motion.

  15. Dual Use of Image Based Tracking Techniques: Laser Eye Surgery and Low Vision Prosthesis

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.; Barton, R. Shane

    1994-01-01

    With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.

  16. Dual use of image based tracking techniques: Laser eye surgery and low vision prosthesis

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1994-01-01

    With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.

  17. Real time markerless motion tracking using linked kinematic chains

    DOEpatents

    Luck, Jason P [Arvada, CO; Small, Daniel E [Albuquerque, NM

    2007-08-14

    A markerless method is described for tracking the motion of subjects in a three-dimensional environment using a model based on linked kinematic chains. The invention is suitable for tracking robotic, animal or human subjects in real time using a single computer with inexpensive video equipment, and does not require the use of markers or specialized clothing. A simple model of rigid linked segments is constructed for the subject and tracked using three-dimensional volumetric data collected by a multiple-camera video imaging system. A physics-based method is then used to compute forces that align the model with subsequent volumetric data sets in real time. The method is able to handle occlusion of segments, accommodates joint limits, velocity constraints, and collision constraints, and provides for error recovery. The method further provides for the elimination of singularities in Jacobian-based calculations, which have been problematic in alternative methods.

  18. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

    This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of triplet images. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the parameters of internal orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence, thus the 3-D trajectory is determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. The advantage of this tracking process is twofold: it can track natural points, without using markers; and it can track local surfaces on the human body. In the latter case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacement of all the trajectories inside its region. The tracked key points lead to a final result comparable to that of conventional motion capture systems: 3-D trajectories of key points which can afterwards be analyzed and used for animation or medical purposes.
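
    A minimal sketch of the forward ray intersection step: given the projection matrices of three calibrated cameras and one matched image point per view, the 3-D point can be recovered by linear (DLT-style) least squares. The camera matrices and the test point are synthetic.

```python
# Hedged sketch: multi-view triangulation (forward ray intersection) via SVD.
import numpy as np

def triangulate(proj_mats, points_2d):
    """proj_mats: list of 3x4 camera matrices; points_2d: matching (u, v) per view."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # homogeneous 3-D point (up to scale)
    return X[:3] / X[3]

# Three synthetic cameras translated along x, all looking at the scene.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
def camera(tx):
    Rt = np.hstack([np.eye(3), np.array([[tx], [0.0], [5.0]])])
    return K @ Rt

cams = [camera(tx) for tx in (-0.5, 0.0, 0.5)]
X_true = np.array([0.2, -0.1, 2.0, 1.0])
pts = [(P @ X_true)[:2] / (P @ X_true)[2] for P in cams]
print(np.round(triangulate(cams, pts), 3))   # expected roughly [0.2, -0.1, 2.0]
```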

  19. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

    In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to have markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but also that precise timing can be employed to determine only the frames needed for correction. This can speed up reconstruction by eliminating the unnecessary subdivision of frames.

  20. KSC-08pd1899

    NASA Image and Video Library

    2008-07-02

    CAPE CANAVERAL, Fla. – NYIT MOCAP (Motion Capture) team Project Manager Jon Squitieri attaches a retro reflective marker to a motion capture suit worn by a technician who will be assembling the Orion Crew Module mockup. The motion tracking aims to improve efficiency of assembly processes and identify potential ergonomic risks for technicians assembling the mockup. The work is being performed in United Space Alliance's Human Engineering Modeling and Performance Lab in the RLV Hangar at NASA's Kennedy Space Center. Part of NASA's Constellation Program, the Orion spacecraft will return humans to the moon and prepare for future voyages to Mars and other destinations in our solar system.

  1. Motion analysis report

    NASA Technical Reports Server (NTRS)

    Badler, N. I.

    1985-01-01

    Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing decouples the position measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.

  2. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety and security critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support, once performing without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and the automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections by up to 62% and reducing workload at the same time.

  3. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into the current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Beside the system design, a first pilot study is presented, where we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy to use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  4. Electromagnetic guided couch and multileaf collimator tracking on a TrueBeam accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Rune; Ravkilde, Thomas; Worm, Esben Schjødt

    2016-05-15

    Purpose: Couch and MLC tracking are two promising methods for real-time motion compensation during radiation therapy. So far, couch and MLC tracking experiments have mainly been performed by different research groups, and no direct comparison of couch and MLC tracking of volumetric modulated arc therapy (VMAT) plans has been published. The Varian TrueBeam 2.0 accelerator includes a prototype tracking system with selectable couch or MLC compensation. This study provides a direct comparison of the two tracking types with an otherwise identical setup. Methods: Several experiments were performed to characterize the geometric and dosimetric performance of electromagnetic guided couch and MLC tracking on a TrueBeam accelerator equipped with a Millennium MLC. The tracking system latency was determined without motion prediction as the time lag between sinusoidal target motion and the compensating motion of the couch or MLC as recorded by continuous MV portal imaging. The geometric and dosimetric tracking accuracies were measured in tracking experiments with motion phantoms that reproduced four prostate and four lung tumor trajectories. The geometric tracking error in beam's eye view was determined as the distance between an embedded gold marker and a circular MLC aperture in continuous MV images. The dosimetric tracking error was quantified as the measured 2%/2 mm gamma failure rate of a low and a high modulation VMAT plan delivered with the eight motion trajectories using a static dose distribution as reference. Results: The MLC tracking latency was approximately 146 ms for all sinusoidal period lengths, while the couch tracking latency increased from 187 to 246 ms with decreasing period length due to limitations in the couch acceleration. The mean root-mean-square geometric error was 0.80 mm (couch tracking), 0.52 mm (MLC tracking), and 2.75 mm (no tracking) parallel to the MLC leaves and 0.66 mm (couch), 1.14 mm (MLC), and 2.41 mm (no tracking) perpendicular to the leaves. The motion-induced gamma failure rate was on average 0.1% (couch tracking), 8.1% (MLC tracking), and 30.4% (no tracking) for prostate motion and 2.9% (couch), 2.4% (MLC), and 41.2% (no tracking) for lung tumor motion. The residual tracking errors were mainly caused by inadequate adaptation to fast lung tumor motion for couch tracking and to prostate motion perpendicular to the MLC leaves for MLC tracking. Conclusions: Couch and MLC tracking markedly improved the geometric and dosimetric accuracies of VMAT delivery. However, the two tracking types have different strengths and weaknesses. While couch tracking can correct perfectly for slowly moving targets such as the prostate, MLC tracking may have considerably larger dose errors for persistent target shifts perpendicular to the MLC leaves. Advantages of MLC tracking include faster dynamics with better adaptation to fast-moving targets, the avoidance of moving the patient, and the potential to track target rotations and deformations.

  5. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    PubMed

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

    In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no-compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for target motion both parallel and perpendicular to the leaf travel direction) and no-compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89% to 100% for moving average tracking, 26% to 100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7 to 1.1 mm for real-time tracking, and 3.7 to 7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4% to 30% for moving average tracking, 0% to 23% for real-time tracking, and 10% to 47% for no compensation. The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric and dosimetric accuracies of the moving average algorithm were between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.
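
    A minimal sketch of the moving-average idea (not the clinical tracking system): smooth the target position perpendicular to the leaf travel direction with a causal sliding-window average before leaf fitting, trading some geometric lag for fewer beam holds. The update rate, window length, and motion trace are illustrative assumptions.

```python
# Hedged sketch: causal moving average of the perpendicular target position.
import numpy as np

fs = 25.0                                   # position update rate (Hz), assumed
t = np.arange(0, 30, 1.0 / fs)
# Synthetic perpendicular target motion: breathing-like cycle plus baseline drift.
perp = 4.0 * np.sin(2 * np.pi * t / 4.0) + 0.05 * t      # mm

window = int(2.0 * fs)                       # 2 s moving average (assumed)
kernel = np.ones(window) / window
# Causal moving average: each output uses only the current and past samples.
smoothed = np.convolve(perp, kernel, mode="full")[:perp.size]
lag_error = np.abs(perp - smoothed)
print("max geometric error introduced by the moving average: %.2f mm" % lag_error.max())
```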

  6. The effect of concurrent hand movement on estimated time to contact in a prediction motion task.

    PubMed

    Zheng, Ran; Maraj, Brian K V

    2018-04-27

    In many activities, we need to predict the arrival of an occluded object. This action is called prediction motion or motion extrapolation. Previous researchers have found that both eye tracking and an internal clocking model are involved in the prediction motion task. Additionally, it has been reported that concurrent hand movement facilitates eye tracking of an externally generated target in a tracking task, even if the target is occluded. The present study examined the effect of concurrent hand movement on the estimated time to contact (TTC) in a prediction motion task. We found that accurate and inaccurate concurrent hand movements had opposite effects on eye tracking accuracy and estimated TTC in the prediction motion task. That is, accurate concurrent hand tracking enhanced eye tracking accuracy and tended to increase the precision of the estimated TTC, whereas inaccurate concurrent hand tracking decreased eye tracking accuracy and disrupted the estimated TTC. However, eye tracking accuracy does not determine the precision of the estimated TTC.

  7. Design of a multimodal (1H/23Na MR/CT) anthropomorphic thorax phantom.

    PubMed

    Neumann, Wiebke; Lietzmann, Florian; Schad, Lothar R; Zöllner, Frank G

    2017-06-01

    This work proposes a modular, anthropomorphic MR and CT thorax phantom that enables the comparison of experimental studies for quantitative evaluation of deformable, multimodal image registration algorithms and realistic multi-nuclear MR imaging techniques. A human thorax phantom was developed with insertable modules representing lung, liver, ribs and additional tracking spheres. The quality of the human-tissue-mimicking characteristics was evaluated for 1H and 23Na MR as well as CT imaging. The position of landmarks in the lung lobes was tracked during CT image acquisition at several positions during breathing cycles. 1H MR measurements of the liver were repeated after seven months to determine long-term stability. The modules possess HU, T1 and T2 values comparable to human tissues (lung module: -756±148 HU; artificial ribs: 218±56 HU (low CaCO3 concentration) and 339±121 HU (high CaCO3 concentration); liver module: T1 = 790±28 ms, T2 = 65±1 ms). Motion analysis showed that the landmarks in the lung lobes follow a 3D trajectory similar to human breathing motion. The tracking spheres are well detectable in both CT and MRI. The parameters of the tracking spheres can be adjusted in the following ranges to result in a distinct signal: HU values from 150 to 900, T1 relaxation time from 550 ms to 2000 ms, and T2 relaxation time from 40 ms to 200 ms. The presented anthropomorphic multimodal thorax phantom fulfills the demands of a simple, inexpensive system with interchangeable components. In future, the modular design allows for complementing the present setup with additional modules focusing on specific research targets such as perfusion studies, 23Na MR quantification experiments and an increasing level of complexity for motion studies. Copyright © 2016. Published by Elsevier GmbH.

  8. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability

    NASA Astrophysics Data System (ADS)

    Lee, JaeBeom; Eleftheriadis, Alexandros

    1997-01-01

    We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye in terms of space and time in moving images, taking object motion into consideration. Previous STMAC was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different levels of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.

  9. How Many Objects are You Worth? Quantification of the Self-Motion Load on Multiple Object Tracking

    PubMed Central

    Thomas, Laura E.; Seiffert, Adriane E.

    2011-01-01

    Perhaps walking and chewing gum is effortless, but walking and tracking moving objects is not. Multiple object tracking is impaired by walking from one location to another, suggesting that updating location of the self puts demands on object tracking processes. Here, we quantified the cost of self-motion in terms of the tracking load. Participants in a virtual environment tracked a variable number of targets (1–5) among distractors while either staying in one place or moving along a path that was similar to the objects’ motion. At the end of each trial, participants decided whether a probed dot was a target or distractor. As in our previous work, self-motion significantly impaired performance in tracking multiple targets. Quantifying tracking capacity for each individual under move versus stay conditions further revealed that self-motion during tracking produced a cost to capacity of about 0.8 (±0.2) objects. Tracking your own motion is worth about one object, suggesting that updating the location of the self is similar, but perhaps slightly easier, than updating locations of objects. PMID:21991259

  10. A general-purpose framework to simulate musculoskeletal system of human body: using a motion tracking approach.

    PubMed

    Ehsani, Hossein; Rostami, Mostafa; Gudarzi, Mohammad

    2016-02-01

    Computation of the muscle force patterns that produce specified movements of muscle-actuated dynamic models is an important and challenging problem. The problem is underdetermined, so a proper optimization is required to calculate the muscle forces. The purpose of this paper is to develop a general model for calculating all muscle activation and force patterns in an arbitrary human body movement. To this end, the forward dynamics equations of a multibody system representing the skeletal system of the human body model are derived using the Lagrange-Euler formulation. Next, muscle contraction dynamics are added to this model to obtain the forward dynamics of an arbitrary musculoskeletal system. For optimization purposes, the obtained model is used in a computed muscle control algorithm, and a closed-loop system for tracking desired motions is derived. Finally, a popular sport exercise, the biceps curl, is simulated using this algorithm, and the validity of the obtained results is evaluated via EMG signals.

  11. KSC-08pd1900

    NASA Image and Video Library

    2008-07-02

    CAPE CANAVERAL, Fla. –David Voci, NYIT MOCAP (Motion Capture) team co-director (seated at the workstation in the background) prepares to direct a motion capture session assisted by Kennedy Advanced Visualizations Environment staff led by Brad Lawrence (not pictured) and by Lora Ridgwell from United Space Alliance Human Factors (foreground, left). Ridgwell will help assemble the Orion Crew Module mockup. The motion tracking aims to improve efficiency of assembly processes and identify potential ergonomic risks for technicians assembling the mockup. The work is being performed in United Space Alliance's Human Engineering Modeling and Performance Lab in the RLV Hangar at NASA's Kennedy Space Center. Part of NASA's Constellation Program, the Orion spacecraft will return humans to the moon and prepare for future voyages to Mars and other destinations in our solar system.

  12. Robust human detection, tracking, and recognition in crowded urban areas

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2014-06-01

    In this paper, we present algorithms we recently developed to support an automated security surveillance system for very crowded urban areas. In our approach to human detection, the color features are obtained by taking differences of the R, G, B channels and converting R, G, B to HSV (hue, saturation, value) space. Morphological patch filtering and regional minimum and maximum segmentation on the extracted features are applied for target detection. The human tracking approach includes: 1) track candidate selection by color and intensity feature matching; 2) three separate parallel trackers for color, bright (above mean intensity), and dim (below mean intensity) detections, respectively; 3) adaptive track gate size selection for reducing the false tracking probability; and 4) forward position prediction based on previous moving speed and direction, which allows tracking to continue even when detections are missed from frame to frame. Human target recognition is improved with a Super-Resolution Image Enhancement (SRIE) process. This process can improve target resolution by 3-5 times and can simultaneously process many tracked targets. Our approach can project tracks from one camera to another camera with a different perspective viewing angle to obtain additional biometric features from different perspective angles, and can continue tracking the same person from the second camera via "tracking relay" even when the person has moved out of the field of view (FOV) of the first camera. Finally, the multiple cameras at different view poses have been geo-rectified to a nadir view plane and geo-registered with Google Earth (or another GIS) to obtain accurate positions (latitude, longitude, and altitude) of the tracked humans for pin-point targeting and for a large-area top view of total human motion activity. Preliminary tests of our algorithms indicate that a high probability of detection can be achieved for both moving and stationary humans. Our algorithms can simultaneously track more than 100 human targets with an average tracking period (time length) longer than that of the current state of the art.

  13. Model-based control strategies for systems with constraints of the program type

    NASA Astrophysics Data System (ADS)

    Jarzębowska, Elżbieta

    2006-08-01

    The paper presents a model-based tracking control strategy for constrained mechanical systems. The constraints we consider can be material or non-material; the latter are referred to as program constraints. The program constraint equations represent tasks put upon system motions; they can be differential equations of order higher than one or two, and they can be non-integrable. The tracking control strategy relies upon two dynamic models: a reference model, which is a dynamic model of a system with arbitrary-order differential constraints, and a dynamic control model. The reference model serves as a motion planner, which generates inputs to the dynamic control model. It is based upon the generalized program motion equations (GPME) method. The method makes it possible to combine material and program constraints and merge them both into the motion equations. Lagrange's equations with multipliers are a special case of the GPME, since they apply to systems with first-order constraints. Our tracking strategy, referred to as a model reference program motion tracking control strategy, enables tracking of any program motion predefined by the program constraints. It extends "trajectory tracking" to "program motion tracking". We also demonstrate that the strategy can be extended to hybrid program motion/force tracking.

  14. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
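
    The observer-motion stage described above can be illustrated with a toy filter. The sketch below assumes that frame-to-frame image registration yields a noisy measurement of the global image offset, which is smoothed by a constant-velocity Kalman filter; the matrices, noise levels, and synthetic drift are illustrative assumptions, not values from the paper.

```python
# Constant-velocity Kalman filter on the global image offset caused by
# observer motion, with registration results as noisy measurements.
import numpy as np

dt = 1.0 / 30.0                                   # frame interval
F = np.eye(4); F[0, 2] = F[1, 3] = dt             # state: [ox, oy, vx, vy]
H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0     # registration observes the offset only
Q = np.diag([1e-3, 1e-3, 1e-1, 1e-1])             # process noise
R = np.diag([2.0, 2.0])                           # registration measurement noise [px^2]

x = np.zeros(4)
P = np.eye(4) * 10.0

def kalman_step(x, P, z):
    """One predict/update cycle given a registration-based offset measurement z."""
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# example: noisy registration offsets from a slowly drifting, bobbing camera
rng = np.random.default_rng(0)
for k in range(90):
    true_offset = np.array([0.5 * k * dt * 30, 3.0 * np.sin(0.2 * k)])
    z = true_offset + rng.normal(0, 1.5, 2)
    x, P = kalman_step(x, P, z)
print("estimated offset:", x[:2], " estimated velocity:", x[2:])
```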

  15. Lumbar joint torque estimation based on simplified motion measurement using multiple inertial sensors.

    PubMed

    Miyajima, Saori; Tanaka, Takayuki; Imamura, Yumeko; Kusaka, Takashi

    2015-01-01

    We estimate lumbar torque based on motion measurement using only three inertial sensors. First, human motion is measured by 6-axis motion tracking devices, each combining a 3-axis accelerometer and a 3-axis gyroscope, placed on the shank, thigh, and back. Next, the lumbar joint torque during the motion is estimated by kinematic musculoskeletal simulation. The conventional method for estimating joint torque uses full-body motion data measured by an optical motion capture system; in this research, however, the joint torque is estimated using only the three link angles of the body, thigh, and shank. The utility of our method was verified by experiments in which we measured simultaneous bending of the knee and waist. As a result, we were able to estimate the lumbar joint torque from the measured motion.
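
    A gravity-only, quasi-static approximation conveys the core idea of estimating lumbar load from a single trunk link angle. The sketch below is a deliberate simplification of the paper's kinematic musculoskeletal simulation, and all anthropometric values are illustrative assumptions.

```python
# Quasi-static, gravity-only lumbar torque estimate from the trunk link angle
# measured by a back-mounted IMU. All parameter values are illustrative.
import numpy as np

def lumbar_torque_static(trunk_angle_deg, upper_body_mass=45.0,
                         trunk_length=0.50, com_ratio=0.45, g=9.81):
    """Gravity torque [N*m] about the lumbar joint for a trunk inclined
    trunk_angle_deg degrees forward from the vertical."""
    lever = com_ratio * trunk_length * np.sin(np.radians(trunk_angle_deg))
    return upper_body_mass * g * lever

for angle in (0, 15, 30, 45, 60):            # trunk flexion angles from the back sensor
    print(f"trunk flexion {angle:2d} deg -> ~{lumbar_torque_static(angle):5.1f} N*m")
```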

  16. Flower tracking in hawkmoths: behavior and energetics.

    PubMed

    Sprayberry, Jordanna D H; Daniel, Thomas L

    2007-01-01

    As hovering feeders, hawkmoths cope with flower motions by tracking those motions to maintain contact with the nectary. This study examined the tracking, feeding and energetic performance of Manduca sexta feeding from flowers moving at varied frequencies and in different directions. In general we found that tracking performance decreased as frequency increased; M. sexta tracked flowers moving at 1 Hz best. While feeding rates were highest for stationary flowers, they remained relatively constant for all tested frequencies of flower motion. Calculations of net energy gain showed that energy expenditure to track flowers is minimal compared to energy intake; therefore, patterns of net energy gain mimicked patterns of feeding rate. The direction effects of flower motion were greater than the frequency effects. While M. sexta appeared equally capable of tracking flowers moving in the horizontal and vertical motion axes, they demonstrated poor ability to track flowers moving in the looming axis. Additionally, both feeding rates and net energy gain were lower for looming axis flower motions.

  17. Representational Momentum for the Human Body: Awkwardness Matters, Experience Does Not

    ERIC Educational Resources Information Center

    Wilson, Margaret; Lancaster, Jessy; Emmorey, Karen

    2010-01-01

    Perception of the human body appears to involve predictive simulations that project forward to track unfolding body-motion events. Here we use representational momentum (RM) to investigate whether implicit knowledge of a learned arbitrary system of body movement such as sign language influences this prediction process, and how this compares to…

  18. Cable-driven elastic parallel humanoid head with face tracking for Autism Spectrum Disorder interventions.

    PubMed

    Su, Hao; Dickstein-Fischer, Laurie; Harrington, Kevin; Fu, Qiushi; Lu, Weina; Huang, Haibo; Cole, Gregory; Fischer, Gregory S

    2010-01-01

    This paper presents the development of a new prismatic actuation approach and its application in human-safe humanoid head design. To reduce actuator output impedance and mitigate unexpected external shocks, the prismatic actuation method uses cables to drive a piston with a preloaded spring. By leveraging the advantages of parallel manipulators and cable-driven mechanisms, the developed neck has a parallel-manipulator embodiment with two cable-driven limbs embedded with preloaded springs and one passive limb. The eye mechanism is adapted for a low-cost webcam with a succinct "ball-in-socket" structure. Based on human head anatomy and biomimetics, the neck has 3 degrees of freedom (DOF): pan, tilt, and one decoupled roll, while each eye has independent pan and synchronous tilt motion (3-DOF eyes). A Kalman filter based face tracking algorithm is implemented to interact with the human. This neck and eye structure is translatable to other human-safe humanoid robots. The robot's appearance reflects a non-threatening image of a penguin, which can be translated into a possible therapeutic intervention for children with Autism Spectrum Disorders.

  19. Effectiveness of an automatic tracking software in underwater motion analysis.

    PubMed

    Magalhaes, Fabrício A; Sawacha, Zimi; Di Michele, Rocco; Cortesi, Matteo; Gatta, Giorgio; Fantozzi, Silvia

    2013-01-01

    Tracking of markers placed on anatomical landmarks is a common practice in sports science to perform the kinematic analysis that interests both athletes and coaches. Although different software programs have been developed to automatically track markers and/or features, none of them was specifically designed to analyze underwater motion. Hence, this study aimed to evaluate the effectiveness of a software program developed for automatic tracking of underwater movements (DVP), based on the Kanade-Lucas-Tomasi feature tracker. Twenty-one video recordings of different aquatic exercises (n = 2940 marker positions) were manually tracked to determine the marker center coordinates. Then, the videos were automatically tracked using DVP and a commercially available software package (COM). Since tracking techniques may produce false targets, an operator was instructed to stop the automatic procedure and to correct the position of the cursor when the distance between the calculated marker coordinate and the reference one was higher than 4 pixels. The proportion of manual interventions required by the software was used as a measure of the degree of automation. Overall, manual interventions were 10.4% lower for DVP (7.4%) than for COM (17.8%). Moreover, when examining the different exercise modes separately, the percentage of manual interventions was 5.6% to 29.3% lower for DVP than for COM. Similar results were observed when analyzing the type of marker rather than the type of exercise, with 9.9% fewer manual interventions for DVP than for COM. In conclusion, based on these results, the developed automatic tracking software can be used as a valid and useful tool for underwater motion analysis. Key points: The availability of effective software for automatic tracking would represent a significant advance for the practical use of kinematic analysis in swimming and other aquatic sports. An important feature of automatic tracking software is that it requires limited human intervention and supervision, thus allowing short processing times. When tracking underwater movements, the degree of automation of the tracking procedure is influenced by the capability of the algorithm to overcome difficulties linked to the small target size, the low image quality, and the presence of background clutter. The newly developed feature-tracking algorithm has shown good automatic tracking effectiveness in underwater motion analysis, with a significantly smaller percentage of required manual interventions compared with commercial software.
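
    The Kanade-Lucas-Tomasi tracking step with a 4-pixel intervention criterion can be sketched with a generic implementation. The code below uses OpenCV's KLT tracker as a stand-in for the DVP software; the file name, marker seeds, and window parameters are assumptions, and the 4-pixel check is applied to inter-frame jumps because the manually tracked reference coordinates used in the study are not available here.

```python
# Generic KLT marker tracking with a 4-pixel flag, standing in for the DVP
# workflow. The video file name and initial marker positions are hypothetical.
import cv2
import numpy as np

cap = cv2.VideoCapture("underwater_trial.avi")       # hypothetical recording
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# initial marker centres, e.g. clicked by the operator on the first frame
p0 = np.array([[[320.0, 240.0]], [[400.0, 260.0]]], dtype=np.float32)
lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

frames, flagged = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    p1, st, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None, **lk_params)

    # flag markers that were lost or jumped more than 4 px between frames;
    # the study instead compared against manually tracked reference positions
    jump = np.linalg.norm((p1 - p0).reshape(-1, 2), axis=1)
    for good, d in zip(st.ravel(), jump):
        if not good or d > 4.0:
            flagged += 1        # here the operator would stop and re-place the cursor

    prev_gray, p0 = gray, p1
    frames += 1

print(f"{flagged} marker positions flagged for manual correction over {frames} frames")
```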

  20. A soft biomimetic tongue: model reconstruction and motion tracking

    NASA Astrophysics Data System (ADS)

    Lu, Xuanming; Xu, Weiliang; Li, Xiaoning

    2016-04-01

    A bioinspired robotic tongue which is actuated by a network of compressed air is proposed for the purpose of mimicking the movements of the human tongue. It can be applied in fields such as medical science and food engineering. The robotic tongue is made of two kinds of silicone rubber, Ecoflex 0030 and PDMS, with a shape simplified from the real human tongue. In order to characterize the robotic tongue, a series of experiments was carried out. Laser scanning was applied to reconstruct the static model of the robotic tongue under pressurization; after each scan, the robotic tongue was represented as dense points in the same 3D coordinate system and the coordinates of each point were recorded. A motion tracking system (OptiTrack) was used to track and record the whole deformation process dynamically during the loading and unloading phases. In the experiments, five types of deformation were achieved: roll-up, roll-down, elongation, groove, and twist. Using the discrete points generated by the laser scans, an accurate parameterized outline of the robotic tongue under different pressures was obtained, which helps describe the static characteristics of the robotic tongue. The precise deformation process under a given pressure was acquired through the OptiTrack system, which comprises a series of digital cameras, markers on the robotic tongue, and a set of hardware and software for data processing. By tracking and recording the deformation processes under different pressures, the dynamic characteristics of the robotic tongue could be obtained.

  1. Microsoft Kinect Sensor Evaluation

    NASA Technical Reports Server (NTRS)

    Billie, Glennoah

    2011-01-01

    My summer project evaluates the Kinect game sensor input/output and its suitability to perform as part of a human interface for a spacecraft application. The primary objective is to evaluate, understand, and communicate the Kinect system's ability to sense and track fine (human) position and motion. The project will analyze the performance characteristics and capabilities of this game system hardware and its applicability for gross and fine motion tracking. The software development kit for the Kinect was also investigated and some experimentation has begun to understand its development environment. To better understand the software development of the Kinect game sensor, research in hacking communities has brought a better understanding of the potential for a wide range of personal computer (PC) application development. The project also entails the disassembly of the Kinect game sensor. This analysis would involve disassembling a sensor, photographing it, and identifying components and describing its operation.

  2. Lightweight biometric detection system for human classification using pyroelectric infrared detectors.

    PubMed

    Burchett, John; Shankar, Mohan; Hamza, A Ben; Guenther, Bob D; Pitsianis, Nikos; Brady, David J

    2006-05-01

    We use pyroelectric detectors that are differential in nature to detect motion in humans by their heat emissions. Coded Fresnel lens arrays create boundaries that help to localize humans in space as well as to classify the nature of their motion. We design and implement a low-cost biometric tracking system by using off-the-shelf components. We demonstrate two classification methods by using data gathered from sensor clusters of dual-element pyroelectric detectors with coded Fresnel lens arrays. We propose two algorithms for person identification, a more generalized spectral clustering method and a more rigorous example that uses principal component regression to perform a blind classification.

  3. Development of an in vitro diaphragm motion reproduction system.

    PubMed

    Liao, Ai-Ho; Chuang, Ho-Chiao; Shih, Ming-Chih; Hsu, Hsiao-Yu; Tien, Der-Chi; Kuo, Chia-Chun; Jeng, Shiu-Chen; Chiou, Jeng-Fong

    2017-07-01

    This study developed an in vitro diaphragm motion reproduction system (IVDMRS) based on noninvasive, real-time ultrasound imaging to track the internal displacement of the human diaphragm and of diaphragm phantoms driven by a respiration simulation system (RSS). An ultrasound image tracking algorithm (UITA) was used to retrieve the displacement data of the tracking target and to reproduce the diaphragm motion in real time, using a red laser to irradiate the diaphragm phantom in vitro. This study also recorded the respiration patterns of 10 volunteers. Both simulated signals and the respiration patterns recorded from the 10 human volunteers were input to the RSS for experiments on reproducing diaphragm motion in vitro with the IVDMRS. The reproduction accuracy of the IVDMRS was calculated and analyzed. The results indicate that the respiration frequency substantially affects the correlation between ultrasound and kV images, as well as the reproduction accuracy of the IVDMRS, owing to the system delay time (0.35 s) of ultrasound imaging and signal transmission. The use of a phase lead compensator (PLC) reduced the error caused by this delay, thereby improving the reproduction accuracy of the IVDMRS by 14.09-46.98%. Applying the IVDMRS in clinical treatments will allow medical staff to monitor the target displacements in real time by observing the movement of the laser beam. If the target moves outside the planning target volume (PTV), the treatment can be immediately stopped to ensure that healthy tissues do not receive high doses of radiation. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  4. Receptive fields for smooth pursuit eye movements and motion perception.

    PubMed

    Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R

    2010-12-01

    Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. 3D Tracking of individual growth factor receptors on polarized cells

    NASA Astrophysics Data System (ADS)

    Werner, James; Stich, Dominik; Cleyrat, Cedric; Phipps, Mary; Wadinger-Ness, Angela; Wilson, Bridget

    We have been developing methods for following 3D motion of selected biomolecular species throughout mammalian cells. Our approach exploits a custom designed confocal microscope that uses a unique spatial filter geometry and active feedback 200 times/second to follow fast 3D motion. By exploiting new non-blinking quantum dots as fluorescence labels, individual molecular trajectories can be observed for several minutes. We also will discuss recent instrument upgrades, including the ability to perform spinning disk fluorescence microscopy on the whole mammalian cell performed simultaneously with 3D molecular tracking experiments. These instrument upgrades were used to quantify 3D heterogeneous transport of individual growth factor receptors (EGFR) on live human renal cortical epithelial cells.

  6. Which way and how far? Tracking of translation and rotation information for human path integration.

    PubMed

    Chrastil, Elizabeth R; Sherrill, Katherine R; Hasselmo, Michael E; Stern, Chantal E

    2016-10-01

    Path integration, the constant updating of the navigator's knowledge of position and orientation during movement, requires both visuospatial knowledge and memory. This study aimed to develop a systems-level understanding of human path integration by examining the basic building blocks of path integration in humans. To achieve this goal, we used functional imaging to examine the neural mechanisms that support the tracking and memory of translational and rotational components of human path integration. Critically, and in contrast to previous studies, we examined movement in translation and rotation tasks with no defined end-point or goal. Navigators accumulated translational and rotational information during virtual self-motion. Activity in hippocampus, retrosplenial cortex (RSC), and parahippocampal cortex (PHC) increased during both translation and rotation encoding, suggesting that these regions track self-motion information during path integration. These results address current questions regarding distance coding in the human brain. By implementing a modified delayed match to sample paradigm, we also examined the encoding and maintenance of path integration signals in working memory. Hippocampus, PHC, and RSC were recruited during successful encoding and maintenance of path integration information, with RSC selective for tasks that required processing heading rotation changes. These data indicate distinct working memory mechanisms for translation and rotation, which are essential for updating neural representations of current location. The results provide evidence that hippocampus, PHC, and RSC flexibly track task-relevant translation and rotation signals for path integration and could form the hub of a more distributed network supporting spatial navigation. Hum Brain Mapp 37:3636-3655, 2016. © 2016 Wiley Periodicals, Inc.

  7. Human motion behavior while interacting with an industrial robot.

    PubMed

    Bortot, Dino; Ding, Hao; Antonopolous, Alexandros; Bengler, Klaus

    2012-01-01

    Human workers and industrial robots both have specific strengths in industrial production, and they complement each other well, which has led to the development of human-robot interaction (HRI) applications. Bringing humans and robots together in the same workspace may lead to collisions, and avoiding them is a central safety requirement. It can be realized with various sensor systems, all of which decelerate the robot when the distance to the human decreases alarmingly and apply the emergency stop when the distance becomes too small. As a consequence, the efficiency of the overall system suffers because the robot has long idle times. Optimized path planning algorithms have to be developed to avoid this. The following study investigates human motion behavior in the proximity of an industrial robot. Three different kinds of encounters between the two entities, under three robot speed levels, are prompted. A motion tracking system is used to capture the motions. Results show that humans keep an average distance of about 0.5 m from the robot when an encounter occurs. The approach to the workbenches was influenced by the robot in ten of 15 cases. Furthermore, an increase in participants' walking velocity with higher robot velocities was observed.

  8. Imaging and tracking HIV viruses in human cervical mucus

    NASA Astrophysics Data System (ADS)

    Boukari, Fatima; Makrogiannis, Sokratis; Nossal, Ralph; Boukari, Hacène

    2016-09-01

    We describe a systematic approach to image, track, and quantify the movements of HIV viruses embedded in human cervical mucus. The underlying motivation for this study is that, in HIV-infected adults, women account for more than half of all new cases and most of these women acquire the infection through heterosexual contact. The endocervix is believed to be a susceptible site for HIV entry. Cervical mucus, which coats the endocervix, should play a protective role against the viruses. Thus, we developed a methodology to apply time-resolved confocal microscopy to examine the motion of HIV viruses that were added to samples of untreated cervical mucus. From the images, we identified the viruses, tracked them over time, and calculated changes of the statistical mean-squared displacement (MSD) of each virus. Approximately half of the tracked viruses appear constrained while the others show mobility with MSDs that are proportional to τ^α + ν^2τ^2 over lag time τ, depicting a combination of anomalous diffusion (0 < α < 0.4) and flow-like behavior. The MSD data also reveal plateaus attributable to possible stalling of the viruses. Although a more extensive study is warranted, these results support the assumption of mucus being a barrier against the motion of these viruses.
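
    The MSD analysis can be reproduced in outline: compute a time-averaged MSD from a tracked trajectory and fit the combined anomalous-diffusion/flow model. The sketch below adds a prefactor K to the proportionality τ^α + ν^2τ^2 quoted above and uses a synthetic Brownian-plus-drift trajectory as a stand-in for real virus tracks.

```python
# Time-averaged MSD of one 2D trajectory and a fit of K*tau**alpha + v**2*tau**2.
# The trajectory is synthetic; K and all starting values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def time_averaged_msd(xy, dt, max_lag):
    """Time-averaged MSD of a 2D trajectory xy (N x 2) for lags 1..max_lag."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((xy[l:] - xy[:-l]) ** 2, axis=1)) for l in lags])
    return lags * dt, msd

def msd_model(tau, K, alpha, v):
    return K * tau ** alpha + (v ** 2) * tau ** 2

# synthetic trajectory: Brownian motion plus a slow drift (flow-like component)
rng = np.random.default_rng(1)
n, dt_s = 500, 0.05
xy = np.cumsum(rng.normal(0.0, 0.05, (n, 2)), axis=0)
xy[:, 0] += 0.1 * dt_s * np.arange(n)                  # drift velocity 0.1 units/s

tau, msd = time_averaged_msd(xy, dt_s, max_lag=100)
(K, alpha, v), _ = curve_fit(msd_model, tau, msd, p0=(0.01, 1.0, 0.01), maxfev=10000)
print(f"fitted alpha = {alpha:.2f}, flow speed |v| = {abs(v):.3f} (trajectory units per s)")
```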

  9. Tracking 3-D body motion for docking and robot control

    NASA Technical Reports Server (NTRS)

    Donath, M.; Sorensen, B.; Yang, G. B.; Starr, R.

    1987-01-01

    An advanced method of tracking three-dimensional motion of bodies has been developed. This system has the potential to dynamically characterize machine and other structural motion, even in the presence of structural flexibility, thus facilitating closed loop structural motion control. The system's operation is based on the concept that the intersection of three planes defines a point. Three rotating planes of laser light, fixed and moving photovoltaic diode targets, and a pipe-lined architecture of analog and digital electronics are used to locate multiple targets whose number is only limited by available computer memory. Data collection rates are a function of the laser scan rotation speed and are currently selectable up to 480 Hz. The tested performance on a preliminary prototype designed for 0.1 in accuracy (for tracking human motion) at a 480 Hz data rate includes a worst case resolution of 0.8 mm (0.03 inches), a repeatability of plus or minus 0.635 mm (plus or minus 0.025 inches), and an absolute accuracy of plus or minus 2.0 mm (plus or minus 0.08 inches) within an eight cubic meter volume with all results applicable at the 95 percent level of confidence along each coordinate region. The full six degrees of freedom of a body can be computed by attaching three or more target detectors to the body of interest.

  10. Correlation between external and internal respiratory motion: a validation study.

    PubMed

    Ernst, Floris; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim

    2012-05-01

    In motion-compensated image-guided radiotherapy, accurate tracking of the target region is required. This tracking process includes building a correlation model between external surrogate motion and the motion of the target region. A novel correlation method is presented and compared with the commonly used polynomial model. The CyberKnife system (Accuray, Inc., Sunnyvale/CA) uses a polynomial correlation model to relate externally measured surrogate data (optical fibres on the patient's chest emitting red light) to infrequently acquired internal measurements (X-ray data). A new correlation algorithm based on ε-Support Vector Regression (SVR) was developed. Validation and comparison testing were done with human volunteers using live 3D ultrasound and externally measured infrared light-emitting diodes (IR LEDs). Seven data sets (5:03-6:27 min long) were recorded from six volunteers. Polynomial correlation algorithms were compared to the SVR-based algorithm, demonstrating an average increase in root mean square (RMS) accuracy of 21.3% (0.4 mm). For three signals, the increase was more than 29% and for one signal as much as 45.6% (corresponding to more than 1.5 mm RMS). Further analysis showed the improvement to be statistically significant. The new SVR-based correlation method outperforms traditional polynomial correlation methods for motion tracking. This method is suitable for clinical implementation and may improve the overall accuracy of targeted radiotherapy.
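
    The correlation-model idea, regressing sparse internal target positions onto a continuously measured external surrogate, can be sketched with a generic ε-SVR. The code below uses scikit-learn's SVR on synthetic signals; the time-lagged features, kernel, and hyper-parameters are illustrative assumptions, not the algorithm validated in the study.

```python
# Generic epsilon-SVR correlation model: sparse internal samples are regressed
# onto time-lagged external surrogate features. Data and parameters are synthetic.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
t = np.arange(0, 120, 0.1)                       # two minutes at the 10 Hz surrogate rate
external = np.sin(2 * np.pi * 0.25 * t)          # chest surrogate (arbitrary units)
internal = 8.0 * np.sin(2 * np.pi * 0.25 * t - 0.6) + rng.normal(0, 0.3, t.size)  # target [mm]

def lagged_features(sig, lags=5):
    """Stack the current sample and `lags` previous samples as features."""
    X = np.column_stack([np.roll(sig, k) for k in range(lags + 1)])
    return X[lags:], lags

X, offset = lagged_features(external)
y = internal[offset:]

# sparse "X-ray" correlation points for training, dense prediction afterwards
train_idx = np.arange(0, len(y), 50)             # one internal sample every 5 s
model = SVR(kernel="rbf", C=10.0, epsilon=0.5)
model.fit(X[train_idx], y[train_idx])

pred = model.predict(X)
rms = np.sqrt(np.mean((pred - y) ** 2))
print(f"RMS correlation-model error on this synthetic trace: {rms:.2f} mm")
```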

  11. Tracking and characterizing the head motion of unanaesthetized rats in positron emission tomography

    PubMed Central

    Kyme, Andre; Meikle, Steven; Baldock, Clive; Fulton, Roger

    2012-01-01

    Positron emission tomography (PET) is an important in vivo molecular imaging technique for translational research. Imaging unanaesthetized rats using motion-compensated PET avoids the confounding impact of anaesthetic drugs and enables animals to be imaged during normal or evoked behaviour. However, there is little published data on the nature of rat head motion to inform the design of suitable marker-based motion-tracking set-ups for brain imaging—specifically, set-ups that afford close to uninterrupted tracking. We performed a systematic study of rat head motion parameters for unanaesthetized tube-bound and freely moving rats with a view to designing suitable motion-tracking set-ups in each case. For tube-bound rats, using a single appropriately placed binocular tracker, uninterrupted tracking was possible greater than 95 per cent of the time. For freely moving rats, simulations and measurements of a live subject indicated that two opposed binocular trackers are sufficient (less than 10% interruption to tracking) for a wide variety of behaviour types. We conclude that reliable tracking of head pose can be achieved with marker-based optical-motion-tracking systems for both tube-bound and freely moving rats undergoing PET studies without sedation. PMID:22718992

  12. Temporally diffeomorphic cardiac motion estimation from three-dimensional echocardiography by minimization of intensity consistency error.

    PubMed

    Zhang, Zhijun; Ashraf, Muhammad; Sahn, David J; Song, Xubo

    2014-05-01

    Quantitative analysis of cardiac motion is important for evaluation of heart function. Three-dimensional (3D) echocardiography is among the most frequently used imaging modalities for motion estimation because it is convenient, real-time, low-cost, and nonionizing. However, motion estimation from 3D echocardiographic sequences is still a challenging problem due to low image quality and image corruption by noise and artifacts. The authors have developed a temporally diffeomorphic motion estimation approach in which the velocity field, instead of the displacement field, is optimized. The optimal velocity field optimizes a novel similarity function, which the authors call the intensity consistency error, defined over multiple consecutive frames evolved to each time point. The optimization problem is solved by using the steepest descent method. Experiments with simulated datasets, images of an ex vivo rabbit phantom, images of in vivo open-chest pig hearts, and healthy human images were used to validate the authors' method. Tests on simulated and real cardiac sequences showed that the results of the authors' method are more accurate than those of other competing temporal diffeomorphic methods. Tests with sonomicrometry showed that the tracked crystal positions are in good agreement with ground truth and that the authors' method has higher accuracy than the temporal diffeomorphic free-form deformation (TDFFD) method. Validation with an open-access human cardiac dataset showed that the authors' method has smaller feature tracking errors than both TDFFD and frame-to-frame methods. The authors proposed a diffeomorphic motion estimation method with temporal smoothness obtained by constraining the velocity field to have maximum local intensity consistency within multiple consecutive frames. The estimated motion using the authors' method has good temporal consistency and is more accurate than other temporally diffeomorphic motion estimation methods.

  13. Evaluation of the clinical efficacy of the PeTrack motion tracking system for respiratory gating in cardiac PET imaging

    NASA Astrophysics Data System (ADS)

    Manwell, Spencer; Chamberland, Marc J. P.; Klein, Ran; Xu, Tong; deKemp, Robert

    2017-03-01

    Respiratory gating is a common technique used to compensate for patient breathing motion and decrease the prevalence of image artifacts that can impact diagnoses. In this study a new data-driven respiratory gating method (PeTrack) was compared with a conventional optical tracking system. The performance of respiratory gating of the two systems was evaluated by comparing the number of respiratory triggers, patient breathing intervals and gross heart motion as measured in the respiratory-gated image reconstructions of rubidium-82 cardiac PET scans in test and control groups consisting of 15 and 8 scans, respectively. We found evidence suggesting that PeTrack is a robust patient motion tracking system that can be used to retrospectively assess patient motion in the event of failure of the conventional optical tracking system.

  14. Structure preserving clustering-object tracking via subgroup motion pattern segmentation

    NASA Astrophysics Data System (ADS)

    Fan, Zheyi; Zhu, Yixuan; Jiang, Jiao; Weng, Shuqin; Liu, Zhiwen

    2018-01-01

    Tracking clustering objects with similar appearances simultaneously in collective scenes is a challenging task in the field of collective motion analysis. Recent work on clustering-object tracking often suffers from poor tracking accuracy and terrible real-time performance due to the neglect or the misjudgment of the motion differences among objects. To address this problem, we propose a subgroup motion pattern segmentation framework based on a multilayer clustering structure and establish spatial constraints only among objects in the same subgroup, which entails having consistent motion direction and close spatial position. In addition, the subgroup segmentation results are updated dynamically because crowd motion patterns are changeable and affected by objects' destinations and scene structures. The spatial structure information combined with the appearance similarity information is used in the structure preserving object tracking framework to track objects. Extensive experiments conducted on several datasets containing multiple real-world crowd scenes validate the accuracy and the robustness of the presented algorithm for tracking objects in collective scenes.

  15. Suppression of Biodynamic Interference by Adaptive Filtering

    NASA Technical Reports Server (NTRS)

    Velger, M.; Merhav, S. J.; Grunwald, A. J.

    1984-01-01

    Preliminary experimental results obtained in moving base simulator tests are presented. Both for pursuit and compensatory tracking tasks, a strong deterioration in tracking performance due to biodynamic interference is found. The use of adaptive filtering is shown to substantially alleviate these effects, resulting in a markedly improved tracking performance and reduction in task difficulty. The effect of simulator motion and of adaptive filtering on human operator describing functions is investigated. Adaptive filtering is found to substantially increase pilot gain and cross-over frequency, implying a more tight tracking behavior. The adaptive filter is found to be effective in particular for high-gain proportional dynamics, low display forcing function power and for pursuit tracking task configurations.
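
    As an illustration of the adaptive-filtering idea, the sketch below cancels a simulated biodynamic feedthrough component from a stick signal using a least-mean-squares (LMS) adaptive FIR filter with the platform acceleration as reference. The study does not state which adaptive algorithm was used; LMS, the signal model, and all parameters here are assumptions for illustration only.

```python
# Adaptive noise cancellation of simulated biodynamic feedthrough with LMS:
# reference = platform acceleration, primary = contaminated stick signal.
import numpy as np

rng = np.random.default_rng(3)
fs, T = 100, 30
t = np.arange(0, T, 1 / fs)

command = np.sin(2 * np.pi * 0.2 * t)                       # operator's intended input
accel = rng.normal(0, 1.0, t.size)                          # simulator platform acceleration
feedthrough = 0.4 * np.convolve(accel, np.exp(-np.arange(20) / 5.0), mode="full")[:t.size]
stick = command + feedthrough                               # measured (contaminated) stick signal

L, mu = 32, 0.005                                           # FIR length and LMS step size
w = np.zeros(L)
cleaned = np.zeros_like(stick)
for n in range(L, t.size):
    x = accel[n - L + 1:n + 1][::-1]                        # most recent reference samples
    y_hat = w @ x                                           # estimated feedthrough
    e = stick[n] - y_hat                                    # error = cleaned stick output
    w += mu * e * x                                         # LMS weight update
    cleaned[n] = e

err_before = np.sqrt(np.mean((stick[L:] - command[L:]) ** 2))
err_after = np.sqrt(np.mean((cleaned[L:] - command[L:]) ** 2))
print(f"RMS deviation from the intended command: {err_before:.3f} -> {err_after:.3f}")
```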

  16. MRI-assisted PET motion correction for neurologic studies in an integrated MR-PET scanner.

    PubMed

    Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B; Michel, Christian J; El Fakhri, Georges; Schmand, Matthias; Sorensen, A Gregory

    2011-01-01

    Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MRI data can be used for motion tracking. In this work, a novel algorithm for data processing and rigid-body motion correction (MC) for the MRI-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. To account for motion, the PET prompt and random coincidences and sensitivity data for postnormalization were processed in the line-of-response (LOR) space according to the MRI-derived motion estimates. The processing time on the standard BrainPET workstation is approximately 16 s for each motion estimate. After rebinning in the sinogram space, the motion corrected data were summed, and the PET volume was reconstructed using the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed, and motion estimates were obtained using 2 high-temporal-resolution MRI-based motion-tracking techniques. After accounting for the misalignment between the 2 scanners, perfectly coregistered MRI and PET volumes were reproducibly obtained. The MRI output gates inserted into the PET list-mode allow the temporal correlation of the 2 datasets within 0.2 ms. The Hoffman phantom volume reconstructed by processing the PET data in the LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in the image space, demonstrating the quantitative accuracy of the procedure. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively. Motion-deblurred PET images, with excellent delineation of specific brain structures, were obtained using these 2 MRI-based estimates. An MRI-based MC algorithm was implemented for an integrated MR-PET scanner. High-temporal-resolution MRI-derived motion estimates (obtained while simultaneously acquiring anatomic or functional MRI data) can be used for PET MC. An MRI-based MC method has the potential to improve PET image quality, increasing its reliability, reproducibility, and quantitative accuracy, and to benefit many neurologic applications.

  17. Cross-Modal Attention Effects in the Vestibular Cortex during Attentive Tracking of Moving Objects.

    PubMed

    Frank, Sebastian M; Sun, Liwei; Forster, Lisa; Tse, Peter U; Greenlee, Mark W

    2016-12-14

    The midposterior fundus of the Sylvian fissure in the human brain is central to the cortical processing of vestibular cues. At least two vestibular areas are located at this site: the parietoinsular vestibular cortex (PIVC) and the posterior insular cortex (PIC). It is now well established that activity in sensory systems is subject to cross-modal attention effects. Attending to a stimulus in one sensory modality enhances activity in the corresponding cortical sensory system, but simultaneously suppresses activity in other sensory systems. Here, we wanted to probe whether such cross-modal attention effects also target the vestibular system. To this end, we used a visual multiple-object tracking task. By parametrically varying the number of tracked targets, we could measure the effect of attentional load on the PIVC and the PIC while holding the perceptual load constant. Participants performed the tracking task during functional magnetic resonance imaging. Results show that, compared with passive viewing of object motion, activity during object tracking was suppressed in the PIVC and enhanced in the PIC. Greater attentional load, induced by increasing the number of tracked targets, was associated with a corresponding increase in the suppression of activity in the PIVC. Activity in the anterior part of the PIC decreased with increasing load, whereas load effects were absent in the posterior PIC. Results of a control experiment show that attention-induced suppression in the PIVC is stronger than any suppression evoked by the visual stimulus per se. Overall, our results suggest that attention has a cross-modal modulatory effect on the vestibular cortex during visual object tracking. In this study we investigate cross-modal attention effects in the human vestibular cortex. We applied the visual multiple-object tracking task because it is known to evoke attentional load effects on neural activity in visual motion-processing and attention-processing areas. Here we demonstrate a load-dependent effect of attention on the activation in the vestibular cortex, despite constant visual motion stimulation. We find that activity in the parietoinsular vestibular cortex is more strongly suppressed the greater the attentional load on the visual tracking task. These findings suggest cross-modal attentional modulation in the vestibular cortex. Copyright © 2016 the authors 0270-6474/16/3612720-09$15.00/0.

  18. Virtual three-dimensional blackboard: three-dimensional finger tracking with a single camera

    NASA Astrophysics Data System (ADS)

    Wu, Andrew; Hassan-Shafique, Khurram; Shah, Mubarak; da Vitoria Lobo, N.

    2004-01-01

    We present a method for three-dimensional (3D) tracking of a human finger from a monocular sequence of images. To recover the third dimension from the two-dimensional images, we use the fact that the motion of the human arm is highly constrained owing to the dependencies between elbow and forearm and the physical constraints on joint angles. We use these anthropometric constraints to derive a 3D trajectory of a gesticulating arm. The system is fully automated and does not require human intervention. The system presented can be used as a visualization tool, as a user-input interface, or as part of some gesture-analysis system in which 3D information is important.

  19. Magnetic Resonance Imaging–Guided versus Surrogate-Based Motion Tracking in Liver Radiation Therapy: A Prospective Comparative Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paganelli, Chiara, E-mail: chiara.paganelli@polimi.it; Seregni, Matteo; Fattori, Giovanni

    Purpose: This study applied automatic feature detection on cine-magnetic resonance imaging (MRI) liver images in order to provide a prospective comparison between MRI-guided and surrogate-based tracking methods for motion-compensated liver radiation therapy. Methods and Materials: In a population of 30 subjects (5 volunteers plus 25 patients), 2 oblique sagittal slices were acquired across the liver at high temporal resolution. An algorithm based on the scale invariant feature transform (SIFT) was used to extract and track multiple features throughout the image sequence. The position of abdominal markers was also measured directly from the image series, and the internal motion of each feature was quantified through multiparametric analysis. Surrogate-based tumor tracking with a state-of-the-art external/internal correlation model was simulated. The geometrical tracking error was measured, and its correlation with external motion parameters was also investigated. Finally, the potential gain in tracking accuracy from relying on MRI guidance was quantified as a function of the maximum allowed tracking error. Results: An average of 45 features was extracted for each subject across the whole liver. The multiparametric motion analysis reported relevant inter- and intrasubject variability, highlighting the value of patient-specific and spatially distributed measurements. Surrogate-based tracking errors (relative to the motion amplitude) were in the range of 7% to 23% (1.02-3.57 mm) and were significantly influenced by external motion parameters. The gain of MRI guidance compared with surrogate-based motion tracking was larger than 30% in 50% of the subjects when considering a 1.5-mm tracking error tolerance. Conclusions: Automatic feature detection applied to cine-MRI allows a detailed description of liver motion to be obtained. This information was used to quantify the performance of surrogate-based tracking methods and to provide a prospective comparison with respect to MRI-guided radiation therapy, which could support the definition of patient-specific optimal treatment strategies.

  20. A Motion Tracking and Sensor Fusion Module for Medical Simulation.

    PubMed

    Shen, Yunhe; Wu, Fan; Tseng, Kuo-Shih; Ye, Ding; Raymond, John; Konety, Badrinath; Sweet, Robert

    2016-01-01

    Here we introduce a motion tracking or navigation module for medical simulation systems. Our main contribution is a sensor fusion method for proximity or distance sensors integrated with inertial measurement unit (IMU). Since IMU rotation tracking has been widely studied, we focus on the position or trajectory tracking of the instrument moving freely within a given boundary. In our experiments, we have found that this module reliably tracks instrument motion.
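
    One simple way to realize the described proximity/IMU fusion is a fixed-gain observer that propagates position with the accelerometer and corrects it with the absolute distance reading. The sketch below is a one-axis illustration under assumed sensor models and gains; it is not necessarily how the module itself is implemented.

```python
# One-axis fusion of a biased, noisy accelerometer with an absolute but noisy
# distance sensor using a fixed-gain (alpha-beta style) observer. All sensor
# models and gains are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
dt, n = 0.01, 2000
t = np.arange(n) * dt
true_pos = 0.05 * np.sin(2 * np.pi * 0.5 * t)                  # instrument motion [m]
true_acc = np.gradient(np.gradient(true_pos, dt), dt)

acc_meas = true_acc + 0.2 + rng.normal(0, 0.05, n)             # biased, noisy accelerometer
range_meas = true_pos + rng.normal(0, 0.004, n)                # noisy distance sensor [m]

alpha, beta = 0.2, 0.05                                        # position / velocity gains
pos, vel = 0.0, 0.0
dr_pos, dr_vel = 0.0, 0.0                                      # dead reckoning (IMU only)
fused, dead_reckoned = np.zeros(n), np.zeros(n)
for k in range(n):
    # inertial propagation
    vel += acc_meas[k] * dt
    pos += vel * dt
    dr_vel += acc_meas[k] * dt
    dr_pos += dr_vel * dt
    # correction from the absolute proximity/distance reading
    r = range_meas[k] - pos
    pos += alpha * r
    vel += beta * r / dt
    fused[k], dead_reckoned[k] = pos, dr_pos

rms = lambda e: np.sqrt(np.mean(e ** 2))
print(f"IMU-only RMS error: {rms(dead_reckoned - true_pos):.3f} m, "
      f"fused RMS error: {rms(fused - true_pos):.4f} m")
```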

  1. Automatic respiration tracking for radiotherapy using optical 3D camera

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with no patient contact and no radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) for accurate patient repositioning and respiration management. However, applications of O3D imaging techniques to image-guided radiotherapy have been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within a treatment session due to voluntary or involuntary physiologic processes (e.g., respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of markers or positioning devices, etc. Presently, no viable solution exists for in vivo tracking of target motion and anatomy changes during beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas on the 3D surface to track surface motion. The configuration of the marks or areas may change with time, which makes quantifying and interpreting the respiration patterns inconsistent. To meet the challenge of performing real-time respiration tracking with O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on a linear dimensionality-reduction technique, principal component analysis (PCA). The optical 3D image sequence is decomposed with PCA into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigenspace spanned by eigenvectors). New images can be accurately represented as weighted sums of those eigenvectors, which can easily be discriminated with a trained classifier. We developed algorithms and software, integrated them with an O3D imaging system, and performed the respiration tracking automatically. The resulting respiration tracking system requires no human intervention during its tracking operation. Experimental results show that our approach to respiration tracking is more accurate and robust than methods using manually selected markers, even in the presence of incomplete imaging data.
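
    The PCA decomposition described above can be outlined as follows: flatten each surface frame, learn the leading eigenvectors from a training sequence, and represent new frames by their weights in that eigenspace, with the first weight serving as a respiration signal. The surface data in the sketch are synthetic stand-ins for O3D measurements.

```python
# PCA-based respiration signal from a sequence of surface height maps.
# The frames below are synthetic; a real system would use O3D surface data.
import numpy as np

rng = np.random.default_rng(5)
h, w, n_frames = 32, 32, 300
yy = np.linspace(0, 1, h)[:, None] * np.ones((1, w))             # fixed spatial pattern
breathing = np.sin(2 * np.pi * 0.25 * np.arange(n_frames) / 10)  # ~0.25 Hz at 10 fps
frames = (breathing[:, None, None] * yy[None] +
          0.02 * rng.normal(size=(n_frames, h, w)))              # noisy surface height maps

X = frames.reshape(n_frames, -1)
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)          # principal components via SVD
n_modes = 3
eigvecs = Vt[:n_modes]                                           # leading motion patterns

def project(frame):
    """Weights of a new surface frame in the learned eigenspace."""
    return eigvecs @ (frame.reshape(-1) - mean)

resp_signal = np.array([project(f)[0] for f in frames])          # first-mode weight over time
corr = np.corrcoef(resp_signal, breathing)[0, 1]
print(f"correlation of the first PCA mode with the breathing pattern: {abs(corr):.3f}")
```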

  2. High-resolution motion-compensated imaging photoplethysmography for remote heart rate monitoring

    NASA Astrophysics Data System (ADS)

    Chung, Audrey; Wang, Xiao Yu; Amelard, Robert; Scharfenberger, Christian; Leong, Joanne; Kulinski, Jan; Wong, Alexander; Clausi, David A.

    2015-03-01

    We present a novel non-contact photoplethysmographic (PPG) imaging system based on high-resolution video recordings of the ambient reflectance of human bodies; it compensates for body motion and takes advantage of skin erythema fluctuations to improve measurement reliability for the purpose of remote heart rate monitoring. A single measurement location for recording the ambient reflectance is automatically identified on an individual, and the motion of the location is determined over time via measurement-location tracking. Based on the determined motion information, motion-compensated reflectance measurements at different wavelengths can be acquired for the measurement location, thus providing more reliable measurements for the same location on the body over time. The reflectance measurement is used to determine skin erythema fluctuations over time, resulting in the capture of a PPG signal with a high signal-to-noise ratio. To test the efficacy of the proposed system, a set of experiments involving human motion in a front-facing position was performed under natural ambient light. The experimental results demonstrated that using skin erythema fluctuations can achieve noticeably improved average accuracy in heart rate measurement when compared with previously proposed non-contact PPG imaging systems.

  3. Markerless rat head motion tracking using structured light for brain PET imaging of unrestrained awake small animals

    NASA Astrophysics Data System (ADS)

    Miranda, Alan; Staelens, Steven; Stroobants, Sigrid; Verhaeghe, Jeroen

    2017-03-01

    Preclinical positron emission tomography (PET) imaging in small animals is generally performed under anesthesia to immobilize the animal during scanning. More recently, for rat brain PET studies, methods to perform scans of unrestrained awake rats are being developed in order to avoid the unwanted effects of anesthesia on the brain response. Here, we investigate the use of a projected structure stereo camera to track the motion of the rat head during the PET scan. The motion information is then used to correct the PET data. The stereo camera calculates a 3D point cloud representation of the scene and the tracking is performed by point cloud matching using the iterative closest point algorithm. The main advantage of the proposed motion tracking is that no intervention, e.g. for marker attachment, is needed. A manually moved microDerenzo phantom experiment and 3 awake rat [18F]FDG experiments were performed to evaluate the proposed tracking method. The tracking accuracy was 0.33 mm rms. After motion-corrected image reconstruction, the microDerenzo phantom was recovered, albeit with some loss of resolution. The reconstructed FWHM of the 2.5 and 3 mm rods increased by 0.94 and 0.51 mm, respectively, in comparison with the motion-free case. In the rat experiments, the average tracking success rate was 64.7%. The correlation of relative brain regional [18F]FDG uptake between the anesthesia and awake scan reconstructions increased from on average 0.291 (not significant) before correction to 0.909 (p < 0.0001) after motion correction. Markerless motion tracking using structured light can be successfully used for tracking of the rat head for motion correction in awake rat PET scans.
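
    The point-cloud matching step can be illustrated with a minimal point-to-point iterative closest point (ICP) routine: nearest-neighbour correspondences followed by an SVD-based rigid-transform update. The sketch below omits the structured-light preprocessing and outlier handling of the actual system and runs on a synthetic cloud.

```python
# Minimal point-to-point ICP: nearest-neighbour correspondences plus an
# SVD (Kabsch) rigid-transform update, iterated until convergence.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # fix a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=30, tol=1e-6):
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)               # nearest-neighbour correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, err

# synthetic "head surface" point cloud and a rotated/translated copy of it
rng = np.random.default_rng(6)
cloud = rng.normal(0, 1, (800, 3))
angle = np.radians(8)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
moved = cloud @ R_true.T + np.array([0.05, -0.02, 0.01])

R_est, t_est, residual = icp(cloud, moved)
print("mean residual after ICP (same units as the cloud):", round(residual, 4))
```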

  4. Development of a real-time internal and external marker tracking system for particle therapy: a phantom study using patient tumor trajectory data.

    PubMed

    Cho, Junsang; Cheon, Wonjoong; Ahn, Sanghee; Jung, Hyunuk; Sheen, Heesoon; Park, Hee Chul; Han, Youngyih

    2017-09-01

    Target motion-induced uncertainty in particle therapy is more complicated than that in X-ray therapy, requiring more accurate motion management. Therefore, a hybrid motion-tracking system that can track internal tumor motion as well as an external surrogate of tumor motion was developed. Recently, many correlation tests between internal and external markers have been developed for X-ray therapy; however, the accuracy of such internal/external marker tracking systems, especially in particle therapy, has not yet been sufficiently tested. In this article, the process of installing an in-house hybrid internal/external motion-tracking system is described and the accuracy level of the tracking system was acquired. Our results demonstrate that the developed in-house external/internal combined tracking system has submillimeter accuracy and can be used clinically for particle therapy as well as for simulation of moving-tumor treatment. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.

  5. Management of three-dimensional intrafraction motion through real-time DMLC tracking.

    PubMed

    Sawant, Amit; Venkat, Raghu; Srivastava, Vikram; Carlson, David; Povzner, Sergey; Cattell, Herb; Keall, Paul

    2008-05-01

    Tumor tracking using a dynamic multileaf collimator (DMLC) represents a promising approach for intrafraction motion management in thoracic and abdominal cancer radiotherapy. In this work, we develop, empirically demonstrate, and characterize a novel 3D tracking algorithm for real-time, conformal, intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT)-based radiation delivery to targets moving in three dimensions. The algorithm obtains real-time information of target location from an independent position monitoring system and dynamically calculates MLC leaf positions to account for changes in target position. Initial studies were performed to evaluate the geometric accuracy of DMLC tracking of 3D target motion. In addition, dosimetric studies were performed on a clinical linac to evaluate the impact of real-time DMLC tracking for conformal, step-and-shoot (S-IMRT), dynamic (D-IMRT), and VMAT deliveries to a moving target. The efficiency of conformal and IMRT delivery in the presence of tracking was determined. Results show that submillimeter geometric accuracy in all three dimensions is achievable with DMLC tracking. Significant dosimetric improvements were observed in the presence of tracking for conformal and IMRT deliveries to moving targets. A gamma index evaluation with a 3%-3 mm criterion showed that deliveries without DMLC tracking exhibit between 1.7 (S-IMRT) and 4.8 (D-IMRT) times more dose points that fail the evaluation compared to corresponding deliveries with tracking. The efficiency of IMRT delivery, as measured in the lab, was observed to be significantly lower in case of tracking target motion perpendicular to MLC leaf travel compared to motion parallel to leaf travel. Nevertheless, these early results indicate that accurate, real-time DMLC tracking of 3D tumor motion is feasible and can potentially result in significant geometric and dosimetric advantages leading to more effective management of intrafraction motion.

  6. SU-G-BRA-17: Tracking Multiple Targets with Independent Motion in Real-Time Using a Multi-Leaf Collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ge, Y; Keall, P; Poulsen, P

    Purpose: Multiple targets with large intrafraction independent motion are often involved in advanced prostate, lung, abdominal, and head and neck cancer radiotherapy. The current standard of care treats these with the originally planned fields, jeopardizing the treatment outcomes. A real-time multi-leaf collimator (MLC) tracking method has been developed to address this problem for the first time. This study evaluates the geometric uncertainty of the multi-target tracking method. Methods: Four treatment scenarios are simulated based on a prostate IMAT plan to treat a moving prostate target and a static pelvic node target: 1) real-time multi-target MLC tracking; 2) real-time prostate-only MLC tracking; 3) correcting for prostate interfraction motion at setup only; and 4) no motion correction. The geometric uncertainty of the treatment is assessed by the sum of the erroneously underexposed target area and overexposed healthy tissue areas for each individual target. Two patient-measured prostate trajectories of average 2 and 5 mm motion magnitude are used for the simulations. Results: Real-time multi-target tracking accumulates the least uncertainty overall. As expected, it covers the static nodes similarly well as the no-motion-correction treatment and covers the moving prostate similarly well as real-time prostate-only tracking. Multi-target tracking reduces >90% of the uncertainty for the static nodal target compared to real-time prostate-only tracking or interfraction motion correction. For the prostate target, depending on the motion trajectory, which affects the uncertainty due to leaf fitting, multi-target tracking may or may not perform better than correcting for interfraction prostate motion by shifting the patient at setup, but it reduces ∼50% of the uncertainty compared to no motion correction. Conclusion: The developed real-time multi-target MLC tracking can adapt to independently moving targets better than other available treatment adaptations. This will enable PTV margin reduction to minimize healthy tissue toxicity while maintaining tumor coverage when treating advanced disease involving independently moving targets. The authors acknowledge funding support from the Australian NHMRC Australia Fellowship and NHMRC Project Grant No. APP1042375.

  7. 4D ultrasound speckle tracking of intra-fraction prostate motion: a phantom-based comparison with x-ray fiducial tracking using CyberKnife

    NASA Astrophysics Data System (ADS)

    O'Shea, Tuathan P.; Garcia, Leo J.; Rosser, Karen E.; Harris, Emma J.; Evans, Philip M.; Bamber, Jeffrey C.

    2014-04-01

    This study investigates the use of a mechanically-swept 3D ultrasound (3D-US) probe for soft-tissue displacement monitoring during prostate irradiation, with emphasis on quantifying the accuracy relative to CyberKnife® x-ray fiducial tracking. A US phantom implanted with x-ray fiducial markers was placed on a motion platform and translated in 3D using five real prostate motion traces acquired with the Calypso system. The motion traces were representative of all types of motion as classified by studying Calypso data for 22 patients. The phantom was imaged using a 3D swept linear-array probe (to mimic trans-perineal imaging) and, subsequently, the kV x-ray imaging system on CyberKnife. A 3D cross-correlation block-matching algorithm was used to track speckle in the ultrasound data. Fiducial and US data were each compared with the known phantom displacement. Trans-perineal 3D-US imaging could track superior-inferior (SI) and anterior-posterior (AP) motion to ≤0.81 mm root-mean-square error (RMSE) at a 1.7 Hz volume rate. The maximum kV x-ray tracking RMSE was 0.74 mm; however, the prostate motion was sampled at a significantly lower imaging rate (mean: 0.04 Hz). Initial elevational (right-left, RL) US displacement estimates showed reduced accuracy but could be improved (RMSE <2.0 mm) by using a correlation threshold in the ultrasound tracking code to remove erroneous inter-volume displacement estimates. Mechanically-swept 3D-US can track the major components of intra-fraction prostate motion accurately but exhibits some limitations. The largest US RMSE was for elevational (RL) motion; for the AP and SI axes, accuracy was sub-millimetre. It may be feasible to track prostate motion in 2D only, and 3D-US also has the potential to achieve high tracking accuracy for all motion types. It would be advisable to use US in conjunction with a small (~2.0 mm) centre-of-mass displacement threshold, in which case it would be possible to take full advantage of the accuracy and high imaging-rate capability.
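
    The 3D cross-correlation block matching at the core of the speckle-tracking algorithm can be sketched directly: a kernel around a point in one volume is searched for within a window in the next volume, and the offset with the highest normalised cross-correlation is taken as the local displacement. Volume sizes, kernel and search ranges below are illustrative, and real implementations add sub-voxel interpolation and quality thresholds.

```python
# 3D block matching by normalised cross-correlation on synthetic speckle volumes.
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized blocks."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def block_match(vol_a, vol_b, centre, half_kernel=4, search=3):
    """Integer displacement of the block centred at `centre` from vol_a to vol_b."""
    z, y, x = centre
    hk = half_kernel
    ref = vol_a[z-hk:z+hk+1, y-hk:y+hk+1, x-hk:x+hk+1]
    best, best_off = -2.0, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = vol_b[z+dz-hk:z+dz+hk+1, y+dy-hk:y+dy+hk+1, x+dx-hk:x+dx+hk+1]
                score = ncc(ref, cand)
                if score > best:
                    best, best_off = score, (dz, dy, dx)
    return best_off, best

# synthetic speckle volume and a copy shifted by a known integer displacement
rng = np.random.default_rng(7)
vol0 = rng.normal(size=(40, 40, 40))
vol1 = np.roll(vol0, shift=(1, -2, 2), axis=(0, 1, 2))

offset, score = block_match(vol0, vol1, centre=(20, 20, 20))
print("estimated displacement (z, y, x):", offset, " NCC:", round(score, 3))
```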

  8. CAT & MAUS: A novel system for true dynamic motion measurement of underlying bony structures with compensation for soft tissue movement.

    PubMed

    Jia, Rui; Monk, Paul; Murray, David; Noble, J Alison; Mellon, Stephen

    2017-09-06

    Optoelectronic motion capture systems are widely employed to measure the movement of human joints. However, there can be a significant discrepancy between the data obtained by a motion capture system (MCS) and the actual movement of underlying bony structures, which is attributed to soft tissue artefact. In this paper, a computer-aided tracking and motion analysis with ultrasound (CAT & MAUS) system with an augmented globally optimal registration algorithm is presented to dynamically track the underlying bony structure during movement. The augmented registration part of CAT & MAUS was validated with a high system accuracy of 80%. The Euclidean distance between the marker-based bony landmark and the bony landmark tracked by CAT & MAUS was calculated to quantify the measurement error of an MCS caused by soft tissue artefact during movement. The average Euclidean distance between the target bony landmark measured by the CAT & MAUS system and that measured by the MCS alone varied from 8.32 mm to 16.87 mm in gait. This indicates the discrepancy between the MCS-measured bony landmark and the actual underlying bony landmark. Moreover, Procrustes analysis was applied to demonstrate that CAT & MAUS reduces the deformation of the body segment shape modeled by markers during motion. The augmented CAT & MAUS system shows its potential to dynamically detect and locate actual underlying bony landmarks, which reduces the MCS measurement error caused by soft tissue artefact during movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
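
    A brief, hedged illustration of the two analyses named in the abstract (Euclidean landmark discrepancy and Procrustes shape comparison), using synthetic placeholder landmark arrays; this is not the CAT & MAUS code.

```python
# Illustrative sketch: quantifying marker-vs-bone discrepancy with Euclidean
# distances and Procrustes analysis, in the spirit of the abstract above.
# The landmark arrays below are hypothetical placeholders.
import numpy as np
from scipy.spatial import procrustes

# N frames of one 3D landmark from each source (shape: N x 3)
landmark_mcs = np.random.rand(100, 3) * 10             # marker-based estimate
landmark_us = landmark_mcs + np.random.randn(100, 3)   # ultrasound-tracked estimate

# Frame-by-frame Euclidean distance between the two landmark trajectories
dist = np.linalg.norm(landmark_mcs - landmark_us, axis=1)
print(f"mean discrepancy: {dist.mean():.2f} mm")

# Procrustes analysis: how much does one segment shape deform relative to the
# other once translation, rotation, and scale are removed?
shape_a = np.random.rand(6, 3)                      # 6 markers on one segment (frame 1)
shape_b = shape_a + 0.05 * np.random.randn(6, 3)    # same segment, later frame
_, _, disparity = procrustes(shape_a, shape_b)
print(f"Procrustes disparity: {disparity:.4f}")
```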

  9. Active eye-tracking for an adaptive optics scanning laser ophthalmoscope

    PubMed Central

    Sheehy, Christy K.; Tiruveedhula, Pavan; Sabesan, Ramkumar; Roorda, Austin

    2015-01-01

    We demonstrate a system that combines a tracking scanning laser ophthalmoscope (TSLO) and an adaptive optics scanning laser ophthalmoscope (AOSLO) system resulting in both optical (hardware) and digital (software) eye-tracking capabilities. The hybrid system employs the TSLO for active eye-tracking at a rate up to 960 Hz for real-time stabilization of the AOSLO system. AOSLO videos with active eye-tracking signals showed, at most, an amplitude of motion of 0.20 arcminutes for horizontal motion and 0.14 arcminutes for vertical motion. Subsequent real-time digital stabilization limited residual motion to an average of only 0.06 arcminutes (a 95% reduction). By correcting for high amplitude, low frequency drifts of the eye, the active TSLO eye-tracking system enabled the AOSLO system to capture high-resolution retinal images over a larger range of motion than previously possible with just the AOSLO imaging system alone. PMID:26203370

  10. Development and evaluation of a prototype tracking system using the treatment couch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Stephanie, E-mail: stephanie.lang@usz.ch; Riesterer, Oliver; Klöck, Stephan

    2014-02-15

    Purpose: Tumor motion increases safety margins around the clinical target volume and leads to an increased dose to the surrounding healthy tissue. The authors have developed and evaluated a one-dimensional treatment couch tracking system to counter-steer respiratory tumor motion. Three different motion detection sensors with different lag times were evaluated. Methods: The couch tracking system consists of a motion detection sensor, which can be the topometrical system Topos (Cyber Technologies, Germany), the respiratory gating system RPM (Varian Medical Systems), or a laser triangulation system (Micro Epsilon), and the Protura treatment couch (Civco Medical Systems). The control of the treatment couch was implemented in the block diagram environment Simulink (MathWorks). To achieve real-time performance, the Simulink models were executed on a real-time engine provided by Real-Time Windows Target (MathWorks). A proportional-integral control system was implemented. The lag time of the couch tracking system using the three different motion detection sensors was measured. The geometrical accuracy of the system was evaluated by measuring the mean absolute deviation from the reference (static position) during motion tracking. This deviation was compared to the mean absolute deviation without tracking, and a reduction factor was defined. A hexapod system moved according to seven respiration patterns previously acquired with the RPM system, as well as according to a sin⁶ function with two different frequencies (0.33 and 0.17 Hz), while the treatment table compensated for the motion. Results: A prototype system for treatment couch tracking of respiratory motion was developed. The laser-based tracking system, with a small lag time of 57 ms, reduced the residual motion by a factor of 11.9 ± 5.5 (mean value ± standard deviation). An increase in delay time from 57 to 130 ms (RPM-based system) resulted in a reduction by a factor of 4.7 ± 2.6. The Topos-based tracking system, with the largest lag time of 300 ms, achieved a mean reduction by a factor of 3.4 ± 2.3. The increase in the penumbra of a profile (1 × 1 cm²) for a motion of 6 mm was 1.4 mm. With tracking applied there was no increase in the penumbra. Conclusions: Couch tracking with the Protura treatment couch is achievable. To reliably track all possible respiration patterns without prediction filters, a short lag time below 100 ms is needed. More scientific work is necessary to extend our prototype to tracking of internal motion.
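
    As a hedged illustration of the proportional-integral couch control described above (not the authors' Simulink model), the sketch below drives a couch position to cancel a delayed displacement measurement; the gains, sampling rate, and sign convention are assumptions.

```python
# Minimal sketch (assumed parameters, not the authors' Simulink model) of a
# proportional-integral controller that steers a couch to cancel measured
# respiratory displacement, including a sensor-plus-actuator lag time.
import numpy as np

def pi_couch_tracking(target_displacement, dt=0.02, kp=5.0, ki=2.0, lag_s=0.057):
    """Return the couch compensation commanded by a PI velocity controller.

    target_displacement -- measured tumor/surrogate displacement per sample (mm)
    lag_s               -- total lag time, e.g. ~57 ms for the laser sensor
    Sign convention: the couch compensation is subtracted from the target
    motion, so residual motion = displacement - compensation.
    """
    lag_samples = int(round(lag_s / dt))
    couch = np.zeros_like(target_displacement)
    integral = 0.0
    for k in range(1, len(target_displacement)):
        # the controller only sees the displacement measured lag_samples ago
        measured = target_displacement[max(k - lag_samples, 0)]
        error = measured - couch[k - 1]          # remaining uncompensated motion
        integral += error * dt
        velocity_cmd = kp * error + ki * integral
        couch[k] = couch[k - 1] + velocity_cmd * dt
    return couch

# Example: a 0.25 Hz sinusoidal breathing trace, 6 mm peak-to-peak
t = np.arange(0, 60, 0.02)
breathing = 3.0 * np.sin(2 * np.pi * 0.25 * t)
residual = breathing - pi_couch_tracking(breathing)
print("motion reduction factor:", np.std(breathing) / np.std(residual))
```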

  11. MR-assisted PET Motion Correction for Neurological Studies in an Integrated MR-PET Scanner

    PubMed Central

    Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B.; Michel, Christian J.; El Fakhri, Georges; Schmand, Matthias; Sorensen, A. Gregory

    2011-01-01

    Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MR data can be used for motion tracking. In this work, a novel data processing and rigid-body motion correction (MC) algorithm for the MR-compatible BrainPET prototype scanner is described and proof-of-principle phantom and human studies are presented. Methods: To account for motion, the PET prompt and random coincidences as well as the sensitivity data are processed in line of response (LOR) space according to the MR-derived motion estimates. After sinogram space rebinning, the corrected data are summed and the motion corrected PET volume is reconstructed from these sinograms and the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed and motion estimates were obtained using two high temporal resolution MR-based motion tracking techniques. Results: After accounting for the physical mismatch between the two scanners, perfectly co-registered MR and PET volumes are reproducibly obtained. The MR output gates inserted into the PET list-mode data allow the temporal correlation of the two data sets within 0.2 s. The Hoffman phantom volume reconstructed by processing the PET data in LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in image space, demonstrating the quantitative accuracy of the novel MC algorithm. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively. Substantially improved PET images with excellent delineation of specific brain structures were obtained after applying the MC using these MR-based estimates. Conclusion: A novel MR-based MC algorithm was developed for the integrated MR-PET scanner. High temporal resolution MR-derived motion estimates (obtained while simultaneously acquiring anatomical or functional MR data) can be used for PET MC. An MR-based MC has the potential to improve PET as a quantitative method, increasing its reliability and reproducibility, which could benefit a large number of neurological applications. PMID:21189415
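
    The following sketch illustrates, under stated assumptions, the kind of LOR-space rigid-body correction the abstract describes: an MR-derived rotation and translation are inverted and applied to both endpoints of each line of response. It is a generic illustration, not the BrainPET processing chain; array shapes and numeric values are hypothetical.

```python
# Minimal sketch (not the BrainPET implementation): applying an MR-derived
# rigid-body motion estimate to LOR endpoints so that events are mapped back
# to the reference head position before rebinning and reconstruction.
import numpy as np

def rigid_transform(points, rotation, translation):
    """Apply x' = R x + t to an (N, 3) array of points."""
    return points @ rotation.T + translation

def correct_lors(p1, p2, rotation, translation):
    """Move both endpoints of each LOR into the reference position.

    p1, p2      -- (N, 3) arrays of LOR endpoint coordinates (mm)
    rotation    -- (3, 3) rotation matrix of the MR-derived motion estimate
    translation -- (3,) translation vector (mm)
    The inverse transform maps the moved head back to its reference pose.
    """
    r_inv = rotation.T
    t_inv = -r_inv @ translation
    return rigid_transform(p1, r_inv, t_inv), rigid_transform(p2, r_inv, t_inv)

# Example: a 5-degree head rotation about z with a 3 mm translation in x
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([3.0, 0.0, 0.0])
p1 = np.random.uniform(-120, 120, size=(1000, 3))   # hypothetical endpoints
p2 = np.random.uniform(-120, 120, size=(1000, 3))
p1_ref, p2_ref = correct_lors(p1, p2, R, t)
```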

  12. Development of a real-time internal and external marker tracking system for particle therapy: a phantom study using patient tumor trajectory data

    PubMed Central

    Cho, Junsang; Cheon, Wonjoong; Ahn, Sanghee; Jung, Hyunuk; Sheen, Heesoon; Park, Hee Chul

    2017-01-01

    Target motion-induced uncertainty in particle therapy is more complicated than that in X-ray therapy, requiring more accurate motion management. Therefore, a hybrid motion-tracking system that can track internal tumor motion as well as an external surrogate of tumor motion was developed. Recently, many correlation tests between internal and external markers in X-ray therapy have been developed; however, the accuracy of such internal/external marker tracking systems, especially in particle therapy, has not yet been sufficiently tested. In this article, the process of installing an in-house hybrid internal/external motion-tracking system is described and the accuracy of the tracking system is assessed. Our results demonstrate that the developed in-house external/internal combined tracking system has submillimeter accuracy and can be used clinically in particle therapy as well as in simulation for moving-tumor treatment. PMID:28201522

  13. A Freehand Ultrasound Elastography System with Tracking for In-vivo Applications

    PubMed Central

    Foroughi, Pezhman; Kang, Hyun-Jae; Carnegie, Daniel A.; van Vledder, Mark G.; Choti, Michael A.; Hager, Gregory D.; Boctor, Emad M.

    2012-01-01

    Ultrasound transducers are commonly tracked in modern ultrasound navigation/guidance systems. In this paper, we demonstrate the advantages of incorporating tracking information into ultrasound elastography for clinical applications. First, we address a common limitation of freehand palpation: speckle decorrelation due to out-of-plane probe motion. We show that automatically selecting pairs of radio frequency (RF) frames with minimal lateral and out-of-plane motion, combined with a fast and robust displacement estimation technique, greatly improves in-vivo elastography results. We also use tracking information and an image quality measure to fuse multiple images with similar strain that are taken roughly from the same location to obtain a high-quality elastography image. Finally, we show that tracking information can be used to give the user partial control over the rate of compression. Our methods are tested on a tissue-mimicking phantom, and experiments have been conducted on intra-operative data acquired during animal and human experiments involving liver ablation. Our results suggest that in challenging clinical conditions, our proposed method produces reliable strain images and eliminates the need for a manual search through the ultrasound data to find RF pairs suitable for elastography. PMID:23257351

  14. Affine Transform to Reform Pixel Coordinates of EOG Signals for Controlling Robot Manipulators Using Gaze Motions

    PubMed Central

    Rusydi, Muhammad Ilhamdi; Sasaki, Minoru; Ito, Satoshi

    2014-01-01

    Biosignals will play an important role in building communication between machines and humans. One type of biosignal that is widely used in neuroscience is the electrooculography (EOG) signal. An EOG has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift of actual pixels to target pixels. This method consists of a sequence of five geometric operations: translation-1, rotation, translation-2, shear, and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical direction. This system successfully tracked gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at targets. This result shows that the method is reliable for building communication between humans and machines using EOGs. PMID:24919013
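
    A minimal sketch of the five-step affine correction named in the abstract, composed from homogeneous 2D matrices; the specific numeric parameters and helper names are hypothetical, not those fitted in the cited study.

```python
# Illustrative sketch of the five-step affine correction named in the abstract
# (translation, rotation, translation, shear, dilatation), composed as 3x3
# homogeneous matrices; the numeric parameters here are hypothetical.
import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def shear(kx, ky):
    return np.array([[1, kx, 0], [ky, 1, 0], [0, 0, 1]], dtype=float)

def dilatation(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

# Compose translation-1 -> rotation -> translation-2 -> shear -> dilatation.
# Matrix products apply right-to-left, so the first step goes rightmost.
A = dilatation(1.05, 0.98) @ shear(0.02, 0.0) @ translation(12, -5) \
    @ rotation(np.deg2rad(1.5)) @ translation(-320, -240)

def to_target_pixels(actual_xy):
    """Map EOG-derived 'actual' pixel coordinates to corrected target pixels."""
    x, y = actual_xy
    u = A @ np.array([x, y, 1.0])
    return u[0], u[1]

print(to_target_pixels((400.0, 300.0)))
```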

  15. Robotics-based synthesis of human motion.

    PubMed

    Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S

    2009-01-01

    The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.

  16. Development of a Sunspot Tracking System

    NASA Technical Reports Server (NTRS)

    Taylor, Jaime R.

    1998-01-01

    Large solar flares produce a significant amount of energetic particles which pose a hazard for human activity in space. In the hope of understanding flare mechanisms and thus better predicting solar flares, NASA's Marshall Space Flight Center (MSFC) developed an experimental vector magnetograph (EXVM) polarimeter to measure the Sun's magnetic field. The EXVM will be used to perform ground-based solar observations and will provide a proof of concept for the design of a similar instrument for the Japanese Solar-B space mission. The EXVM typically operates for a period of several minutes. During this time there is image motion due to atmospheric fluctuation and telescope wind loading. To optimize EXVM performance, an image motion compensation device (sunspot tracker) is needed. The sunspot tracker consists of two parts, an image motion determination system and an image deflection system. For image motion determination, a CCD or CID camera is used to digitize an image, then an algorithm is applied to determine the motion. This motion or error signal is sent to the image deflection system, which moves the image back to its original location. Both of these systems are under development. Two algorithms are available for sunspot tracking which require the use of only one row and one column of image data. To implement these algorithms, two identical independent systems are being developed, one system for each axis of motion. Two CID cameras have been purchased; the data from each camera will be used to determine image motion for each direction. The error signal generated by the tracking algorithm will be sent to an image deflection system consisting of an actuator and a mirror constrained to move about one axis. Magnetostrictive actuators were chosen over piezoelectric actuators to move the mirror because of their larger driving force and larger range of motion. The actuator and mirror mounts are currently under development.
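
    The single-row/single-column tracking idea can be illustrated with a one-dimensional cross-correlation per axis, as in the hedged sketch below (integer shifts, synthetic frames, illustrative names); this is not the MSFC algorithm itself.

```python
# Hedged sketch of the row/column tracking idea: estimate image shift along one
# axis by cross-correlating a single row (or column) of the current frame
# against the same row of a reference frame.
import numpy as np

def axis_shift(reference_line, current_line, max_shift=20):
    """Return the roll (in pixels) that best re-aligns current_line to reference_line."""
    ref = reference_line - reference_line.mean()
    cur = current_line - current_line.mean()
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = float(np.dot(ref, np.roll(cur, s)))
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

# One row tracks horizontal motion, one column tracks vertical motion,
# each feeding the error signal of an independent deflection-mirror axis.
rng = np.random.default_rng(0)
frame0 = rng.random((256, 256))
frame_h = np.roll(frame0, -5, axis=1)   # simulated 5-pixel horizontal image motion
frame_v = np.roll(frame0, 3, axis=0)    # simulated 3-pixel vertical image motion
dx = axis_shift(frame0[128, :], frame_h[128, :])  # roll needed to re-align the row
dy = axis_shift(frame0[:, 128], frame_v[:, 128])  # roll needed to re-align the column
print(dx, dy)   # sign follows the "roll needed to re-align" convention
```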

  17. A complete system for head tracking using motion-based particle filter and randomly perturbed active contour

    NASA Astrophysics Data System (ADS)

    Bouaynaya, N.; Schonfeld, Dan

    2005-03-01

    Many real-world applications in computer vision and multimedia, such as augmented reality and environmental imaging, require an elastic, accurate contour around a tracked object. In the first part of the paper we introduce a novel tracking algorithm that combines a motion estimation technique with the Bayesian Importance Sampling framework. We use Adaptive Block Matching (ABM) as the motion estimation technique. We construct the proposal density from the estimated motion vector. The resulting algorithm requires a small number of particles for efficient tracking. The tracking is adaptive to different categories of motion even with poor a priori knowledge of the system dynamics. In particular, off-line learning is not needed. A parametric representation of the object is used for tracking purposes. In the second part of the paper, we refine the tracking output from a parametric sample to an elastic contour around the object. We use a 1D active contour model based on a dynamic programming scheme to refine the output of the tracker. To improve the convergence of the active contour, we perform the optimization over a set of randomly perturbed initial conditions. Our experiments are applied to head tracking. We report promising tracking results in complex environments.
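
    A compact, hedged sketch of one particle-filter step whose proposal density is centred on a block-matching motion estimate, in the spirit of the algorithm described above; the likelihood, noise levels, and particle count are illustrative assumptions.

```python
# Hedged sketch: a particle filter whose proposal is shifted by an externally
# supplied motion estimate (e.g. from block matching), echoing the idea above.
import numpy as np

def particle_filter_step(particles, weights, motion_estimate, likelihood_fn,
                         proposal_sigma=3.0):
    """One predict/update/resample cycle for 2D position particles.

    particles       -- (N, 2) array of particle positions
    weights         -- (N,) normalized weights
    motion_estimate -- (dx, dy) from block matching, used to build the proposal
    likelihood_fn   -- callable mapping an (N, 2) array to unnormalized likelihoods
    """
    n = len(particles)
    # Proposal: shift every particle by the estimated motion plus small noise.
    proposed = particles + np.asarray(motion_estimate) \
        + np.random.normal(0.0, proposal_sigma, size=(n, 2))
    # Update weights with the observation likelihood (e.g. appearance match score).
    weights = weights * likelihood_fn(proposed)
    weights = weights / weights.sum()
    # Systematic resampling keeps the required particle count small.
    positions = (np.arange(n) + np.random.uniform()) / n
    indices = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return proposed[indices], np.full(n, 1.0 / n)

# Toy usage: likelihood peaked at a hypothetical "true" head position (60, 45)
true_pos = np.array([60.0, 45.0])
like = lambda p: np.exp(-np.sum((p - true_pos) ** 2, axis=1) / (2 * 10.0 ** 2))
particles = np.random.uniform(0, 100, size=(200, 2))
weights = np.full(200, 1.0 / 200)
particles, weights = particle_filter_step(particles, weights, (2.0, 1.0), like)
print(particles.mean(axis=0))
```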

  18. Gold Standard Testing of Motion Based Tracking Systems

    DTIC Science & Technology

    2017-03-15

    Report documentation page only (no abstract available in this record). Performing organization: Air Force Research Laboratory, 711th Human Performance Wing, Airman Systems Directorate, Wright-Patterson Air Force Base, OH 45433, Air Force Materiel Command, United States Air Force.

  19. Inertial Sensor-Based Motion Analysis of Lower Limbs for Rehabilitation Treatments

    PubMed Central

    Sun, Tongyang; Duan, Lihong; Wang, Yulong

    2017-01-01

    Diagnosis of the hemiplegic rehabilitation state by therapists can be biased by their subjective experience, which may deteriorate the rehabilitation effect. In order to improve this situation, a quantitative evaluation is proposed. Though many motion analysis systems are available, they are too complicated for practical application by therapists. In this paper, a method for detecting the motion of human lower limbs, including all degrees of freedom (DOFs), via inertial sensors is proposed, which permits analyzing the patient's motion ability. This method is applicable to arbitrary walking directions and tracks of the persons under study, and its results are unbiased compared with therapists' qualitative estimations. Using a simplified mathematical model of the human body, the rotation angles for each lower limb joint are calculated from the input signals acquired by the inertial sensors. Finally, the rotation angle versus joint displacement curves are constructed, and estimated values of joint motion angle and motion ability are obtained. Experimental verification of the proposed motion detection and analysis method was performed, which proved that it can efficiently detect the differences between the motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state. PMID:29065575
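
    As a hedged illustration of computing joint rotation angles from inertial orientation estimates (not the paper's exact model), the sketch below derives the relative rotation between two adjacent segment sensors and expresses it as Euler angles; the quaternion values are hypothetical.

```python
# Hedged sketch: joint rotation angle from the orientations of the two adjacent
# body segments, each estimated by an inertial sensor in a common frame.
import numpy as np
from scipy.spatial.transform import Rotation as R

def joint_angles(q_proximal, q_distal, sequence="zxy"):
    """Relative segment-to-segment rotation expressed as Euler angles (degrees).

    q_proximal, q_distal -- orientation quaternions (x, y, z, w) of, for example,
                            the thigh and shank sensors, expressed in one frame
    """
    r_prox = R.from_quat(q_proximal)
    r_dist = R.from_quat(q_distal)
    r_joint = r_prox.inv() * r_dist          # distal expressed in the proximal frame
    return r_joint.as_euler(sequence, degrees=True)

# Example: shank rotated ~30 degrees about the x-axis relative to the thigh
q_thigh = R.from_euler("zxy", [0, 0, 0], degrees=True).as_quat()
q_shank = R.from_euler("zxy", [0, 30, 0], degrees=True).as_quat()
print(joint_angles(q_thigh, q_shank))   # roughly [0, 30, 0]
```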

  20. Self-Motion Impairs Multiple-Object Tracking

    ERIC Educational Resources Information Center

    Thomas, Laura E.; Seiffert, Adriane E.

    2010-01-01

    Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement…

  1. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion

    PubMed Central

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-01-01

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145
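
    The adaptive balancing of jitter and latency can be illustrated, very loosely, with a complementary-style blend whose weight depends on the detected motion magnitude. The sketch below is a stand-in under stated assumptions, not the paper's filter framework; the scale factor, gains, and simulated data are all hypothetical.

```python
# Minimal illustrative sketch (not the paper's filter): blending fast inertial
# pose increments with slower, drift-free visual poses, and adapting the blend
# weight to the motion magnitude to trade jitter against latency.
import numpy as np

def adaptive_fuse(pose, imu_delta, visual_pose, motion_magnitude,
                  alpha_still=0.9, alpha_moving=0.3):
    """One fusion step for a 3-vector position (orientation handled analogously).

    pose             -- current fused position estimate
    imu_delta        -- position increment integrated from the IMU since last step
    visual_pose      -- latest (possibly delayed) visual tracking position
    motion_magnitude -- e.g. norm of recent gyro/accelerometer readings
    """
    # Larger alpha trusts the visual pose more (suppresses IMU drift and jitter
    # when nearly static); smaller alpha favours the low-latency IMU prediction
    # during fast motion, when visual tracking may lag or blur.
    moving = np.clip(motion_magnitude / 2.0, 0.0, 1.0)   # 2.0: hypothetical scale
    alpha = (1.0 - moving) * alpha_still + moving * alpha_moving
    predicted = pose + imu_delta
    return (1.0 - alpha) * predicted + alpha * visual_pose

pose = np.zeros(3)
for k in range(100):
    imu_delta = np.array([0.01, 0.0, 0.0]) + np.random.normal(0, 0.002, 3)
    visual_pose = np.array([0.01 * (k + 1), 0.0, 0.0]) + np.random.normal(0, 0.005, 3)
    pose = adaptive_fuse(pose, imu_delta, visual_pose, motion_magnitude=0.5)
print(pose)
```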

  2. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    PubMed

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time.

  3. Orbit-attitude coupled motion around small bodies: Sun-synchronous orbits with Sun-tracking attitude motion

    NASA Astrophysics Data System (ADS)

    Kikuchi, Shota; Howell, Kathleen C.; Tsuda, Yuichi; Kawaguchi, Jun'ichiro

    2017-11-01

    The motion of a spacecraft in proximity to a small body is significantly perturbed due to its irregular gravity field and solar radiation pressure. In such a strongly perturbed environment, the coupling effect of the orbital and attitude motions exerts a large influence that cannot be neglected. However, natural orbit-attitude coupled dynamics around small bodies that are stationary in both orbital and attitude motions have yet to be observed. The present study therefore investigates natural coupled motion that involves both a Sun-synchronous orbit and Sun-tracking attitude motion. This orbit-attitude coupled motion enables a spacecraft to maintain its orbital geometry and attitude state with respect to the Sun without requiring active control. Therefore, the proposed method can reduce the use of an orbit and attitude control system. This paper first presents analytical conditions to achieve Sun-synchronous orbits and Sun-tracking attitude motion. These analytical solutions are then numerically propagated based on non-linear coupled orbit-attitude equations of motion. Consequently, the possibility of implementing Sun-synchronous orbits with Sun-tracking attitude motion is demonstrated.

  4. Evaluation of a video-based head motion tracking system for dedicated brain PET

    NASA Astrophysics Data System (ADS)

    Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.

    2015-03-01

    Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs are used for capturing video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six degree of freedom head pose as a function of time. The presented work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99 ± 0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to perform tracking with close to millimeter accuracy and can help to preserve the resolution of brain PET images in the presence of movement.
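
    Recovering a six degree-of-freedom pose from tracked 3D facial points is commonly done with a Kabsch/Procrustes fit. The sketch below shows that generic approach on synthetic data; it should not be read as the evaluated system's algorithm.

```python
# Hedged sketch: six degree-of-freedom head pose from 3D point correspondences
# (e.g. triangulated facial points), via the Kabsch algorithm.
import numpy as np

def rigid_pose(points_ref, points_cur):
    """Least-squares R and t such that points_cur ~= points_ref @ R.T + t."""
    c_ref = points_ref.mean(axis=0)
    c_cur = points_cur.mean(axis=0)
    H = (points_ref - c_ref).T @ (points_cur - c_cur)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_cur - R @ c_ref
    return R, t

# Example with hypothetical facial landmarks and a known 10-degree head rotation
ref = np.random.rand(8, 3) * 100.0
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
cur = ref @ R_true.T + np.array([5.0, -2.0, 1.0])
R_est, t_est = rigid_pose(ref, cur)
print(np.allclose(R_est, R_true), t_est)
```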

  5. Multileaf collimator tracking integrated with a novel x-ray imaging system and external surrogate monitoring

    NASA Astrophysics Data System (ADS)

    Krauss, Andreas; Fast, Martin F.; Nill, Simeon; Oelfke, Uwe

    2012-04-01

    We have previously developed a tumour tracking system, which adapts the aperture of a Siemens 160 MLC to electromagnetically monitored target motion. In this study, we exploit the use of a novel linac-mounted kilovoltage x-ray imaging system for MLC tracking. The unique in-line geometry of the imaging system allows the detection of target motion perpendicular to the treatment beam (i.e., the directions usually featuring steep dose gradients). We utilized the imaging system either alone or in combination with an external surrogate monitoring system. We equipped a Siemens ARTISTE linac with two flat panel detectors, one directly underneath the linac head for motion monitoring and the other underneath the patient couch for geometric tracking accuracy assessments. A programmable phantom with an embedded metal marker reproduced three patient breathing traces. For MLC tracking based on x-ray imaging alone, marker position was detected at a frame rate of 7.1 Hz. For the combined external and internal motion monitoring system, a total of only 85 x-ray images were acquired prior to or in between the delivery of ten segments of an IMRT beam. External motion was monitored with a potentiometer. A correlation model between external and internal motion was established. The real-time component of the MLC tracking procedure then relied solely on the correlation model estimations of internal motion based on the external signal. Geometric tracking accuracies were 0.6 mm (1.1 mm) and 1.8 mm (1.6 mm) in directions perpendicular and parallel to the leaf travel direction for the x-ray-only (the combined external and internal) motion monitoring system, in spite of a total system latency of ˜0.62 s (˜0.51 s). Dosimetric accuracy for a highly modulated IMRT beam, assessed through radiographic film dosimetry, improved substantially when tracking was applied, but depended strongly on the respective geometric tracking accuracy. In conclusion, we have for the first time integrated MLC tracking with x-ray imaging in the in-line geometry and demonstrated highly accurate respiratory motion tracking.

  6. Ego-Motion and Tracking for Continuous Object Learning: A Brief Survey

    DTIC Science & Technology

    2017-09-01

    ARL-TR-8167, September 2017, US Army Research Laboratory: Ego-Motion and Tracking for Continuous Object Learning: A Brief Survey, by Jason Owens and Philip Osteen (no abstract available in this record).

  7. Unsupervised markerless 3-DOF motion tracking in real time using a single low-budget camera.

    PubMed

    Quesada, Luis; León, Alejandro J

    2012-10-01

    Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on motion tracking. In this paper, we present a novel three-degrees-of-freedom motion tracking system that needs no knowledge of the target object and requires only a single low-budget camera of the kind found in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion blurred, provided that it is opaque, evenly colored, contrasts sufficiently with the background in each frame, and does not rotate. Our system is also able to determine the most relevant object to track on the screen. Our proposal imposes no additional constraints and therefore allows a market-wide implementation of applications that require estimation of the three position degrees of freedom of an object.

  8. Motion-based prediction explains the role of tracking in motion extrapolation.

    PubMed

    Khoei, Mina A; Masson, Guillaume S; Perrinet, Laurent U

    2013-11-01

    During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by blinks of the eye. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion. For instance, it maintains the movement of the eye so as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation to deal with fragmented inputs. In this paper, we have modeled how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that, using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but that it quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acted at the global level as a gain control mechanism and that we could switch from a smooth regime to a binary tracking behavior in which the dot is either tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that the tracking behavior deteriorates for sensory noise levels higher than a certain value, where motion coherency and predictability no longer hold. In particular, we found that motion-based prediction leads to the emergence of a tracking behavior only when enough information from the trajectory has been accumulated. Then, during tracking, trajectory estimation is robust to blanks even in the presence of relatively high levels of noise. Moreover, we found that tracking is necessary for motion extrapolation; this calls for further experimental work exploring the role of noise in motion extrapolation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Scanning mid-IR laser apparatus with eye tracking for refractive surgery

    NASA Astrophysics Data System (ADS)

    Telfair, William B.; Yoder, Paul R., Jr.; Bekker, Carsten; Hoffman, Hanna J.; Jensen, Eric F.

    1999-06-01

    A robust, real-time, dynamic eye tracker has been integrated with the short pulse mid-infrared laser scanning delivery system previously described. This system employs a Q-switched Nd:YAG laser pumped optical parametric oscillator operating at 2.94 micrometers. Previous ablation studies on human cadaver eyes and in-vivo cat eyes demonstrated very smooth ablations with extremely low damage levels similar to results with an excimer. A 4-month healing study with cats indicated no adverse healing effects. In order to treat human eyes, the tracker is required because the eyes move during the procedure due to both voluntary and involuntary motions such as breathing, heartbeat, drift, loss of fixation, saccades and microsaccades. Eye tracking techniques from the literature were compared. A limbus tracking system was best for this application. Temporal and spectral filtering techniques were implemented to reduce tracking errors, reject stray light, and increase the signal-to-noise ratio. The expanded-capability system (IRVision AccuScan 2000 Laser System) has been tested in the lab on simulated eye targets, glass eyes, cadaver eyes, and live human subjects. Circular targets ranging from 10-mm to 14-mm diameter were successfully tracked. The tracker performed beyond expectations while the system performed myopic photorefractive keratectomy procedures on several legally blind human subjects.

  10. DMLC tracking and gating can improve dose coverage for prostate VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colvill, E.; Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, NSW 2065; School of Physics, University of Sydney, NSW 2006

    2014-09-15

    Purpose: To assess and compare the dosimetric impact of dynamic multileaf collimator (DMLC) tracking and gating as motion correction strategies to account for intrafraction motion during conventionally fractionated prostate radiotherapy. Methods: A dose reconstruction method was used to retrospectively assess the dose distributions delivered without motion correction during volumetric modulated arc therapy fractions for 20 fractions of five prostate cancer patients who received conventionally fractionated radiotherapy. These delivered dose distributions were compared with the dose distributions which would have been delivered had DMLC tracking or gating motion correction strategies been implemented. The delivered dose distributions were constructed by incorporating the observed prostate motion with the patient's original treatment plan to simulate the treatment delivery. The DMLC tracking dose distributions were constructed using the same dose reconstruction method with the addition of MLC positions from Linac log files obtained during DMLC tracking simulations with the observed prostate motions input to the DMLC tracking software. The gating dose distributions were constructed by altering the prostate motion to simulate the application of a gating threshold of 3 mm for 5 s. Results: The delivered dose distributions showed that the dosimetric effects of intrafraction prostate motion could be substantial for some fractions, with an estimated dose decrease of more than 19% and 34% from the planned CTV D99% and PTV D95% values, respectively, for one fraction. Evaluation of dose distributions for DMLC tracking and gating deliveries showed that both interventions were effective in improving the CTV D99% to within 4% of the planned value for all of the selected fractions. For the delivered dose distributions, the difference in rectum V65% for individual fractions from planned ranged from −44% to 101%, and for the bladder V65% the range was −61% to 26% from planned. The application of tracking decreased the maximum rectum and bladder V65% differences to 6% and 4%, respectively. Conclusions: For the first time, the dosimetric impact of DMLC tracking and gating to account for intrafraction motion during prostate radiotherapy has been assessed and compared with no motion correction. Without motion correction, intrafraction prostate motion can result in a significant decrease in target dose coverage for a small number of individual fractions. This is unlikely to affect the overall treatment for most patients undergoing conventionally fractionated treatments. Both DMLC tracking and gating demonstrate dose distributions for all assessed fractions that are robust to intrafraction motion.

  11. Integrated bronchoscopic video tracking and 3D CT registration for virtual bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Helferty, James P.; Padfield, Dirk R.

    2003-05-01

    Lung cancer assessment involves an initial evaluation of 3D CT image data followed by interventional bronchoscopy. The physician, with only a mental image inferred from the 3D CT data, must guide the bronchoscope through the bronchial tree to sites of interest. Unfortunately, this procedure depends heavily on the physician's ability to mentally reconstruct the 3D position of the bronchoscope within the airways. In order to assist physicians in performing biopsies of interest, we have developed a method that integrates live bronchoscopic video tracking and 3D CT registration. The proposed method is integrated into a system we have been devising for virtual-bronchoscopic analysis and guidance for lung-cancer assessment. Previously, the system relied on a method that only used registration of the live bronchoscopic video to corresponding virtual endoluminal views derived from the 3D CT data. This procedure only performs the registration at manually selected sites; it does not draw upon the motion information inherent in the bronchoscopic video. Further, the registration procedure is slow. The proposed method has the following advantages: (1) it tracks the 3D motion of the bronchoscope using the bronchoscopic video; (2) it uses the tracked 3D trajectory of the bronchoscope to assist in locating sites in the 3D CT "virtual world" to perform the registration. In addition, the method incorporates techniques to: (1) detect and exclude corrupted video frames (to help make the video tracking more robust); (2) accelerate the computation of the many 3D virtual endoluminal renderings (thus, speeding up the registration process). We have tested the integrated tracking-registration method on a human airway-tree phantom and on real human data.

  12. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture the surface motion of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
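
    The Gauss-Newton minimization mentioned above can be sketched generically as repeated linearization of a residual vector; the toy curve-fitting residual below stands in for the actual surface-registration terms, and the damping and iteration counts are assumptions.

```python
# Hedged sketch of a generic Gauss-Newton iteration for least-squares problems
# of the form min 0.5 * ||r(x)||^2, as referenced in the abstract above.
import numpy as np

def gauss_newton(residual_fn, jacobian_fn, x0, iterations=20, damping=1e-6):
    """Minimize 0.5 * ||r(x)||^2 by repeated linearization of r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        r = residual_fn(x)
        J = jacobian_fn(x)
        # Solve the (lightly damped) normal equations (J^T J + damping I) dx = -J^T r
        A = J.T @ J + damping * np.eye(len(x))
        dx = np.linalg.solve(A, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x

# Toy usage: fit (a, b) in y = a * exp(b * t) to noisy samples
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.3 * t) + np.random.normal(0, 0.01, t.size)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
print(gauss_newton(res, jac, x0=[1.0, 1.0]))
```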

  13. SU-E-J-118: Verification of Intrafractional Positional Accuracy Using Ultrasound Autoscan Tracking for Prostate Cancer Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, S; Hristov, D; Phillips, T

    Purpose: Transperineal ultrasound imaging is an attractive option for image-guided radiation therapy, as there is no need to implant fiducials, there is no extra imaging dose, and real-time continuous imaging is possible during treatment. The aim of this study is to verify the tracking accuracy of a commercial ultrasound system under treatment conditions with a male pelvic phantom. Methods: A CT and an ultrasound scan were acquired for the male pelvic phantom. The phantom was then placed in a treatment-mimicking position on a motion platform. The axial and lateral tracking accuracy of the ultrasound system were verified using an independent optical tracking system. The tracking accuracy was evaluated by tracking the phantom position detected by the ultrasound system and comparing it to the optical tracking system under the conditions of beam on (15 MV), beam off, poor image quality with an acoustic shadow introduced, and different phantom motion cycles (10 and 20 second periods). Additionally, the time lag between the ultrasound-detected and actual phantom motion was investigated. Results: Displacement amplitudes reported by the ultrasound system and the optical system were within 0.5 mm of each other for both directions and all conditions. The ultrasound tracking performance in the axial direction was better than in the lateral direction. Radiation did not interfere with ultrasound tracking, while image quality affected tracking accuracy. The tracking accuracy was better for periodic motion with a 20 second period. The time delay between the ultrasound tracking system and the phantom motion was clinically acceptable. Conclusion: Intrafractional prostate motion is a potential source of treatment error, especially in the context of emerging SBRT regimens. It is feasible to use transperineal ultrasound daily to monitor prostate motion during treatment. Our results verify the tracking accuracy of a commercial ultrasound system to be better than 1 mm under typical external beam treatment conditions.

  14. Covert enaction at work: Recording the continuous movements of visuospatial attention to visible or imagined targets by means of Steady-State Visual Evoked Potentials (SSVEPs).

    PubMed

    Gregori Grgič, Regina; Calore, Enrico; de'Sperati, Claudio

    2016-01-01

    Whereas overt visuospatial attention is customarily measured with eye tracking, covert attention is assessed by various methods. Here we exploited Steady-State Visual Evoked Potentials (SSVEPs) - the oscillatory responses of the visual cortex to incoming flickering stimuli - to record the movements of covert visuospatial attention in a way operatively similar to eye tracking (attention tracking), which allowed us to compare motion observation and motion extrapolation with and without eye movements. Observers fixated a central dot and covertly tracked a target oscillating horizontally and sinusoidally. In the background, the left and the right halves of the screen flickered at two different frequencies, generating two SSVEPs in occipital regions whose size varied reciprocally as observers attended to the moving target. The two signals were combined into a single quantity that was modulated at the target frequency in a quasi-sinusoidal way, often clearly visible in single trials. The modulation continued almost unchanged when the target was switched off and observers mentally extrapolated its motion in imagery, and also when observers pointed their finger at the moving target during covert tracking, or imagined doing so. The amplitude of modulation during covert tracking was ∼25-30% of that measured when observers followed the target with their eyes. We used 4 electrodes in parieto-occipital areas, but similar results were achieved with a single electrode in Oz. In a second experiment we tested ramp and step motion. During overt tracking, SSVEPs were remarkably accurate, showing both saccadic-like and smooth pursuit-like modulations of cortical responsiveness, although during covert tracking the modulation deteriorated. Covert tracking was better with sinusoidal motion than with ramp motion, and better with moving targets than with stationary ones. The clear modulation of cortical responsiveness recorded during both overt and covert tracking, identical for motion observation and motion extrapolation, suggests including covert attention movements in enactive theories of mental imagery. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. A Method for Testing the Dynamic Accuracy of Micro-Electro-Mechanical Systems (MEMS) Magnetic, Angular Rate, and Gravity (MARG) Sensors for Inertial Navigation Systems (INS) and Human Motion Tracking Applications

    DTIC Science & Technology

    2010-06-01

    Record excerpt only (table-of-contents and text fragments): the report describes a low-cost test framework intended to minimize distortion of the local magnetic field measured by the MARG sensors, with a test apparatus built as rigidly as possible to limit out-of-plane motion.

  16. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography

    PubMed Central

    Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.

    2016-01-01

    Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 x 15 mm achieved diffraction-limited imaging over a lateral tracking range of +/- 2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800
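
    Pupil-centroid detection can be illustrated with simple thresholding and a centre-of-mass computation, as in the hedged sketch below; the threshold, synthetic frame, and reference values are hypothetical, and the actual system's pipeline may differ.

```python
# Hedged sketch: locating the pupil centroid in a near-infrared pupil camera
# frame by intensity thresholding and a centre-of-mass computation.
import numpy as np

def pupil_centroid(frame, threshold=40):
    """Return (row, col) centroid of pixels darker than `threshold`, or None."""
    mask = frame < threshold          # the pupil appears dark in NIR imaging
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def tracking_offset(centroid, reference):
    """Pixel offset that could be fed back to re-centre the OCT field of view."""
    return centroid[0] - reference[0], centroid[1] - reference[1]

# Synthetic frame: bright background with a dark disc standing in for the pupil
frame = np.full((480, 640), 180, dtype=np.uint8)
yy, xx = np.mgrid[0:480, 0:640]
frame[(yy - 260) ** 2 + (xx - 350) ** 2 < 60 ** 2] = 20
c = pupil_centroid(frame)
print(c, tracking_offset(c, reference=(240.0, 320.0)))
```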

  17. Electromagnetic tracking of motion in the proximity of computer generated graphical stimuli: a tutorial.

    PubMed

    Schnabel, Ulf H; Hegenloh, Michael; Müller, Hermann J; Zehetleitner, Michael

    2013-09-01

    Electromagnetic motion-tracking systems have the advantage of capturing the spatio-temporal kinematics of movements independently of the visibility of the sensors. However, they are limited in that they cannot be used in proximity to electromagnetic field sources, such as computer monitors. This prevents exploiting the tracking potential of the sensor system together with that of computer-generated visual stimulation. Here we present a solution for presenting computer-generated visual stimulation that does not distort the electromagnetic field required for precise motion tracking, by means of a back-projection medium. In one experiment, we verify that cathode ray tube monitors, as well as thin-film-transistor monitors, distort electromagnetic sensor signals even at a distance of 18 cm. Our back-projection medium, by contrast, leads to no distortion of the motion-tracking signals even when the sensor is touching the medium. This novel solution permits combining the advantages of electromagnetic motion tracking with computer-generated visual stimulation.

  18. Laser Spot Tracking Based on Modified Circular Hough Transform and Motion Pattern Analysis

    PubMed Central

    Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan

    2014-01-01

    Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas–Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development. PMID:25350502
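
    As a rough stand-in for the modified circular Hough transform and motion analysis described above, the sketch below uses OpenCV's standard HoughCircles routine and pyramidal Lucas-Kanade optical flow with illustrative parameters; it is not the authors' modified algorithm.

```python
# Hedged sketch: detect laser-spot candidates with a (standard) circular Hough
# transform and follow a detection with pyramidal Lucas-Kanade optical flow.
import cv2
import numpy as np

def detect_spot_candidates(gray):
    """Return candidate (x, y, r) circles for the laser spot in a grayscale frame."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=80, param2=15, minRadius=2, maxRadius=12)
    return [] if circles is None else circles[0]

def track_spot(prev_gray, cur_gray, prev_xy):
    """Follow a previously detected spot between consecutive frames."""
    p0 = np.array([[prev_xy]], dtype=np.float32)          # shape (1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None,
                                             winSize=(15, 15), maxLevel=2)
    return tuple(p1[0, 0]) if status[0, 0] == 1 else None

# Synthetic two-frame example: a small bright disc that moves by roughly (4, 2) pixels
frame0 = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(frame0, (100, 120), 5, 255, -1)
frame1 = np.zeros_like(frame0)
cv2.circle(frame1, (104, 122), 5, 255, -1)
candidates = detect_spot_candidates(frame0)
if len(candidates):
    x, y, _ = candidates[0]
    print("detected:", (x, y), "tracked:", track_spot(frame0, frame1, (x, y)))
```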

  19. Laser spot tracking based on modified circular Hough transform and motion pattern analysis.

    PubMed

    Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan

    2014-10-27

    Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas-Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development.

  20. Directional asymmetries in human smooth pursuit eye movements.

    PubMed

    Ke, Sally R; Lam, Jessica; Pai, Dinesh K; Spering, Miriam

    2013-06-27

    Humans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit. In experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field. Pursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. Our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.

  1. Stretch sensors for human body motion

    NASA Astrophysics Data System (ADS)

    O'Brien, Ben; Gisby, Todd; Anderson, Iain A.

    2014-03-01

    Sensing motion of the human body is a difficult task. From an engineer's perspective, people are soft, highly mobile objects that move in and out of complex environments. As well as the technical challenge of sensing, concepts such as comfort, social intrusion, usability, and aesthetics are paramount in determining whether someone will adopt a sensing solution or not. At the same time, the demand for human body motion sensing is growing fast. Athletes want feedback on posture and technique, consumers need new ways to interact with augmented reality devices, and healthcare providers wish to track the recovery of a patient. Dielectric elastomer stretch sensors are ideal for bridging this gap. They are soft, flexible, and precise. They are low power, lightweight, and can be easily mounted on the body or embedded into clothing. From a commercialisation point of view, stretch sensing is easier than actuation or generation: such sensors can be low voltage and integrated with conventional microelectronics. This paper takes a bird's-eye view of the use of these sensors to measure human body motion. A holistic description of sensor operation and guidelines for sensor design are presented to help technologists and developers in the space.

  2. SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, S; Rottmann, J; Berbeco, R

    2014-06-01

    Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce the hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both the phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29 Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.29 Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.

  3. LabVIEW application for motion tracking using USB camera

    NASA Astrophysics Data System (ADS)

    Rob, R.; Tirian, G. O.; Panoiu, M.

    2017-05-01

    The technical state of the contact line, and of the additional equipment in electric rail transport, is very important for the repair and maintenance of the contact line. During operation, the pantograph motion must stay within standard limits. The present paper proposes a LabVIEW application which is able to track the motion of a laboratory pantograph in real time and also to acquire the tracking images. A USB webcam connected to a computer acquires the desired images. The laboratory pantograph contains an automatic system which simulates the real motion. The tracking parameters are the horizontal motion (zigzag) and the vertical motion, which can be studied in separate diagrams. The LabVIEW application requires appropriate toolkits for vision development. Therefore, the paper describes the subroutines that are specifically programmed for real-time image acquisition and for data processing.
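
    The application itself is built in LabVIEW with its vision toolkits; as a rough, language-agnostic analogue, the sketch below uses Python with OpenCV to follow a pantograph head in webcam frames by normalized cross-correlation template matching and to log its horizontal (zigzag) and vertical positions. The device index, template file name, and confidence threshold are assumptions for illustration only.

      import cv2

      cap = cv2.VideoCapture(0)                     # USB webcam, device 0 (assumed)
      template = cv2.imread("pantograph_head.png", cv2.IMREAD_GRAYSCALE)  # hypothetical template image
      positions = []

      for _ in range(1000):                         # bounded acquisition loop
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          # Normalized cross-correlation; the response peak locates the template.
          res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
          _, score, _, (x, y) = cv2.minMaxLoc(res)
          if score > 0.6:                           # crude confidence gate (assumed value)
              positions.append((x, y))              # x ~ zigzag, y ~ vertical motion

      cap.release()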

  4. Effects of motion base and g-seat cueing on simulator pilot performance

    NASA Technical Reports Server (NTRS)

    Ashworth, B. R.; Mckissick, B. T.; Parrish, R. V.

    1984-01-01

    In order to measure and analyze the effects of a motion plus g-seat cueing system, a manned-flight-simulation experiment was conducted utilizing a pursuit tracking task and an F-16 simulation model in the NASA Langley visual/motion simulator. This experiment provided the information necessary to determine whether motion and g-seat cues have an additive effect on the performance of this task. With respect to the lateral tracking error and roll-control stick force, the answer is affirmative. It is shown that presenting the two cues simultaneously caused significant reductions in lateral tracking error and that using the g-seat and motion base separately provided essentially equal reductions in the pilot's lateral tracking error.

  5. Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.

    PubMed

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J

    2014-08-25

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the developed algorithms for calibrating the two relative rotations of the system using only one reference image. Next, the stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we show that ego-motion tracking can be greatly enhanced using the proposed error correction model.
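
    The paper's filter is an extended Kalman filter with Allan-variance-derived noise models; the heavily simplified linear sketch below only conveys the fusion pattern: high-rate inertial readings drive the prediction step, while low-rate vision position fixes drive the correction step. The sample period and noise covariances are hypothetical values, not those identified in the study.

      import numpy as np

      dt = 0.01                                # inertial sample period, 100 Hz (assumed)
      F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity state transition
      B = np.array([[0.5 * dt**2], [dt]])      # acceleration input model
      H = np.array([[1.0, 0.0]])               # vision measures position only
      Q = 1e-4 * np.eye(2)                     # process noise (hypothetical)
      R = np.array([[4e-4]])                   # vision measurement noise (hypothetical)

      x = np.zeros((2, 1))                     # state: [position; velocity]
      P = np.eye(2)

      def predict(accel):
          """High-rate step driven by a bias-corrected accelerometer reading."""
          global x, P
          x = F @ x + B * accel
          P = F @ P @ F.T + Q

      def update(z_pos):
          """Low-rate correction applied whenever a vision position fix arrives."""
          global x, P
          y = np.array([[z_pos]]) - H @ x
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ y
          P = (np.eye(2) - K @ H) @ P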

  6. A rigid motion correction method for helical computed tomography (CT)

    NASA Astrophysics Data System (ADS)

    Kim, J.-H.; Nuyts, J.; Kyme, A.; Kuncic, Z.; Fulton, R.

    2015-03-01

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data.

  7. Tracking colliding cells in vivo microscopy.

    PubMed

    Nguyen, Nhat H; Keller, Steven; Norris, Eric; Huynh, Toan T; Clemens, Mark G; Shin, Min C

    2011-08-01

    Leukocyte motion represents an important component in the innate immune response to infection. Intravital microscopy is a powerful tool as it enables in vivo imaging of leukocyte motion. Under inflammatory conditions, leukocytes may exhibit various motion behaviors, such as flowing, rolling, and adhering. With many leukocytes moving at a wide range of speeds, collisions occur. These collisions result in abrupt changes in the motion and appearance of leukocytes. Manual analysis is tedious, error prone, time consuming, and could introduce technician-related bias. Automatic tracking is also challenging due to the noise inherent in in vivo images and the abrupt changes in motion and appearance due to collision. This paper presents a method to automatically track multiple cells undergoing collisions by modeling the appearance and motion for each collision state and testing collision hypotheses of possible transitions between states. The tracking results are demonstrated using in vivo intravital microscopy image sequences. We demonstrate that (1) 71% of colliding cells are correctly tracked; (2) the improvement offered by the proposed method grows as the duration of collision increases; and (3) given good detection results, the proposed method can correctly track 88% of colliding cells. The method minimizes tracking failures under collisions and, therefore, allows more robust analysis in the study of leukocyte behaviors responding to inflammatory conditions.

  8. Quantifying the degree of persistence in random amoeboid motion based on the Hurst exponent of fractional Brownian motion.

    PubMed

    Makarava, Natallia; Menz, Stephan; Theves, Matthias; Huisinga, Wilhelm; Beta, Carsten; Holschneider, Matthias

    2014-10-01

    Amoebae explore their environment in a random way, unless external cues, such as nutrients, bias their motion. Even in the absence of cues, however, experimental cell tracks show some degree of persistence. In this paper, we analyzed individual cell tracks in the framework of a linear mixed effects model, where each track is modeled by a fractional Brownian motion, i.e., a Gaussian process exhibiting a long-term correlation structure superposed on a linear trend. The degree of persistence was quantified by the Hurst exponent of fractional Brownian motion. Our analysis of experimental cell tracks of the amoeba Dictyostelium discoideum showed a persistent movement for the majority of tracks. Employing a sliding window approach, we estimated the variations of the Hurst exponent over time, which allowed us to identify points in time where the correlation structure was distorted ("outliers"). Coarse graining of track data via down-sampling allowed us to identify the dependence of persistence on the spatial scale. While one would expect the (mode of the) Hurst exponent to be constant on different temporal scales due to the self-similarity property of fractional Brownian motion, we observed a trend towards stronger persistence for the down-sampled cell tracks, indicating stronger persistence on larger time scales.
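
    Because the increments of fractional Brownian motion scale as Var[X(t+τ) − X(t)] ∝ τ^(2H), the Hurst exponent can be read off the slope of a log-log fit of increment spread versus lag. The sketch below is a minimal estimator in that spirit (detrend, then fit the scaling); it is not the linear mixed-effects analysis used in the paper, and the lag range is an arbitrary choice.

      import numpy as np

      def hurst_exponent(track, lags=range(2, 64)):
          """Estimate H from the scaling of increment spread of a 1D track.
          For fBm, std[X(t+lag) - X(t)] ~ lag**H, so H is the slope of
          log(std of increments) versus log(lag). A linear trend is removed
          first, mirroring the 'fBm superposed on a linear trend' track model."""
          t = np.arange(len(track))
          detrended = track - np.polyval(np.polyfit(t, track, 1), t)
          stds = [np.std(detrended[lag:] - detrended[:-lag]) for lag in lags]
          slope, _ = np.polyfit(np.log(list(lags)), np.log(stds), 1)
          return slope

      # Sanity check on ordinary Brownian motion (expected H close to 0.5):
      walk = np.cumsum(np.random.default_rng(0).standard_normal(5000))
      print(round(hurst_exponent(walk), 2))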

  9. Self-motion impairs multiple-object tracking.

    PubMed

    Thomas, Laura E; Seiffert, Adriane E

    2010-10-01

    Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement impairs the ability to keep track of other moving objects. Participants attempted to track multiple targets while either moving around the tracking area or remaining in a fixed location. Participants' tracking performance was impaired when they moved to a new location during tracking, even when they were passively moved and when they did not see a shift in viewpoint. Self-motion impaired multiple-object tracking in both an immersive virtual environment and a real-world analog, but did not interfere with a difficult non-spatial tracking task. These results suggest that people use a common mechanism both to track the locations of moving objects around them and to keep track of their own location. Copyright 2010 Elsevier B.V. All rights reserved.

  10. An integrated model-driven method for in-treatment upper airway motion tracking using cine MRI in head and neck radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hua, E-mail: huli@radonc.wustl.edu; Chen, Hsin

    Purpose: For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in an either voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Methods: Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors firstly built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. Results: The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. The resulting average dice similarity coefficient (93.28%  ±  1.46%) and margin error (0.49  ±  0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. Conclusions: The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on the improvement of method reliability, patient motion pattern analysis for providing more information on patient-specific prediction of structure displacements, and motion effects on dosimetry for better H&N motion management in radiation therapy.

  11. An integrated model-driven method for in-treatment upper airway motion tracking using cine MRI in head and neck radiation therapy.

    PubMed

    Li, Hua; Chen, Hsin-Chen; Dolly, Steven; Li, Harold; Fischer-Valuck, Benjamin; Victoria, James; Dempsey, James; Ruan, Su; Anastasio, Mark; Mazur, Thomas; Gach, Michael; Kashani, Rojano; Green, Olga; Rodriguez, Vivian; Gay, Hiram; Thorstad, Wade; Mutic, Sasa

    2016-08-01

    For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in an either voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors firstly built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. The resulting average dice similarity coefficient (93.28%  ±  1.46%) and margin error (0.49  ±  0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on the improvement of method reliability, patient motion pattern analysis for providing more information on patient-specific prediction of structure displacements, and motion effects on dosimetry for better H&N motion management in radiation therapy.
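
    At the core of the method is a statistical shape model built by principal component analysis of aligned training contours. The sketch below shows only that generic building block (a point-distribution model and shape synthesis from a few modes); the linked face/face-jaw/face-jaw-palate hierarchy, the hierarchical model fitting, and the level-set propagation of the paper are not reproduced, and the training data here are random placeholders.

      import numpy as np

      def build_shape_model(training_shapes, n_modes=3):
          """Point-distribution model. training_shapes: (n_samples, 2*n_points),
          each row an aligned contour flattened as (x1, y1, x2, y2, ...)."""
          mean = training_shapes.mean(axis=0)
          X = training_shapes - mean
          _, s, Vt = np.linalg.svd(X, full_matrices=False)   # rows of Vt are the shape modes
          variances = s**2 / (len(training_shapes) - 1)
          return mean, Vt[:n_modes], variances[:n_modes]

      def synthesize(mean, modes, b):
          """Reconstruct a shape from mode weights b (one weight per retained mode)."""
          return mean + b @ modes

      rng = np.random.default_rng(1)
      train = rng.normal(size=(25, 60))            # 25 hypothetical contours of 30 landmarks
      mean, modes, var = build_shape_model(train)
      shape = synthesize(mean, modes, np.array([1.0, -0.5, 0.2]) * np.sqrt(var))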

  12. Poster - 51: A tumor motion-compensating system with tracking and prediction – a proof-of-concept study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Kaiming; Teo, Peng; Kawalec, Philip

    2016-08-15

    Purpose: This work reports on the development of a mechanical slider system for the counter-steering of tumor motion in adaptive Radiation Therapy (RT). The tumor motion was tracked using a weighted optical flow algorithm and its position was predicted with a neural network (NN). Methods: The components of the proposed mechanical counter-steering system include: (1) an actuator which provides the tumor motion, (2) motion detection using an optical flow algorithm, (3) motion prediction using a neural network, (4) a control module, and (5) a mechanical slider to counter-steer the anticipated motion of the tumor phantom. An asymmetrical cosine function and five patient traces (P1–P5) were used to evaluate the tracking of a 3D printed lung tumor. In the proposed mechanical counter-steering system, both the actuator (Zaber NA14D60) and the slider (Zaber A-BLQ0070-E01) were programmed to move independently with LabVIEW, and their positions were recorded by two potentiometers (ETI LCP12S-25). The accuracy of this counter-steering system is given by the difference between the two potentiometers. Results: The inherent accuracy of the system, measured using the cosine function, is −0.15 ± 0.06 mm. The error when tracking and prediction were included is (0.04 ± 0.71) mm. Conclusion: A prototype tumor motion counter-steering system with tracking and prediction was implemented. The inherent errors are small in comparison to the tracking and prediction errors, which in turn are small in comparison to the magnitude of tumor motion. The results show that this system is suited for evaluating RT tracking and prediction.
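
    For the prediction component, one common and simple formulation is a small feed-forward network that maps a window of recent positions to the next position. The sketch below follows that generic formulation with scikit-learn; the window length, network size, and synthetic breathing-like trace are illustrative assumptions, not the configuration reported in the abstract.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def make_lagged(trace, window=10):
          """Build (X, y) pairs: the last `window` samples predict the next sample."""
          X = np.array([trace[i:i + window] for i in range(len(trace) - window)])
          return X, trace[window:]

      t = np.arange(0, 120, 0.1)
      trace = 10 * np.cos(2 * np.pi * t / 4.0) ** 2        # asymmetric cosine-like motion, mm (synthetic)

      X, y = make_lagged(trace)
      split = int(0.8 * len(X))
      net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
      net.fit(X[:split], y[:split])
      pred = net.predict(X[split:])
      print("RMS one-step prediction error (mm):", np.sqrt(np.mean((pred - y[split:]) ** 2)))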

  13. Human Body Parts Tracking and Kinematic Features Assessment Based on RSSI and Inertial Sensor Measurements

    PubMed Central

    Blumrosen, Gaddi; Luttwak, Ami

    2013-01-01

    Acquisition of patient kinematics in different environments plays an important role in the detection of risk situations such as fall detection in elderly patients, in rehabilitation of patients with injuries, and in the design of treatment plans for patients with neurological diseases. Received Signal Strength Indicator (RSSI) measurements in a Body Area Network (BAN) capture the signal power on a radio link. The main aim of this paper is to demonstrate the potential of utilizing RSSI measurements in the assessment of human kinematic features, and to give methods to determine these features. RSSI measurements can be used for tracking different body parts' displacements on scales of a few centimeters, for classifying motion and gait patterns instead of inertial sensors, and to serve as an additional reference to other sensors, in particular inertial sensors. Criteria and analytical methods for body part tracking, kinematic motion feature extraction, and a Kalman filter model for aggregation of RSSI and inertial sensor data were derived. The methods were verified by a set of experiments performed in an indoor environment. In the future, the use of RSSI measurements can help in continuous assessment of various kinematic features of patients during their daily life activities and enhance medical diagnosis accuracy with lower costs. PMID:23979481
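
    The abstract does not state the channel model used to turn RSSI into displacement, but a standard starting point is the log-distance path-loss relation RSSI = RSSI_ref − 10·n·log10(d/d_ref). The sketch below simply inverts that relation; the reference power, reference distance, and exponent are illustrative on-body values, not calibrated figures from the paper.

      import numpy as np

      def rssi_to_distance(rssi_dbm, rssi_ref_dbm=-45.0, d_ref_m=0.1, path_loss_n=3.0):
          """Invert the log-distance path-loss model to estimate link range (metres).
          RSSI = RSSI_ref - 10*n*log10(d/d_ref)  =>  d = d_ref * 10**((RSSI_ref - RSSI)/(10*n)).
          All default constants are hypothetical, not from the cited study."""
          return d_ref_m * 10 ** ((rssi_ref_dbm - rssi_dbm) / (10.0 * path_loss_n))

      print(rssi_to_distance(np.array([-45.0, -55.0, -60.0])))   # hypothetical RSSI samples, dBm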

  14. Human body parts tracking and kinematic features assessment based on RSSI and inertial sensor measurements.

    PubMed

    Blumrosen, Gaddi; Luttwak, Ami

    2013-08-23

    Acquisition of patient kinematics in different environments plays an important role in the detection of risk situations such as fall detection in elderly patients, in rehabilitation of patients with injuries, and in the design of treatment plans for patients with neurological diseases. Received Signal Strength Indicator (RSSI) measurements in a Body Area Network (BAN) capture the signal power on a radio link. The main aim of this paper is to demonstrate the potential of utilizing RSSI measurements in the assessment of human kinematic features, and to give methods to determine these features. RSSI measurements can be used for tracking different body parts' displacements on scales of a few centimeters, for classifying motion and gait patterns instead of inertial sensors, and to serve as an additional reference to other sensors, in particular inertial sensors. Criteria and analytical methods for body part tracking, kinematic motion feature extraction, and a Kalman filter model for aggregation of RSSI and inertial sensor data were derived. The methods were verified by a set of experiments performed in an indoor environment. In the future, the use of RSSI measurements can help in continuous assessment of various kinematic features of patients during their daily life activities and enhance medical diagnosis accuracy with lower costs.

  15. A novel framework for intelligent surveillance system based on abnormal human activity detection in academic environments.

    PubMed

    Al-Nawashi, Malek; Al-Hazaimeh, Obaida M; Saraee, Mohamad

    2017-01-01

    Abnormal activity detection plays a crucial role in surveillance applications, and a surveillance system that can perform robustly in an academic environment has become an urgent need. In this paper, we propose a novel framework for an automatic real-time video-based surveillance system which can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment. To develop our system, we have divided the work into three phases: a preprocessing phase, an abnormal human activity detection phase, and a content-based image retrieval phase. For moving object detection, we used the temporal-differencing algorithm and then located the motion regions using the Gaussian function. Furthermore, a shape model based on the OMEGA equation was used as a filter for the detected objects (i.e., human and non-human). For object activity analysis, we evaluated and analyzed the human activities of the detected objects. We classified the human activities into two groups, normal activities and abnormal activities, based on a support vector machine. The machine then provides an automatic warning in case of abnormal human activities. It also embeds a method to retrieve the detected object from the database for object recognition and identification using content-based image retrieval. Finally, a software-based simulation using MATLAB was performed, and the results of the conducted experiments showed an excellent surveillance system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment with no human intervention.
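
    The motion-detection front end (temporal differencing followed by Gaussian smoothing) can be sketched in a few lines of Python/OpenCV, as shown below. The video file name, threshold, and minimum contour area are placeholders; the OMEGA-equation shape filter and the SVM activity classifier described above are not included.

      import cv2

      cap = cv2.VideoCapture("campus_camera.avi")        # hypothetical surveillance clip
      ok, prev = cap.read()
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          diff = cv2.absdiff(gray, prev_gray)            # temporal differencing
          blur = cv2.GaussianBlur(diff, (5, 5), 0)       # smooth the motion region
          _, mask = cv2.threshold(blur, 25, 255, cv2.THRESH_BINARY)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          movers = [c for c in contours if cv2.contourArea(c) > 500]   # candidate moving objects
          prev_gray = gray

      cap.release()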

  16. SU-G-JeP1-07: Development of a Programmable Motion Testbed for the Validation of Ultrasound Tracking Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shepard, A; Matrosic, C; Zagzebski, J

    Purpose: To develop an advanced testbed that combines a 3D motion stage and ultrasound phantom to optimize and validate 2D and 3D tracking algorithms for real-time motion management during radiation therapy. Methods: A Siemens S2000 Ultrasound scanner utilizing a 9L4 transducer was coupled with the Washington University 4D Phantom to simulate patient motion. The transducer was securely fastened to the 3D stage and positioned to image three cylinders of varying contrast in a Gammex 404GS LE phantom. The transducer was placed within a water bath above the phantom in order to maintain sufficient coupling for the entire range of simulated motion. A programmed motion sequence was used to move the transducer during image acquisition and a cine video was acquired for one minute to allow for long sequence tracking. Images were analyzed using a normalized cross-correlation block matching tracking algorithm and compared to the known motion of the transducer relative to the phantom. Results: The setup produced stable ultrasound motion traces consistent with those programmed into the 3D motion stage. The acquired ultrasound images showed minimal artifacts and an image quality that was more than suitable for tracking algorithm verification. Comparisons of a block matching tracking algorithm with the known motion trace for the three features resulted in an average tracking error of 0.59 mm. Conclusion: The high accuracy and programmability of the 4D phantom allows for the acquisition of ultrasound motion sequences that are highly customizable, allowing for focused analysis of some common pitfalls of tracking algorithms such as partial feature occlusion or feature disappearance, among others. The design can easily be modified to adapt to any probe such that the process can be extended to 3D acquisition. Further development of an anatomy-specific phantom better resembling true anatomical landmarks could lead to an even more robust validation. This work is partially funded by NIH grant R01CA190298.
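
    A normalized cross-correlation block-matching tracker of the kind validated here can be written compactly with OpenCV: a speckle block is followed from frame to frame by searching a local window for the best-correlated match, and the resulting trace can then be compared against the known stage trajectory. The block size, search range, and template-refresh policy below are illustrative choices, not those of the cited algorithm.

      import numpy as np
      import cv2

      def track_feature(frames, x0, y0, block=32, search=20):
          """Follow a block through a cine sequence using normalized cross-correlation
          (cv2.TM_CCOEFF_NORMED) within a local search window. `frames` is a sequence of
          2D uint8 images; returns the per-frame (x, y) of the block's top-left corner."""
          x, y = x0, y0
          template = frames[0][y:y + block, x:x + block]
          trace = [(x, y)]
          for frame in frames[1:]:
              ys, xs = max(0, y - search), max(0, x - search)
              roi = frame[ys:y + block + search, xs:x + block + search]
              res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
              _, _, _, (dx, dy) = cv2.minMaxLoc(res)
              x, y = xs + dx, ys + dy
              template = frame[y:y + block, x:x + block]   # refresh template (simple but drift-prone)
              trace.append((x, y))
          return np.array(trace)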

  17. Reference respiratory waveforms by minimum jerk model analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anetai, Yusuke, E-mail: anetai@radonc.med.osaka-u.ac.jp; Sumida, Iori; Takahashi, Yutaka

    Purpose: The CyberKnife® robotic surgery system has the ability to deliver radiation to a tumor subject to respiratory movements using Synchrony® mode with less than 2 mm tracking accuracy. However, rapid and rough motion tracking causes mechanical tracking errors and puts mechanical stress on the robotic joint, leading to unexpected radiation delivery errors. During clinical treatment, patient respiratory motions are much more complicated, suggesting the need for patient-specific modeling of respiratory motion. The purpose of this study was to propose a novel method that provides a reference respiratory wave to enable smooth tracking for each patient. Methods: The minimum jerk model, which mathematically characterizes smoothness by means of jerk - the third derivative of position with respect to time (the derivative of acceleration), proportional to the time rate of change of force - was introduced to model a patient-specific respiratory motion wave to provide smooth motion tracking using CyberKnife®. To verify that patient-specific minimum jerk respiratory waves were being tracked smoothly by Synchrony® mode, a tracking laser projection from CyberKnife® was optically analyzed every 0.1 s using a webcam and a calibrated grid on a motion phantom whose motion was in accordance with three pattern waves (cosine, typical free-breathing, and minimum jerk theoretical wave models) for the clinically relevant superior–inferior direction from six volunteers, assessed on the same node of the same isocentric plan. Results: Tracking discrepancy from the center of the grid to the beam projection was evaluated. The minimum jerk theoretical wave reduced the maximum-peak amplitude of radial tracking discrepancy compared with the waveforms modeled by the cosine and typical free-breathing models by 22% and 35%, respectively, and provided smooth tracking in the radial direction. Motion tracking constancy, as indicated by radial tracking discrepancy affected by respiratory phase, was improved in the minimum jerk theoretical model by 7.0% and 13% compared with the waveforms modeled by the cosine and free-breathing models, respectively. Conclusions: The minimum jerk theoretical respiratory wave can achieve smooth tracking by CyberKnife® and may provide patient-specific respiratory modeling, which may be useful for respiratory training and coaching, as well as quality assurance of the mechanical CyberKnife® robotic trajectory.
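
    The minimum-jerk point-to-point trajectory has the closed form x(τ) = x0 + (x1 − x0)(10τ³ − 15τ⁴ + 6τ⁵) with τ = t/T, which has zero velocity and acceleration at both endpoints. The sketch below stitches inhale and exhale segments of this form into a smooth superior-inferior reference wave; the amplitude and phase durations are illustrative placeholders, not patient-derived values from the study.

      import numpy as np

      def minimum_jerk(x0, x1, duration, dt=0.02):
          """Minimum-jerk segment: x(tau) = x0 + (x1 - x0)*(10*tau**3 - 15*tau**4 + 6*tau**5)."""
          tau = np.arange(0.0, duration, dt) / duration
          return x0 + (x1 - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

      def reference_breathing_wave(amplitude_mm=10.0, inhale_s=1.6, exhale_s=2.4, cycles=5):
          """Stitch inhale/exhale minimum-jerk segments into a smooth reference wave.
          Amplitude and phase durations are hypothetical, not patient data."""
          one_cycle = np.concatenate([
              minimum_jerk(0.0, amplitude_mm, inhale_s),
              minimum_jerk(amplitude_mm, 0.0, exhale_s),
          ])
          return np.tile(one_cycle, cycles)

      wave = reference_breathing_wave()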

  18. Adaptive vehicle motion estimation and prediction

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Thorpe, Chuck E.

    1999-01-01

    Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.

  19. Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.

    PubMed

    Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D

    2011-05-01

    Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model that exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and experiments on an artificial heart. Providing higher accuracy than the standard model-based methods, it successfully copes with occlusions and provides high performance even when all measurements are not available. Combining the physical and stochastic description of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.

  20. SU-E-J-42: Evaluation of Fiducial Markers for Ultrasound and X-Ray Images Used for Motion Tracking in Pancreas SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, SK; Armour, E; Su, L

    Purpose: Ultrasound tracking of target motion relies on the visibility of vascular and/or anatomical landmarks. However, this is challenging when the target is located far from vascular structures or in organs that lack ultrasound landmark structure, such as in the case of pancreas cancer. The purpose of this study is to evaluate the visibility, artifacts, and distortions of fusion coils and solid gold markers in ultrasound, CT, CBCT and kV images to identify markers suitable for real-time ultrasound tracking of tumor motion in SBRT pancreas treatment. Methods: Two fusion coils (1mm × 5mm and 1mm × 10 mm) and a solid gold marker (0.8mm × 10mm) were embedded in a tissue-like ultrasound phantom. The phantom (5cm × 12cm × 20cm) was prepared using water, gelatin and psyllium-hydrophilic-mucilloid fiber. Psyllium-hydrophilic mucilloid acts as a scattering medium to produce echo texture that simulates the sonographic appearance of human tissue in ultrasound images while maintaining an electron density close to that of water in CT images. Ultrasound images were acquired using a 3D-ultrasound system with markers embedded at 5, 10 and 15mm depth from the phantom surface. CT images were acquired using a Philips Big Bore CT, while CBCT and kV images were acquired with the XVI system (Elekta). Visual analysis was performed to compare the visibility of the markers and visibility scores (1 to 3) were assigned. Results: All markers embedded at various depths are clearly visible (score of 3) in ultrasound images. Good visibility of all markers is observed in CT, CBCT and kV images. The degree of artifact produced by the markers in CT and CBCT images is indistinguishable. No distortion is observed in images from any modality. Conclusion: All markers are visible in images across all modalities in this homogeneous tissue-like phantom. Human subject data are necessary to confirm the marker type suitable for real-time ultrasound tracking of tumor motion in SBRT pancreas treatment.

  1. Objective Fidelity Evaluation in Multisensory Virtual Environments: Auditory Cue Fidelity in Flight Simulation

    PubMed Central

    Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.

    2012-01-01

    We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments that have emerged from research in multisensory perception provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1, do not contribute to target-tracking performance in an in-flight refuelling simulation without training, experiment 2. In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable, performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068

  2. A real-time dynamic-MLC control algorithm for delivering IMRT to targets undergoing 2D rigid motion in the beam's eye view.

    PubMed

    McMahon, Ryan; Berbeco, Ross; Nishioka, Seiko; Ishikawa, Masayori; Papiez, Lech

    2008-09-01

    An MLC control algorithm for delivering intensity modulated radiation therapy (IMRT) to targets that are undergoing two-dimensional (2D) rigid motion in the beam's eye view (BEV) is presented. The goal of this method is to deliver 3D-derived fluence maps over a moving patient anatomy. Target motion measured prior to delivery is first used to design a set of planned dynamic-MLC (DMLC) sliding-window leaf trajectories. During actual delivery, the algorithm relies on real-time feedback to compensate for target motion that does not agree with the motion measured during planning. The methodology is based on an existing one-dimensional (1D) algorithm that uses on-the-fly intensity calculations to appropriately adjust the DMLC leaf trajectories in real-time during exposure delivery [McMahon et al., Med. Phys. 34, 3211-3223 (2007)]. To extend the 1D algorithm's application to 2D target motion, a real-time leaf-pair shifting mechanism has been developed. Target motion that is orthogonal to leaf travel is tracked by appropriately shifting the positions of all MLC leaves. The performance of the tracking algorithm was tested for a single beam of a fractionated IMRT treatment, using a clinically derived intensity profile and a 2D target trajectory based on measured patient data. Comparisons were made between 2D tracking, 1D tracking, and no tracking. The impact of the tracking lag time and the frequency of real-time imaging were investigated. A study of the dependence of the algorithm's performance on the level of agreement between the motion measured during planning and delivery was also included. Results demonstrated that tracking both components of the 2D motion (i.e., parallel and orthogonal to leaf travel) results in delivered fluence profiles that are superior to those that track the component of motion that is parallel to leaf travel alone. Tracking lag time effects may lead to relatively large intensity delivery errors compared to the other sources of error investigated. However, the algorithm presented is robust in the sense that it does not rely on a high level of agreement between the target motion measured during treatment planning and delivery.

  3. Cue-dependent memory-based smooth-pursuit in normal human subjects: importance of extra-retinal mechanisms for initial pursuit.

    PubMed

    Ito, Norie; Barnes, Graham R; Fukushima, Junko; Fukushima, Kikuro; Warabi, Tateo

    2013-08-01

    Using a cue-dependent memory-based smooth-pursuit task previously applied to monkeys, we examined the effects of visual motion-memory on smooth-pursuit eye movements in normal human subjects and compared the results with those of the trained monkeys. These results were also compared with those during simple ramp-pursuit, which did not require visual motion-memory. During memory-based pursuit, all subjects exhibited virtually no errors in either pursuit-direction or go/no-go selection. Tracking eye movements of humans and monkeys were similar to each other in both tasks, but eye movements differed between the two tasks: latencies of the pursuit and corrective saccades were prolonged, initial pursuit eye velocity and acceleration were lower, peak velocities were lower, and the time to reach peak velocities lengthened during memory-based pursuit. These characteristics were similar to anticipatory pursuit initiated by extra-retinal components during the initial extinction task of Barnes and Collins (J Neurophysiol 100:1135-1146, 2008b). We suggest that the differences between the two tasks reflect differences between the contributions of extra-retinal and retinal components. This interpretation is supported by two further studies: (1) during popping out of the correct spot to enhance retinal image-motion inputs during memory-based pursuit, pursuit eye velocities approached those during simple ramp-pursuit, and (2) during initial blanking of spot motion during memory-based pursuit, pursuit components appeared in the correct direction. Our results showed the importance of extra-retinal mechanisms for initial pursuit during memory-based pursuit, which include priming effects and extra-retinal drive components. Comparison with monkey studies on neuronal responses and model analysis suggested possible pathways for the extra-retinal mechanisms.

  4. Automatic cloud tracking applied to GOES and Meteosat observations

    NASA Technical Reports Server (NTRS)

    Endlich, R. M.; Wolf, D. E.

    1981-01-01

    An improved automatic processing method for the tracking of cloud motions as revealed by satellite imagery is presented and applications of the method to GOES observations of Hurricane Eloise and Meteosat water vapor and infrared data are presented. The method is shown to involve steps of picture smoothing, target selection and the calculation of cloud motion vectors by the matching of a group at a given time with its best likeness at a later time, or by a cross-correlation computation. Cloud motion computations can be made in as many as four separate layers simultaneously. For data of 4 and 8 km resolution in the eye of Hurricane Eloise, the automatic system is found to provide results comparable in accuracy and coverage to those obtained by NASA analysts using the Atmospheric and Oceanographic Information Processing System, with results obtained by the pattern recognition and cross correlation computations differing by only fractions of a pixel. For Meteosat water vapor data from the tropics and midlatitudes, the automatic motion computations are found to be reliable only in areas where the water vapor fields contained small-scale structure, although excellent results are obtained using Meteosat IR data in the same regions. The automatic method thus appears to be competitive in accuracy and coverage with motion determination by human analysts.

  5. Tracking and imaging humans on heterogeneous infrared sensor arrays for law enforcement applications

    NASA Astrophysics Data System (ADS)

    Feller, Steven D.; Zheng, Y.; Cull, Evan; Brady, David J.

    2002-08-01

    We present a plan for the integration of geometric constraints in the source, sensor and analysis levels of sensor networks. The goal of geometric analysis is to reduce the dimensionality and complexity of distributed sensor data analysis so as to achieve real-time recognition and response to significant events. Application scenarios include biometric tracking of individuals, counting and analysis of individuals in groups of humans and distributed sentient environments. We are particularly interested in using this approach to provide networks of low cost point detectors, such as infrared motion detectors, with complex imaging capabilities. By extending the capabilities of simple sensors, we expect to reduce the cost of perimeter and site security applications.

  6. Operational tracking of lava lake surface motion at Kīlauea Volcano, Hawai‘i

    USGS Publications Warehouse

    Patrick, Matthew R.; Orr, Tim R.

    2018-03-08

    Surface motion is an important component of lava lake behavior, but previous studies of lake motion have been focused on short time intervals. In this study, we implement the first continuous, real-time operational routine for tracking lava lake surface motion, applying the technique to the persistent lava lake in Halema‘uma‘u Crater at the summit of Kīlauea Volcano, Hawai‘i. We measure lake motion by using images from a fixed thermal camera positioned on the crater rim, transmitting images to the Hawaiian Volcano Observatory (HVO) in real time. We use an existing optical flow toolbox in Matlab to calculate motion vectors, and we track the position of lava upwelling in the lake, as well as the intensity of spattering on the lake surface. Over the past 2 years, real-time tracking of lava lake surface motion at Halema‘uma‘u has been an important part of monitoring the lake’s activity, serving as another valuable tool in the volcano monitoring suite at HVO.
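
    The operational routine is written in Matlab around an existing optical-flow toolbox; an analogous dense optical-flow step in Python/OpenCV is sketched below, using the Farneback algorithm on consecutive grayscale thermal frames and summarizing mean surface speed and direction. The parameter values are generic defaults, not those of the HVO implementation.

      import cv2
      import numpy as np

      def lake_surface_flow(prev_frame, next_frame):
          """Dense optical flow (Farneback) between two consecutive grayscale uint8
          thermal images; returns a per-pixel (dx, dy) field in pixels/frame.
          Positional arguments after `None`: pyr_scale, levels, winsize, iterations,
          poly_n, poly_sigma, flags."""
          return cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)

      def summarize(flow):
          """Mean speed (pixels/frame) and dominant direction (degrees) of surface motion."""
          dx, dy = flow[..., 0], flow[..., 1]
          return np.hypot(dx, dy).mean(), np.degrees(np.arctan2(dy.mean(), dx.mean()))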

  7. SU-D-210-05: The Accuracy of Raw and B-Mode Image Data for Ultrasound Speckle Tracking in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Shea, T; Bamber, J; Harris, E

    Purpose: For ultrasound speckle tracking there is some evidence that the envelope-detected signal (the main step in B-mode image formation) may be more accurate than raw ultrasound data for tracking larger inter-frame tissue motion. This study investigates the accuracy of raw radio-frequency (RF) versus non-logarithmically compressed envelope-detected (B-mode) data for ultrasound speckle tracking in the context of image-guided radiation therapy. Methods: Transperineal ultrasound RF data was acquired (with a 7.5 MHz linear transducer operating at a 12 Hz frame rate) from a speckle phantom moving with realistic intra-fraction prostate motion derived from a commercial tracking system. A normalised cross-correlation template matching algorithm was used to track speckle motion at the focus using (i) the RF signal and (ii) the B-mode signal. A range of imaging rates (0.5 to 12 Hz) were simulated by decimating the imaging sequences, therefore simulating larger to smaller inter-frame displacements. Motion estimation accuracy was quantified by comparison with known phantom motion. Results: The differences between RF and B-mode motion estimation accuracy (2D mean and 95% errors relative to ground truth displacements) were less than 0.01 mm for stable and persistent motion types and 0.2 mm for transient motion for imaging rates of 0.5 to 12 Hz. The mean correlation for all motion types and imaging rates was 0.851 and 0.845 for RF and B-mode data, respectively. Data type is expected to have most impact on axial (superior-inferior) motion estimation. Axial differences were <0.004 mm for stable and persistent motion and <0.3 mm for transient motion (axial mean errors were lowest for B-mode in all cases). Conclusions: Using the RF or B-mode signal for speckle motion estimation is comparable for translational prostate motion. B-mode image formation may involve other signal-processing steps which also influence motion estimation accuracy. A similar study for respiratory-induced motion would also be prudent. This work is supported by Cancer Research UK Programme Grant C33589/A19727.
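
    The envelope-detected signal referred to above can be obtained from an RF A-line as the magnitude of its analytic signal, after which the same correlation-based tracker can be run on either representation. The sketch below shows envelope detection with the Hilbert transform and a simple 1D normalized cross-correlation shift estimator; the search range is an arbitrary illustrative value and the tracker is a generic stand-in, not the study's algorithm.

      import numpy as np
      from scipy.signal import hilbert

      def envelope(rf_line):
          """Envelope detection: magnitude of the analytic signal of an RF A-line."""
          return np.abs(hilbert(rf_line))

      def ncc_shift(ref, cur, max_lag=40):
          """Return the integer sample lag that maximizes the normalized
          cross-correlation between reference and current 1D signals."""
          best_lag, best_r = 0, -np.inf
          for lag in range(-max_lag, max_lag + 1):
              a = ref[max(0, lag):len(ref) + min(0, lag)]
              b = cur[max(0, -lag):len(cur) + min(0, -lag)]
              r = np.corrcoef(a, b)[0, 1]
              if r > best_r:
                  best_lag, best_r = lag, r
          return best_lag

      # The same ncc_shift() can be applied to raw RF pairs and to envelope(rf) pairs
      # to compare displacement estimates, mirroring the RF vs. B-mode comparison.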

  8. New Exoskeleton Arm Concept Design And Actuation For Haptic Interaction With Virtual Objects

    NASA Astrophysics Data System (ADS)

    Chakarov, D.; Veneva, I.; Tsveov, M.; Tiankov, T.

    2014-12-01

    This paper presents the conceptual design and actuation of a new upper-limb exoskeleton. The device is designed for applications where both motion tracking and force feedback are required, such as human interaction with virtual environments or rehabilitation tasks. A mechanical structure kinematically equivalent to the structure of the human arm is chosen. An actuation system is selected based on braided pneumatic muscle actuators. An antagonistic drive system for each joint is shown, using pulley and cable transmissions. Force/displacement diagrams are presented for two antagonistically acting muscles. Kinematic and dynamic estimations are performed for the combined exoskeleton and upper-limb system. The selected parameters ensure joint torque regulation in the antagonistic scheme and accommodate the human arm's range of motion.

  9. Haptic communication between humans is tuned by the hard or soft mechanics of interaction

    PubMed Central

    Usai, Francesco; Ganesh, Gowrishankar; Sanguineti, Vittorio; Burdet, Etienne

    2018-01-01

    To move a hard table together, humans may coordinate by following the dominant partner’s motion [1–4], but this strategy is unsuitable for a soft mattress where the perceived forces are small. How do partners readily coordinate in such differing interaction dynamics? To address this, we investigated how pairs tracked a target using flexion-extension of their wrists, which were coupled by a hard, medium or soft virtual elastic band. Tracking performance monotonically increased with a stiffer band for the worse partner, who had higher tracking error, at the cost of the skilled partner’s muscular effort. This suggests that the worse partner followed the skilled one’s lead, but simulations show that the results are better explained by a model where partners share movement goals through the forces, whilst the coupling dynamics determine the capacity of communicable information. This model elucidates the versatile mechanism by which humans can coordinate during both hard and soft physical interactions to ensure maximum performance with minimal effort. PMID:29565966

  10. A Hierarchical Approach to Target Recognition and Tracking. Summary of Results for the Period April 1, 1989-November 30, 1989

    DTIC Science & Technology

    1990-02-07

    performance assessment, human intervention, or operator training. Algorithms on different levels are allowed to deal with the world with different degrees...have on the decisions made by the driver are a complex combination of human factors, driving experience, mission objectives, tactics, etc., and...motion. The distinction here is that the decision-making program may not necessarily make its decisions based on the same factors as the human

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Chen, Z; Nath, R

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy, or only when the position uncertainty (probability of being out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point's corresponding features, such as tumor motion speed and the 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm that the tumor is within the margin or to initialize motion compensation if it is out of the margin.
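
    A minimal version of the threshold-exceedance classifier can be sketched with scikit-learn using the three features named above; the synthetic data, coefficients, and 2.5 mm label rule below are placeholders standing in for the real tracking records.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 2000
      prev_error = rng.exponential(1.0, n)        # mm, previous tracking error (synthetic)
      pred_quality = rng.uniform(0.0, 1.0, n)     # prediction quality score (synthetic)
      cos_angle = rng.uniform(-1.0, 1.0, n)       # cos(trajectory, beam) angle (synthetic)
      X = np.column_stack([prev_error, pred_quality, cos_angle])

      # Synthetic labels: 1 if the (unobserved) 3D error exceeds a 2.5 mm threshold.
      true_error = 0.8 * prev_error + 1.5 * (1.0 - pred_quality) + rng.normal(0.0, 0.3, n)
      y = (true_error > 2.5).astype(int)

      clf = LogisticRegression().fit(X[:1500], y[:1500])
      print("held-out accuracy:", clf.score(X[1500:], y[1500:]))
      # clf.predict_proba(...) would supply the real-time probability used to trigger extra kV imaging.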

  12. Control of a HexaPOD treatment couch for robot-assisted radiotherapy.

    PubMed

    Hermann, Christian; Ma, Lei; Wilbert, Jürgen; Baier, Kurt; Schilling, Klaus

    2012-10-01

    Moving tumors, for example in the vicinity of the lungs, pose a challenging problem in radiotherapy, as healthy tissue should not be irradiated. Apart from gating approaches, one standard method is to irradiate the complete volume within which a tumor moves plus a safety margin containing a considerable volume of healthy tissue. This work deals with a system for tumor motion compensation using the HexaPOD® robotic treatment couch (Medical Intelligence GmbH, Schwabmünchen, Germany). The HexaPOD, carrying the patient during treatment, is instructed to perform translational movements such that the tumor motion, from the beam's-eye view of the linear accelerator, is eliminated. The dynamics of the HexaPOD are characterized by time delays, saturations, and other non-linearities that make the design of control a challenging task. The focus of this work lies on two control methods for the HexaPOD that can be used for reference tracking. The first method uses a model predictive controller based on a model gained through system identification methods, and the second method uses a position control scheme useful for reference tracking. We compared the tracking performance of both methods in various experiments with real hardware using ideal reference trajectories, prerecorded patient trajectories, and human volunteers whose breathing motion was compensated by the system.

  13. Technical aspects of real time positron emission tracking for gated radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamberland, Marc; Xu, Tong, E-mail: txu@physics.carleton.ca; McEwen, Malcolm R.

    2016-02-15

    Purpose: Respiratory motion can lead to treatment errors in the delivery of radiotherapy treatments. Respiratory gating can assist in better conforming the beam delivery to the target volume. We present a study of the technical aspects of a real-time positron emission tracking system for potential use in gated radiotherapy. Methods: The tracking system, called PeTrack, uses implanted positron emission markers and position-sensitive gamma ray detectors to track breathing motion in real time. PeTrack uses an expectation-maximization algorithm to track the motion of fiducial markers. A normalized least mean squares adaptive filter predicts the location of the markers a short time ahead to account for system response latency. The precision and data collection efficiency of a prototype PeTrack system were measured under conditions simulating gated radiotherapy. The lung insert of a thorax phantom was translated in the inferior-superior direction with regular sinusoidal motion and simulated patient breathing motion (maximum amplitude of motion ±10 mm, period 4 s). The system tracked the motion of a ²²Na fiducial marker (0.34 MBq) embedded in the lung insert every 0.2 s. The position of the marker was predicted 0.2 s ahead. For sinusoidal motion, the equation used to model the motion was fitted to the data. The precision of the tracking was estimated as the standard deviation of the residuals. Software was also developed to communicate with a Linac and toggle beam delivery. In a separate experiment involving a Linac, 500 monitor units of radiation were delivered to the phantom with a 3 × 3 cm photon beam and with 6 and 10 MV accelerating potential. Radiochromic films were inserted in the phantom to measure the spatial dose distribution. In this experiment, the period of motion was set to 60 s to account for beam turn-on latency. The beam was turned off when the marker moved outside of a 5-mm gating window. Results: The precision of the tracking in the IS direction was 0.53 mm for a sinusoidally moving target, with an average count rate of ∼250 cps. The average prediction error was 1.1 ± 0.6 mm when the marker moved according to irregular patient breathing motion. Across all beam deliveries during the radiochromic film measurements, the average prediction error was 0.8 ± 0.5 mm. The maximum error was 2.5 mm and the 95th percentile error was 1.5 mm. Clear improvement of the dose distribution was observed between gated and nongated deliveries. The full-width at half-maximum of the dose profiles of gated deliveries differed by 3 mm or less from the static reference dose distribution. Monitoring of the beam on/off times showed synchronization with the location of the marker within the latency of the system. Conclusions: PeTrack can track the motion of internal fiducial positron emission markers with submillimeter precision. The system can be used to gate the delivery of a Linac beam based on the position of a moving fiducial marker. This highlights the potential of the system for use in respiratory-gated radiotherapy.
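
    A normalized least-mean-squares predictor of the kind mentioned above forecasts the next marker sample from a short window of recent samples and adapts its weights online. The sketch below is a generic one-step NLMS predictor with an illustrative filter order, step size, and synthetic sinusoidal trace; it is not the PeTrack implementation.

      import numpy as np

      def nlms_predict(trace, order=8, mu=0.5, eps=1e-6):
          """One-step-ahead normalized LMS predictor. At each step the last `order`
          samples form the input x, the prediction is w.x, and the weights follow
          w += mu * e * x / (eps + x.x). Returns per-sample predictions."""
          w = np.zeros(order)
          preds = np.full(len(trace), np.nan)
          for k in range(order, len(trace)):
              x = trace[k - order:k]
              preds[k] = w @ x
              e = trace[k] - preds[k]
              w += mu * e * x / (eps + x @ x)
          return preds

      t = np.arange(0, 60, 0.2)                    # 5 Hz sampling (assumed)
      z = 10 * np.sin(2 * np.pi * t / 4)           # ±10 mm, 4 s period synthetic marker motion
      p = nlms_predict(z)
      print("RMS prediction error (mm):", np.sqrt(np.nanmean((z - p) ** 2)))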

  14. A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors

    PubMed Central

    Mishra, Abhishek; Ghosh, Rohan; Principe, Jose C.; Thakor, Nitish V.; Kukreja, Sunil L.

    2017-01-01

    Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event based sensors are low power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details at high temporal resolution and, hence, require significantly less calculations. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches for object tracking using neuromorphic sensors perform well while the sensor is static or a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates computation of scene statistics and characterize objects in it. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with maximum accuracy of 92%. PMID:28316563

  15. Apparatus and method for tracking a molecule or particle in three dimensions

    DOEpatents

    Werner, James H [Los Alamos, NM]; Goodwin, Peter M [Los Alamos, NM]; Lessard, Guillaume [Santa Fe, NM]

    2009-03-03

    An apparatus and method were used to track the movement of fluorescent particles in three dimensions. Control software was used with the apparatus to implement an algorithm for tracking the motion of individual particles in glycerol/water mixtures. Monte Carlo simulations suggest that the tracking algorithm, in combination with the apparatus, may be used for tracking the motion of single fluorescent or fluorescently labeled biomolecules in three dimensions.

  16. Robust motion tracking based on adaptive speckle decorrelation analysis of OCT signal.

    PubMed

    Wang, Yuewen; Wang, Yahui; Akansu, Ali; Belfield, Kevin D; Hubbi, Basil; Liu, Xuan

    2015-11-01

    Speckle decorrelation analysis of the optical coherence tomography (OCT) signal has been used in motion tracking. In our previous study, we demonstrated that the cross-correlation coefficient (XCC) between A-scans has an explicit functional dependency on the magnitude of lateral displacement (δx). In this study, we evaluated the sensitivity of speckle motion tracking using the derivative of the function XCC(δx) with respect to δx. We demonstrated that the magnitude of the derivative can be maximized; in other words, the sensitivity of OCT speckle tracking can be optimized by using signals with an appropriate amount of decorrelation for the XCC calculation. Based on this finding, we developed an adaptive speckle decorrelation analysis strategy to achieve motion tracking with optimized sensitivity. Briefly, we used subsequently acquired A-scans and A-scans obtained with larger time intervals to obtain multiple values of XCC and chose the XCC value that maximized motion tracking sensitivity for displacement calculation. Instantaneous motion speed can be calculated by dividing the obtained displacement by the time interval between the A-scans involved in the XCC calculation. We implemented the above-described algorithm in real time using a graphics processing unit (GPU) and demonstrated its effectiveness in reconstructing distortion-free OCT images using data obtained from a manually scanned OCT probe. The adaptive speckle tracking method was validated in manually scanned OCT imaging, on a phantom as well as in vivo skin tissue.
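
    The adaptive lag-selection logic can be sketched in a few lines of Python. Because the paper's exact XCC(δx) function is not reproduced in the abstract, the sketch assumes a Gaussian decorrelation model and illustrative values for the beam radius and A-scan interval; only the overall structure (compute XCC at several lags, keep the most sensitive one, invert it for displacement, divide by the lag time for speed) mirrors the described method.

    ```python
    import numpy as np

    # Adaptive lag selection for speckle tracking, assuming a Gaussian
    # decorrelation model XCC(dx) = exp(-dx**2 / W0**2). The beam radius W0,
    # A-scan interval, and candidate lags are illustrative assumptions.

    W0 = 0.015          # assumed lateral speckle size / beam radius (mm)
    A_SCAN_DT = 1e-4    # assumed time between consecutive A-scans (s)

    def xcc(a, b):
        """Cross-correlation coefficient between two A-scans."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    def displacement_from_xcc(c):
        """Invert the assumed Gaussian model to get the lateral displacement."""
        c = np.clip(c, 1e-6, 1.0)
        return W0 * np.sqrt(-np.log(c))

    def sensitivity(c):
        """|d XCC / d dx| under the same model; largest at moderate decorrelation."""
        dx = displacement_from_xcc(c)
        return 2.0 * dx / W0**2 * np.exp(-(dx / W0) ** 2)

    def estimate_speed(ascans, lags=(1, 2, 4, 8)):
        """Pick the A-scan lag whose XCC is most sensitive to displacement,
        then convert its displacement into an instantaneous speed.
        Assumes len(ascans) > max(lags)."""
        ref = ascans[-1]
        best = max(lags, key=lambda L: sensitivity(xcc(ref, ascans[-1 - L])))
        dx = displacement_from_xcc(xcc(ref, ascans[-1 - best]))
        return dx / (best * A_SCAN_DT)
    ```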

  17. Robust motion tracking based on adaptive speckle decorrelation analysis of OCT signal

    PubMed Central

    Wang, Yuewen; Wang, Yahui; Akansu, Ali; Belfield, Kevin D.; Hubbi, Basil; Liu, Xuan

    2015-01-01

    Speckle decorrelation analysis of the optical coherence tomography (OCT) signal has been used in motion tracking. In our previous study, we demonstrated that the cross-correlation coefficient (XCC) between A-scans has an explicit functional dependency on the magnitude of lateral displacement (δx). In this study, we evaluated the sensitivity of speckle motion tracking using the derivative of the function XCC(δx) with respect to δx. We demonstrated that the magnitude of the derivative can be maximized; in other words, the sensitivity of OCT speckle tracking can be optimized by using signals with an appropriate amount of decorrelation for the XCC calculation. Based on this finding, we developed an adaptive speckle decorrelation analysis strategy to achieve motion tracking with optimized sensitivity. Briefly, we used subsequently acquired A-scans and A-scans obtained with larger time intervals to obtain multiple values of XCC and chose the XCC value that maximized motion tracking sensitivity for displacement calculation. Instantaneous motion speed can be calculated by dividing the obtained displacement by the time interval between the A-scans involved in the XCC calculation. We implemented the above-described algorithm in real time using a graphics processing unit (GPU) and demonstrated its effectiveness in reconstructing distortion-free OCT images using data obtained from a manually scanned OCT probe. The adaptive speckle tracking method was validated in manually scanned OCT imaging, on a phantom as well as in vivo skin tissue. PMID:26600996

  18. Detecting multiple moving objects in crowded environments with coherent motion regions

    DOEpatents

    Cheriyadat, Anil M.; Radke, Richard J.

    2013-06-11

    Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The method enforces the constraint that selected coherent motion regions contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. The algorithm operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, a measure of the maximum distance between a pair of feature point tracks.

  19. Scripting human animations in a virtual environment

    NASA Technical Reports Server (NTRS)

    Goldsby, Michael E.; Pandya, Abhilash K.; Maida, James C.

    1994-01-01

    The current deficiencies of virtual environment (VE) technology are well known: annoying lag in drawing the current view, drastically simplified environments to reduce that lag, low resolution, and a narrow field of view. Animation scripting is an application of VE technology that can be carried out successfully despite these deficiencies. The final product is a smoothly moving, high-resolution animation displaying detailed models. In this system, the user is represented by a human computer model with the same body proportions. Using magnetic tracking, the motions of the model's upper torso, head, and arms are controlled by the user's movements (18 degrees of freedom). The model's lower torso and its global position and orientation are controlled by a spaceball and keypad (12 degrees of freedom). Using this system, human motion scripts can be extracted from the user's movements while immersed in a simplified virtual environment. Recorded data are used to define key frames; motion is interpolated between them, and post-processing adds a more detailed environment. The result is a considerable savings in time and a much more natural-looking movement of a human figure in a smooth and seamless animation.
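
    The key-frame step lends itself to a very small Python sketch: sparsely recorded poses are interpolated up to the animation frame rate. The joint count, key times, output rate, and linear interpolation below are illustrative assumptions; a production pipeline like the one described would typically use spline or quaternion interpolation rather than per-angle linear blending.

    ```python
    import numpy as np

    # Turn sparsely recorded poses (key frames) into a dense animation by
    # per-joint linear interpolation. Joint angles, key times, and the 30 fps
    # output rate are illustrative assumptions.

    key_times = np.array([0.0, 0.5, 1.2, 2.0])          # seconds
    key_poses = np.array([                               # joint angles (deg)
        [0.0,  10.0, -5.0],
        [15.0, 20.0,  0.0],
        [30.0,  5.0, 10.0],
        [0.0,   0.0,  0.0],
    ])

    def interpolate_pose(t):
        """Linearly interpolate each joint angle between surrounding key frames."""
        return np.array([np.interp(t, key_times, key_poses[:, j])
                         for j in range(key_poses.shape[1])])

    frames = [interpolate_pose(t) for t in np.arange(0.0, 2.0, 1.0 / 30.0)]
    ```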

  20. SU-G-BRA-08: Diaphragm Motion Tracking Based On KV CBCT Projections with a Constrained Linear Regression Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, J; Chao, M

    2016-06-15

    Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections using a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancies, the enabling phenomenology in video compression and encoding techniques, are inherent in the dynamic properties of diaphragm motion. These redundancies were integrated with the geometrical shape of the diaphragm boundary and an associated algebraic constraint that significantly reduced the search space of viable parabolic parameters, which could then be effectively optimized by a constrained linear regression approach on the subsequent projections. The algebraic constraint stipulating the kinetic range of the motion and the spatial constraint preventing unphysical deviations allowed the optimal contour of the diaphragm to be obtained with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired at a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. The automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with a standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with an SD of 0.64 mm for all enrolled patients. The submillimeter accuracy demonstrates the promise of the proposed constrained linear regression approach for tracking diaphragm motion on rotational projection images. Conclusion: The new algorithm will provide a potential solution for rendering diaphragm motion and ultimately improving tumor motion management for radiation therapy of cancer patients.
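
    The core fitting step can be illustrated with a simplified stand-in in Python: an ordinary least-squares parabola fit to candidate diaphragm edge points, followed by clipping the parameters to a band around the previous frame's fit so the contour cannot jump unphysically between projections. The band widths, geometry, and the clip-after-fit shortcut are assumptions; the paper formulates its kinetic and spatial constraints directly inside the linear regression.

    ```python
    import numpy as np

    # Simplified stand-in for constrained parabola tracking: fit u = a*v**2 +
    # b*v + c to edge points on each projection, then keep the parameters
    # within a band around the previous frame's fit. Band widths and the
    # synthetic geometry are illustrative assumptions.

    def fit_parabola(v, u):
        """Ordinary least-squares parabola fit through edge points (v, u)."""
        A = np.column_stack([v**2, v, np.ones_like(v)])
        coeffs, *_ = np.linalg.lstsq(A, u, rcond=None)
        return coeffs                     # (a, b, c)

    def constrain(coeffs, prev_coeffs, band=(0.02, 0.5, 8.0)):
        """Clip each parameter to prev +/- band, limiting unphysical jumps."""
        lo = prev_coeffs - np.asarray(band)
        hi = prev_coeffs + np.asarray(band)
        return np.clip(coeffs, lo, hi)

    # Example: track from one projection to the next.
    rng = np.random.default_rng(0)
    v = np.linspace(-50, 50, 80)                          # detector column (mm)
    u_prev = 0.01 * v**2 + 0.2 * v + 120                  # previous diaphragm contour
    prev = fit_parabola(v, u_prev)
    u_now = u_prev + 3.0 + rng.normal(0, 0.5, v.size)     # diaphragm moved ~3 mm
    current = constrain(fit_parabola(v, u_now), prev)
    ```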

  1. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    PubMed

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamic environments in real time. Our approach utilizes an inverted pendulum model to adjust, online, the desired motion trajectory derived from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using proportional-derivative controllers, whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method that computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and the dynamical model of the character. Our experiments demonstrate that motion capture data can be tracked while producing animation that responds in real time. In addition, physically plausible motion style editing, automatic motion transitions, and motion adaptation to different limb sizes can also be generated without difficulty.
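
    The velocity-driven tracking idea reduces to a one-line control law, sketched below in Python on a toy three-joint system with crude forward dynamics. The gain, inertia, time step, and desired-velocity trajectory are placeholders, not values from the paper.

    ```python
    import numpy as np

    # Velocity-driven tracking: torques come from the error between desired
    # and current joint angular velocities rather than PD position gains.
    # Gain, inertia, time step, and the desired trajectory are assumptions.

    K_V = 40.0            # velocity gain, assumed
    INERTIA = 1.0         # lumped joint inertia, assumed
    DT = 1.0 / 240.0      # simulation step

    def velocity_driven_torque(omega_desired, omega):
        return K_V * (omega_desired - omega)

    omega = np.zeros(3)                                   # three example joints
    for step in range(240):
        t = step * DT
        omega_desired = np.array([np.sin(t), 0.5 * np.cos(t), 0.2])  # from adjusted mocap
        tau = velocity_driven_torque(omega_desired, omega)
        omega += (tau / INERTIA) * DT                      # crude forward dynamics
    ```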

  2. Spatiotemporal motion boundary detection and motion boundary velocity estimation for tracking moving objects with a moving camera: a level sets PDEs approach with concurrent camera motion compensation.

    PubMed

    Feghali, Rosario; Mitiche, Amar

    2004-11-01

    The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method simultaneously estimates the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine simultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real image sequences.

  3. Hand-Writing Motion Tracking with Vision-Inertial Sensor Fusion: Calibration and Error Correction

    PubMed Central

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J.

    2014-01-01

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the developed algorithms for calibrating the two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model. PMID:25157546
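
    A deliberately simplified version of the fusion step is sketched below in Python: a scalar Kalman filter integrates a fast gyro rate and corrects it with a slower vision-derived yaw measurement. The paper's method is a full extended Kalman filter with Allan-variance-derived noise models and calibrated relative rotations; the rates and noise variances here are illustrative assumptions.

    ```python
    import numpy as np

    # 1-D gyro + vision fusion with a scalar Kalman filter; rates, variances,
    # and the synthetic data are assumptions, not the paper's EKF setup.

    GYRO_DT   = 1.0 / 200.0    # inertial update rate (s)
    CAM_EVERY = 20             # vision update every 20 gyro samples (~10 Hz)
    Q = 1e-5                   # process noise (gyro integration drift)
    R = 1e-3                   # vision measurement noise

    def fuse(gyro_rates, cam_yaws):
        yaw, P = 0.0, 1.0
        fused = []
        for k, w in enumerate(gyro_rates):
            # Predict: integrate the gyro rate.
            yaw += w * GYRO_DT
            P += Q
            # Correct with the (slower) vision measurement when available.
            if k % CAM_EVERY == 0 and k // CAM_EVERY < len(cam_yaws):
                z = cam_yaws[k // CAM_EVERY]
                K = P / (P + R)
                yaw += K * (z - yaw)
                P *= (1.0 - K)
            fused.append(yaw)
        return np.array(fused)

    rng = np.random.default_rng(0)
    true_rate = 0.3                                        # rad/s constant turn
    gyro = true_rate + rng.normal(0.0, 0.02, 400)          # noisy gyro samples
    cams = true_rate * GYRO_DT * CAM_EVERY * np.arange(20) + rng.normal(0.0, 0.01, 20)
    yaw_est = fuse(gyro, cams)
    ```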

  4. The Impact of Salient Advertisements on Reading and Attention on Web Pages

    ERIC Educational Resources Information Center

    Simola, Jaana; Kuisma, Jarmo; Oorni, Anssi; Uusitalo, Liisa; Hyona, Jukka

    2011-01-01

    Human vision is sensitive to salient features such as motion. Therefore, animation and onset of advertisements on Websites may attract visual attention and disrupt reading. We conducted three eye tracking experiments with authentic Web pages to assess whether (a) ads are efficiently ignored, (b) ads attract overt visual attention and disrupt…

  5. 3D Visual Proxemics: Recognizing Human Interactions in 3D from a Single Image (Open Access)

    DTIC Science & Technology

    2013-06-28

    accurate tracking and identity associations of people’s motions in videos. Proxemics is a subfield of anthropology that involves the study of people...cinematography where the shot composition and camera viewpoint is optimized for visual weight [1]. In cinema, a shot is either a long shot, a medium

  6. Internal Motion Estimation by Internal-external Motion Modeling for Lung Cancer Radiotherapy.

    PubMed

    Chen, Haibin; Zhong, Zichun; Yang, Yiwei; Chen, Jiawei; Zhou, Linghong; Zhen, Xin; Gu, Xuejun

    2018-02-27

    The aim of this study is to develop an internal-external correlation model for internal motion estimation for lung cancer radiotherapy. Deformation vector fields (DVFs) that characterize the internal-external motion are obtained by respectively registering the internal organ meshes and external surface meshes from the 4DCT images via a recently developed local topology preserved non-rigid point matching algorithm. A composite matrix is constructed by combining the estimated internal phasic DVFs with the external phasic and directional DVFs. Principal component analysis is then applied to the composite matrix to extract the principal motion characteristics and generate model parameters that correlate the internal-external motion. The proposed model is evaluated on a 4D NURBS-based cardiac-torso (NCAT) synthetic phantom and on 4DCT images from five lung cancer patients. For tumor tracking, the center-of-mass errors of the tracked tumor are 0.8(±0.5) mm/0.8(±0.4) mm for the synthetic data and 1.3(±1.0) mm/1.2(±1.2) mm for the patient data in intra-fraction/inter-fraction tracking, respectively. For lung tracking, the percent errors of the tracked contours are 0.06(±0.02)/0.07(±0.03) for the synthetic data and 0.06(±0.02)/0.06(±0.02) for the patient data in intra-fraction/inter-fraction tracking, respectively. These extensive validations demonstrate the effectiveness and reliability of the proposed model in motion tracking for both the tumor and the lung in lung cancer radiotherapy.
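
    In the spirit of the described model, the Python sketch below compresses per-phase internal and external motion fields with PCA and learns a linear map between the two score spaces, so internal motion can be estimated from a new external measurement. The matrix sizes, synthetic data, and plain least-squares mapping are assumptions; the paper builds a single composite matrix and extracts the correlation somewhat differently.

    ```python
    import numpy as np

    # PCA-based internal-external correlation sketch: compress both motion
    # representations, then regress internal scores on external scores.
    # Dimensions and synthetic data are illustrative assumptions.

    rng = np.random.default_rng(1)
    phases = 10
    ext = rng.normal(size=(phases, 300))        # external surface DVFs, flattened
    true_map = rng.normal(size=(300, 900))
    internal = ext @ true_map + 0.01 * rng.normal(size=(phases, 900))

    def pca(data, n_comp):
        mean = data.mean(axis=0)
        U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
        return mean, Vt[:n_comp]                # mean and principal axes

    ext_mean, ext_axes = pca(ext, 3)
    int_mean, int_axes = pca(internal, 3)
    ext_scores = (ext - ext_mean) @ ext_axes.T
    int_scores = (internal - int_mean) @ int_axes.T

    # Least-squares map from external scores to internal scores.
    W, *_ = np.linalg.lstsq(ext_scores, int_scores, rcond=None)

    def estimate_internal(new_ext):
        scores = (new_ext - ext_mean) @ ext_axes.T @ W
        return scores @ int_axes + int_mean

    recon = estimate_internal(ext[0])            # should be close to internal[0]
    ```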

  7. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of a stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. We here circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv), and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation.

  8. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    NASA Astrophysics Data System (ADS)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high throughput wide format video also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms are available to assist the analyst and increase human effectiveness.

  9. Dynamic virtual fixture on the Euclidean group for admittance-type manipulator in deforming environments.

    PubMed

    Zhang, Dongwen; Zhu, Qingsong; Xiong, Jing; Wang, Lei

    2014-04-27

    In a deforming anatomic environment, the motion of an instrument is subject to complex geometrical and dynamic constraints, so robot-assisted minimally invasive surgery requires more sophisticated skills from surgeons. This paper proposes a novel dynamic virtual fixture (DVF) to enhance the surgical operation accuracy of admittance-type medical robotics in the deforming environment. A framework for DVF on the Euclidean group SE(3) is presented, which unites rotation and translation in a compact form. First, we constructed the holonomic/non-holonomic constraints and then searched for the corresponding reference to make a distinction between preferred and non-preferred directions. Second, different control strategies are employed to deal with the tasks along the distinguished directions. The desired spatial compliance matrix is synthesized from an allowable motion screw set to filter out task-unrelated components from the manual input, so the operator has complete control over the preferred directions; meanwhile, the relative motion between the surgical instrument and the anatomic structures is actively tracked and cancelled, and the deviation relative to the reference is compensated jointly by the operator and the DVF controllers. The operator, haptic device, admittance-type proxy and virtual deforming environment are involved in a hardware-in-the-loop experiment. Human-robot cooperation with the assistance of the DVF controller is carried out on a deforming sphere to simulate beating heart surgery, the performance of the proposed DVF on the admittance-type proxy is evaluated, and both human factors and control parameters are analyzed. The DVF can improve the dynamic properties of human-robot cooperation in a low-frequency (0-40 rad/s) deforming environment and maintain synergy of orientation and translation during the operation. Statistical analysis reveals that the operator has intuitive control over the preferred directions, that the human and the DVF controller jointly control the motion along the non-preferred directions, and that the target deformation is tracked actively. The proposed DVF for an admittance-type manipulator is capable of assisting the operator with skilled operations in a deforming environment.

  10. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness

    PubMed Central

    Spering, Miriam; Carrasco, Marisa

    2012-01-01

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids–stimuli composed of two orthogonally-drifting gratings, presented separately to each eye–in human observers. Monocular adaptation to one grating prior to the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating’s motion direction or to both (neutral condition). We show that observers were better in detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating’s motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted towards the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. PMID:22649238

  11. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness.

    PubMed

    Spering, Miriam; Carrasco, Marisa

    2012-05-30

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth-pursuit eye movements in response to moving dichoptic plaids--stimuli composed of two orthogonally drifting gratings, presented separately to each eye--in human observers. Monocular adaptation to one grating before the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating's motion direction or to both (neutral condition). We show that observers were better at detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating's motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted toward the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it.

  12. Monitors Track Vital Signs for Fitness and Safety

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Have you ever felt nauseous reading a book in the back seat of a car? Or woken from a deep sleep feeling disoriented, unsure which way is up? Momentary mixups like these happen when the sensory systems that track the body's orientation in space become confused. (In the case of the backseat bookworm, the conflict arises when the reader's inner ear, part of the body's vestibular system, senses the car's motion while her eyes are fixed on the stationary pages of the book.) Conditions like motion sickness are common on Earth, but they also present a significant challenge to astronauts in space. Human sensory systems use the pull of gravity to help determine orientation. In the microgravity environment onboard the International Space Station, for example, the body experiences a period of confusion before it adapts to the new circumstances. (In space, even the body's proprioceptive system, which tells the brain where the arms and legs are oriented without the need for visual confirmation, goes haywire, meaning astronauts sometimes lose track of where their limbs are when they are not moving them.) This Space Adaptation Syndrome affects a majority of astronauts, even experienced ones, causing everything from mild disorientation to nausea to severe vomiting. "It can be quite debilitating," says William Toscano, a research scientist in NASA's Ames Research Center Psychophysiology Laboratory, part of the Center's Human Systems Integration Division. "When this happens, as you can imagine, work proficiency declines considerably." Since astronauts cannot afford to be distracted or incapacitated during critical missions, NASA has explored various means for preventing and countering motion sickness in space, including a range of drug treatments. Many effective motion sickness drugs, however, cause undesirable side effects, such as drowsiness. Toscano and his NASA colleague, Patricia Cowings, have developed a different approach: Utilizing biofeedback training methods, the pair can teach astronauts, military pilots, and others susceptible to motion sickness to self-regulate their own physiological responses and suppress the unpleasant symptoms. This NASA-patented method invented by Cowings is called the Autogenic Feedback Training Exercise (AFTE), and several studies have demonstrated its promise.

  13. Computer-aided target tracking in motion analysis studies

    NASA Astrophysics Data System (ADS)

    Burdick, Dominic C.; Marcuse, M. L.; Mislan, J. D.

    1990-08-01

    Motion analysis studies require the precise tracking of reference objects in sequential scenes. In a typical situation, events of interest are captured at high frame rates using special cameras, and selected objects or targets are tracked on a frame by frame basis to provide necessary data for motion reconstruction. Tracking is usually done using manual methods which are slow and prone to error. A computer based image analysis system has been developed that performs tracking automatically. The objective of this work was to eliminate the bottleneck due to manual methods in high volume tracking applications such as the analysis of crash test films for the automotive industry. The system has proven to be successful in tracking standard fiducial targets and other objects in crash test scenes. Over 95 percent of target positions which could be located using manual methods can be tracked by the system, with a significant improvement in throughput over manual methods. Future work will focus on the tracking of clusters of targets and on tracking deformable objects such as airbags.

  14. Predicting 2D target velocity cannot help 2D motion integration for smooth pursuit initiation.

    PubMed

    Montagnini, Anna; Spering, Miriam; Masson, Guillaume S

    2006-12-01

    Smooth pursuit eye movements reflect the temporal dynamics of bidimensional (2D) visual motion integration. When tracking a single, tilted line, initial pursuit direction is biased toward unidimensional (1D) edge motion signals, which are orthogonal to the line orientation. Over 200 ms, tracking direction is slowly corrected to finally match the 2D object motion during steady-state pursuit. We now show that repetition of line orientation and/or motion direction does not eliminate the transient tracking direction error nor change the time course of pursuit correction. Nonetheless, multiple successive presentations of a single orientation/direction condition elicit robust anticipatory pursuit eye movements that always go in the 2D object motion direction not the 1D edge motion direction. These results demonstrate that predictive signals about target motion cannot be used for an efficient integration of ambiguous velocity signals at pursuit initiation.

  15. Can low-cost motion-tracking systems substitute a Polhemus system when researching social motor coordination in children?

    PubMed

    Romero, Veronica; Amaral, Joseph; Fitzpatrick, Paula; Schmidt, R C; Duncan, Amie W; Richardson, Michael J

    2017-04-01

    Functionally stable and robust interpersonal motor coordination has been found to play an integral role in the effectiveness of social interactions. However, the motion-tracking equipment required to record and objectively measure dynamic limb and body movements during social interaction has been very costly, cumbersome, and impractical within a non-clinical or non-laboratory setting. Here we examined whether three low-cost motion-tracking options (Microsoft Kinect skeletal tracking of either one limb or the whole body, and a video-based pixel change method) can be employed to investigate social motor coordination. Of particular interest was the degree to which these low-cost methods of motion tracking could capture and index the coordination dynamics that occurred between a child and an experimenter for three simple social motor coordination tasks, in comparison to a more expensive, laboratory-grade motion-tracking system (i.e., a Polhemus Latus system). Overall, the results demonstrated that these low-cost systems cannot substitute for the Polhemus system in some tasks. However, the lower-cost Microsoft Kinect skeletal tracking and video pixel change methods were successfully able to index differences in social motor coordination in tasks that involved larger-scale, naturalistic whole-body movements, which can be cumbersome and expensive to record with a Polhemus. That said, we found the Kinect to be particularly vulnerable to occlusion, and the pixel change method to movements that cross the video frame midline. Therefore, particular care needs to be taken in choosing the motion-tracking system best suited to the particular research question.

  16. TU-D-202-03: Gating Is the Best ITV Killer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Low, D.

    Respiratory motion has long been recognized as an important factor affecting the precision of radiotherapy. After the introduction of 4D CT to visualize respiratory motion in 3D, the internal target volume (ITV) has been widely adopted as a simple method to take the motion into account in treatment planning and delivery. The ITV is generated as the union of the CTVs as the patient goes through the respiratory cycle. Many issues have been identified with the ITV. In this session three alternatives to the ITV will be discussed: 1) an alternative motion-inclusive approach with better imaging and smaller margins, called mid-position CT; 2) the tracking approach; and 3) the gating approach. The following topics will be addressed by Marcel van Herk (“Is ITV the correct motion encompassing strategy”): magnitude of respiratory motion, effect of motion on radiotherapy, motion encompassing strategies, and software solutions to assist in motion encompassing strategies. Then Paul Keall (“Make margins simple: Use real-time target tracking”) will discuss tracking: clinical drivers for tracking, current clinical status of tumor tracking, future tumor tracking technology, and margin challenges with and without tracking. Finally Daniel Low will discuss gating (“Gating is the best ITV killer”): why ITV in the first place, requirements for planning, requirements at the machine, and benefits and costs. The session will end with a discussion and live demo of motion simulation software to illustrate the issues and explain the relative benefits and appropriate uses of the three methods. Learning Objectives: Explain the 4D imaging and treatment planning process. Summarize the various approaches to deal with respiratory motion during radiotherapy. Discuss the tradeoffs involved when choosing one of the three discussed approaches. Explain in which situation each method is the best choice. Research is partly funded by Elekta Oncology Systems and the Dutch Cancer Foundation; M. van Herk: part of the research was funded by Elekta Oncology Systems and the Dutch Cancer Foundation.

  17. An MRI-Compatible Robotic System With Hybrid Tracking for MRI-Guided Prostate Intervention

    PubMed Central

    Krieger, Axel; Iordachita, Iulian I.; Guion, Peter; Singh, Anurag K.; Kaushal, Aradhana; Ménard, Cynthia; Pinto, Peter A.; Camphausen, Kevin; Fichtinger, Gabor

    2012-01-01

    This paper reports the development, evaluation, and first clinical trials of the access to the prostate tissue (APT) II system—a scanner independent system for magnetic resonance imaging (MRI)-guided transrectal prostate interventions. The system utilizes novel manipulator mechanics employing a steerable needle channel and a novel six degree-of-freedom hybrid tracking method, comprising passive fiducial tracking for initial registration and subsequent incremental motion measurements. Targeting accuracy of the system in prostate phantom experiments and two clinical human-subject procedures is shown to compare favorably with existing systems using passive and active tracking methods. The portable design of the APT II system, using only standard MRI image sequences and minimal custom scanner interfacing, allows the system to be easily used on different MRI scanners. PMID:22009867

  18. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with built-in cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage of testing uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare tracking accuracy, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame skipping.

  19. Refraction-compensated motion tracking of unrestrained small animals in positron emission tomography.

    PubMed

    Kyme, Andre; Meikle, Steven; Baldock, Clive; Fulton, Roger

    2012-08-01

    Motion-compensated radiotracer imaging of fully conscious rodents represents an important paradigm shift for preclinical investigations. In such studies, if motion tracking is performed through a transparent enclosure containing the awake animal, light refraction at the interface will introduce errors in stereo pose estimation. We have performed a thorough investigation of how this impacts the accuracy of pose estimates and the resulting motion correction, and developed an efficient method to predict and correct for refraction-based error. The refraction model underlying this study was validated using a state-of-the-art motion tracking system. Refraction-based error was shown to be dependent on tracking marker size, working distance, and interface thickness and tilt. Correcting for refraction error improved the spatial resolution and quantitative accuracy of motion-corrected positron emission tomography images. Since the methods are general, they may also be useful in other contexts where data are corrupted by refraction effects.
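
    The geometric ingredient at the heart of such a correction is Snell's law in vector form, sketched below in Python for a ray crossing a planar enclosure wall. The refractive indices and example geometry are assumptions; the full correction described above additionally accounts for interface thickness, tilt, working distance, and marker size.

    ```python
    import numpy as np

    # Vector-form Snell's law for a ray crossing a planar interface
    # (e.g., air into acrylic). Indices and geometry are assumptions.

    def refract(d, n, n1=1.0, n2=1.49):
        """Refract unit direction d at a surface with unit normal n (pointing
        back toward the incident medium). Returns None on total internal reflection."""
        d = d / np.linalg.norm(d)
        n = n / np.linalg.norm(n)
        eta = n1 / n2
        cos_i = -float(np.dot(n, d))
        sin_t2 = eta**2 * (1.0 - cos_i**2)
        if sin_t2 > 1.0:
            return None
        return eta * d + (eta * cos_i - np.sqrt(1.0 - sin_t2)) * n

    # A ray hitting the enclosure wall at 30 degrees bends toward the normal,
    # shifting where a tracking marker behind the wall appears to be.
    incident = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
    wall_normal = np.array([0.0, 0.0, 1.0])
    print(refract(incident, wall_normal))
    ```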

  20. Tracking prominent points in image sequences

    NASA Astrophysics Data System (ADS)

    Hahn, Michael

    1994-03-01

    Measuring image motion and inferring scene geometry and camera motion are main aspects of image sequence analysis. The determination of image motion and the structure-from-motion problem are tasks that can be addressed independently or in cooperative processes. In this paper we focus on tracking prominent points. High stability, reliability, and accuracy are criteria for the extraction of prominent points. This implies that tracking should work quite well with those features; unfortunately, the reality looks quite different. In the experimental investigations we processed a long sequence of 128 images. This mono sequence is taken in an outdoor environment at the experimental field of Mercedes Benz in Rastatt. Different tracking schemes are explored and the results with respect to stability and quality are reported.

  1. Evaluation of Hands-On Clinical Exam Performance Using Marker-less Video Tracking.

    PubMed

    Azari, David; Pugh, Carla; Laufer, Shlomi; Cohen, Elaine; Kwan, Calvin; Chen, Chia-Hsiung Eric; Yen, Thomas Y; Hu, Yu Hen; Radwin, Robert

    2014-09-01

    This study investigates the potential of using marker-less video tracking of the hands for evaluating hands-on clinical skills. Experienced family practitioners attending a national conference were recruited and asked to conduct a breast examination on a simulator that reproduces different clinical presentations. Videos were made of the clinician's hands during the exam, and video processing software was used to track hand motion and quantify its kinematics. Practitioner motion patterns indicated consistent behavior of participants across multiple pathologies. Different pathologies exhibited characteristic motion patterns in the aggregate at specific parts of an exam, indicating consistent inter-participant behavior. Marker-less video kinematic tracking therefore shows promise in discriminating between different examination procedures, clinicians, and pathologies.

  2. WE-G-213CD-06: Implementation of Real-Time Tumor Tracking Using Robotic Couch.

    PubMed

    Buzurovic, I; Yu, Y; Podder, T

    2012-06-01

    The purpose of this study was to present a novel method for real-time tumor tracking using a commercially available robotic treatment couch and to evaluate the tumor tracking accuracy. Commercially available robotic couches are capable of positioning patients with a high level of accuracy; however, there is currently no provision for compensating tumor motion with these systems. Elekta's existing commercial couch (PreciseTM Table) was used without changing its design. To establish real-time couch motion for tracking, a novel control system was developed and implemented. The tabletop could be moved in the horizontal plane (laterally and longitudinally) using two Maxon 24 V motors with gearbox combinations. Vertical motion was obtained using a robust 70 V Rockwell Automation motor. For vertical motor position sensing, we used a Model 755A Accu-Coder encoder. Two Baumer ITD_01_4mm shaft encoders were used for the lateral and longitudinal motions of the couch. The motors were connected to Advanced Motion Controls (AMC) amplifiers: an AMC-20A20-INV amplifier was used for the vertical motion, and two AMC-Z6A8 amplifiers were applied for the lateral and longitudinal couch motions. The Galil DMC-4133 controller was connected to a standard PC using a USB port. The system had two independent power supplies: a Galil PSR-12-24-12A 24 VDC power supply with diodes for the controller and the 24 VDC motors and amplifiers, and a Galil PS300W72 72 VDC power supply for the vertical motion. Control algorithms were developed for position and velocity adjustment. The system was tested for real-time tracking over a range of 50 mm in all three directions (superior-inferior, lateral, anterior-posterior). Accuracies were 0.15, 0.20, and 0.18 mm, respectively. Repeatability of the desired motion was within ±0.2 mm. Experimental results of couch tracking show the feasibility of real-time tumor tracking with a high level of accuracy (within the sub-millimeter range). This tracking technique potentially offers a simple and effective method to minimize irradiation of healthy tissue. Acknowledgement: Study supported by Elekta, Ltd.

  3. First Steps Toward Ultrasound-Based Motion Compensation for Imaging and Therapy: Calibration with an Optical System and 4D PET Imaging

    PubMed Central

    Schwaab, Julia; Kurz, Christopher; Sarti, Cristina; Bongers, André; Schoenahl, Frédéric; Bert, Christoph; Debus, Jürgen; Parodi, Katia; Jenne, Jürgen Walter

    2015-01-01

    Target motion, particularly in the abdomen, due to respiration or patient movement is still a challenge in many diagnostic and therapeutic processes. Hence, methods to detect and compensate for this motion are required. Diagnostic ultrasound (US) represents a non-invasive and dose-free alternative to fluoroscopy, providing more information about internal target motion than a respiration belt or optical tracking. The goal of this project is to develop US-based motion tracking for real-time motion correction in radiation therapy and diagnostic imaging, notably in 4D positron emission tomography (PET). In this work, a workflow is established to enable the transformation of US tracking data to the coordinates of the treatment delivery or imaging system, even if the US probe is moving due to respiration. It is shown that the US tracking signal is as adequate for 4D PET image reconstruction as the clinically used respiration belt and provides additional opportunities in this regard. Furthermore, it is demonstrated that the US probe being within the PET field of view generally has no relevant influence on the image quality. The accuracy and precision of all the steps in the calibration workflow for US tracking-based 4D PET imaging are found to be in an acceptable range for clinical implementation. Finally, we show in vitro that US-based motion tracking in absolute room coordinates with a moving US transducer is feasible. PMID:26649277

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petasecca, M., E-mail: marcop@uow.edu.au; Newall, M. K.; Aldosari, A. H.

    Purpose: Spatial and temporal resolution are two of the most important features of quality assurance instrumentation for motion-adaptive radiotherapy modalities. The goal of this work is to characterize the performance of the 2D high spatial resolution monolithic silicon diode array named “MagicPlate-512” for quality assurance of stereotactic body radiation therapy (SBRT) and stereotactic radiosurgery (SRS) combined with a dynamic multileaf collimator (MLC) tracking technique for motion compensation. Methods: MagicPlate-512 is used in combination with the movable platform HexaMotion and a research version of the radiofrequency tracking system Calypso driving MLC tracking software. The authors reconstruct 2D dose distributions of small square field beams in three modalities: in static conditions, mimicking the temporal movement pattern of a lung tumor, and tracking the moving target while the MLC compensates almost instantaneously for the tumor displacement. Use of Calypso in combination with MagicPlate-512 requires proper radiofrequency interference shielding. The impact of the shielding on dosimetry was simulated with GEANT4 and verified experimentally. The temporal and spatial resolution of the dosimetry system also allows for accurate verification of segments of complex stereotactic radiotherapy plans with identification of the instant and location at which a certain dose is delivered. This feature allows for retrospective temporal reconstruction of the delivery process and easy identification of errors in the tracking or multileaf collimator driving systems. A sliding MLC wedge combined with the lung motion pattern was measured. The ability of MagicPlate-512 (MP512) to map 2D dose in all three modes of operation was benchmarked against EBT3 film. Results: The full width at half maximum and penumbra of the moving and stationary dose profiles measured by EBT3 film and MagicPlate-512 confirm that motion has a significant impact on the dose distribution. The motion, no-motion, and motion-with-MLC-tracking profiles agreed within 1 and 0.4 mm, respectively, for all field sizes tested. Use of the electromagnetic tracking system generates a fluctuation of the detector baseline of up to 10% of the full-scale signal, requiring a proper shielding strategy. MagicPlate-512 is also able to reconstruct the dose variation pulse by pulse in each pixel of the detector. An analysis of the dose transients with motion and with motion plus tracking shows that the tracking feedback algorithm used for this experiment can effectively compensate only for the slower transient components. The fast-changing components of the organ motion contribute only a discrepancy on the order of 15% in the penumbral region, while the slower components can change the dose profile by up to 75% of the expected dose. Conclusions: MagicPlate-512 is shown to be, potentially, a valid alternative to film or 2D ionization chambers for quality assurance dosimetry in SRS or SBRT. Its high spatial and temporal resolution allows for accurate reconstruction of the profile in any condition, with motion and with tracking of the motion. It shows excellent performance in reconstructing the dose deposition in real time or retrospectively as a function of time for detailed analysis of the effect of motion in a specific pixel or area of interest.

  5. The effect of attention loading on the inhibition of choice reaction time to visual motion by concurrent rotary motion

    NASA Technical Reports Server (NTRS)

    Looper, M.

    1976-01-01

    This study investigates the influence of attention loading on the established intersensory effects of passive bodily rotation on choice reaction time (RT) to visual motion. Subjects sat at the center of rotation in an enclosed rotating chamber and observed an oscilloscope on which were, in the center, a tracking display and, 10 deg left of center, an RT line. Three tracking tasks and a no-tracking control condition were presented to all subjects in combination with the RT task, which occurred with and without concurrent cab rotations. Choice RT to line motions was inhibited (probability less than .001) both when there was simultaneous vestibular stimulation and when there was a tracking task; response latencies lengthened progressively with increased similarity between the RT and tracking tasks. However, the attention conditions did not affect the intersensory effect; the significance of this for the nature of the sensory interaction is discussed.

  6. A Methodology for Evaluating the Hygroscopic Behavior of Wood in Adaptive Building Skins using Motion Grammar

    NASA Astrophysics Data System (ADS)

    El-Dabaa, Rana; Abdelmohsen, Sherif

    2018-05-01

    The challenge in designing kinetic architecture lies in the limited application of computational design and human-computer interaction to the design of intelligent and interactive building interfaces. The use of ‘programmable materials’, composite materials specifically fabricated to afford motion upon stimulation, is promising for low-cost, low-tech kinetic facade systems in buildings. Despite efforts to develop working prototypes, there has been no clear methodological framework for understanding and controlling the behavior of programmable materials or for using them for such purposes. This paper introduces a methodology for evaluating the motion acquired from programmed material, resulting from the hygroscopic behavior of wood, through ‘motion grammar’, which allows the desired motion control to be described in a computationally tractable way. The paper analyzes and evaluates motion parameters related to the hygroscopic properties and behavior of wood and introduces a framework for tracking and controlling wood as a programmable material for kinetic architecture.

  7. Slushy weightings for the optimal pilot model. [considering visual tracking task

    NASA Technical Reports Server (NTRS)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Secondly, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.

  8. Dazzle camouflage, target tracking, and the confusion effect.

    PubMed

    Hogan, Benedict G; Cuthill, Innes C; Scott-Samuel, Nicholas E

    2016-01-01

    The influence of coloration on the ecology and evolution of moving animals in groups is poorly understood. Animals in groups benefit from the "confusion effect," where predator attack success is reduced with increasing group size or density. This is thought to be due to a sensory bottleneck: an increase in the difficulty of tracking one object among many. Motion dazzle camouflage has been hypothesized to disrupt accurate perception of the trajectory or speed of an object or animal. The current study investigates the suggestion that dazzle camouflage may enhance the confusion effect. Utilizing a computer game style experiment with human predators, we found that when moving in groups, targets with stripes parallel to the targets' direction of motion interact with the confusion effect to a greater degree, and are harder to track, than those with more conventional background matching patterns. The findings represent empirical evidence that some high-contrast patterns may benefit animals in groups. The results also highlight the possibility that orientation and turning may be more relevant in the mechanisms of dazzle camouflage than previously recognized.

  9. Motion Tracking of the Carotid Artery Wall From Ultrasound Image Sequences: a Nonlinear State-Space Approach.

    PubMed

    Gao, Zhifan; Li, Yanjie; Sun, Yuanyuan; Yang, Jiayuan; Xiong, Huahua; Zhang, Heye; Liu, Xin; Wu, Wanqing; Liang, Dong; Li, Shuo

    2018-01-01

    The motion of the common carotid artery (CCA) wall has been established to be useful in the early diagnosis of atherosclerotic disease. However, tracking the CCA wall motion from ultrasound images remains a challenging task. In this paper, a nonlinear state-space approach has been developed to track CCA wall motion from ultrasound sequences. In this approach, a nonlinear state-space equation with a time-variant control signal was constructed from a mathematical model of the dynamics of the CCA wall. Then, the unscented Kalman filter (UKF) was adopted to solve the nonlinear state-transition function in order to evolve the state of the target tissue, which involves estimation of the motion trajectory of the CCA wall from noisy ultrasound images. The performance of this approach has been validated on 30 simulated ultrasound sequences and a real ultrasound dataset of 103 subjects by comparing the motion tracking results obtained in this study to those of three state-of-the-art methods and to the manual tracing performed by two experienced ultrasound physicians. The experimental results demonstrate that the proposed approach is highly correlated with (intra-class correlation coefficient ≥ 0.9948 for the longitudinal motion and ≥ 0.9966 for the radial motion) and agrees well with (95% confidence interval width of 0.8871 mm for the longitudinal motion and 0.4159 mm for the radial motion) the manual tracing method on real data, and it also exhibits high accuracy on simulated data (0.1161 to 0.1260 mm). These results demonstrate the effectiveness of the proposed approach for motion tracking of the CCA wall.
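
    As a much-simplified stand-in for the nonlinear state-space formulation, the Python sketch below tracks a noisy wall-displacement signal with a linear constant-velocity Kalman filter. The frame rate, noise covariances, and synthetic motion trace are assumptions; the paper's model is nonlinear and is solved with an unscented Kalman filter rather than the linear filter shown here.

    ```python
    import numpy as np

    # Linear constant-velocity Kalman tracker as a simplified stand-in for the
    # paper's UKF-based nonlinear state-space model. All parameters below are
    # illustrative assumptions.

    DT = 1.0 / 50.0                       # assumed ultrasound frame interval (s)
    F = np.array([[1.0, DT], [0.0, 1.0]]) # state: [position, velocity]
    H = np.array([[1.0, 0.0]])            # only position is measured
    Q = np.diag([1e-5, 1e-3])             # process noise
    R = np.array([[0.05**2]])             # measurement noise (mm^2)

    def kalman_track(measurements):
        x = np.zeros(2)
        P = np.eye(2)
        track = []
        for z in measurements:
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update with the noisy wall position from speckle/block matching
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
            track.append(x[0])
        return np.array(track)

    t = np.arange(0, 3, DT)
    true_pos = 0.4 * np.sin(2 * np.pi * t)                  # mm, ~1 Hz cardiac motion
    noisy = true_pos + np.random.default_rng(2).normal(0, 0.05, t.size)
    smoothed = kalman_track(noisy)
    ```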

  10. Fusion of Cross-Track TerraSAR-X PS Point Clouds over Las Vegas

    NASA Astrophysics Data System (ADS)

    Wang, Ziyun; Balz, Timo; Wei, Lianhuan; Liao, Mingsheng

    2014-11-01

    Persistent scatterer interferometry (PS-InSAR) is widely used in radar remote sensing. However, because the surface motion is estimated in the line-of-sight (LOS) direction, it is not possible to differentiate between vertical and horizontal surface motions from a single stack. Cross-track data, i.e., the combination of data from ascending and descending orbits, allow us to better analyze the deformation and to obtain 3D motion information. We implemented a cross-track fusion of PS-InSAR point cloud data, making it possible to separate the vertical and horizontal components of the surface motion.
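
    The reason cross-track fusion resolves the ambiguity can be shown in a few lines of Python: each LOS measurement is a different linear combination of the vertical and east-west components, so ascending and descending observations form a solvable 2x2 system. The incidence angles below are assumptions, and the north-south component and satellite heading are neglected for clarity.

    ```python
    import numpy as np

    # Simplified LOS decomposition from ascending + descending observations.
    # Incidence angles are assumed; heading and north-south motion ignored.

    INC_ASC, INC_DSC = np.radians(36.0), np.radians(41.0)

    # LOS = cos(inc)*d_up + sin(inc)*d_east   (ascending, right-looking, simplified)
    # LOS = cos(inc)*d_up - sin(inc)*d_east   (descending, right-looking, simplified)
    A = np.array([[np.cos(INC_ASC),  np.sin(INC_ASC)],
                  [np.cos(INC_DSC), -np.sin(INC_DSC)]])

    def decompose(los_asc, los_dsc):
        """Solve for (d_up, d_east) from the two LOS rates (same units in/out)."""
        return np.linalg.solve(A, np.array([los_asc, los_dsc]))

    print(decompose(-4.2, -3.1))   # e.g. mostly subsidence with a small east shift
    ```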

  11. A Tool for the Automated Collection of Space Utilization Data: Three Dimensional Space Utilization Monitor

    NASA Technical Reports Server (NTRS)

    Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.

    2017-01-01

    The Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP) and the Behavioral Health and Performance (BHP) Element are conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within the volume. NASA needs methods to unobtrusively collect NHV data without impacting crew time. Required data include metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments such methods exist, yet many are obtrusive and require significant post-processing. Examples used in terrestrial settings include infrared (IR) retro-reflective marker based motion capture, GPS sensor tracking, inertial tracking, and multi-camera methods. Due to the constraints of space operations, many such methods are infeasible: inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. However, multiple technologies have not yet been applied to space operations for these purposes. Two of these are 3D Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems which allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).

  12. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters.

    PubMed

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-09-07

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter (CF) methods perform favorably in robustness, accuracy and speed. However, they also have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also handles the estimation of scale variation in many scenarios. We theoretically analyze the problem that motion poses for CFs and utilize the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF in addition to the ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs favorably compared with top-ranked trackers.
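
    As background on the correlation-filter machinery the record builds on, the sketch below shows the core of a single-channel linear correlation filter: ridge regression trained in the Fourier domain and a response map for localization, in the spirit of MOSSE/KCF-style trackers. It is a simplified linear stand-in, not the kernelized, scale-adaptive tracker described in the paper, and the regularization weight and Gaussian label width are assumptions.

    ```python
    # Hedged sketch: a single-channel linear correlation filter (MOSSE-style ridge
    # regression in the Fourier domain) and its detection step. Practical trackers
    # also apply a cosine window and use multi-channel or kernelized features.
    import numpy as np

    def gaussian_label(shape, sigma=2.0):
        """Desired response: a Gaussian peak at the patch centre."""
        h, w = shape
        ys, xs = np.mgrid[0:h, 0:w]
        return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

    def train_filter(patch, lam=1e-2):
        """Closed-form ridge regression: H* = (Y X*) / (X X* + lambda)."""
        X = np.fft.fft2(patch - patch.mean())
        Y = np.fft.fft2(gaussian_label(patch.shape))
        return (Y * np.conj(X)) / (X * np.conj(X) + lam)

    def detect(H, search_patch):
        """Correlate the filter with a new patch; return the peak displacement."""
        Z = np.fft.fft2(search_patch - search_patch.mean())
        response = np.real(np.fft.ifft2(H * Z))
        dy, dx = np.unravel_index(np.argmax(response), response.shape)
        h, w = response.shape
        return (dy - h // 2, dx - w // 2), response.max()

    # Usage on a synthetic template: the peak offset recovers the circular shift.
    rng = np.random.default_rng(0)
    template = rng.random((64, 64))
    H = train_filter(template)
    offset, score = detect(H, np.roll(template, (3, -5), axis=(0, 1)))
    print(offset)  # expected (3, -5)
    ```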

  13. Modulation of high-frequency vestibuloocular reflex during visual tracking in humans

    NASA Technical Reports Server (NTRS)

    Das, V. E.; Leigh, R. J.; Thomas, C. W.; Averbuch-Heller, L.; Zivotofsky, A. Z.; Discenna, A. O.; Dell'Osso, L. F.

    1995-01-01

    1. Humans may visually track a moving object either when they are stationary or in motion. To investigate visual-vestibular interaction during both conditions, we compared horizontal smooth pursuit (SP) and active combined eye-head tracking (CEHT) of a target moving sinusoidally at 0.4 Hz in four normal subjects while the subjects were either stationary or vibrated in yaw at 2.8 Hz. We also measured the visually enhanced vestibuloocular reflex (VVOR) during vibration in yaw at 2.8 Hz over a peak head velocity range of 5-40 degrees/s. 2. We found that the gain of the VVOR at 2.8 Hz increased in all four subjects as peak head velocity increased (P < 0.001), with minimal phase changes, such that mean retinal image slip was held below 5 degrees/s. However, no corresponding modulation in vestibuloocular reflex gain occurred with increasing peak head velocity during a control condition when subjects were rotated in darkness. 3. During both horizontal SP and CEHT, tracking gains were similar, and the mean slip speed of the target's image on the retina was held below 5.5 degrees/s whether subjects were stationary or being vibrated at 2.8 Hz. During both horizontal SP and CEHT of target motion at 0.4 Hz, while subjects were vibrated in yaw, VVOR gain for the 2.8-Hz head rotations was similar to or higher than that achieved during fixation of a stationary target. This is in contrast to the decrease of VVOR gain that is reported while stationary subjects perform CEHT.(ABSTRACT TRUNCATED AT 250 WORDS).

  14. Gaze-contingent soft tissue deformation tracking for minimally invasive robotic surgery.

    PubMed

    Mylonas, George P; Stoyanov, Danail; Deligianni, Fani; Darzi, Ara; Yang, Guang-Zhong

    2005-01-01

    The introduction of surgical robots in Minimally Invasive Surgery (MIS) has allowed enhanced manual dexterity through the use of microprocessor controlled mechanical wrists. Although fully autonomous robots are attractive, both ethical and legal barriers can prohibit their practical use in surgery. The purpose of this paper is to demonstrate that it is possible to use real-time binocular eye tracking for empowering robots with human vision by using knowledge acquired in situ. By utilizing the close relationship between the horizontal disparity and the depth perception varying with the viewing distance, it is possible to use ocular vergence for recovering 3D motion and deformation of the soft tissue during MIS procedures. Both phantom and in vivo experiments were carried out to assess the potential frequency limit of the system and its intrinsic depth recovery accuracy. The potential applications of the technique include motion stabilization and intra-operative planning in the presence of large tissue deformation.

  15. A neural-based remote eye gaze tracker under natural head motion.

    PubMed

    Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso

    2008-10-01

    A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to light conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.
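
    The record above maps webcam eye features to one of 15 gaze zones with a neural network. The sketch below mirrors that idea with a small scikit-learn multilayer perceptron; the 6-element feature vector, the synthetic data, and the way the 15 zones are generated are invented for illustration and do not reproduce the authors' feature extraction.

    ```python
    # Hedged sketch: classify synthetic eye-feature vectors into 15 gaze zones
    # (3 columns x 5 rows) with a small neural network.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples = 1500
    # Assumed features: pupil-to-eye-corner offsets for both eyes plus a rough
    # head-pose estimate, 6 numbers per sample (purely illustrative).
    X = rng.normal(size=(n_samples, 6))
    cols = np.digitize(X[:, 0], [-0.43, 0.43])                 # 3 horizontal zones
    rows = np.digitize(X[:, 1], [-0.84, -0.25, 0.25, 0.84])    # 5 vertical zones
    y = rows * 3 + cols                                        # 15 zone labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print("zone classification accuracy:", round(clf.score(X_test, y_test), 3))
    ```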

  16. Linearized motion estimation for articulated planes.

    PubMed

    Datta, Ankur; Sheikh, Yaser; Kanade, Takeo

    2011-04-01

    In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
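
    The record above reduces articulation constraints to linear equality constraints on a least-squares system solved through a Karush-Kuhn-Tucker (KKT) system. The following generic sketch shows that numerical pattern, minimizing ||Ax - b||^2 subject to Cx = d by solving the KKT block system; the matrices are random placeholders rather than actual articulated-plane homography constraints.

    ```python
    # Hedged sketch: equality-constrained least squares via the KKT system
    #   [ 2 A^T A   C^T ] [x]   [ 2 A^T b ]
    #   [   C        0  ] [l] = [   d     ]
    import numpy as np

    def constrained_lstsq(A, b, C, d):
        n, m = A.shape[1], C.shape[0]
        KKT = np.block([[2 * A.T @ A, C.T],
                        [C, np.zeros((m, m))]])
        rhs = np.concatenate([2 * A.T @ b, d])
        sol = np.linalg.solve(KKT, rhs)
        return sol[:n]  # drop the Lagrange multipliers

    # Placeholder problem: 8 unknown motion parameters, 3 linear equality constraints
    rng = np.random.default_rng(1)
    A = rng.normal(size=(50, 8))
    b = rng.normal(size=50)
    C = rng.normal(size=(3, 8))
    d = rng.normal(size=3)

    x = constrained_lstsq(A, b, C, d)
    print("constraint residual:", np.linalg.norm(C @ x - d))  # ~0 up to round-off
    ```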

  17. On Integral Invariants for Effective 3-D Motion Trajectory Matching and Recognition.

    PubMed

    Shao, Zhanpeng; Li, Youfu

    2016-02-01

    Motion trajectories tracked from the motions of human, robots, and moving objects can provide an important clue for motion analysis, classification, and recognition. This paper defines some new integral invariants for a 3-D motion trajectory. Based on two typical kernel functions, we design two integral invariants, the distance and area integral invariants. The area integral invariants are estimated based on the blurred segment of noisy discrete curve to avoid the computation of high-order derivatives. Such integral invariants for a motion trajectory enjoy some desirable properties, such as computational locality, uniqueness of representation, and noise insensitivity. Moreover, our formulation allows the analysis of motion trajectories at a range of scales by varying the scale of kernel function. The features of motion trajectories can thus be perceived at multiscale levels in a coarse-to-fine manner. Finally, we define a distance function to measure the trajectory similarity to find similar trajectories. Through the experiments, we examine the robustness and effectiveness of the proposed integral invariants and find that they can capture the motion cues in trajectory matching and sign recognition satisfactorily.
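
    One way to make the idea of a distance integral invariant concrete is sketched below: for each sample of a discrete 3-D trajectory, distances to neighbouring samples are integrated under a Gaussian kernel whose scale controls the level of the multiscale analysis. This is a plausible discretization of that general idea, not necessarily the exact kernels or invariants defined in the paper.

    ```python
    # Hedged sketch: a kernel-weighted distance integral descriptor for a discrete
    # 3-D motion trajectory; varying sigma gives a coarse-to-fine representation.
    import numpy as np

    def distance_integral_invariant(traj, sigma=5.0):
        """traj: (N, 3) array of 3-D trajectory points sampled over time."""
        t = np.arange(len(traj))
        diff = traj[:, None, :] - traj[None, :, :]
        dists = np.linalg.norm(diff, axis=-1)           # pairwise point distances
        K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * sigma ** 2))
        K /= K.sum(axis=1, keepdims=True)               # row-normalized kernel
        return (K * dists).sum(axis=1)                  # one value per sample

    # Example: a noisy helical trajectory (illustrative only)
    s = np.linspace(0, 4 * np.pi, 200)
    traj = np.stack([np.cos(s), np.sin(s), 0.1 * s], axis=1)
    traj += np.random.default_rng(2).normal(scale=0.01, size=traj.shape)
    feature = distance_integral_invariant(traj, sigma=8.0)
    print(feature.shape)  # (200,); repeat with other sigma values for multiscale
    ```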

  18. Applications of artificial intelligence in safe human-robot interactions.

    PubMed

    Najmaei, Nima; Kermani, Mehrdad R

    2011-04-01

    The integration of industrial robots into the human workspace presents a set of unique challenges. This paper introduces a new sensory system for modeling, tracking, and predicting human motions within a robot workspace. A reactive control scheme to modify a robot's operations for accommodating the presence of the human within the robot workspace is also presented. To this end, a special class of artificial neural networks, namely, self-organizing maps (SOMs), is employed for obtaining a superquadric-based model of the human. The SOM network receives information of the human's footprints from the sensory system and infers necessary data for rendering the human model. The model is then used in order to assess the danger of the robot operations based on the measured as well as predicted human motions. This is followed by the introduction of a new reactive control scheme that results in the least interferences between the human and robot operations. The approach enables the robot to foresee an upcoming danger and take preventive actions before the danger becomes imminent. Simulation and experimental results are presented in order to validate the effectiveness of the proposed method.

  19. MagicPlate-512: A 2D silicon detector array for quality assurance of stereotactic motion adaptive radiotherapy.

    PubMed

    Petasecca, M; Newall, M K; Booth, J T; Duncan, M; Aldosari, A H; Fuduli, I; Espinoza, A A; Porumb, C S; Guatelli, S; Metcalfe, P; Colvill, E; Cammarano, D; Carolan, M; Oborn, B; Lerch, M L F; Perevertaylo, V; Keall, P J; Rosenfeld, A B

    2015-06-01

    Spatial and temporal resolution are two of the most important features of quality assurance instrumentation for motion-adaptive radiotherapy modalities. The goal of this work is to characterize the performance of the 2D high-spatial-resolution monolithic silicon diode array named "MagicPlate-512" for quality assurance of stereotactic body radiation therapy (SBRT) and stereotactic radiosurgery (SRS) combined with a dynamic multileaf collimator (MLC) tracking technique for motion compensation. MagicPlate-512 is used in combination with the movable platform HexaMotion and a research version of the Calypso radiofrequency tracking system driving MLC tracking software. The authors reconstruct 2D dose distributions of small square fields in three modalities: in static conditions, mimicking the temporal movement pattern of a lung tumor, and tracking the moving target while the MLC compensates almost instantaneously for the tumor displacement. Use of Calypso in combination with MagicPlate-512 requires proper radiofrequency interference shielding; the impact of the shielding on dosimetry was simulated with GEANT4 and verified experimentally. The temporal and spatial resolution of the dosimetry system also allows for accurate verification of segments of complex stereotactic radiotherapy plans, with identification of the instant and location where a certain dose is delivered. This feature allows for retrospective temporal reconstruction of the delivery process and easy identification of errors in the tracking or multileaf collimator driving systems. A sliding MLC wedge combined with the lung motion pattern was measured. The 2D dose mapping ability of the MagicPlate-512 (MP512) in all three modes of operation was benchmarked against EBT3 film. The full width at half maximum and penumbra of the moving and stationary dose profiles measured by EBT3 film and MagicPlate-512 confirm that motion has a significant impact on the dose distribution. Motion, no-motion, and motion-with-MLC-tracking profiles agreed within 1 and 0.4 mm, respectively, for all field sizes tested. Use of the electromagnetic tracking system generates fluctuations of the detector baseline of up to 10% of the full-scale signal, requiring a proper shielding strategy. MagicPlate-512 is also able to reconstruct the dose variation pulse by pulse in each pixel of the detector. An analysis of the dose transients with motion and with motion plus tracking shows that the tracking feedback algorithm used in this experiment can effectively compensate only for the slower transient components: the fast-changing components of the organ motion contribute discrepancies of the order of 15% in the penumbral region, while the slower components can change the dose profile by up to 75% of the expected dose. MagicPlate-512 is thus shown to be a potentially valid alternative to film or 2D ionization chambers for quality assurance dosimetry in SRS or SBRT. Its high spatial and temporal resolution allows for accurate reconstruction of the profile under any conditions, with motion and with motion tracking, and it shows excellent performance in reconstructing the dose deposition in real time or retrospectively as a function of time for detailed analysis of the effect of motion in a specific pixel or area of interest.

  20. Detecting and Analyzing Multiple Moving Objects in Crowded Environments with Coherent Motion Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M.

    Understanding the world around us from large-scale video data requires vision systems that can perform automatic interpretation. While human eyes can unconsciously perceive independent objects in crowded scenes and other challenging operating environments, automated systems have difficulty detecting, counting, and understanding their behavior in similar scenes. Computer scientists at ORNL have developed a technology termed "Coherent Motion Region Detection" that involves identifying multiple independent moving objects in crowded scenes by aggregating low-level motion cues extracted from moving objects. Humans and other species exploit such low-level motion cues seamlessly to perform perceptual grouping for visual understanding. The algorithm detects and tracks feature points on moving objects, resulting in partial trajectories that span coherent 3D regions in the space-time volume defined by the video. In the case of multi-object motion, many possible coherent motion regions can be constructed around the set of trajectories. The unique approach of the algorithm is to identify all possible coherent motion regions and then extract a subset of motion regions based on an innovative measure to automatically locate moving objects in crowded environments. The software reports a snapshot of each object, a count, and derived statistics (count over time) from input video streams. The software can process videos streamed over the internet or directly from a hardware device (camera).

  1. Highest Resolution In Vivo Human Brain MRI Using Prospective Motion Correction

    PubMed Central

    Stucht, Daniel; Danishad, K. Appu; Schulze, Peter; Godenschweger, Frank; Zaitsev, Maxim; Speck, Oliver

    2015-01-01

    High field MRI systems, such as 7 Tesla (T) scanners, can deliver higher signal to noise ratio (SNR) than lower field scanners and thus allow for the acquisition of data with higher spatial resolution, which is often demanded by users in the fields of clinical and neuroscientific imaging. However, high resolution scans may require long acquisition times, which in turn increase the discomfort for the subject and the risk of subject motion. Even with a cooperative and trained subject, involuntary motion due to heartbeat, swallowing, respiration and changes in muscle tone can cause image artifacts that reduce the effective resolution. In addition, scanning with higher resolution leads to increased sensitivity to even very small movements. Prospective motion correction (PMC) at 3T and 7T has proven to increase image quality in case of subject motion. Although the application of prospective motion correction is becoming more popular, previous articles focused on proof of concept studies and technical descriptions, whereas this paper briefly describes the technical aspects of the optical tracking system, marker fixation and cross calibration and focuses on the application of PMC to very high resolution imaging without intentional motion. In this study we acquired in vivo MR images at 7T using prospective motion correction during long acquisitions. As a result, we present images among the highest, if not the highest resolution of in vivo human brain MRI ever acquired. PMID:26226146

  2. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery

    PubMed Central

    Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng

    2016-01-01

    Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness. PMID:27023564

  3. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery.

    PubMed

    Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng

    2016-03-26

    Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness.
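
    The hybrid descriptor in the two records above concatenates HOG and DCT features and classifies candidate blobs with a linear SVM. The sketch below shows that general pipeline with scikit-image and scikit-learn on synthetic patches; the patch size, HOG parameters, retained DCT block, and random labels are assumptions standing in for annotated UAV thermal data.

    ```python
    # Hedged sketch: classify candidate pedestrian blobs with a linear SVM over a
    # concatenated HOG + DCT descriptor. Images and labels here are synthetic.
    import numpy as np
    from skimage.feature import hog
    from scipy.fft import dctn
    from sklearn.svm import LinearSVC

    def hybrid_descriptor(patch, dct_size=8):
        """HOG of the patch concatenated with the low-frequency block of its 2-D DCT."""
        hog_feat = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2))
        dct_feat = dctn(patch, norm='ortho')[:dct_size, :dct_size].ravel()
        return np.concatenate([hog_feat, dct_feat])

    # Placeholder training data: 64x32 thermal patches (pedestrian vs. background)
    rng = np.random.default_rng(0)
    patches = rng.random(size=(200, 64, 32))
    labels = rng.integers(0, 2, size=200)  # would come from annotated UAV frames

    X = np.array([hybrid_descriptor(p) for p in patches])
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X, labels)
    print("training accuracy on synthetic data:", round(clf.score(X, labels), 3))
    ```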

  4. Shape-and-behavior encoded tracking of bee dances.

    PubMed

    Veeraraghavan, Ashok; Chellappa, Rama; Srinivasan, Mandyam

    2008-03-01

    Behavior analysis of social insects has garnered impetus in recent years and has led to advances in fields such as control systems and flight navigation. Manual labeling of the insect motions required for behavior analysis demands a significant investment of time and effort. In this paper, we propose certain general principles that help in simultaneous automatic tracking and behavior analysis, with applications in tracking bees and recognizing specific behaviors exhibited by them. The state space for tracking is defined using the position, orientation and current behavior of the insect being tracked. The position and orientation are parametrized using a shape model, while the behavior is explicitly modeled using a three-tier hierarchical motion model. The first tier (dynamics) models the local motions exhibited, and the models built in this tier act as a vocabulary for behavior modeling. The second tier is a Markov motion model built on top of the local motion vocabulary, which serves as the behavior model. The third tier of the hierarchy models the switching between behaviors and is also modeled as a Markov model. We address issues in learning the three-tier behavioral model, in discriminating between models, and in detecting and modeling abnormal behaviors. Another important aspect of this work is that it leads to joint tracking and behavior analysis instead of the traditional track-then-recognize approach. We apply these principles to tracking bees in a hive while they are executing the waggle dance and the round dance.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, M; Yuan, Y; Lo, Y

    Purpose: To develop a novel strategy to extract lung tumor motion from cone beam CT (CBCT) projections using an active contour model with interpolated respiration learned from diaphragm motion. Methods: Tumor tracking on CBCT projections was accomplished with templates derived from the planning CT (pCT). There are three major steps in the proposed algorithm: 1) The pCT was modified to form two CT sets, a tumor-removed pCT and a tumor-only pCT, and the respective digitally reconstructed radiographs, DRRtr and DRRto, were generated following the same geometry as the CBCT projections. 2) The DRRtr was rigidly registered with the CBCT projections on a frame-by-frame basis. Difference images between the CBCT projections and the registered DRRtr were generated, in which the tumor visibility was appreciably enhanced. 3) An active contour method was applied to track the tumor motion on the tumor-enhanced projections, with DRRto as templates to initialize the tumor tracking, while the respiratory motion was compensated for by interpolating the diaphragm motion estimated by our novel constrained linear regression approach. CBCT and pCT scans from five patients undergoing stereotactic body radiotherapy were included, in addition to scans from a QUASAR phantom programmed with known motion. Manual tumor tracking was performed on CBCT projections and compared to the automatic tracking to evaluate the algorithm's accuracy. Results: The phantom study showed that the error between the automatic tracking and the ground truth was within 0.2 mm. For the patients, the discrepancy between the calculation and the manual tracking was between 1.4 and 2.2 mm, depending on the location and shape of the lung tumor. Similar patterns were observed in the frequency domain. Conclusion: The new algorithm demonstrated the feasibility of tracking the lung tumor from noisy CBCT projections, providing a potential solution to better motion management for lung radiation therapy.

  6. In vivo validation of patellofemoral kinematics during overground gait and stair ascent.

    PubMed

    Pitcairn, Samuel; Lesniak, Bryson; Anderst, William

    2018-06-18

    The patellofemoral (PF) joint is a common site for non-specific anterior knee pain. The pathophysiology of patellofemoral pain may be related to abnormal motion of the patella relative to the femur, leading to increased stress at the patellofemoral joint. Patellofemoral motion cannot be accurately measured using conventional motion capture. The aim of this study was to determine the accuracy of a biplane radiography system for measuring in vivo PF motion during walking and stair ascent. Four subjects had three 1.0 mm diameter tantalum beads implanted into the patella. Participants performed three trials each of over ground walking and stair ascent while biplane radiographs were collected at 100 Hz. Patella motion was tracked using radiostereophotogrammetric analysis (RSA) as a "gold standard", and compared to a volumetric CT model-based tracking algorithm that matched digitally reconstructed radiographs to the original biplane radiographs. The average RMS difference between the RSA and model-based tracking was 0.41 mm and 1.97° when there was no obstruction from the contralateral leg. These differences increased by 34% and 40%, respectively, when the patella was at least partially obstructed by the contralateral leg. The average RMS difference in patellofemoral joint space between tracking methods was 0.9 mm or less. Previous validations of biplane radiographic systems have estimated tracking accuracy by moving cadaveric knees through simulated motions. These validations were unable to replicate in vivo kinematics, including patella motion due to muscle activation, and failed to assess the imaging and tracking challenges related to contralateral limb obstruction. By replicating the muscle contraction, movement velocity, joint range of motion, and obstruction of the patella by the contralateral limb, the present study provides a realistic estimate of patellofemoral tracking accuracy for future in vivo studies. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Reliable motion detection of small targets in video with low signal-to-clutter ratios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, S.A.; Naylor, R.B.

    1995-07-01

    Studies show that vigilance decreases rapidly after several minutes when human operators are required to search live video for infrequent intrusion detections. Therefore, there is a need for systems which can automatically detect targets in live video and reserve the operator's attention for assessment only. Thus far, automated systems have not simultaneously provided adequate detection sensitivity, false alarm suppression, and ease of setup when used in external, unconstrained environments. This unsatisfactory performance can be exacerbated by poor video imagery with low contrast, high noise, dynamic clutter, image misregistration, and/or the presence of small, slow, or erratically moving targets. This paper describes a highly adaptive video motion detection and tracking algorithm which has been developed as part of Sandia's Advanced Exterior Sensor (AES) program. The AES is a wide-area detection and assessment system for use in unconstrained exterior security applications. The AES detection and tracking algorithm provides good performance under stressing data and environmental conditions. Features of the algorithm include: reliable detection with negligible false alarm rate of variable velocity targets having low signal-to-clutter ratios; reliable tracking of targets that exhibit motion that is non-inertial, i.e., varies in direction and velocity; automatic adaptation to both infrared and visible imagery with variable quality; and suppression of false alarms caused by sensor flaws and/or cutouts.

  8. A four-dimensional motion field atlas of the tongue from tagged and cine magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Xing, Fangxu; Prince, Jerry L.; Stone, Maureen; Wedeen, Van J.; El Fakhri, Georges; Woo, Jonghye

    2017-02-01

    Representation of human tongue motion using three-dimensional vector fields over time can be used to better understand tongue function during speech, swallowing, and other lingual behaviors. To characterize the inter-subject variability of the tongue's shape and motion of a population carrying out one of these functions it is desirable to build a statistical model of the four-dimensional (4D) tongue. In this paper, we propose a method to construct a spatio-temporal atlas of tongue motion using magnetic resonance (MR) images acquired from fourteen healthy human subjects. First, cine MR images revealing the anatomical features of the tongue are used to construct a 4D intensity image atlas. Second, tagged MR images acquired to capture internal motion are used to compute a dense motion field at each time frame using a phase-based motion tracking method. Third, motion fields from each subject are pulled back to the cine atlas space using the deformation fields computed during the cine atlas construction. Finally, a spatio-temporal motion field atlas is created to show a sequence of mean motion fields and their inter-subject variation. The quality of the atlas was evaluated by deforming cine images in the atlas space. Comparison between deformed and original cine images showed high correspondence. The proposed method provides a quantitative representation to observe the commonality and variability of the tongue motion field for the first time, and shows potential in evaluation of common properties such as strains and other tensors based on motion fields.

  9. A Four-dimensional Motion Field Atlas of the Tongue from Tagged and Cine Magnetic Resonance Imaging.

    PubMed

    Xing, Fangxu; Prince, Jerry L; Stone, Maureen; Wedeen, Van J; Fakhri, Georges El; Woo, Jonghye

    2017-01-01

    Representation of human tongue motion using three-dimensional vector fields over time can be used to better understand tongue function during speech, swallowing, and other lingual behaviors. To characterize the inter-subject variability of the tongue's shape and motion of a population carrying out one of these functions it is desirable to build a statistical model of the four-dimensional (4D) tongue. In this paper, we propose a method to construct a spatio-temporal atlas of tongue motion using magnetic resonance (MR) images acquired from fourteen healthy human subjects. First, cine MR images revealing the anatomical features of the tongue are used to construct a 4D intensity image atlas. Second, tagged MR images acquired to capture internal motion are used to compute a dense motion field at each time frame using a phase-based motion tracking method. Third, motion fields from each subject are pulled back to the cine atlas space using the deformation fields computed during the cine atlas construction. Finally, a spatio-temporal motion field atlas is created to show a sequence of mean motion fields and their inter-subject variation. The quality of the atlas was evaluated by deforming cine images in the atlas space. Comparison between deformed and original cine images showed high correspondence. The proposed method provides a quantitative representation to observe the commonality and variability of the tongue motion field for the first time, and shows potential in evaluation of common properties such as strains and other tensors based on motion fields.

  10. Simultaneous Tracking of Multiple Points Using a Wiimote

    NASA Astrophysics Data System (ADS)

    Skeffington, Alex; Scully, Kyle

    2012-11-01

    This paper reviews the construction of an inexpensive motion tracking and data logging system, which can be used for a wide variety of teaching experiments ranging from entry-level physics courses to advanced courses. The system utilizes an affordable infrared camera found in a Nintendo Wiimote to track IR LEDs mounted to the objects to be tracked. Two quick experiments are presented using the motion tracking system to demonstrate the diversity of tasks this system can handle. The first experiment uses the Wiimote to record the harmonic motion of oscillating masses on a near-frictionless surface, while the second experiment uses the Wiimote as part of a feedback mechanism in a rotational system. The construction, capabilities, demonstrations, and suggested improvements of the system are reported here.

  11. MotorSense: Using Motion Tracking Technology to Support the Identification and Treatment of Gross-Motor Dysfunction.

    PubMed

    Arnedillo-Sánchez, Inmaculada; Boyle, Bryan; Bossavit, Benoît

    2017-01-01

    MotorSense is a motion detection and tracking technology that can be implemented across a range of environments to assist in detecting delays in gross-motor skills development. The system utilises the motion tracking functionality of Microsoft's Kinect™. It features games that require children to perform graded gross-motor tasks matched with their chronological and developmental ages. This paper describes the rationale for MotorSense, provides an overview of the functionality of the system and illustrates sample activities.

  12. A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots

    PubMed Central

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-01-01

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331

  13. A Kinect-based real-time compressive tracking prototype system for amphibious spherical robots.

    PubMed

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-04-08

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.
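
    The two records above use a Kalman filter with a second-order motion model to predict the target state before candidate patches are sampled for the compressive tracker. The block below is a minimal one-axis constant-acceleration Kalman predict/update step; the frame rate and noise covariances are assumed, and in the real system the predicted position would centre the candidate-patch search.

    ```python
    # Hedged sketch: a 1-D constant-acceleration Kalman filter used to predict the
    # next target position; extendable to (x, y) by running two such filters.
    import numpy as np

    dt = 1.0 / 30.0                    # assumed frame interval
    F = np.array([[1.0, dt, 0.5 * dt ** 2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])    # second-order (position, velocity, acceleration)
    H = np.array([[1.0, 0.0, 0.0]])    # only the position is measured
    Q = np.eye(3) * 1e-3               # process noise, assumed
    R = np.array([[2.0]])              # measurement noise (pixels^2), assumed

    x = np.zeros(3)                    # state: position, velocity, acceleration
    P = np.eye(3) * 10.0               # initial uncertainty

    for z in [10.0, 10.8, 11.9, 13.2]:         # example measured target x-positions
        # predict: this position would centre the candidate-patch search
        x = F @ x
        P = F @ P @ F.T + Q
        print("predicted x:", round(x[0], 2))
        # update with the position returned by the appearance model
        y = np.array([z]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(3) - K @ H) @ P
    ```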

  14. Effect of motion cues during complex curved approach and landing tasks: A piloted simulation study

    NASA Technical Reports Server (NTRS)

    Scanlon, Charles H.

    1987-01-01

    A piloted simulation study was conducted to examine the effect of motion cues using a high fidelity simulation of commercial aircraft during the performance of complex approach and landing tasks in the Microwave Landing System (MLS) signal environment. The data from these tests indicate that in a high complexity MLS approach task with moderate turbulence and wind, the pilot uses motion cues to improve path tracking performance. No significant differences in tracking accuracy were noted for the low and medium complexity tasks, regardless of the presence of motion cues. Higher control input rates were measured for all tasks when motion was used. Pilot eye scan, as measured by instrument dwell time, was faster when motion cues were used regardless of the complexity of the approach tasks. Pilot comments indicated a preference for motion. With motion cues, pilots appeared to work harder in all levels of task complexity and to improve tracking performance in the most complex approach task.

  15. [A review of progress of real-time tumor tracking radiotherapy technology based on dynamic multi-leaf collimator].

    PubMed

    Liu, Fubo; Li, Guangjun; Shen, Jiuling; Li, Ligin; Bai, Sen

    2017-02-01

    During radiation treatment of patients with tumors in the thorax and abdomen, further improvement of targeting accuracy is restricted by intra-fractional tumor motion due to respiration. Real-time tumor tracking radiotherapy is an optimal solution to intra-fractional tumor motion. The present article reviews progress in real-time dynamic multi-leaf collimator (DMLC) tracking, including the DMLC tracking method, the time lag of DMLC tracking systems, and dosimetric verification.

  16. Tissue-Point Motion Tracking in the Tongue from Cine MRI and Tagged MRI

    ERIC Educational Resources Information Center

    Woo, Jonghye; Stone, Maureen; Suo, Yuanming; Murano, Emi Z.; Prince, Jerry L.

    2014-01-01

    Purpose: Accurate tissue motion tracking within the tongue can help professionals diagnose and treat vocal tract--related disorders, evaluate speech quality before and after surgery, and conduct various scientific studies. The authors compared tissue tracking results from 4 widely used deformable registration (DR) methods applied to cine magnetic…

  17. Active contour-based visual tracking by integrating colors, shapes, and motions.

    PubMed

    Hu, Weiming; Zhou, Xue; Li, Wei; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen

    2013-05-01

    In this paper, we present a framework for active contour-based visual tracking using level sets. The main components of our framework include contour-based tracking initialization, color-based contour evolution, adaptive shape-based contour evolution for non-periodic motions, dynamic shape-based contour evolution for periodic motions, and the handling of abrupt motions. For the initialization of contour-based tracking, we develop an optical flow-based algorithm for automatically initializing contours at the first frame. For the color-based contour evolution, Markov random field theory is used to measure correlations between values of neighboring pixels for posterior probability estimation. For adaptive shape-based contour evolution, the global shape information and the local color information are combined to hierarchically evolve the contour, and a flexible shape updating model is constructed. For the dynamic shape-based contour evolution, a shape mode transition matrix is learnt to characterize the temporal correlations of object shapes. For the handling of abrupt motions, particle swarm optimization is adopted to capture the global motion which is applied to the contour in the current frame to produce an initial contour in the next frame.

  18. Brief communication: Cineradiographic analysis of the chimpanzee (Pan troglodytes) talonavicular and calcaneocuboid joints.

    PubMed

    Thompson, Nathan E; Holowka, Nicholas B; O'Neill, Matthew C; Larson, Susan G

    2014-08-01

    During terrestrial locomotion, chimpanzees exhibit dorsiflexion of the midfoot between midstance and toe-off of stance phase, a phenomenon that has been called the "midtarsal break." This motion is generally absent during human bipedalism, and in chimpanzees is associated with more mobile foot joints than in humans. However, the contribution of individual foot joints to overall foot mobility in chimpanzees is poorly understood, particularly on the medial side of the foot. The talonavicular (TN) and calcaneocuboid (CC) joints have both been suggested to contribute significantly to midfoot mobility and to the midtarsal break in chimpanzees. To evaluate the relative magnitude of motion that can occur at these joints, we tracked skeletal motion of the hindfoot and midfoot during passive plantarflexion and dorsiflexion manipulations using cineradiography. The sagittal plane range of motion was 38 ± 10° at the TN joint and 14 ± 8° at the CC joint. This finding indicates that the TN joint is more mobile than the CC joint during ankle plantarflexion-dorsiflexion. We suggest that the larger range of motion at the TN joint during dorsiflexion is associated with a rotation (inversion-eversion) across the transverse tarsal joint, which may occur in addition to sagittal plane motion. © 2014 Wiley Periodicals, Inc.

  19. Altered transfer of visual motion information to parietal association cortex in untreated first-episode psychosis: Implications for pursuit eye tracking

    PubMed Central

    Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.

    2011-01-01

    Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia (N=24), psychotic bipolar disorder patients (N=13) and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Visual motion information transfer deficits to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035

  20. Automatic techniques for 3D reconstruction of critical workplace body postures from range imaging data

    NASA Astrophysics Data System (ADS)

    Westfeld, Patrick; Maas, Hans-Gerd; Bringmann, Oliver; Gröllich, Daniel; Schmauder, Martin

    2013-11-01

    The paper shows techniques for the determination of structured motion parameters from range camera image sequences. The core contribution of the work presented here is the development of an integrated least squares 3D tracking approach based on amplitude and range image sequences to calculate dense 3D motion vector fields. Geometric primitives of a human body model are fitted to time series of range camera point clouds using these vector fields as additional information. Body poses and motion information for individual body parts are derived from the model fit. On the basis of these pose and motion parameters, critical body postures are detected. The primary aim of the study is to automate ergonomic studies for risk assessments regulated by law, identifying harmful movements and awkward body postures in a workplace.

  1. Superdiffusion dominates intracellular particle motion in the supercrowded cytoplasm of pathogenic Acanthamoeba castellanii

    NASA Astrophysics Data System (ADS)

    Reverey, Julia F.; Jeon, Jae-Hyung; Bao, Han; Leippe, Matthias; Metzler, Ralf; Selhuber-Unkel, Christine

    2015-06-01

    Acanthamoebae are free-living protists and human pathogens, whose cellular functions and pathogenicity strongly depend on the transport of intracellular vesicles and granules through the cytosol. Using high-speed live cell imaging in combination with single-particle tracking analysis, we show here that the motion of endogenous intracellular particles in the size range from a few hundred nanometers to several micrometers in Acanthamoeba castellanii is strongly superdiffusive and influenced by cell locomotion, cytoskeletal elements, and myosin II. We demonstrate that cell locomotion significantly contributes to intracellular particle motion, but is clearly not the only origin of superdiffusivity. By analyzing the contribution of microtubules, actin, and myosin II motors we show that myosin II is a major driving force of intracellular motion in A. castellanii. The cytoplasm of A. castellanii is supercrowded with intracellular vesicles and granules, such that significant intracellular motion can only be achieved by actively driven motion, while purely thermally driven diffusion is negligible.
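
    Superdiffusion in the record above is identified from single-particle trajectories through the scaling of the mean-squared displacement, MSD(tau) ~ tau^alpha with alpha > 1. The generic sketch below computes a time-averaged MSD from a 2-D track and fits the exponent; the synthetic drifting trajectory and the fitting range are illustrative choices, not the authors' analysis pipeline.

    ```python
    # Hedged sketch: time-averaged MSD of a tracked particle and a power-law fit
    # MSD(tau) ~ tau^alpha; alpha > 1 indicates superdiffusive motion.
    import numpy as np

    def time_averaged_msd(track, max_lag):
        """track: (N, 2) positions; returns MSD for lags 1..max_lag (in frames)."""
        msd = np.empty(max_lag)
        for lag in range(1, max_lag + 1):
            disp = track[lag:] - track[:-lag]
            msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
        return msd

    def anomalous_exponent(msd, dt=1.0):
        lags = np.arange(1, len(msd) + 1) * dt
        alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)  # slope = exponent
        return alpha

    # Synthetic example: persistent drift plus noise gives superdiffusive scaling
    rng = np.random.default_rng(3)
    steps = 0.05 * np.ones((2000, 2)) + rng.normal(scale=0.1, size=(2000, 2))
    track = np.cumsum(steps, axis=0)
    msd = time_averaged_msd(track, max_lag=200)
    print("alpha =", round(anomalous_exponent(msd), 2))  # ~2 for ballistic drift
    ```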

  2. SU-E-T-562: Motion Tracking Optimization for Conformal Arc Radiotherapy Plans: A QUASAR Phantom Based Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Z; Wang, I; Yao, R

    Purpose: This study uses plan parameter optimization (dose rate, collimator angle, couch angle, initial starting phase) to improve the performance of conformal arc radiotherapy plans with motion tracking by increasing the plan performance score (PPS). Methods: Two types of 3D conformal arc plans were created based on a QUASAR respiratory motion phantom with spherical and cylindrical targets. A sinusoidal model was applied to the MLC leaves to generate motion tracking plans. A MATLAB program was developed to calculate the PPS of each plan (ranging from 0-1) and optimize plan parameters. We first selected the dose rate for the motion tracking plans and then used a simulated annealing algorithm to search for the combination of the other parameters that resulted in the plan with the maximal PPS. The optimized motion tracking plan was delivered by a Varian Truebeam linac. In-room cameras and a stopwatch were used for starting phase selection and synchronization between phantom motion and plan delivery. Gaf-EBT2 dosimetry films were used to measure the dose delivered to the target in the QUASAR phantom. Dose profiles and Truebeam trajectory log files were used for plan delivery performance evaluation. Results: For the spherical target, the maximal PPS (PPSsph) of the optimized plan was 0.79 (dose rate: 500 MU/min, collimator: 90°, couch: +10°, starting phase: 0.83π). For the cylindrical target, the maximal PPScyl was 0.75 (dose rate: 300 MU/min, collimator: 87°, starting phase: 0.97π) with the couch at 0°. Differences of dose profiles between motion tracking plans (with the maximal and the minimal PPS) and 3D conformal plans were as follows: PPSsph=0.79: %ΔFWHM: 8.9%, %Dmax: 3.1%; PPSsph=0.52: %ΔFWHM: 10.4%, %Dmax: 6.1%. PPScyl=0.75: %ΔFWHM: 4.7%, %Dmax: 3.6%; PPScyl=0.42: %ΔFWHM: 12.5%, %Dmax: 9.6%. Conclusion: By achieving a high plan performance score through parameter optimization, we can improve the target dose conformity of motion tracking plans by decreasing total MLC leaf travel distance and leaf speed.
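
    The record above tunes plan parameters with simulated annealing to maximize a plan performance score (PPS). The sketch below shows that optimization pattern with SciPy's dual_annealing over collimator angle, couch angle, and starting phase; the PPS function here is a synthetic stand-in for the MATLAB scoring described in the abstract, and the bounds are assumptions.

    ```python
    # Hedged sketch: search collimator angle, couch angle, and starting phase for
    # the combination that maximizes a (placeholder) plan performance score.
    import numpy as np
    from scipy.optimize import dual_annealing

    def plan_performance_score(params):
        """Placeholder PPS in [0, 1]; a real implementation would score MLC travel,
        leaf speed, and dose conformity of the motion-tracking plan."""
        collimator_deg, couch_deg, phase = params
        return (0.5
                + 0.25 * np.cos(np.radians(collimator_deg - 90.0))
                + 0.15 * np.exp(-(couch_deg - 10.0) ** 2 / 50.0)
                + 0.10 * np.cos(phase - 0.83 * np.pi))

    bounds = [(0.0, 180.0),      # collimator angle (degrees)
              (-15.0, 15.0),     # couch angle (degrees)
              (0.0, 2 * np.pi)]  # initial starting phase (radians)

    # dual_annealing minimizes, so negate the score to maximize it
    result = dual_annealing(lambda p: -plan_performance_score(p), bounds,
                            maxiter=200, seed=0)
    print("best PPS:", round(-result.fun, 3), "at", np.round(result.x, 2))
    ```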

  3. Significant body point labeling and tracking.

    PubMed

    Azhar, Faisal; Tjahjadi, Tardi

    2014-09-01

    In this paper, a method is presented to label and track anatomical landmarks (e.g., head, hand/arm, feet), which are referred to as significant body points (SBPs), using implicit body models. By considering the human body as an inverted pendulum model, ellipse fitting and contour moments are applied to classify it as being in Stand, Sit, or Lie posture. A convex hull of the silhouette contour is used to determine the locations of SBPs. The particle filter or a motion flow-based method is used to predict SBPs in occlusion. Stick figures of various activities are generated by connecting the SBPs. The qualitative and quantitative evaluation show that the proposed method robustly labels and tracks SBPs in various activities of two different (low and high) resolution data sets.
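
    The record above locates significant body points from the convex hull of the silhouette contour. The sketch below shows one simple version of that step: extracting convex-hull extremities of a binary silhouette and assigning rough labels. The synthetic silhouette and the labeling heuristics are illustrative and omit the posture classification and occlusion handling described in the paper.

    ```python
    # Hedged sketch: pick extremal convex-hull points of a binary silhouette as
    # candidate significant body points (topmost ~ head, bottommost ~ feet).
    import numpy as np
    from scipy.spatial import ConvexHull

    def candidate_body_points(silhouette):
        ys, xs = np.nonzero(silhouette)                  # silhouette pixel coordinates
        pts = np.column_stack([xs, ys])
        hull_pts = pts[ConvexHull(pts).vertices]
        return {
            "head": hull_pts[np.argmin(hull_pts[:, 1])],            # topmost
            "feet": hull_pts[np.argmax(hull_pts[:, 1])],            # bottommost
            "left_extremity": hull_pts[np.argmin(hull_pts[:, 0])],  # leftmost
            "right_extremity": hull_pts[np.argmax(hull_pts[:, 0])], # rightmost
        }

    # Tiny synthetic "standing person" silhouette: a vertical bar plus outstretched arms
    sil = np.zeros((60, 40), dtype=np.uint8)
    sil[5:55, 18:22] = 1      # head, torso and legs
    sil[12:16, 8:32] = 1      # arms stretched out
    print(candidate_body_points(sil))
    ```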

  4. Dictionary learning-based spatiotemporal regularization for 3D dense speckle tracking

    NASA Astrophysics Data System (ADS)

    Lu, Allen; Zontak, Maria; Parajuli, Nripesh; Stendahl, John C.; Boutagy, Nabil; Eberle, Melissa; O'Donnell, Matthew; Sinusas, Albert J.; Duncan, James S.

    2017-03-01

    Speckle tracking is a common method for non-rigid tissue motion analysis in 3D echocardiography, where unique texture patterns are tracked through the cardiac cycle. However, poor tracking often occurs due to inherent ultrasound issues, such as image artifacts and speckle decorrelation; thus regularization is required. Various methods, such as optical flow, elastic registration, and block matching techniques have been proposed to track speckle motion. Such methods typically apply spatial and temporal regularization in a separate manner. In this paper, we propose a joint spatiotemporal regularization method based on an adaptive dictionary representation of the dense 3D+time Lagrangian motion field. Sparse dictionaries have good signal adaptive and noise-reduction properties; however, they are prone to quantization errors. Our method takes advantage of the desirable noise suppression, while avoiding the undesirable quantization error. The idea is to enforce regularization only on the poorly tracked trajectories. Specifically, our method 1.) builds data-driven 4-dimensional dictionary of Lagrangian displacements using sparse learning, 2.) automatically identifies poorly tracked trajectories (outliers) based on sparse reconstruction errors, and 3.) performs sparse reconstruction of the outliers only. Our approach can be applied on dense Lagrangian motion fields calculated by any method. We demonstrate the effectiveness of our approach on a baseline block matching speckle tracking and evaluate performance of the proposed algorithm using tracking and strain accuracy analysis.
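
    The record above learns a sparse dictionary of Lagrangian trajectories, flags poorly tracked trajectories by their sparse reconstruction error, and re-estimates only those outliers. The sketch below mirrors that pattern with scikit-learn dictionary learning and OMP coding on synthetic 1-D trajectory vectors; the dictionary size, sparsity level, and percentile-based outlier threshold are assumptions.

    ```python
    # Hedged sketch: learn a dictionary of trajectory vectors, score each trajectory
    # by its sparse reconstruction error, and replace only the high-error outliers.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(4)
    n_traj, traj_len = 300, 32

    # Synthetic "well tracked" trajectories: smooth sinusoids with small noise
    t = np.linspace(0, 1, traj_len)
    clean = np.array([np.sin(2 * np.pi * (1 + 2 * rng.random()) * t + rng.random())
                      for _ in range(n_traj)])
    data = clean + rng.normal(scale=0.02, size=clean.shape)
    data[:15] += rng.normal(scale=0.8, size=(15, traj_len))  # corrupt a few (outliers)

    dico = DictionaryLearning(n_components=20, transform_algorithm='omp',
                              transform_n_nonzero_coefs=5, random_state=0)
    codes = dico.fit_transform(data)
    recon = codes @ dico.components_

    errors = np.linalg.norm(data - recon, axis=1)
    outliers = errors > np.percentile(errors, 90)   # assumed outlier criterion
    data[outliers] = recon[outliers]                # sparse re-reconstruction of outliers
    print("replaced", outliers.sum(), "poorly tracked trajectories")
    ```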

  5. Robust tracking of a virtual electrode on a coronary sinus catheter for atrial fibrillation ablation procedures

    NASA Astrophysics Data System (ADS)

    Wu, Wen; Chen, Terrence; Strobel, Norbert; Comaniciu, Dorin

    2012-02-01

    Catheter tracking in X-ray fluoroscopic images has become more important in interventional applications for atrial fibrillation (AF) ablation procedures. It provides real-time guidance for the physicians and can be used as reference for motion compensation applications. In this paper, we propose a novel approach to track a virtual electrode (VE), which is a non-existing electrode on the coronary sinus (CS) catheter at a more proximal location than any real electrodes. Successful tracking of the VE can provide more accurate motion information than tracking of real electrodes. To achieve VE tracking, we first model the CS catheter as a set of electrodes which are detected by our previously published learning-based approach.1 The tracked electrodes are then used to generate the hypotheses for tracking the VE. Model-based hypotheses are fused and evaluated by a Bayesian framework. Evaluation has been conducted on a database of clinical AF ablation data including challenging scenarios such as low signal-to-noise ratio (SNR), occlusion and nonrigid deformation. Our approach obtains 0.54mm median error and 90% of evaluated data have errors less than 1.67mm. The speed of our tracking algorithm reaches 6 frames-per-second on most data. Our study on motion compensation shows that using the VE as reference provides a good point to detect non-physiological catheter motion during the AF ablation procedures.2

  6. Kernelized correlation tracking with long-term motion cues

    NASA Astrophysics Data System (ADS)

    Lv, Yunqiu; Liu, Kai; Cheng, Fei

    2018-04-01

    Robust object tracking is a challenging task in computer vision due to interruptions such as deformation, fast motion and especially, occlusion of tracked object. When occlusions occur, image data will be unreliable and is insufficient for the tracker to depict the object of interest. Therefore, most trackers are prone to fail under occlusion. In this paper, an occlusion judgement and handling method based on segmentation of the target is proposed. If the target is occluded, the speed and direction of it must be different from the objects occluding it. Hence, the value of motion features are emphasized. Considering the efficiency and robustness of Kernelized Correlation Filter Tracking (KCF), it is adopted as a pre-tracker to obtain a predicted position of the target. By analyzing long-term motion cues of objects around this position, the tracked object is labelled. Hence, occlusion could be detected easily. Experimental results suggest that our tracker achieves a favorable performance and effectively handles occlusion and drifting problems.

  7. Grouping and trajectory storage in multiple object tracking: impairments due to common item motions.

    PubMed

    Suganuma, Mutsumi; Yokosawa, Kazuhiko

    2006-01-01

    In our natural viewing, we notice that objects change their locations across space and time. However, there has been relatively little consideration of the role of motion information in the construction and maintenance of object representations. We investigated this question in the context of the multiple object tracking (MOT) paradigm, wherein observers must keep track of target objects as they move randomly amid featurally identical distractors. In three experiments, we observed impairments in tracking ability when the motions of the target and distractor items shared particular properties. Specifically, we observed impairments when the target and distractor items were in a chasing relationship or moved in a uniform direction. Surprisingly, tracking ability was impaired by these manipulations even when observers failed to notice them. Our results suggest that differentiable trajectory information is an important factor in successful performance of MOT tasks. More generally, these results suggest that various types of common motion can serve as cues to form more global object representations even in the absence of other grouping cues.

  8. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters

    PubMed Central

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-01-01

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter (CF) methods perform favorably in robustness, accuracy and speed. However, they have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also handles the estimation of scale variation in many scenarios. We theoretically analyze the problem that motion poses for CFs and use the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF in addition to the ability to handle these special scenarios. Finally, extensive experimental results on VOT benchmark datasets show that our algorithm performs competitively against top-ranked trackers. PMID:27618046
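
    The abstract does not define the point sharpness function, so the sketch below uses a common gradient-energy measure as a stand-in for evaluating the motion state of the target patch; the threshold and names are assumptions.

    ```python
    import numpy as np

    def patch_sharpness(patch):
        """Gradient-energy sharpness of an image patch (higher means sharper)."""
        p = np.asarray(patch, dtype=float)
        gy, gx = np.gradient(p)
        return float(np.mean(gx ** 2 + gy ** 2))

    def motion_state(patch, baseline_sharpness, blur_ratio=0.5):
        """Classify the patch as blurred (fast motion) when sharpness drops well
        below a running baseline accumulated over previously tracked frames."""
        s = patch_sharpness(patch)
        return ("blurred" if s < blur_ratio * baseline_sharpness else "sharp"), s
    ```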

  9. 3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments.

    PubMed

    Li, Songpo; Zhang, Xiaoli; Webb, Jeremy D

    2017-12-01

    The goal of this paper is to achieve a novel 3-D-gaze-based human-robot-interaction modality, with which a user with motion impairment can intuitively express what tasks he/she wants the robot to do by directly looking at the object of interest in the real world. Toward this goal, we investigate 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. Looking at a specific object reflects what a person is thinking related to that object, and the gaze location contains essential information for object manipulation. A novel gaze vector method is developed to accurately estimate the 3-D coordinates of the object being looked at in real environments, and a novel interpretation framework that mimics human visuomotor functions is designed to increase the control capability of gaze in object grasping tasks. High tracking accuracy was achieved using the gaze vector method. Participants successfully controlled a robotic arm for object grasping by directly looking at the target object. Human 3-D gaze can be effectively employed as an intuitive interaction modality for robotic object manipulation. It is the first time that 3-D gaze is utilized in a real environment to command a robot for a practical application. Three-dimensional gaze tracking is promising as an intuitive alternative for human-robot interaction especially for disabled and elderly people who cannot handle the conventional interaction modalities.
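
    A minimal geometric sketch of one way a 3-D fixation point can be derived from binocular gaze rays: take the midpoint of the rays' closest approach. The gaze vector method reported above may differ in detail; eye positions and gaze directions are assumed to be provided by the eye tracker.

    ```python
    import numpy as np

    def fixation_point(o_left, d_left, o_right, d_right):
        """Midpoint of closest approach between the left and right gaze rays.

        o_* : 3-D eye (ray origin) positions; d_* : gaze direction vectors.
        """
        o_l, o_r = np.asarray(o_left, float), np.asarray(o_right, float)
        d_l = np.asarray(d_left, float) / np.linalg.norm(d_left)
        d_r = np.asarray(d_right, float) / np.linalg.norm(d_right)
        w = o_l - o_r
        b = np.dot(d_l, d_r)
        denom = 1.0 - b * b                      # ~0 when the rays are parallel
        if abs(denom) < 1e-9:
            return (o_l + o_r) / 2.0
        s = (b * np.dot(d_r, w) - np.dot(d_l, w)) / denom
        t = (np.dot(d_r, w) - b * np.dot(d_l, w)) / denom
        return ((o_l + s * d_l) + (o_r + t * d_r)) / 2.0
    ```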

  10. MO-FG-BRD-01: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: Introduction and KV Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahimian, B.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: 1) Understand the fundamentals of real-time imaging and tracking techniques; 2) Learn about emerging techniques in the field of real-time tracking; 3) Distinguish between the advantages and disadvantages of different tracking modalities; 4) Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  11. MO-FG-BRD-04: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MR Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Low, D.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: 1) Understand the fundamentals of real-time imaging and tracking techniques; 2) Learn about emerging techniques in the field of real-time tracking; 3) Distinguish between the advantages and disadvantages of different tracking modalities; 4) Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  12. MO-FG-BRD-02: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MV Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berbeco, R.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: 1) Understand the fundamentals of real-time imaging and tracking techniques; 2) Learn about emerging techniques in the field of real-time tracking; 3) Distinguish between the advantages and disadvantages of different tracking modalities; 4) Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  13. MO-FG-BRD-03: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: EM Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keall, P.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: 1) Understand the fundamentals of real-time imaging and tracking techniques; 2) Learn about emerging techniques in the field of real-time tracking; 3) Distinguish between the advantages and disadvantages of different tracking modalities; 4) Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  14. Nonlinear Motion Tracking by Deep Learning Architecture

    NASA Astrophysics Data System (ADS)

    Verma, Arnav; Samaiya, Devesh; Gupta, Karunesh K.

    2018-03-01

    In the world of Artificial Intelligence, object motion tracking is one of the major problems, and extensive research is being carried out to track people in crowds. This paper presents a technique for nonlinear motion tracking in the absence of prior knowledge of the nonlinear path that the tracked object may follow. We achieve this by first obtaining the centroid of the object and then using the centroid as the current example for a recurrent neural network trained with real-time recurrent learning. We have tweaked the standard algorithm slightly, accumulating the gradient over a few previous iterations instead of using just the current iteration as is the norm. We show that, for a single object, such a recurrent neural network is highly capable of approximating the nonlinearity of its path.
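
    A full real-time recurrent learning implementation is lengthy, so the sketch below keeps only the gradient-accumulation idea mentioned above, applied to a deliberately simplified linear predictor of the next centroid; it is an illustrative stand-in under that assumption, not the paper's recurrent network.

    ```python
    import numpy as np

    class AccumulatedGradientPredictor:
        """Online predictor of the next centroid from the last k centroids,
        updating its weights only after accumulating gradients for several steps."""

        def __init__(self, k=3, lr=1e-3, accumulate=4):
            self.W = np.zeros((2, 2 * k))     # stacked past centroids -> next centroid
            self.k, self.lr, self.accumulate = k, lr, accumulate
            self.grad_sum = np.zeros_like(self.W)
            self.steps = 0

        def step(self, history, observed_next):
            """history: last k (x, y) centroids; observed_next: measured centroid."""
            x = np.asarray(history, dtype=float).ravel()      # shape (2k,)
            y = np.asarray(observed_next, dtype=float)
            prediction = self.W @ x
            error = prediction - y
            self.grad_sum += np.outer(error, x)               # accumulate squared-error gradient
            self.steps += 1
            if self.steps % self.accumulate == 0:             # delayed weight update
                self.W -= self.lr * self.grad_sum / self.accumulate
                self.grad_sum[:] = 0.0
            return prediction
    ```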

  15. A study of the comparative effects of various means of motion cueing during a simulated compensatory tracking task

    NASA Technical Reports Server (NTRS)

    Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.; Martin, D. J., Jr.

    1980-01-01

    NASA's Langley Research Center conducted a simulation experiment to ascertain the comparative effects of motion cues (combinations of platform motion and g-seat normal acceleration cues) on compensatory tracking performance. In the experiment, a full six-degree-of-freedom YF-16 model was used as the simulated pursuit aircraft. The Langley Visual Motion Simulator (with in-house developed wash-out), and a Langley developed g-seat were principal components of the simulation. The results of the experiment were examined utilizing univariate and multivariate techniques. The statistical analyses demonstrate that the platform motion and g-seat cues provide additional information to the pilot that allows substantial reduction of lateral tracking error. Also, the analyses show that the g-seat cue helps reduce vertical error.

  16. An ice-motion tracking system at the Alaska SAR facility

    NASA Technical Reports Server (NTRS)

    Kwok, Ronald; Curlander, John C.; Pang, Shirley S.; Mcconnell, Ross

    1990-01-01

    An operational system for extracting ice-motion information from synthetic aperture radar (SAR) imagery is being developed as part of the Alaska SAR Facility. This geophysical processing system (GPS) will derive ice-motion information by automated analysis of image sequences acquired by radars on the European ERS-1, Japanese ERS-1, and Canadian RADARSAT remote sensing satellites. The algorithm consists of a novel combination of feature-based and area-based techniques for the tracking of ice floes that undergo translation and rotation between imaging passes. The system performs automatic selection of the image pairs for input to the matching routines using an ice-motion estimator. It is designed to have a daily throughput of ten image pairs. A description is given of the GPS system, including an overview of the ice-motion-tracking algorithm, the system architecture, and the ice-motion products that will be available for distribution to geophysical data users.
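
    A sketch of the area-based half of such floe matching: normalized cross-correlation of a patch from the first pass against a search window from the second. Rotation handling and the feature-based stage are omitted; this is an illustration, not the GPS algorithm itself.

    ```python
    import numpy as np

    def ncc_match(template, search):
        """Offset (row, col) in `search` that maximizes normalized cross-correlation
        with `template`; both are 2-D float arrays, template smaller than search."""
        th, tw = template.shape
        t = template - template.mean()
        t_norm = np.sqrt((t ** 2).sum()) + 1e-12
        best_score, best_offset = -np.inf, (0, 0)
        for r in range(search.shape[0] - th + 1):
            for c in range(search.shape[1] - tw + 1):
                window = search[r:r + th, c:c + tw]
                w = window - window.mean()
                score = (t * w).sum() / (t_norm * (np.sqrt((w ** 2).sum()) + 1e-12))
                if score > best_score:
                    best_score, best_offset = score, (r, c)
        return best_offset, best_score
    ```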

  17. En face projection imaging of the human choroidal layers with tracking SLO and swept source OCT angiography methods

    NASA Astrophysics Data System (ADS)

    Gorczynska, Iwona; Migacz, Justin; Zawadzki, Robert J.; Sudheendran, Narendran; Jian, Yifan; Tiruveedhula, Pavan K.; Roorda, Austin; Werner, John S.

    2015-07-01

    We tested and compared the capability of multiple optical coherence tomography (OCT) angiography methods: phase variance, amplitude decorrelation and speckle variance, with application of the split-spectrum technique, to image the chorioretinal complex of the human eye. To test the possibility of improving OCT imaging stability, we utilized a real-time tracking scanning laser ophthalmoscopy (TSLO) system combined with a swept-source OCT setup. In addition, we implemented a post-processing volume averaging method for improved angiographic image quality and reduction of motion artifacts. The OCT system operated at a central wavelength of 1040 nm to enable sufficient depth penetration into the choroid. Imaging was performed in the eyes of healthy volunteers and patients diagnosed with age-related macular degeneration.
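
    A minimal sketch of the speckle-variance computation with split-spectrum averaging, assuming the repeated B-scans are already registered; array shapes and names are assumptions for illustration.

    ```python
    import numpy as np

    def speckle_variance(bscans):
        """Speckle-variance angiogram from repeated B-scans at one location.

        bscans : (n_repeats, depth, width) OCT intensities; flow decorrelates the
                 speckle, so the variance across repeats highlights vasculature.
        """
        return np.asarray(bscans, dtype=float).var(axis=0)

    def split_spectrum_variance(band_bscans):
        """Average the speckle variance over spectral sub-bands.

        band_bscans : (n_bands, n_repeats, depth, width); splitting the spectrum
                      trades axial resolution for reduced speckle noise.
        """
        bands = np.asarray(band_bscans, dtype=float)
        return np.mean([speckle_variance(b) for b in bands], axis=0)
    ```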

  18. Prospective motion correction using inductively coupled wireless RF coils.

    PubMed

    Ooi, Melvyn B; Aksoy, Murat; Maclaren, Julian; Watkins, Ronald D; Bammer, Roland

    2013-09-01

    A novel prospective motion correction technique for brain MRI is presented that uses miniature wireless radio-frequency coils, or "wireless markers," for position tracking. Each marker is free of traditional cable connections to the scanner. Instead, its signal is wirelessly linked to the MR receiver via inductive coupling with the head coil. Real-time tracking of rigid head motion is performed using a pair of glasses integrated with three wireless markers. A tracking pulse-sequence, combined with knowledge of the markers' unique geometrical arrangement, is used to measure their positions. Tracking data from the glasses is then used to prospectively update the orientation and position of the image-volume so that it follows the motion of the head. Wireless-marker position measurements were comparable to measurements using traditional wired radio-frequency tracking coils, with the standard deviation of the difference < 0.01 mm over the range of positions measured inside the head coil. Wireless-marker safety was verified with B1 maps and temperature measurements. Prospective motion correction was demonstrated in a 2D spin-echo scan while the subject performed a series of deliberate head rotations. Prospective motion correction using wireless markers enables high quality images to be acquired even during bulk motions. Wireless markers are small, avoid radio-frequency safety risks from electrical cables, are not hampered by mechanical connections to the scanner, and require minimal setup times. These advantages may help to facilitate adoption in the clinic. Copyright © 2013 Wiley Periodicals, Inc.
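
    The rigid head pose follows from the three marker positions; a standard way to recover it is a least-squares (SVD/Kabsch) fit between the markers' reference and currently measured coordinates, sketched below. This is a generic illustration under that assumption, not the authors' exact pipeline. The recovered rotation and translation can then be used to update the orientation and position of the image-volume before the next acquisition.

    ```python
    import numpy as np

    def rigid_fit(ref_markers, cur_markers):
        """Least-squares rigid transform (R, t) with cur ≈ R @ ref + t.

        ref_markers, cur_markers : (3, 3) arrays, one 3-D marker position per row.
        """
        ref = np.asarray(ref_markers, dtype=float)
        cur = np.asarray(cur_markers, dtype=float)
        ref_c, cur_c = ref - ref.mean(axis=0), cur - cur.mean(axis=0)
        U, _, Vt = np.linalg.svd(cur_c.T @ ref_c)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # guard against reflections
        R = U @ D @ Vt
        t = cur.mean(axis=0) - R @ ref.mean(axis=0)
        return R, t
    ```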

  19. Prospective Motion Correction using Inductively-Coupled Wireless RF Coils

    PubMed Central

    Ooi, Melvyn B.; Aksoy, Murat; Maclaren, Julian; Watkins, Ronald D.; Bammer, Roland

    2013-01-01

    Purpose A novel prospective motion correction technique for brain MRI is presented that uses miniature wireless radio-frequency (RF) coils, or “wireless markers”, for position tracking. Methods Each marker is free of traditional cable connections to the scanner. Instead, its signal is wirelessly linked to the MR receiver via inductive coupling with the head coil. Real-time tracking of rigid head motion is performed using a pair of glasses integrated with three wireless markers. A tracking pulse-sequence, combined with knowledge of the markers’ unique geometrical arrangement, is used to measure their positions. Tracking data from the glasses is then used to prospectively update the orientation and position of the image-volume so that it follows the motion of the head. Results Wireless-marker position measurements were comparable to measurements using traditional wired RF tracking coils, with the standard deviation of the difference < 0.01 mm over the range of positions measured inside the head coil. RF safety was verified with B1 maps and temperature measurements. Prospective motion correction was demonstrated in a 2D spin-echo scan while the subject performed a series of deliberate head rotations. Conclusion Prospective motion correction using wireless markers enables high quality images to be acquired even during bulk motions. Wireless markers are small, avoid RF safety risks from electrical cables, are not hampered by mechanical connections to the scanner, and require minimal setup times. These advantages may help to facilitate adoption in the clinic. PMID:23813444

  20. Radiotherapy beyond cancer: Target localization in real-time MRI and treatment planning for cardiac radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ipsen, S.; Blanck, O.; Rades, D.

    2014-12-15

    Purpose: Atrial fibrillation (AFib) is the most common cardiac arrhythmia and affects millions of patients worldwide. AFib is usually treated with minimally invasive, time-consuming catheter ablation techniques. While noninvasive radiosurgery to the pulmonary vein antrum (PVA) in the left atrium has recently been proposed for AFib treatment, precise target localization during treatment is challenging due to complex respiratory and cardiac motion. An MRI linear accelerator (MRI-Linac) could solve the problems of motion tracking and compensation using real-time image guidance. In this study, the authors quantified target motion ranges on cardiac magnetic resonance imaging (MRI) and analyzed the dosimetric benefits of margin reduction assuming real-time motion compensation was applied. Methods: For the imaging study, six human subjects underwent real-time cardiac MRI under free breathing. The target motion was analyzed retrospectively using a template matching algorithm. The planning study was conducted on a CT of an AFib patient with a centrally located esophagus undergoing catheter ablation, representing an ideal case for cardiac radiosurgery. The target definition was similar to the ablation lesions at the PVA created during catheter treatment. Safety margins of 0 mm (perfect tracking) to 8 mm (untracked respiratory motion) were added to the target, defining the planning target volume (PTV). For each margin, a 30 Gy single-fraction IMRT plan was generated. Additionally, the influence of 1 and 3 T magnetic fields on the treatment beam delivery was simulated using Monte Carlo calculations to determine the dosimetric impact of MRI guidance for two different Linac positions. Results: Real-time cardiac MRI showed mean respiratory target motion of 10.2 mm (superior–inferior), 2.4 mm (anterior–posterior), and 2 mm (left–right). The planning study showed that increasing safety margins to encompass untracked respiratory motion leads to overlapping structures even in the ideal scenario, compromising either normal tissue dose constraints or PTV coverage. The magnetic field caused a slight increase in the PTV dose with the in-line MRI-Linac configuration. Conclusions: The authors' results indicate that real-time tracking and motion compensation are mandatory for cardiac radiosurgery and that MRI guidance is feasible, opening the possibility of treating cardiac arrhythmia patients completely noninvasively.

  1. Adaptive particle filter for robust visual tracking

    NASA Astrophysics Data System (ADS)

    Dai, Jianghua; Yu, Shengsheng; Sun, Weiping; Chen, Xiaoping; Xiang, Jinhai

    2009-10-01

    Object tracking plays a key role in the field of computer vision. The particle filter has been widely used for visual tracking under nonlinear and/or non-Gaussian circumstances. In the standard particle filter, the state transition model for predicting the next location of the tracked object assumes that the object motion is invariant, which cannot adequately approximate the varying dynamics of motion changes. In addition, the state estimate calculated as the mean of all the weighted particles is coarse or inaccurate due to various noise disturbances. Both of these factors may degrade tracking performance greatly. In this work, an adaptive particle filter (APF) with a velocity-updating based transition model (VTM) and an adaptive state estimate approach (ASEA) is proposed to improve object tracking. In the APF, the motion velocity embedded in the state transition model is updated continuously by a recursive equation, and the state estimate is obtained adaptively according to the state posterior distribution. The experimental results show that the APF can increase tracking accuracy and efficiency in complex environments.
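
    A compact sketch of one particle-filter update with a velocity-updating transition model, in the spirit of the APF described above; the observation model, resampling rule and smoothing constant are placeholders, not the paper's exact formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def apf_step(particles, weights, velocity, prev_state,
                 observation_likelihood, alpha=0.7, sigma=2.0):
        """One update of a particle filter whose transition model carries a
        recursively updated velocity estimate.

        particles : (N, 2) positions; weights : (N,) normalized weights.
        observation_likelihood : callable mapping (N, 2) positions to (N,) likelihoods.
        """
        # Transition: drift by the current velocity estimate plus Gaussian diffusion.
        particles = particles + velocity + rng.normal(0.0, sigma, particles.shape)
        # Measurement update and normalization.
        weights = weights * observation_likelihood(particles)
        weights = weights / (weights.sum() + 1e-12)
        # State estimate (weighted mean as a simple stand-in for the adaptive estimate).
        state = (weights[:, None] * particles).sum(axis=0)
        # Recursive velocity update from successive state estimates.
        velocity = alpha * velocity + (1.0 - alpha) * (state - prev_state)
        # Resample when the effective sample size collapses.
        if 1.0 / (weights ** 2).sum() < 0.5 * len(weights):
            idx = rng.choice(len(weights), size=len(weights), p=weights)
            particles = particles[idx]
            weights = np.full(len(weights), 1.0 / len(weights))
        return particles, weights, velocity, state
    ```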

  2. Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm.

    PubMed

    Tombu, Michael; Seiffert, Adriane E

    2011-04-01

    People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target-distractor pair rotated about both a common local point and the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking: one sensitive to speed and proximity, and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone.

  3. Human area MT+ shows load-dependent activation during working memory maintenance with continuously morphing stimulation.

    PubMed

    Galashan, Daniela; Fehr, Thorsten; Kreiter, Andreas K; Herrmann, Manfred

    2014-07-11

    Initially, human area MT+ was considered a visual area solely processing motion information, but further research has shown that it is also involved in various different cognitive operations, such as working memory tasks requiring motion-related information to be maintained or cognitive tasks with implied or expected motion. In the present fMRI study in humans, we focused on MT+ modulation during working memory maintenance using a dynamic shape-tracking working memory task with no motion-related working memory content. Working memory load was systematically varied using complex and simple stimulus material and parametrically increasing retention periods. Activation patterns for the difference between retention of complex and simple memorized stimuli were examined in order to preclude that the reported effects are caused by differences in retrieval. Conjunction analysis over all delay durations for the maintenance of complex versus simple stimuli demonstrated a wide-spread activation pattern. Percent signal change (PSC) in area MT+ revealed a pattern with higher values for the maintenance of complex shapes compared to the retention of a simple circle and with higher values for increasing delay durations. The present data extend previous knowledge by demonstrating that visual area MT+ presents a brain activity pattern usually found in brain regions that are actively involved in working memory maintenance.

  4. TH-AB-202-04: Auto-Adaptive Margin Generation for MLC-Tracked Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glitzner, M; Lagendijk, J; Raaymakers, B

    Purpose: To develop an auto-adaptive margin generator for MLC tracking. The generator is able to estimate errors arising in image-guided radiotherapy, particularly on an MR-Linac, which depend on the latencies of the machine and image processing as well as on patient motion characteristics. From the estimated error distribution, a segment margin is generated that can compensate errors up to a user-defined confidence. Methods: In every tracking control cycle (TCC, 40 ms), the desired aperture D(t) is compared to the actual aperture A(t), a delayed and imperfect representation of D(t). Thus an error e(t) = A(t) - D(t) is measured every TCC. Applying kernel density estimation (KDE), the cumulative distribution function (CDF) of e(t) is estimated. From CDF confidence limits, upper and lower error limits are extracted for motion axes along and perpendicular to the leaf-travel direction and applied as margins. To test the dosimetric impact, two representative motion traces were extracted from fast liver MRI (10 Hz). The traces were applied to a 4D motion platform and continuously tracked by an Elekta Agility 160 MLC using an artificially imposed tracking delay. Gafchromic film was used to detect dose exposition for the static, tracked, and error-compensated tracking cases. The margin generator was parameterized to cover 90% of all tracking errors. Dosimetric impact was rated by calculating the ratio of underexposed points (>5% underdosage) to the total number of points inside the FWHM of the static exposure. Results: Without imposing adaptive margins, tracking experiments showed ratios of underexposed points of 17.5% and 14.3% for the two motion cases with imaging delays of 200 ms and 300 ms, respectively. Activating the margin generator yielded total suppression (<1%) of underdosed points. Conclusion: We showed that auto-adaptive error compensation using machine error statistics is possible for MLC tracking. The error compensation margins are calculated on-line, without the need to assume motion or machine models. Further strategies to reduce consequential overdosages are currently under investigation. This work was funded by the SoRTS consortium, which includes the industry partners Elekta, Philips and Technolution.
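
    A sketch of the margin computation described above: estimate the density of the per-cycle tracking errors with a Gaussian KDE, integrate it to a CDF, and read off the confidence limits as margins. The grid size and symmetric tail split are assumptions.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def error_margins(errors, confidence=0.90):
        """Lower/upper error limits covering `confidence` of observed tracking errors.

        errors : 1-D samples of e(t) = A(t) - D(t) for one motion axis (mm).
        """
        e = np.asarray(errors, dtype=float)
        kde = gaussian_kde(e)
        grid = np.linspace(e.min() - 3 * e.std(), e.max() + 3 * e.std(), 2048)
        cdf = np.cumsum(kde(grid))
        cdf /= cdf[-1]
        tail = (1.0 - confidence) / 2.0
        lower = grid[np.searchsorted(cdf, tail)]
        upper = grid[np.searchsorted(cdf, 1.0 - tail)]
        return lower, upper

    # Example: error samples dominated by a tracking delay on quasi-periodic motion.
    samples = np.random.default_rng(1).normal(loc=1.5, scale=0.8, size=500)
    print(error_margins(samples, confidence=0.90))
    ```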

  5. MO-FG-BRD-00: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: 1) Understand the fundamentals of real-time imaging and tracking techniques; 2) Learn about emerging techniques in the field of real-time tracking; 3) Distinguish between the advantages and disadvantages of different tracking modalities; 4) Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  6. Estimation of contour motion and deformation for nonrigid object tracking

    NASA Astrophysics Data System (ADS)

    Shao, Jie; Porikli, Fatih; Chellappa, Rama

    2007-08-01

    We present an algorithm for nonrigid contour tracking in heavily cluttered background scenes. Based on the properties of nonrigid contour movements, a sequential framework for estimating contour motion and deformation is proposed. We solve the nonrigid contour tracking problem by decomposing it into three subproblems: motion estimation, deformation estimation, and shape regulation. First, we employ a particle filter to estimate the global motion parameters of the affine transform between successive frames. Then we generate a probabilistic deformation map to deform the contour. To improve robustness, multiple cues are used for deformation probability estimation. Finally, we use a shape prior model to constrain the deformed contour. This enables us to retrieve the occluded parts of the contours and accurately track them while allowing shape changes specific to the given object types. Our experiments show that the proposed algorithm significantly improves the tracker performance.

  7. On the suitability of Elekta’s Agility 160 MLC for tracked radiation delivery: closed-loop machine performance

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; Crijns, S. P. M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.

    2015-03-01

    For motion adaptive radiotherapy, dynamic multileaf collimator tracking can be employed to reduce treatment margins by steering the beam according to the organ motion. The Elekta Agility 160 MLC has hitherto not been evaluated for its tracking suitability. Both dosimetric performance and latency are key figures and need to be assessed generically, independently of the motion sensor used. In this paper, we propose the use of harmonic functions directly fed to the MLC to determine its latency during continuous motion. Furthermore, a control variable is extracted from a camera system and fed to the MLC. Using this setup, film dosimetry and subsequent γ statistics are performed, evaluating the response when tracking MRI-based physiologic motion in a closed loop. The delay attributed to the MLC itself was shown to be a minor contributor to the overall feedback chain as compared to the impact of imaging components such as MRI sequences. The delay showed a linear phase behaviour of the MLC in continuously dynamic applications, which enables a general MLC characterization. With the exemplary feedback chain, dosimetry showed a large increase in the γ pass rate. In this early stage, the tracking performance of the Agility on the test bench yielded promising results, making the technique eligible for translation to tracking using clinical imaging modalities.

  8. CASPER: computer-aided segmentation of imperceptible motion-a learning-based tracking of an invisible needle in ultrasound.

    PubMed

    Beigi, Parmida; Rohling, Robert; Salcudean, Septimiu E; Ng, Gary C

    2017-11-01

    This paper presents a new micro-motion-based approach to track a needle in ultrasound images captured by a handheld transducer. We propose a novel learning-based framework to track a handheld needle by detecting microscale variations of motion dynamics over time. The current state of the art in using motion analysis for needle detection relies on absolute motion and hence works well only when the transducer is static. We have introduced and evaluated novel spatiotemporal and spectral features, obtained from the phase image, in a self-supervised tracking framework to improve the detection accuracy in subsequent frames using incremental training. Our proposed tracking method involves volumetric feature selection and differential flow analysis to incorporate the neighboring pixels and mitigate the effects of the subtle tremor motion of a handheld transducer. To evaluate the detection accuracy, the method was tested on porcine tissue in vivo during needle insertion into the biceps femoris muscle. Experimental results show mean, standard deviation and root-mean-square errors of [Formula: see text], [Formula: see text] and [Formula: see text] in the insertion angle, and 0.82, 1.21 and 1.47 mm in the needle tip, respectively. Compared to appearance-based detection approaches, the proposed method is especially suitable for needles with ultrasonic characteristics that are imperceptible in the static image and to the naked eye.

  9. A simple model for studying rotation errors of gimbal mount axes in laser tracking system based on spherical mirror as a reflection unit

    NASA Astrophysics Data System (ADS)

    Song, Huixu; Shi, Zhaoyao; Chen, Hongfang; Sun, Yanqiang

    2018-01-01

    This paper presents a novel experimental approach and a simple model, based on relative motion, for verifying that the spherical mirror of a laser tracking system can lessen the effect of rotation errors of the gimbal mount axes. Sufficient evidence is provided to support that this simple model can replace the complex optical system in a laser tracking system. The experimental approach and model interchange the kinematic relationship between the spherical mirror and the gimbal mount axes: with the gimbal mount axes held fixed, their rotation error motions are replaced by spatial micro-displacements of the spherical mirror. These motions are simulated by driving the spherical mirror along the optical axis and the vertical direction with a precision positioning platform, and the effect of the displacement caused by the rotation errors on the laser ranging measurement accuracy is recorded with a laser interferometer. The experimental results show that the laser ranging measurement error caused by the rotation errors is less than 0.1 μm if the radial and axial error motions are under 10 μm. The relative-motion-based method not only simplifies the experimental procedure but also demonstrates that the spherical mirror can reduce the effect of rotation errors of the gimbal mount axes in a laser tracking system.

  10. A Tool for the Automated Collection of Space Utilization Data: Three Dimensional Space Utilization Monitor

    NASA Technical Reports Server (NTRS)

    Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.

    2015-01-01

    The Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP), in collaboration with the Behavioral Health and Performance (BHP) Element, is conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within that volume. NASA is looking for innovative methods to unobtrusively collect NHV data without impacting crew time. The required data include metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments, methods for collecting such data exist, yet many are obtrusive and require significant post-processing. Example technologies used in terrestrial settings include infrared (IR) retro-reflective marker based motion capture, GPS sensor tracking, inertial tracking, and multiple-camera filmography. Due to the constraints of space operations, however, many such methods are infeasible; for example, inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. Several other technologies have not yet been applied to space operations for these explicit purposes. Two of these are 3-Dimensional Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems which allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).

  11. Synthesized multi-station tribo-test system for bio-tribological evaluation in vitro

    NASA Astrophysics Data System (ADS)

    Wu, Tonghai; Du, Ying; Li, Yang; Wang, Shuo; Zhang, Zhinan

    2016-07-01

    Tribological tests play an important role in the evaluation of the long-term bio-tribological performance of prosthetic materials for commercial fabrication. Those tests focus on simulating the motion of a real joint in vitro with only normal loads and constant velocities, which is far from the real friction behavior of human joints, characterized by variable loads and multiple directions of motion. In order to obtain the bio-tribological performance of artificial joint materials more accurately, a tribological tester with a miniature four-station tribological system is proposed, with four distinctive features. Firstly, comparability and repeatability of a test are ensured by the four identical stations of the tester. Secondly, cross-linked scratching between tribo-pairs of human joints can be simulated by using a gear-rack meshing mechanism to produce composite motions; with this mechanism, the friction tracks can be designed by varying the reciprocating and rotating speeds. Thirdly, a variable loading system is realized by using a ball-screw mechanism driven by a stepper motor, by which loads under different gaits during walking are simulated. Fourthly, dynamic friction force and normal load can be measured simultaneously. Verification of the performance of the developed tester shows that the variable friction tracks produce different wear debris compared with one-directional tracks, and that the accuracy of loading and friction force is within ±5%, so high consistency among the different stations can be obtained. Practically, the proposed tester could provide more comprehensive and accurate bio-tribological evaluations of prosthetic materials.

  12. Real-time 3D motion tracking for small animal brain PET

    NASA Astrophysics Data System (ADS)

    Kyme, A. Z.; Zhou, V. W.; Meikle, S. R.; Fulton, R. R.

    2008-05-01

    High-resolution positron emission tomography (PET) imaging of conscious, unrestrained laboratory animals presents many challenges. Some form of motion correction will normally be necessary to avoid motion artefacts in the reconstruction. The aim of the current work was to develop and evaluate a motion tracking system potentially suitable for use in small animal PET. This system is based on the commercially available stereo-optical MicronTracker S60 which we have integrated with a Siemens Focus-220 microPET scanner. We present measured performance limits of the tracker and the technical details of our implementation, including calibration and synchronization of the system. A phantom study demonstrating motion tracking and correction was also performed. The system can be calibrated with sub-millimetre accuracy, and small lightweight markers can be constructed to provide accurate 3D motion data. A marked reduction in motion artefacts was demonstrated in the phantom study. The techniques and results described here represent a step towards a practical method for rigid-body motion correction in small animal PET. There is scope to achieve further improvements in the accuracy of synchronization and pose measurements in future work.

  13. Rapid, topology-based particle tracking for high-resolution measurements of large complex 3D motion fields.

    PubMed

    Patel, Mohak; Leggett, Susan E; Landauer, Alexander K; Wong, Ian Y; Franck, Christian

    2018-04-03

    Spatiotemporal tracking of tracer particles or objects of interest can reveal localized behaviors in biological and physical systems. However, existing tracking algorithms are most effective for relatively low numbers of particles that undergo displacements smaller than their typical interparticle separation distance. Here, we demonstrate a single particle tracking algorithm to reconstruct large complex motion fields with large particle numbers, orders of magnitude larger than previously tractably resolvable, thus opening the door for attaining very high Nyquist spatial frequency motion recovery in the images. Our key innovations are feature vectors that encode nearest neighbor positions, a rigorous outlier removal scheme, and an iterative deformation warping scheme. We test this technique for its accuracy and computational efficacy using synthetically and experimentally generated 3D particle images, including non-affine deformation fields in soft materials, complex fluid flows, and cell-generated deformations. We augment this algorithm with additional particle information (e.g., color, size, or shape) to further enhance tracking accuracy for high gradient and large displacement fields. These applications demonstrate that this versatile technique can rapidly track unprecedented numbers of particles to resolve large and complex motion fields in 2D and 3D images, particularly when spatial correlations exist.
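
    A sketch of the nearest-neighbor feature-vector idea described above: each particle is described by the sorted relative offsets of its k nearest neighbors, and particles are matched across frames by minimum feature distance. The outlier-removal and iterative deformation-warping stages are omitted, and the choice of k is an assumption.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def topology_features(points, k=6):
        """Per-particle feature vector: sorted relative offsets of its k nearest neighbors."""
        pts = np.asarray(points, dtype=float)
        tree = cKDTree(pts)
        _, idx = tree.query(pts, k=k + 1)          # first neighbor is the point itself
        features = []
        for i, neighbors in enumerate(idx):
            rel = pts[neighbors[1:]] - pts[i]
            order = np.argsort(np.linalg.norm(rel, axis=1))
            features.append(rel[order].ravel())
        return np.asarray(features)

    def match_particles(points_a, points_b, k=6):
        """Match each particle in frame A to its most similar particle in frame B."""
        fa = topology_features(points_a, k)
        fb = topology_features(points_b, k)
        cost, matched = cKDTree(fb).query(fa, k=1)
        return matched, cost                        # index into B and feature distance per A-particle
    ```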

  14. Analysis multi-agent with presence of the leader

    NASA Astrophysics Data System (ADS)

    Achmadi, Sentot; Marjono, Miswanto

    2017-12-01

    Swarming is a natural phenomenon in which a collection of living things moves together from one place to another. By clustering, a group of animals can increase its effectiveness in food search and avoid predators. A group of geese also exhibits swarm behavior when flying, forming an inverted V-formation with one of the geese acting as the leader, and the flight path of each member follows the leader's path at a certain distance. This article discusses the mathematical modeling of this swarm phenomenon, namely optimal tracking control for a multi-agent model under the influence of a leader in 2-dimensional space. The leader in this model is intended to track a specified path. First, the leader's motion control is designed to follow the predetermined path using the Tracking Error Dynamics method. Then, the leader's path is used to design the motion control of each agent so that it tracks the leader's path at a certain distance. The results of numerical simulation show that the leader trajectory can track the specified path. Similarly, the motion of each agent can trace and follow the leader's path.

  15. Real-time tracking of respiratory-induced tumor motion by dose-rate regulation

    NASA Astrophysics Data System (ADS)

    Han-Oh, Yeonju Sarah

    We have developed a novel real-time tumor-tracking technology, called Dose-Rate-Regulated Tracking (DRRT), to compensate for tumor motion caused by breathing. Unlike other previously proposed tumor-tracking methods, this new method uses a preprogrammed dynamic multileaf collimator (MLC) sequence in combination with real-time dose-rate control. This new scheme circumvents the technical challenge in MLC-based tumor tracking, namely controlling the MLC motion in real time based on real-time detected tumor motion. The preprogrammed MLC sequence describes the movement of the tumor as a function of breathing phase, amplitude, or tidal volume. The irregularity of tumor motion during treatment is handled by real-time regulation of the dose rate, which effectively speeds up or slows down the delivery of radiation as needed. This method is based on the fact that all of the parameters in dynamic radiation delivery, including MLC motion, are enslaved to the cumulative dose, which, in turn, can be accelerated or decelerated by varying the dose rate. Because commercially available MLC systems do not allow the MLC delivery sequence to be modified in real time based on the patient's breathing signal, previously proposed tumor-tracking techniques using a MLC cannot be readily implemented in the clinic today. By using a preprogrammed MLC sequence to handle the required motion, the task for real-time control is greatly simplified. We have developed and tested the preprogrammed MLC sequence and the dose-rate regulation algorithm using lung-cancer patients' breathing signals. It has been shown that DRRT can track the tumor with an accuracy of less than 2 mm for a latency of the DRRT system of less than 0.35 s. We also have evaluated the usefulness of guided breathing for DRRT. Since DRRT by its very nature can compensate for breathing-period changes, guided breathing was shown to be unnecessary for real-time tracking when using DRRT. Finally, DRRT uses the existing dose-rate control system that is provided for current linear accelerators. Therefore, DRRT can be achieved with minimal modification of existing technology, and this can shorten substantially the time necessary to establish DRRT in clinical practice.
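
    A schematic of the dose-rate regulation step: because the preprogrammed MLC sequence is indexed by cumulative dose, the controller only needs to speed delivery up or slow it down so that the plan's assumed breathing phase stays aligned with the phase measured in real time. The proportional law, names and limits below are illustrative assumptions, not the implemented controller.

    ```python
    def regulate_dose_rate(planned_phase_of_mu, delivered_mu, observed_phase,
                           nominal_rate, gain=2.0, rate_limits=(0.1, 1.0)):
        """Return the dose rate keeping the preprogrammed MLC sequence in phase
        with the patient's breathing.

        planned_phase_of_mu : callable mapping cumulative MU to the breathing
                              phase (0..1) assumed by the preprogrammed sequence.
        delivered_mu        : cumulative monitor units delivered so far.
        observed_phase      : breathing phase (0..1) measured in real time.
        nominal_rate        : maximum planned dose rate (MU/min).
        """
        planned_phase = planned_phase_of_mu(delivered_mu)
        # Wrapped phase error in (-0.5, 0.5]; positive means delivery lags the breathing.
        error = (observed_phase - planned_phase + 0.5) % 1.0 - 0.5
        # Proportional speed-up or slow-down, clipped to machine limits.
        scale = min(max(1.0 + gain * error, rate_limits[0]), rate_limits[1])
        return scale * nominal_rate
    ```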

  16. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross

    2014-06-15

    Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low-frame-rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; δ was defined as the position difference between the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.

  17. An Adaptive Neural Mechanism for Acoustic Motion Perception with Varying Sparsity

    PubMed Central

    Shaikh, Danish; Manoonpong, Poramate

    2017-01-01

    Biological motion-sensitive neural circuits are quite adept in perceiving the relative motion of a relevant stimulus. Motion perception is a fundamental ability in neural sensory processing and crucial in target tracking tasks. Tracking a stimulus entails the ability to perceive its motion, i.e., extracting information about its direction and velocity. Here we focus on auditory motion perception of sound stimuli, which is poorly understood as compared to its visual counterpart. In earlier work we have developed a bio-inspired neural learning mechanism for acoustic motion perception. The mechanism extracts directional information via a model of the peripheral auditory system of lizards. The mechanism uses only this directional information obtained via specific motor behaviour to learn the angular velocity of unoccluded sound stimuli in motion. In nature however the stimulus being tracked may be occluded by artefacts in the environment, such as an escaping prey momentarily disappearing behind a cover of trees. This article extends the earlier work by presenting a comparative investigation of auditory motion perception for unoccluded and occluded tonal sound stimuli with a frequency of 2.2 kHz in both simulation and practice. Three instances of each stimulus are employed, differing in their movement velocities–0.5°/time step, 1.0°/time step and 1.5°/time step. To validate the approach in practice, we implement the proposed neural mechanism on a wheeled mobile robot and evaluate its performance in auditory tracking. PMID:28337137

  18. Design and implementation of a MRI compatible and dynamic phantom simulating the motion of a tumor in the liver under the breathing cycle

    NASA Astrophysics Data System (ADS)

    Geelhand de Merxem, Arnould; Lechien, Vianney; Thibault, Tanguy; Dasnoy, Damien; Macq, Benoît

    2017-11-01

    In the context of cancer treatment by proton therapy, research is being carried out on the use of magnetic resonance imaging (MRI) to perform real-time tracking of tumors during irradiation. The purpose of this combination is to reduce the irradiation of healthy tissues surrounding the tumor while using a non-ionizing imaging method. It is therefore necessary to validate the tracking algorithms on real-time MRI sequences by using physical simulators, i.e. a phantom. Our phantom is a device representing a liver with hepatocellular carcinoma, a stomach and a pancreas, close to the anatomy and magnetic properties of the human body, and animated by a motion similar to that induced by respiration. Many anatomical or mobile phantoms already exist, but the purpose here is to combine a reliable representation of the abdominal organs with the creation and evaluation of a programmable movement in the same device, which makes it unique. The phantom is composed of surrogate organs made of CAGN gels. These organs are placed in a transparent box filled with water and attached to an elastic membrane. A programmable electro-pneumatic system creates a movement, similar to that of a human diaphragm, by inflating and deflating the membrane. The average relaxation times of the synthetic organs belong to a range corresponding to human organ values (T1 = [458.7-1660] ms, T2 = [39.3-89.1] ms). The displacement of the tumor is tracked in real time by a camera inside the MRI. The amplitude of the movement varies from 12.8 to 20.1 mm for a periodic and repeatable movement. Irregular breathing patterns can be created with a maximum amplitude of 40 mm.

  19. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using the attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.

  20. Simulation of Human-induced Vibrations Based on the Characterized In-field Pedestrian Behavior

    PubMed Central

    Van Nimmen, Katrien; Lombaert, Geert; De Roeck, Guido; Van den Broeck, Peter

    2016-01-01

    For slender and lightweight structures, vibration serviceability is a matter of growing concern, often constituting the critical design requirement. With designs governed by the dynamic performance under human-induced loads, a strong demand exists for the verification and refinement of currently available load models. The present contribution uses a 3D inertial motion tracking technique for the characterization of the in-field pedestrian behavior. The technique is first tested in laboratory experiments with simultaneous registration of the corresponding ground reaction forces. The experiments include walking persons as well as rhythmical human activities such as jumping and bobbing. It is shown that the registered motion allows for the identification of the time-variant pacing rate of the activity. Together with the weight of the person and the application of generalized force models available in the literature, the identified time-variant pacing rate allows the human-induced loads to be characterized. In addition, time synchronization among the wireless motion trackers allows the synchronization rate among the participants to be identified. Subsequently, the technique is used on a real footbridge where both the motion of the persons and the induced structural vibrations are registered. It is shown how the characterized in-field pedestrian behavior can be applied to simulate the induced structural response. It is demonstrated that the in situ identified pacing rate and synchronization rate constitute an essential input for the simulation and verification of the human-induced loads. The main potential applications of the proposed methodology are the estimation of human-structure interaction phenomena and the development of suitable models for the correlation among pedestrians in real traffic conditions. PMID:27167309
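
    A minimal sketch of extracting a time-variant pacing rate from a trunk acceleration signal by locating the dominant spectral peak in a sliding window; the window length, overlap and frequency band are assumptions for illustration.

    ```python
    import numpy as np

    def pacing_rate(acceleration, fs, window_s=5.0, band=(1.2, 3.5)):
        """Time-variant pacing rate (Hz) from a vertical acceleration signal.

        acceleration : 1-D samples; fs : sampling rate (Hz).
        Returns (times, rates), one estimate per half-overlapping window.
        """
        acc = np.asarray(acceleration, dtype=float)
        n = int(window_s * fs)
        hop = n // 2
        times, rates = [], []
        for start in range(0, len(acc) - n + 1, hop):
            segment = acc[start:start + n]
            segment = (segment - segment.mean()) * np.hanning(n)
            spectrum = np.abs(np.fft.rfft(segment))
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            in_band = (freqs >= band[0]) & (freqs <= band[1])   # plausible step frequencies
            rates.append(freqs[in_band][np.argmax(spectrum[in_band])])
            times.append((start + n / 2) / fs)
        return np.asarray(times), np.asarray(rates)
    ```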

  1. Identification of human-generated forces on wheelchairs during total-body extensor thrusts.

    PubMed

    Hong, Seong-Wook; Patrangenaru, Vlad; Singhose, William; Sprigle, Stephen

    2006-10-01

    Involuntary extensor thrust experienced by wheelchair users with neurological disorders may cause injuries via impact with the wheelchair, lead to the occupant sliding out of the seat, and also damage the wheelchair. The concept of a dynamic seat, which allows movement of a seat with respect to the wheelchair frame, has been suggested as a potential solution to provide greater freedom and safety. Knowledge of the human-generated motion and forces during unconstrained extensor thrust events is of great importance in developing more comfortable and effective dynamic seats. The objective of this study was to develop a method to identify human-generated motions and forces during extensor thrust events. This information can be used to design the triggering system for a dynamic seat. An experimental system was developed to automatically track the motions of the wheelchair user using a video camera and also measure the forces at the footrest. An inverse dynamic approach was employed along with a three-link human body model and the experimental data to predict the human-generated forces. Two kinds of experiments were performed: the first experiment validated the proposed model and the second experiment showed the effects of the extensor thrust speed, the footrest angle, and the seatback angle. The proposed method was tested using a sensitivity analysis, from which a performance index was deduced to help indicate the robust region of the force identification. A system to determine human-generated motions and forces during unconstrained extensor thrusts was developed. Through experiments and simulations, the effectiveness and reliability of the developed system was established.

  2. Effects of some motion sickness suppressants on tracking performance during angular accelerations.

    DOT National Transportation Integrated Search

    1982-10-01

    The two studies reported here examined the influence of three established antimotion sickness drugs on tracking performance in static (stationary) and dynamic (angular acceleration) conditions and on visual fixation ability during motion. In Study ...

  3. Correction for human head motion in helical x-ray CT

    NASA Astrophysics Data System (ADS)

    Kim, J.-H.; Sun, T.; Alcheikh, A. R.; Kuncic, Z.; Nuyts, J.; Fulton, R.

    2016-02-01

    Correction for rigid object motion in helical CT can be achieved by reconstructing from a modified source-detector orbit, determined by the object motion during the scan. This ensures that all projections are consistent, but it does not guarantee that the projections are complete in the sense of being sufficient for exact reconstruction. We have previously shown with phantom measurements that motion-corrected helical CT scans can suffer from data-insufficiency, in particular for severe motions and at high pitch. To study whether such data-insufficiency artefacts could also affect the motion-corrected CT images of patients undergoing head CT scans, we used an optical motion tracking system to record the head movements of 10 healthy volunteers while they executed each of the 4 different types of motion (‘no’, slight, moderate and severe) for 60 s. From these data we simulated 354 motion-affected CT scans of a voxelized human head phantom and reconstructed them with and without motion correction. For each simulation, motion-corrected (MC) images were compared with the motion-free reference, by visual inspection and with quantitative similarity metrics. Motion correction improved similarity metrics in all simulations. Of the 270 simulations performed with moderate or less motion, only 2 resulted in visible residual artefacts in the MC images. The maximum range of motion in these simulations would encompass that encountered in the vast majority of clinical scans. With severe motion, residual artefacts were observed in about 60% of the simulations. We also evaluated a new method of mapping local data sufficiency based on the degree to which Tuy’s condition is locally satisfied, and observed that areas with high Tuy values corresponded to the locations of residual artefacts in the MC images. We conclude that our method can provide accurate and artefact-free MC images with most types of head motion likely to be encountered in CT imaging, provided that the motion can be accurately determined.

  4. Single-particle tracking of quantum dot-conjugated prion proteins inside yeast cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsuji, Toshikazu; Kawai-Noma, Shigeko; Pack, Chan-Gi

    2011-02-25

    Research highlights: • We develop a method to track a quantum dot-conjugated protein in yeast cells. • We incorporate the conjugated quantum dot proteins into yeast spheroplasts. • We track the motions by conventional or 3D tracking microscopy. -- Abstract: Yeast is a model eukaryote with a variety of biological resources. Here we developed a method to track a quantum dot (QD)-conjugated protein in the budding yeast Saccharomyces cerevisiae. We chemically conjugated QDs with the yeast prion Sup35, incorporated them into yeast spheroplasts, and tracked the motions by conventional two-dimensional or three-dimensional tracking microscopy. The method paves the way toward the individual tracking of proteins of interest inside living yeast cells.

  5. Markerless motion estimation for motion-compensated clinical brain imaging

    NASA Astrophysics Data System (ADS)

    Kyme, Andre Z.; Se, Stephen; Meikle, Steven R.; Fulton, Roger R.

    2018-05-01

    Motion-compensated brain imaging can dramatically reduce the artifacts and quantitative degradation associated with voluntary and involuntary subject head motion during positron emission tomography (PET), single photon emission computed tomography (SPECT) and computed tomography (CT). However, motion-compensated imaging protocols are not in widespread clinical use for these modalities. A key reason for this seems to be the lack of a practical motion tracking technology that allows for smooth and reliable integration of motion-compensated imaging protocols in the clinical setting. We seek to address this problem by investigating the feasibility of a highly versatile optical motion tracking method for PET, SPECT and CT geometries. The method requires no attached markers, relying exclusively on the detection and matching of distinctive facial features. We studied the accuracy of this method in 16 volunteers in a mock imaging scenario by comparing the estimated motion with an accurate marker-based method used in applications such as image guided surgery. A range of techniques to optimize performance of the method were also studied. Our results show that the markerless motion tracking method is highly accurate (<2 mm discrepancy against a benchmarking system) on an ethnically diverse range of subjects and, moreover, exhibits lower jitter and estimation of motion over a greater range than some marker-based methods. Our optimization tests indicate that the basic pose estimation algorithm is very robust but generally benefits from rudimentary background masking. Further marginal gains in accuracy can be achieved by accounting for non-rigid motion of features. Efficiency gains can be achieved by capping the number of features used for pose estimation provided that these features adequately sample the range of head motion encountered in the study. These proof-of-principle data suggest that markerless motion tracking is amenable to motion-compensated brain imaging and holds good promise for a practical implementation in clinical PET, SPECT and CT systems.
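    A standard way to turn matched 3D feature positions into a rigid head pose, in the spirit of the markerless approach above, is a least-squares (Kabsch/SVD) alignment. The sketch below is a generic illustration, not the record's specific algorithm.

    ```python
    import numpy as np

    def rigid_transform(P, Q):
        """Least-squares rigid transform (R, t) mapping matched 3D feature
        sets P onto Q (Kabsch algorithm).  P, Q are (N, 3) arrays; the
        result satisfies Q ~= P @ R.T + t."""
        P = np.asarray(P, float)
        Q = np.asarray(Q, float)
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)                      # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = cQ - R @ cP
        return R, t
    ```

    Tracking then reduces to matching facial features between a reference view and the current view, triangulating them, and estimating (R, t) frame by frame.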

  6. Particle Tracking Facilitates Real Time Capable Motion Correction in 2D or 3D Two-Photon Imaging of Neuronal Activity.

    PubMed

    Aghayee, Samira; Winkowski, Daniel E; Bowen, Zachary; Marshall, Erin E; Harrington, Matt J; Kanold, Patrick O; Losert, Wolfgang

    2017-01-01

    The application of 2-photon laser scanning microscopy (TPLSM) techniques to measure the dynamics of cellular calcium signals in populations of neurons is an extremely powerful technique for characterizing neural activity within the central nervous system. The use of TPLSM on awake and behaving subjects promises new insights into how neural circuit elements cooperatively interact to form sensory perceptions and generate behavior. A major challenge in imaging such preparations is unavoidable animal and tissue movement, which leads to shifts in the imaging location (jitter). The presence of image motion can lead to artifacts, especially since quantification of TPLSM images involves analysis of fluctuations in fluorescence intensities for each neuron, determined from small regions of interest (ROIs). Here, we validate a new motion correction approach to compensate for motion of TPLSM images in the superficial layers of auditory cortex of awake mice. We use a nominally uniform fluorescent signal as a secondary signal to complement the dynamic signals from genetically encoded calcium indicators. We tested motion correction for single plane time lapse imaging as well as multiplane (i.e., volume) time lapse imaging of cortical tissue. Our procedure of motion correction relies on locating the brightest neurons and tracking their positions over time using established techniques of particle finding and tracking. We show that our tracking based approach provides subpixel resolution without compromising speed. Unlike most established methods, our algorithm also captures deformations of the field of view and thus can compensate e.g., for rotations. Object tracking based motion correction thus offers an alternative approach for motion correction, one that is well suited for real time spike inference analysis and feedback control, and for correcting for tissue distortions.

  7. Particle Tracking Facilitates Real Time Capable Motion Correction in 2D or 3D Two-Photon Imaging of Neuronal Activity

    PubMed Central

    Aghayee, Samira; Winkowski, Daniel E.; Bowen, Zachary; Marshall, Erin E.; Harrington, Matt J.; Kanold, Patrick O.; Losert, Wolfgang

    2017-01-01

    The application of 2-photon laser scanning microscopy (TPLSM) techniques to measure the dynamics of cellular calcium signals in populations of neurons is an extremely powerful technique for characterizing neural activity within the central nervous system. The use of TPLSM on awake and behaving subjects promises new insights into how neural circuit elements cooperatively interact to form sensory perceptions and generate behavior. A major challenge in imaging such preparations is unavoidable animal and tissue movement, which leads to shifts in the imaging location (jitter). The presence of image motion can lead to artifacts, especially since quantification of TPLSM images involves analysis of fluctuations in fluorescence intensities for each neuron, determined from small regions of interest (ROIs). Here, we validate a new motion correction approach to compensate for motion of TPLSM images in the superficial layers of auditory cortex of awake mice. We use a nominally uniform fluorescent signal as a secondary signal to complement the dynamic signals from genetically encoded calcium indicators. We tested motion correction for single plane time lapse imaging as well as multiplane (i.e., volume) time lapse imaging of cortical tissue. Our procedure of motion correction relies on locating the brightest neurons and tracking their positions over time using established techniques of particle finding and tracking. We show that our tracking based approach provides subpixel resolution without compromising speed. Unlike most established methods, our algorithm also captures deformations of the field of view and thus can compensate e.g., for rotations. Object tracking based motion correction thus offers an alternative approach for motion correction, one that is well suited for real time spike inference analysis and feedback control, and for correcting for tissue distortions. PMID:28860973

  8. The performance of matched-field track-before-detect methods using shallow-water Pacific data.

    PubMed

    Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem

    2002-07-01

    Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.

  9. Graphs and Tracks Revisited

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang; Belloni, Mario

    2013-04-01

    We have recently developed a Graphs and Tracks model based on an earlier program by David Trowbridge, as shown in Fig. 1. Our model can show position, velocity, acceleration, and energy graphs and can be used for motion-to-graphs exercises. Users set the heights of the track segments, and the model displays the motion of the ball on the track together with position, velocity, and acceleration graphs. This ready-to-run model is available in the ComPADRE OSP Collection at www.compadre.org/osp/items/detail.cfm?ID=12023.

  10. Semi-automatic tracking, smoothing and segmentation of hyoid bone motion from videofluoroscopic swallowing study.

    PubMed

    Kim, Won-Seok; Zeng, Pengcheng; Shi, Jian Qing; Lee, Youngjo; Paik, Nam-Jong

    2017-01-01

    Motion analysis of the hyoid bone via videofluoroscopic study has been used in clinical research, but the classical manual tracking method is generally labor-intensive and time-consuming. Although some automatic tracking methods have been developed, masked points could not be tracked, and smoothing and segmentation, which are necessary for functional motion analysis prior to registration, were not provided by the previous software. We developed software to track the hyoid bone motion semi-automatically. It works even in the situation where the hyoid bone is masked by the mandible and has been validated in dysphagia patients with stroke. In addition, we added the function of semi-automatic smoothing and segmentation. A total of 30 patients' data were used to develop the software, and data collected from 17 patients were used for validation, of which the trajectories of 8 patients were partly masked. Pearson correlation coefficients between the manual and automatic tracking are high and statistically significant (0.942 to 0.991, P-value<0.0001). Relative errors between automatic tracking and manual tracking in terms of the x-axis, y-axis and 2D range of hyoid bone excursion range from 3.3% to 9.2%. We also developed an automatic method to segment each hyoid bone trajectory into four phases (elevation phase, anterior movement phase, descending phase and returning phase). The semi-automatic hyoid bone tracking from VFSS data by our software is valid compared to the conventional manual tracking method. In addition, the automatic indication that prompts switching from automatic to manual mode in extreme cases, and the calibration procedure that requires no attached radiopaque object, are convenient and useful for users. Semi-automatic smoothing and segmentation provide further information for functional motion analysis, which supports subsequent statistical analyses such as functional classification and prognostication for dysphagia. Therefore, this software can provide researchers in the field of dysphagia with a convenient, all-in-one platform for analyzing hyoid bone motion. Further development of the method to track other swallowing-related structures and objects, such as the epiglottis and bolus, and to carry out 2D curve registration may be needed for more comprehensive functional data analysis of dysphagia with large datasets.

  11. Tracking features in retinal images of adaptive optics confocal scanning laser ophthalmoscope using KLT-SIFT algorithm

    PubMed Central

    Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong

    2010-01-01

    With the use of adaptive optics (AO), high-resolution microscopic imaging of living human retina in the single cell level has been achieved. In an adaptive optics confocal scanning laser ophthalmoscope (AOSLO) system, with a small field size (about 1 degree, 280 μm), the motion of the eye severely affects the stabilization of the real-time video images and results in significant distortions of the retina images. In this paper, Scale-Invariant Feature Transform (SIFT) is used to abstract stable point features from the retina images. Kanade-Lucas-Tomasi(KLT) algorithm is applied to track the features. With the tracked features, the image distortion in each frame is removed by the second-order polynomial transformation, and 10 successive frames are co-added to enhance the image quality. Features of special interest in an image can also be selected manually and tracked by KLT. A point on a cone is selected manually, and the cone is tracked from frame to frame. PMID:21258443
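    A minimal sketch of the SIFT-plus-KLT pipeline described above, using OpenCV: SIFT keypoints are detected in a reference frame, tracked into the current frame with pyramidal Lucas-Kanade optical flow, and a second-order polynomial warp is fit to the matched points. The parameter values and function names are illustrative, not those of the record.

    ```python
    import cv2
    import numpy as np

    def track_features(ref_gray, cur_gray, max_feats=200):
        """Detect SIFT keypoints in the reference frame and track them into
        the current frame with pyramidal Lucas-Kanade (KLT)."""
        sift = cv2.SIFT_create(nfeatures=max_feats)
        kps = sift.detect(ref_gray, None)
        p0 = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
        p1, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, cur_gray, p0, None,
                                                 winSize=(21, 21), maxLevel=3)
        ok = status.ravel() == 1
        return p0[ok].reshape(-1, 2), p1[ok].reshape(-1, 2)

    def fit_poly2_warp(src, dst):
        """Least-squares second-order polynomial mapping (x, y) -> (x', y'),
        usable to remove intra-frame distortion before co-adding frames."""
        x, y = src[:, 0], src[:, 1]
        A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
        coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (6, 2) coefficient matrix
        return coeffs
    ```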

  12. LAGEOS geodetic analysis-SL7.1

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Kolenkiewicz, R.; Dunn, P. J.; Klosko, S. M.; Robbins, J. W.; Torrence, M. H.; Williamson, R. G.; Pavlis, E. C.; Douglas, N. B.; Fricke, S. K.

    1991-01-01

    Laser ranging measurements to the LAGEOS satellite from 1976 through 1989 are related via geodetic and orbital theories to a variety of geodetic and geodynamic parameters. The SL7.1 analyses are explained of this data set including the estimation process for geodetic parameters such as Earth's gravitational constant (GM), those describing the Earth's elasticity properties (Love numbers), and the temporally varying geodetic parameters such as Earth's orientation (polar motion and Delta UT1) and tracking site horizontal tectonic motions. Descriptions of the reference systems, tectonic models, and adopted geodetic constants are provided; these are the framework within which the SL7.1 solution takes place. Estimates of temporal variations in non-conservative force parameters are included in these SL7.1 analyses as well as parameters describing the orbital states at monthly epochs. This information is useful in further refining models used to describe close-Earth satellite behavior. Estimates of intersite motions and individual tracking site motions computed through the network adjustment scheme are given. Tabulations of tracking site eccentricities, data summaries, estimated monthly orbital and force model parameters, polar motion, Earth rotation, and tracking station coordinate results are also provided.

  13. Motion-compensated speckle tracking via particle filtering

    NASA Astrophysics Data System (ADS)

    Liu, Lixin; Yagi, Shin-ichi; Bian, Hongyu

    2015-07-01

    Recently, an improved motion compensation method that uses the sum of absolute differences (SAD) has been applied to frame persistence in conventional ultrasonic imaging because of its high accuracy and relative simplicity of implementation. However, the high time consumption of this space-domain method remains a significant drawback. To find a faster motion compensation method and to verify whether the conventional traversal correlation can be eliminated, motion-compensated speckle tracking between two temporally adjacent B-mode frames based on particle filtering is discussed. The optimal initial particle density, the minimum number of iterations, and the optimal transition radius of the second iteration are determined from simulation results in order to evaluate the proposed method quantitatively. The speckle tracking results obtained using the optimized parameters indicate that the proposed method is capable of tracking the micromotion of speckle throughout the region of interest (ROI) when it is superposed with global motion. The computational cost of the proposed method is reduced by 25% compared with that of the previous algorithm, although further improvement is necessary.
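    The following sketch shows one predict/update/resample step of a generic bootstrap particle filter in which the likelihood comes from a block-similarity score (e.g., a negative SAD). It is an illustration under stated assumptions, not the record's algorithm; `similarity` is a hypothetical user-supplied function.

    ```python
    import numpy as np

    def particle_filter_step(particles, weights, frame, template, similarity,
                             step_std=2.0, rng=None):
        """One step of a bootstrap particle filter for speckle tracking.

        particles -- (N, 2) array of candidate (x, y) displacements
        weights   -- (N,) normalized particle weights
        similarity(frame, template, xy) -- block-matching score, higher = better
        """
        rng = np.random.default_rng() if rng is None else rng
        # Predict: random-walk transition model
        particles = particles + rng.normal(0.0, step_std, particles.shape)
        # Update: weight each particle by the block similarity at its location
        scores = np.array([similarity(frame, template, xy) for xy in particles])
        weights = weights * np.exp(scores - scores.max())     # numerically stable
        weights /= weights.sum()
        # Resample when the effective sample size collapses
        if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        estimate = np.average(particles, axis=0, weights=weights)
        return particles, weights, estimate
    ```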

  14. Orbital and angular motion construction for low thrust interplanetary flight

    NASA Astrophysics Data System (ADS)

    Yelnikov, R. V.; Mashtakov, Y. V.; Ovchinnikov, M. Yu.; Tkachev, S. S.

    2016-11-01

    Low-thrust interplanetary flight is considered. First, the fuel-optimal control is found. The angular motion is then synthesized so that the thruster tracks the direction required by the optimal control. Finally, a reaction wheel control law for tracking this angular motion is proposed and implemented. A numerical example is given and the total thruster operation time is determined. Disturbances from solar radiation pressure, thrust eccentricity, inaccuracies in reaction wheel installation, and errors in the inertia tensor are taken into account.

  15. The optimization of self-phased arrays for diurnal motion tracking of synchronous satellites

    NASA Technical Reports Server (NTRS)

    Theobold, D. M.; Hodge, D. B.

    1977-01-01

    The diurnal motion of a synchronous satellite necessitates mechanical tracking when a large aperture, high gain antenna is employed at the earth terminal. An alternative solution to this tracking problem is to use a self phased array consisting of a number of fixed pointed elements, each with moderate directivity. Non-mechanical tracking and adequate directive gain are achieved electronically by phase coherent summing of the element outputs. The element beamwidths provide overlapping area coverage of the satellite motion but introduce a diurnal variation into the array gain. The optimum element beamwidth and pointing direction of these elements can be obtained under the condition that the array gain is maximized simultaneously with the minimization of the diurnal variation.

  16. Fitting Handled Objects into Apertures by 17- to 36-Month-Old Children: The Dynamics of Spatial Coordination

    ERIC Educational Resources Information Center

    Jung, Wendy P.; Kahrs, Björn A.; Lockman, Jeffrey J.

    2018-01-01

    Handled artifacts are ubiquitous in human technology, but how young children engage in spatially coordinated behaviors with these artifacts is not well understood. To address this issue, children (N = 30) from 17-36 months were studied with motion tracking technology as they fit the distal segment of a handled artifact into a slot. The handle was…

  17. Reconstructing 3-D skin surface motion for the DIET breast cancer screening system.

    PubMed

    Botterill, Tom; Lotz, Thomas; Kashif, Amer; Chase, J Geoffrey

    2014-05-01

    Digital image-based elasto-tomography (DIET) is a prototype system for breast cancer screening. A breast is imaged while being vibrated, and the observed surface motion is used to infer the internal stiffness of the breast, hence identifying tumors. This paper describes a computer vision system for accurately measuring 3-D surface motion. A model-based segmentation is used to identify the profile of the breast in each image, and the 3-D surface is reconstructed by fitting a model to the profiles. The surface motion is measured using a modern optical flow implementation customized to the application, then trajectories of points on the 3-D surface are given by fusing the optical flow with the reconstructed surfaces. On data from human trials, the system is shown to exceed the performance of an earlier marker-based system at tracking skin surface motion. We demonstrate that the system can detect a 10 mm tumor in a silicone phantom breast.

  18. MRI-guided tumor tracking in lung cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Cerviño, Laura I.; Du, Jiang; Jiang, Steve B.

    2011-07-01

    Precise tracking of lung tumor motion during treatment delivery still represents a challenge in radiation therapy. Prototypes of MRI-linac hybrid systems are being created which have the potential of ionization-free real-time imaging of the tumor. This study evaluates the performance of lung tumor tracking algorithms in cine-MRI sagittal images from five healthy volunteers. Visible vascular structures were used as targets. Volunteers performed several series of regular and irregular breathing. Two tracking algorithms were implemented and evaluated: a template matching (TM) algorithm in combination with surrogate tracking using the diaphragm (surrogate was used when the maximum correlation between the template and the image in the search window was less than specified), and an artificial neural network (ANN) model based on the principal components of a region of interest that encompasses the target motion. The mean tracking error ē and the error at 95% confidence level e95 were evaluated for each model. The ANN model led to ē = 1.5 mm and e95 = 4.2 mm, while TM led to ē = 0.6 mm and e95 = 1.0 mm. An extra series was considered separately to evaluate the benefit of using surrogate tracking in combination with TM when target out-of-plane motion occurs. For this series, the mean error was 7.2 mm using only TM and 1.7 mm when the surrogate was used in combination with TM. Results show that, as opposed to tracking with other imaging modalities, ANN does not perform well in MR-guided tracking. TM, however, leads to highly accurate tracking. Out-of-plane motion could be addressed by surrogate tracking using the diaphragm, which can be easily identified in the images.
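    The following sketch illustrates the core of the template-matching (TM) scheme with a surrogate fallback described above: normalized cross-correlation is evaluated in a search window, and the surrogate-based prediction is used whenever the peak correlation falls below a threshold. The threshold value and function names are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    def track_target(frame, template, search_window, surrogate_pos, corr_threshold=0.6):
        """Normalized cross-correlation template matching with surrogate fallback.

        frame         -- current grayscale cine-MR image (2D array, same dtype as template)
        template      -- target template taken from a reference frame
        search_window -- (x, y, w, h) region of `frame` to search
        surrogate_pos -- target position predicted from the surrogate (e.g. diaphragm)
        """
        x, y, w, h = search_window
        roi = frame[y:y + h, x:x + w]
        score = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(score)
        if max_val >= corr_threshold:
            return (x + max_loc[0], y + max_loc[1]), max_val   # direct match
        return surrogate_pos, max_val                          # fall back to surrogate
    ```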

  19. Prototype development of an electrical impedance based simultaneous respiratory and cardiac monitoring system for gated radiotherapy.

    PubMed

    Kohli, Kirpal; Liu, Jeff; Schellenberg, Devin; Karvat, Anand; Parameswaran, Ash; Grewal, Parvind; Thomas, Steven

    2014-10-14

    In radiotherapy, temporary translocations of the internal organs and tumor induced by respiratory and cardiac activities can undesirably lead to significantly lower radiation dose on the targeted tumor but more harmful radiation on surrounding healthy tissues. Respiratory and cardiac gated radiotherapy offers a potential solution for the treatment of tumors located in the upper thorax. The present study focuses on the design and development of simultaneous acquisition of respiratory and cardiac signal using electrical impedance technology for use in dual gated radiotherapy. An electronic circuitry was developed for monitoring the bio-impedance change due to respiratory and cardiac motions and extracting the cardiogenic ECG signal. The system was analyzed in terms of reliability of signal acquisition, time delay, and functionality in a high energy radiation environment. The resulting signal of the system developed was also compared with the output of the commercially available Real-time Position Management™ (RPM) system in both time and frequency domains. The results demonstrate that the bioimpedance-based method can potentially provide reliable tracking of respiratory and cardiac motion in humans, alternative to currently available methods. When compared with the RPM system, the impedance-based system developed in the present study shows similar output pattern but different sensitivities in monitoring different respiratory rates. The tracking of cardiac motion was more susceptible to interference from other sources than respiratory motion but also provided synchronous output compared with the ECG signal extracted. The proposed hardware-based implementation was observed to have a worst-case time delay of approximately 33 ms for respiratory monitoring and 45 ms for cardiac monitoring. No significant effect on the functionality of the system was observed when it was tested in a radiation environment with the electrode lead wires directly exposed to high-energy X-Rays. The developed system capable of rendering quality signals for tracking both respiratory and cardiac motions can potentially provide a solution for simultaneous dual-gated radiotherapy.

  20. Anomaly detection driven active learning for identifying suspicious tracks and events in WAMI video

    NASA Astrophysics Data System (ADS)

    Miller, David J.; Natraj, Aditya; Hockenbury, Ryler; Dunn, Katherine; Sheffler, Michael; Sullivan, Kevin

    2012-06-01

    We describe a comprehensive system for learning to identify suspicious vehicle tracks from wide-area motion (WAMI) video. First, since the road network for the scene of interest is assumed unknown, agglomerative hierarchical clustering is applied to all spatial vehicle measurements, resulting in spatial cells that largely capture individual road segments. Next, for each track, both at the cell (speed, acceleration, azimuth) and track (range, total distance, duration) levels, extreme value feature statistics are both computed and aggregated, to form summary (p-value based) anomaly statistics for each track. Here, to fairly evaluate tracks that travel across different numbers of spatial cells, for each cell-level feature type, a single (most extreme) statistic is chosen, over all cells traveled. Finally, a novel active learning paradigm, applied to a (logistic regression) track classifier, is invoked to learn to distinguish suspicious from merely anomalous tracks, starting from anomaly-ranked track prioritization, with ground-truth labeling by a human operator. This system has been applied to WAMI video data (ARGUS), with the tracks automatically extracted by a system developed in-house at Toyon Research Corporation. Our system gives promising preliminary results in highly ranking as suspicious aerial vehicles, dismounts, and traffic violators, and in learning which features are most indicative of suspicious tracks.

  1. Technical Note: Validation and implementation of a wireless transponder tracking system for gated stereotactic ablative radiotherapy of the liver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Joshua, E-mail: joshua.james@louisville.edu; Dunlap, Neal E.; Nguyen, Vi Nhan

    Purpose: Tracking soft-tissue targets has recently been cleared as a new application of Calypso, an electromagnetic wireless transponder tracking system, allowing for gated treatment of the liver based on the motion of the target volume itself. The purpose of this study is to describe the details of validating the Calypso system for wireless transponder tracking of the liver and to present the clinical workflow for using it to deliver gated stereotactic ablative radiotherapy (SABR). Methods: A commercial 3D diode array motion system was used to evaluate the dynamic tracking accuracy of Calypso when tracking continuous large amplitude motion. It was then used to perform end-to-end tests to evaluate the dosimetric accuracy of gated beam delivery for liver SABR. In addition, gating limits were investigated to determine how large the gating window can be while still maintaining dosimetric accuracy. The gating latency of the Calypso system was also measured using a customized motion phantom. Results: The average absolute difference between the measured and expected positional offset was 0.3 mm. The 2%/2 mm gamma pass rates for the gated treatment delivery were greater than 97%. When increasing the gating limits beyond the known extent of planned motion, the gamma pass rates decreased as expected. The 2%/2 mm gamma pass rate for a 1, 2, and 3 mm increase in gating limits was measured to be 97.8%, 82.9%, and 61.4%, respectively. The average gating latency was measured to be 63.8 ms for beam-hold and 195.8 ms for beam-on. Four liver patients with 17 total fractions have been successfully treated at our institution. Conclusions: Wireless transponder tracking was validated as a dosimetrically accurate way to provide gated SABR of the liver. The dynamic tracking accuracy of the Calypso system met manufacturer’s specification, even for continuous large amplitude motion that can be encountered when tracking liver tumors close to the diaphragm. The measured beam-hold gating latency was appropriate for targets that will traverse the gating limit each respiratory cycle causing the beam to be interrupted constantly throughout treatment delivery.

  2. Approach for gait analysis in persons with limb loss including residuum and prosthesis socket dynamics.

    PubMed

    LaPrè, A K; Price, M A; Wedge, R D; Umberger, B R; Sup, Frank C

    2018-04-01

    Musculoskeletal modeling and marker-based motion capture techniques are commonly used to quantify the motions of body segments, and the forces acting on them during human gait. However, when these techniques are applied to analyze the gait of people with lower limb loss, the clinically relevant interaction between the residual limb and prosthesis socket is typically overlooked. It is known that there is considerable motion and loading at the residuum-socket interface, yet traditional gait analysis techniques do not account for these factors due to the inability to place tracking markers on the residual limb inside of the socket. In the present work, we used a global optimization technique and anatomical constraints to estimate the motion and loading at the residuum-socket interface as part of standard gait analysis procedures. We systematically evaluated a range of parameters related to the residuum-socket interface, such as the number of degrees of freedom, and determined the configuration that yields the best compromise between faithfully tracking experimental marker positions while yielding anatomically realistic residuum-socket kinematics and loads that agree with data from the literature. Application of the present model to gait analysis for people with lower limb loss will deepen our understanding of the biomechanics of walking with a prosthesis, which should facilitate the development of enhanced rehabilitation protocols and improved assistive devices. Copyright © 2017 John Wiley & Sons, Ltd.

  3. The effect of visual-motion time delays on pilot performance in a pursuit tracking task

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1976-01-01

    A study has been made to determine the effect of visual-motion time delays on pilot performance of a simulated pursuit tracking task. Three interrelated major effects have been identified: task difficulty, motion cues, and time delays. As task difficulty, as determined by airplane handling qualities or target frequency, increases, the amount of acceptable time delay decreases. However, when relatively complete motion cues are included in the simulation, the pilot can maintain his performance for considerably longer time delays. In addition, the number of degrees of freedom of motion employed is a significant factor.

  4. Activity-based exploitation of Full Motion Video (FMV)

    NASA Astrophysics Data System (ADS)

    Kant, Shashi

    2012-06-01

    Video has been a game-changer in how US forces are able to find, track and defeat their adversaries. With millions of minutes of video being generated from an increasing number of sensor platforms, the DOD has stated that the rapid increase in video is overwhelming its analysts. The manpower required to view and garner usable information from the flood of video is unaffordable, especially in light of current fiscal restraints. "Search" within full-motion video has traditionally relied on human tagging of content and on video metadata to provide filtering and locate segments of interest in the context of an analyst query. Our approach uses a novel machine-vision method to index FMV based on object recognition and tracking, and on event and activity detection. This approach enables FMV exploitation in real time, as well as a forensic look-back within archives. It can help get the most information out of video sensor collection, focus the attention of overburdened analysts, form connections in activity over time, and conserve national fiscal resources in exploiting FMV.

  5. Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm

    PubMed Central

    Tombu, Michael

    2014-01-01

    People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target–distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking—one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone. PMID:21264704

  6. Motion control of the rabbit ankle joint with a flat interface nerve electrode.

    PubMed

    Park, Hyun-Joo; Durand, Dominique M

    2015-12-01

    A flat interface nerve electrode (FINE) has been shown to improve fascicular and subfascicular selectivity. A recently developed novel control algorithm for FINE was applied to motion control of the rabbit ankle. A 14-contact FINE was placed on the rabbit sciatic nerve (n = 8), and ankle joint motion was controlled for sinusoidal trajectories and filtered random trajectories. To this end, a real-time controller was implemented with a multiple-channel current stimulus isolator. The performance test results showed good tracking performance of rabbit ankle joint motion for filtered random trajectories and sinusoidal trajectories (0.5 Hz and 1.0 Hz) with <10% average root-mean-square (RMS) tracking error, whereas the average range of ankle joint motion was between -20.0 ± 9.3° and 18.1 ± 8.8°. The proposed control algorithm enables the use of a multiple-contact nerve electrode for motion trajectory tracking control of musculoskeletal systems. © 2015 Wiley Periodicals, Inc.

  7. Data fusion for target tracking and classification with wireless sensor network

    NASA Astrophysics Data System (ADS)

    Pannetier, Benjamin; Doumerc, Robin; Moras, Julien; Dezert, Jean; Canevet, Loic

    2016-10-01

    In this paper, we address the problem of multiple ground target tracking and classification with information obtained from an unattended wireless sensor network. A multiple target tracking (MTT) algorithm, taking into account road and vegetation information, is proposed based on a centralized architecture. One of the key issues is how to adapt the classical MTT approach to satisfy embedded processing. Based on track statistics, the classification algorithm uses estimated location, velocity and acceleration to help classify targets. The algorithm enables tracking of humans and vehicles driving both on and off road. We integrate road or trail width and vegetation cover as constraints in the target motion models to improve the performance of tracking under constraints with classification fusion. Our algorithm also incorporates different dynamic models to account for target maneuvers. The tracking and classification algorithms are integrated into an operational platform (the fusion node). In order to handle realistic ground target tracking scenarios, we use an autonomous smart computer deposited in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of the system is evaluated in a real exercise for an intelligence operation ("hunter hunt" scenario).

  8. PROMO – Real-time Prospective Motion Correction in MRI using Image-based Tracking

    PubMed Central

    White, Nathan; Roddey, Cooper; Shankaranarayanan, Ajit; Han, Eric; Rettmann, Dan; Santos, Juan; Kuperman, Josh; Dale, Anders

    2010-01-01

    Artifacts caused by patient motion during scanning remain a serious problem in most MRI applications. The prospective motion correction technique attempts to address this problem at its source by keeping the measurement coordinate system fixed with respect to the patient throughout the entire scan process. In this study, a new image-based approach for prospective motion correction is described, which utilizes three orthogonal 2D spiral navigator acquisitions (SP-Navs) along with a flexible image-based tracking method based on the Extended Kalman Filter (EKF) algorithm for online motion measurement. The SP-Nav/EKF framework offers the advantages of image-domain tracking within patient-specific regions-of-interest and reduced sensitivity to off-resonance-induced corruption of rigid-body motion estimates. The performance of the method was tested using offline computer simulations and online in vivo head motion experiments. In vivo validation results covering a broad range of staged head motions indicate a steady-state error of the SP-Nav/EKF motion estimates of less than 10 % of the motion magnitude, even for large compound motions that included rotations over 15 degrees. A preliminary in vivo application in 3D inversion recovery spoiled gradient echo (IR-SPGR) and 3D fast spin echo (FSE) sequences demonstrates the effectiveness of the SP-Nav/EKF framework for correcting 3D rigid-body head motion artifacts prospectively in high-resolution 3D MRI scans. PMID:20027635
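    For readers unfamiliar with the Extended Kalman Filter used above, the skeleton below shows the generic predict/update recursion. It is not the SP-Nav/EKF implementation itself; the transition and measurement functions (f, h) and their Jacobians (F, H) are user-supplied and the names are illustrative.

    ```python
    import numpy as np

    class ExtendedKalmanFilter:
        """Minimal EKF skeleton for a rigid-body motion state x."""

        def __init__(self, x0, P0, Q, R):
            self.x, self.P, self.Q, self.R = x0, P0, Q, R

        def predict(self, f, F):
            self.x = f(self.x)                      # propagate the state
            self.P = F @ self.P @ F.T + self.Q      # propagate its covariance

        def update(self, z, h, H):
            y = z - h(self.x)                       # innovation
            S = H @ self.P @ H.T + self.R           # innovation covariance
            K = self.P @ H.T @ np.linalg.inv(S)     # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(len(self.x)) - K @ H) @ self.P
    ```

    In image-based tracking, z would be derived from the navigator images and h would predict those measurements from the current rigid-body pose estimate.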

  9. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    NASA Astrophysics Data System (ADS)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
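    As a concrete example of the class of iterative deconvolution schemes discussed above, the sketch below implements standard Richardson-Lucy deconvolution for a 2D image. It is not the record's ordered-subset algorithm; the motion-dependent system matrix is replaced here by a simple shift-invariant PSF for illustration.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, n_iter=20, eps=1e-12):
        """Iterative Richardson-Lucy deconvolution of a 2D image.

        blurred -- observed (motion-blurred) image
        psf     -- point spread function describing the blur
        """
        estimate = np.full_like(blurred, blurred.mean(), dtype=float)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            reblurred = fftconvolve(estimate, psf, mode='same')
            ratio = blurred / (reblurred + eps)          # data / model
            estimate *= fftconvolve(ratio, psf_mirror, mode='same')
        return estimate
    ```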

  10. 76 FR 3881 - Notice of Intent To Grant Exclusive Patent License; PNI Corporation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-21

    ... and Apparatus for Motion Tracking of an Articulated Rigid Body, Navy Case No. 82,816.//U.S. Patent No. 7,089,148: Method and Apparatus for Motion Tracking of an Articulated Rigid Body, Navy Case No. 96...

  11. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  12. A coarse-to-fine kernel matching approach for mean-shift based visual tracking

    NASA Astrophysics Data System (ADS)

    Liangfu, L.; Zuren, F.; Weidong, C.; Ming, J.

    2009-03-01

    Mean shift is an efficient pattern-matching algorithm. It is widely used in visual tracking because it does not need to perform an exhaustive search over the image space. It employs a gradient optimization method to reduce the time of feature matching and achieve rapid object localization, and uses the Bhattacharyya coefficient as the similarity measure between the object template and the candidate template. This paper presents a mean shift algorithm based on a coarse-to-fine search for the best kernel matching, addressing object tracking with large inter-frame motion. If the object regions in two consecutive frames are far apart and do not overlap in image space, the traditional mean shift method can only reach a local optimum by iterating within the old object window, so the true target position cannot be recovered and tracking fails. The proposed algorithm first uses a similarity measure function to obtain a rough location of the moving object, and then uses the mean shift method to obtain an accurate local optimum by iterative computation, which enables successful tracking of objects with large motion. Experimental results show good performance in accuracy and speed compared with the background-weighted histogram algorithm in the literature.
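    The two quantities at the heart of the method above are the Bhattacharyya coefficient between histograms and the weighted mean-shift location update. The sketch below shows both in the classic histogram-based form; variable names are illustrative.

    ```python
    import numpy as np

    def bhattacharyya(p, q):
        """Bhattacharyya coefficient between two normalized histograms p and q."""
        return np.sum(np.sqrt(p * q))

    def mean_shift_step(positions, pixel_bins, target_hist, candidate_hist):
        """One mean-shift location update for kernel-based tracking.

        positions      -- (N, 2) pixel coordinates inside the candidate window
        pixel_bins     -- (N,) histogram bin index of each pixel
        target_hist    -- normalized target model histogram q
        candidate_hist -- normalized candidate histogram p at the current location
        """
        w = np.sqrt(target_hist[pixel_bins] / (candidate_hist[pixel_bins] + 1e-12))
        return np.average(positions, axis=0, weights=w)   # new window centre
    ```

    The coarse stage would move the candidate window toward the region whose histogram maximizes the Bhattacharyya coefficient; the fine stage then iterates `mean_shift_step` to convergence.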

  13. Physiological Motion Axis for the Seat of a Dynamic Office Chair.

    PubMed

    Kuster, Roman Peter; Bauer, Christoph Markus; Oetiker, Sarah; Kool, Jan

    2016-09-01

    The aim of this study was to determine and verify the optimal location of the motion axis (MA) for the seat of a dynamic office chair. A dynamic seat that supports pelvic motion may improve physical well-being and decrease the risk of sitting-associated disorders. However, office work requires an undisturbed view on the work task, which means a stable position of the upper trunk and head. Current dynamic office chairs do not fulfill this need. Consequently, a dynamic seat was adapted to the physiological kinematics of the human spine. Three-dimensional motion tracking in free sitting helped determine the physiological MA of the spine in the frontal plane. Three dynamic seats with physiological, lower, and higher MA were compared in stable upper body posture (thorax inclination) and seat support of pelvic motion (dynamic fitting accuracy). Spinal kinematics during sitting and walking were compared. The physiological MA was at the level of the 11th thoracic vertebra, causing minimal thorax inclination and high dynamic fitting accuracy. Spinal motion in active sitting and walking was similar. The physiological MA of the seat allows considerable lateral flexion of the spine similar to walking with a stable upper body posture and a high seat support of pelvic motion. The physiological MA enables lateral flexion of the spine, similar to walking, without affecting stable upper body posture, thus allowing active sitting while focusing on work. © 2016, Human Factors and Ergonomics Society.

  14. Research on Measurement Accuracy of Laser Tracking System Based on Spherical Mirror with Rotation Errors of Gimbal Mount Axes

    NASA Astrophysics Data System (ADS)

    Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang

    2018-02-01

    This paper presents a novel experimental approach for confirming that the spherical mirror of a laser tracking system can reduce the influence of rotation errors of the gimbal mount axes on measurement accuracy. By simplifying the optical system model of a laser tracking system based on a spherical mirror, the laser ranging measurement error caused by rotation errors of the gimbal mount axes can be extracted from the positions of the spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of the polarization beam splitter and biconvex lens along the optical axis and perpendicular to it are driven by the error motions of the gimbal mount axes. To simplify the experimental process, the motion of the biconvex lens is substituted by the motion of the spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of the gimbal mount axes is recorded in the readings of a laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm when the radial and axial error motions were within ±10 μm. The experimental method simplified the procedure, and the spherical mirror reduced the influence of rotation errors of the gimbal mount axes on the measurement accuracy of the laser tracking system.

  15. Cine phase contrast MRI to measure continuum Lagrangian finite strain fields in contracting skeletal muscle.

    PubMed

    Zhou, Hehe; Novotny, John E

    2007-01-01

    The aim was to measure the complex mechanics and Lagrangian finite strain of contracting human skeletal muscle in vivo with cine phase contrast MRI (CPC-MRI), applied here to the human supraspinatus muscle of the shoulder. Processing techniques are applied to transform velocities from CPC-MRI images into displacements and planar Lagrangian finite strain. An interpolation method describing the continuity of the velocity field, together with forward-backward and Fourier transform methods, was used to track the displacement of regions of interest during a cyclic abduction motion of a subject's arm. The components of the Lagrangian strain tensor were derived during the motion, and principal and maximum in-plane shear strain fields were calculated. Derived displacement and strain fields are shown that describe the contraction mechanics of the supraspinatus. Strains vary over time during the cyclic motion and are highly nonuniform throughout the muscle. The method presented overcomes the physical resolution limit of the MRI scanner, which is crucial for detecting detailed information within muscles, such as the changes that might occur with partial tears of the supraspinatus. These fields can then be used as input or validation data for modeling human skeletal muscle.
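    For reference, the Lagrangian (Green-Lagrange) finite strain tensor used above is E = 1/2 (F^T F - I), where F is the deformation gradient obtained from the tracked displacement field (F ≈ I + du/dX). The sketch below computes E and the derived principal and maximum in-plane shear strains for the planar case; it is a generic illustration, not the record's processing pipeline.

    ```python
    import numpy as np

    def green_lagrange_strain(F):
        """Green-Lagrange strain tensor E = 1/2 (F^T F - I) from a
        deformation gradient F (2x2 for the planar, in-plane case)."""
        F = np.asarray(F, float)
        return 0.5 * (F.T @ F - np.eye(F.shape[0]))

    def principal_strains(E):
        """Principal strains and maximum in-plane shear from a 2x2 strain tensor."""
        e = np.linalg.eigvalsh(E)              # eigenvalues in ascending order
        return e, 0.5 * (e[-1] - e[0])         # (principal strains, max shear)
    ```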

  16. Ubiquitous Wireless Smart Sensing and Control

    NASA Technical Reports Server (NTRS)

    Wagner, Raymond

    2013-01-01

    Need new technologies to reliably and safely have humans interact within sensored environments (integrated user interfaces, physical and cognitive augmentation, training, and human-systems integration tools). Areas of focus include: radio frequency identification (RFID), motion tracking, wireless communication, wearable computing, adaptive training and decision support systems, and tele-operations. The challenge is developing effective, low cost/mass/volume/power integrated monitoring systems to assess and control system, environmental, and operator health; and accurately determining and controlling the physical, chemical, and biological environments of the areas and associated environmental control systems.

  17. Ubiquitous Wireless Smart Sensing and Control. Pumps and Pipes JSC: Uniquely Houston

    NASA Technical Reports Server (NTRS)

    Wagner, Raymond

    2013-01-01

    Need new technologies to reliably and safely have humans interact within sensored environments (integrated user interfaces, physical and cognitive augmentation, training, and human-systems integration tools). Areas of focus include: radio frequency identification (RFID), motion tracking, wireless communication, wearable computing, adaptive training and decision support systems, and tele-operations. The challenge is developing effective, low cost/mass/volume/power integrated monitoring systems to assess and control system, environmental, and operator health; and accurately determining and controlling the physical, chemical, and biological environments of the areas and associated environmental control systems.

  18. A method to track rotational motion for use in single-molecule biophysics.

    PubMed

    Lipfert, Jan; Kerssemakers, Jacob J W; Rojer, Maylon; Dekker, Nynke H

    2011-10-01

    The double helical nature of DNA links many cellular processes such as DNA replication, transcription, and repair to rotational motion and the accumulation of torsional strain. Magnetic tweezers (MTs) are a single-molecule technique that enables the application of precisely calibrated stretching forces to nucleic acid tethers and to control their rotational motion. However, conventional magnetic tweezers do not directly monitor rotation or measure torque. Here, we describe a method to directly measure rotational motion of particles in MT. The method relies on attaching small, non-magnetic beads to the magnetic beads to act as fiducial markers for rotational tracking. CCD images of the beads are analyzed with a tracking algorithm specifically designed to minimize crosstalk between translational and rotational motion: first, the in-plane center position of the magnetic bead is determined with a kernel-based tracker, while subsequently the height and rotation angle of the bead are determined via correlation-based algorithms. Evaluation of the tracking algorithm using both simulated images and recorded images of surface-immobilized beads demonstrates a rotational resolution of 0.1°, while maintaining a translational resolution of 1-2 nm. Example traces of the rotational fluctuations exhibited by DNA-tethered beads confined in magnetic potentials of varying stiffness demonstrate the robustness of the method and the potential for simultaneous tracking of multiple beads. Our rotation tracking algorithm enables the extension of MTs to magnetic torque tweezers (MTT) to directly measure the torque in single molecules. In addition, we envision uses of the algorithm in a range of biophysical measurements, including further extensions of MT, tethered particle motion, and optical trapping measurements.

  19. Feasibility Study for Markerless Tracking of Lung Tumors in Stereotactic Body Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richter, Anne, E-mail: richter_a3@klinik.uni-wuerzburg.d; Wilbert, Juergen; Baier, Kurt

    2010-10-01

    Purpose: To evaluate the feasibility and accuracy of a method for markerless tracking of lung tumors in electronic portal imaging device (EPID) movies and to analyze intra- and interfractional variations in tumor motion. Methods and Materials: EPID movies were acquired during stereotactic body radiotherapy (SBRT) given to 40 patients with 49 pulmonary targets and retrospectively analyzed. Tumor visibility and tracking accuracy were determined by three observers. Tumor motion of 30 targets was analyzed in detail via four-dimensional computed tomography (4DCT) and EPID in the superior-inferior direction for intra- and interfractional variations. Results: Tumor visibility was sufficient for markerless tracking in 47% of the EPID movies. Tumor size and visibility in the DRR were correlated with visibility in the EPID images. The difference between automatic and manual tracking was a maximum of 2 mm for 98.3% in the x direction and 89.4% in the y direction. Motion amplitudes in 4DCT images (range, 0.7-17.9 mm; median, 4.9 mm) were closely correlated with amplitudes in the EPID movies. Intrafractional and interfractional variability of tumor motion amplitude were of similar magnitude: 1 mm on average to a maximum of 4 mm. A change in moving average of more than ±1 mm, ±2 mm, and ±4 mm was observed in 47.1%, 17.1%, and 4.5% of treatment time for all trajectories, respectively. Mean tumor velocity was 3.4 mm/sec, to a maximum of 61 mm/sec. Conclusions: Tracking of pulmonary tumors in EPID images without implanted markers was feasible in 47% of all treatment beams. 4DCT is representative of mean breathing motion on average, but larger deviations in target motion can occur between treatment planning and delivery, which supports monitoring of target motion during delivery.

  20. Sustained multifocal attentional enhancement of stimulus processing in early visual areas predicts tracking performance.

    PubMed

    Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K

    2013-03-20

    Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.

  1. Quantitative Characterization of Cell Behaviors through Cell Cycle Progression via Automated Cell Tracking

    PubMed Central

    Wang, Yuliang; Jeong, Younkoo; Jhiang, Sissy M.; Yu, Lianbo; Menq, Chia-Hsiang

    2014-01-01

    Cell behaviors are reflections of intracellular tension dynamics and play important roles in many cellular processes. In this study, temporal variations in cell geometry and cell motion through cell cycle progression were quantitatively characterized via automated cell tracking for MCF-10A non-transformed breast cells, MCF-7 non-invasive breast cancer cells, and MDA-MB-231 highly metastatic breast cancer cells. A new cell segmentation method, which combines the threshold method and our modified edge-based active contour method, was applied to optimize cell boundary detection for all cells in the field-of-view. An automated cell-tracking program was implemented to conduct live cell tracking over 40 hours for the three cell lines. The cell boundary and location information was measured and aligned with cell cycle progression with constructed cell lineage trees. Cell behaviors were studied in terms of cell geometry and cell motion. For cell geometry, cell area and cell axis ratio were investigated. For cell motion, instantaneous migration speed, cell motion type, as well as cell motion range were analyzed. We applied a cell-based approach that allows us to examine and compare temporal variations of cell behavior along with cell cycle progression at a single cell level. Cell body geometry along with distribution of peripheral protrusion structures appears to be associated with cell motion features. Migration speed, together with motion type and motion range, is required to distinguish the three cell lines examined. We found that vertical cell division or overlap is a feature of cell malignancy shared by MCF-7 and MDA-MB-231 cells, whereas abrupt changes in cell body geometry and cell motion during mitosis are unique to highly metastatic MDA-MB-231 cells. Taken together, our live cell tracking system serves as an invaluable tool to identify cell behaviors that are unique to malignant and/or highly metastatic breast cancer cells. PMID:24911281

  2. Temporal regularization of ultrasound-based liver motion estimation for image-guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J.

    Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer led to an improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking performance. A future study will investigate spatial uniformity of motion and its effect on the motion estimation errors.
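
    The α–β filter/similarity threshold idea can be illustrated with a minimal one-dimensional sketch: template-matching positions are accepted only when the matching score clears a threshold, and the filter otherwise coasts on its constant-velocity prediction. The gains, threshold, and function signature below are illustrative assumptions, not the published algorithm.

```python
def abst_track(measurements, similarities, dt, alpha=0.5, beta=0.1, sim_threshold=0.8):
    """Alpha-beta filtered template-matching positions (1D sketch).

    measurements : per-frame template-matching position estimates (mm)
    similarities : per-frame matching scores (e.g., normalized cross-correlation)
    Frames whose similarity falls below the threshold are treated as missing,
    and the filter coasts on its constant-velocity prediction.
    """
    x, v = measurements[0], 0.0
    out = [x]
    for z, s in zip(measurements[1:], similarities[1:]):
        x_pred = x + v * dt          # state prediction (constant velocity)
        if s >= sim_threshold:       # accept the measurement
            r = z - x_pred
            x = x_pred + alpha * r
            v = v + beta * r / dt
        else:                        # low-confidence match: coast
            x = x_pred
        out.append(x)
    return out
```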

  3. Improved Shear Wave Motion Detection Using Pulse-Inversion Harmonic Imaging with a Phased Array Transducer

    PubMed Central

    Song, Pengfei; Zhao, Heng; Urban, Matthew W.; Manduca, Armando; Pislaru, Sorin V.; Kinnick, Randall R.; Pislaru, Cristina; Greenleaf, James F.; Chen, Shigao

    2013-01-01

    Ultrasound tissue harmonic imaging is widely used to improve ultrasound B-mode imaging quality thanks to its effectiveness in suppressing imaging artifacts associated with ultrasound reverberation, phase aberration, and clutter noise. In ultrasound shear wave elastography (SWE), because the shear wave motion signal is extracted from the ultrasound signal, these noise sources can significantly deteriorate the shear wave motion tracking process and consequently result in noisy and biased shear wave motion detection. This situation is exacerbated in in vivo SWE applications in organs such as the heart, liver, and kidney. This paper, therefore, investigated the possibility of implementing harmonic imaging, specifically pulse-inversion harmonic imaging, in shear wave tracking, with the hypothesis that harmonic imaging can improve shear wave motion detection based on the same principles that apply to general harmonic B-mode imaging. We first designed an experiment with a gelatin phantom covered by an excised piece of pork belly and showed that harmonic imaging can significantly improve shear wave motion detection by producing less underestimated shear wave motion and more consistent shear wave speed measurements than fundamental imaging. Then, a transthoracic heart experiment on a freshly sacrificed pig showed that harmonic imaging could robustly track the shear wave motion and give consistent shear wave speed measurements while fundamental imaging could not. Finally, an in vivo transthoracic study of seven healthy volunteers showed that the proposed harmonic imaging tracking sequence could provide consistent estimates of the left ventricular myocardium stiffness in end-diastole with a general success rate of 80% and a success rate of 93.3% when excluding the subject with Body Mass Index (BMI) higher than 25. These promising results indicate that pulse-inversion harmonic imaging can significantly improve shear wave motion tracking and thus potentially facilitate more robust assessment of tissue elasticity by SWE. PMID:24021638
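
    The pulse-inversion principle itself is simple to demonstrate: echoes from a pulse and its polarity-inverted copy are summed, so linear (fundamental) components cancel while even-harmonic components generated by nonlinear propagation add. The crude quadratic tissue model below is an assumption used only for illustration.

```python
import numpy as np

def pulse_inversion(echo_pos, echo_neg):
    """Sum echoes received from a pulse and its polarity-inverted copy.

    Linear (fundamental) components cancel, while even-harmonic components
    generated by nonlinear propagation add constructively; shear wave motion
    tracking is then performed on the summed (harmonic) signal.
    """
    return echo_pos + echo_neg

def nonlinear_echo(tx):
    """Crude tissue model for illustration: linear term plus a quadratic term."""
    return tx + 0.1 * tx ** 2

t = np.linspace(0.0, 5e-6, 2000)
tx = np.sin(2.0 * np.pi * 2e6 * t)                      # 2 MHz transmit pulse
harmonic = pulse_inversion(nonlinear_echo(tx), nonlinear_echo(-tx))
# 'harmonic' retains only the quadratic (second-harmonic) part, 0.2 * tx**2
```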

  4. TH-AB-202-11: Spatial and Rotational Quality Assurance of 6DOF Patient Tracking Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belcher, AH; Liu, X; Grelewicz, Z

    2016-06-15

    Purpose: External tracking systems used for patient positioning and motion monitoring during radiotherapy are now capable of detecting both translations and rotations (6DOF). In this work, we develop a novel technique to evaluate the 6DOF performance of external motion tracking systems. We apply this methodology to an infrared (IR) marker tracking system and two 3D optical surface mapping systems in a common tumor 6DOF workspace. Methods: An in-house designed and built 6DOF parallel kinematics robotic motion phantom was used to follow input trajectories with sub-millimeter and sub-degree accuracy. The 6DOF positions of the robotic system were then tracked and recorded independently by three optical camera systems. A calibration methodology which associates the motion phantom and camera coordinate frames was first employed, followed by a comprehensive 6DOF trajectory evaluation, which spanned a full range of positions and orientations in a 20×20×16 mm and 5×5×5 degree workspace. The intended input motions were compared to the calibrated 6DOF measured points. Results: The technique found the accuracy of the IR marker tracking system to have maximal root mean square error (RMSE) values of 0.25 mm translationally and 0.09 degrees rotationally, in any one axis, comparing intended 6DOF positions to positions measured by the IR camera. The 6DOF RMSE discrepancy for the first 3D optical surface tracking unit yielded maximal values of 0.60 mm and 0.11 degrees over the same 6DOF volume. An earlier generation 3D optical surface tracker was observed to have worse tracking capabilities than both the IR camera unit and the newer 3D surface tracking system with maximal RMSE of 0.74 mm and 0.28 degrees within the same 6DOF evaluation space. Conclusion: The proposed technique was effective at evaluating the performance of 6DOF patient tracking systems. All systems examined exhibited tracking capabilities at the sub-millimeter and sub-degree level within a 6DOF workspace.

  5. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brix, Lau, E-mail: lau.brix@stab.rm.dk; Ringgaard, Steffen; Sørensen, Thomas Sangild

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. Results: Axial, sagittal, and coronal 2D MRI series yielded 3D respiratory motion curves for all volunteers. The motion directionality and amplitude were very similar when measured directly as in-plane motion or estimated indirectly as through-plane motion. The mean peak-to-peak breathing amplitude was 1.6 mm (left-right), 11.0 mm (craniocaudal), and 2.5 mm (anterior-posterior). The position of the watermelon structure was estimated in 2D MRI images with a root-mean-square error of 0.52 mm (in-plane) and 0.87 mm (through-plane). Conclusions: A method for 3D tracking in 2D MRI series was developed and demonstrated for liver tracking in volunteers. The method would allow real-time 3D localization with integrated MR-Linac systems.
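
    A minimal sketch of the template-library matching step, assuming OpenCV's normalized cross-correlation and a pre-built list of resliced templates with known through-plane offsets (names and structure are illustrative, not the authors' code):

```python
import cv2

def localize_3d(frame, templates, through_plane_offsets):
    """Pick the library template with the highest normalized cross-correlation.

    frame                 : real-time 2D image (float32)
    templates             : list of 2D templates resliced from the 3D scan (float32)
    through_plane_offsets : known out-of-plane position (mm) of each template
    Returns (in-plane row, in-plane col, through-plane offset, best score).
    """
    best_score, best_result = -2.0, None
    for tpl, offset in zip(templates, through_plane_offsets):
        ncc = cv2.matchTemplate(frame, tpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(ncc)
        if max_val > best_score:
            best_score = max_val
            best_result = (max_loc[1], max_loc[0], offset, max_val)
    return best_result
```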

  6. SIFT-based dense pixel tracking on 0.35 T cine-MR images acquired during image-guided radiation therapy with application to gating optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazur, Thomas R., E-mail: tmazur@radonc.wustl.edu, E-mail: hli@radonc.wustl.edu; Fischer-Valuck, Benjamin W.; Wang, Yuhe

    Purpose: To first demonstrate the viability of applying an image processing technique for tracking regions on low-contrast cine-MR images acquired during image-guided radiation therapy, and then outline a scheme that uses tracking data for optimizing gating results in a patient-specific manner. Methods: A first-generation MR-IGRT system (treating patients since January 2014) integrates a 0.35 T MR scanner into an annular gantry consisting of three independent Co-60 sources. Obtaining adequate frame rates for capturing relevant patient motion across large fields-of-view currently requires coarse in-plane spatial resolution. This study (1) initially investigates the feasibility of rapidly tracking dense pixel correspondences across single, sagittal plane images (with both moderate signal-to-noise and spatial resolution) using a matching objective for scale-invariant feature transform (SIFT) descriptors, highly descriptive vectors associated with every pixel that encode intensity gradients in the local region around that pixel. To more accurately track features, (2) harmonic analysis was then applied to all pixel trajectories within a region-of-interest across a short training period. In particular, the procedure adjusts the motion of outlying trajectories whose relative spectral power within a frequency bandwidth consistent with respiration (or another form of periodic motion) does not exceed a threshold value that is manually specified following the training period. To evaluate the tracking reliability after applying this correction, conventional metrics, including Dice similarity coefficients (DSCs), mean tracking errors (MTEs), and Hausdorff distances (HDs), were used to compare target segmentations obtained via tracking to manually delineated segmentations. Upon confirming the viability of this descriptor-based procedure for reliably tracking features, the study (3) outlines a scheme for optimizing gating parameters (including relative target position and a tolerable margin about this position) derived from a probability density function that is constructed using tracking results obtained just prior to treatment. Results: The feasibility of applying the matching objective for SIFT descriptors toward pixel-by-pixel tracking on cine-MR acquisitions was first retrospectively demonstrated for 19 treatments (spanning various sites). Both with and without motion correction based on harmonic analysis, sub-pixel MTEs were obtained. A mean DSC value spanning all patients of 0.916 ± 0.001 was obtained without motion correction, with DSC values exceeding 0.85 for all patients considered. While most patients show accurate tracking without motion correction, harmonic analysis does yield substantial gain in accuracy (defined using HDs) for three particularly challenging subjects. An application of tracking toward a gating optimization procedure was then demonstrated that should allow a physician to balance beam-on time and tissue sparing in a patient-specific manner by tuning several intuitive parameters. Conclusions: Tracking results show high fidelity in assessing intrafractional motion observed on cine-MR acquisitions. Incorporating harmonic analysis during a training period improves the robustness of the tracking for challenging targets. The concomitant gating optimization procedure should allow physicians to quantitatively assess gating effectiveness quickly just prior to treatment in a patient-specific manner.
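
    The harmonic-analysis correction can be sketched as a band-power test on each pixel trajectory: trajectories whose relative spectral power in a respiration band falls below a threshold are flagged as outliers whose motion should be adjusted. The band limits and threshold below are illustrative assumptions; in the study the threshold is specified manually after the training period.

```python
import numpy as np

def respiratory_band_fraction(trajectory, fs, band=(0.1, 0.5)):
    """Fraction of a trajectory's spectral power inside a respiration band (Hz)."""
    x = np.asarray(trajectory, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = power[1:].sum()             # ignore the DC bin
    return power[in_band].sum() / total if total > 0 else 0.0

def flag_outlier_trajectories(trajectories, fs, threshold=0.5):
    """Indices of trajectories whose relative respiratory power is too low."""
    return [i for i, tr in enumerate(trajectories)
            if respiratory_band_fraction(tr, fs) < threshold]
```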

  7. Real-time eye motion correction in phase-resolved OCT angiography with tracking SLO

    PubMed Central

    Braaf, Boy; Vienola, Kari V.; Sheehy, Christy K.; Yang, Qiang; Vermeer, Koenraad A.; Tiruveedhula, Pavan; Arathorn, David W.; Roorda, Austin; de Boer, Johannes F.

    2012-01-01

    In phase-resolved OCT angiography blood flow is detected from phase changes in between A-scans that are obtained from the same location. In ophthalmology, this technique is vulnerable to eye motion. We address this problem by combining inter-B-scan phase-resolved OCT angiography with real-time eye tracking. A tracking scanning laser ophthalmoscope (TSLO) at 840 nm provided eye tracking functionality and was combined with a phase-stabilized optical frequency domain imaging (OFDI) system at 1040 nm. Real-time eye tracking corrected eye drift and prevented discontinuity artifacts from (micro)saccadic eye motion in OCT angiograms. This improved the OCT spot stability on the retina and consequently reduced the phase-noise, thereby enabling the detection of slower blood flows by extending the inter-B-scan time interval. In addition, eye tracking enabled the easy compounding of multiple data sets from the fovea of a healthy volunteer to create high-quality eye motion artifact-free angiograms. High-quality images are presented of two distinct layers of vasculature in the retina and the dense vasculature of the choroid. Additionally we present, for the first time, a phase-resolved OCT angiogram of the mesh-like network of the choriocapillaris containing typical pore openings. PMID:23304647

  8. Spatial and rotational quality assurance of 6DOF patient tracking systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belcher, Andrew H.; Liu, Xinmin; Grelewicz, Zachary

    Purpose: External tracking systems used for patient positioning and motion monitoring during radiotherapy are now capable of detecting both translations and rotations. In this work, the authors develop a novel technique to evaluate the 6 degree-of-freedom (6DOF; translations and rotations) performance of external motion tracking systems. The authors apply this methodology to an infrared marker tracking system and two 3D optical surface mapping systems in a common tumor 6DOF workspace. Methods: An in-house designed and built 6DOF parallel kinematics robotic motion phantom was used to perform motions with sub-millimeter and subdegree accuracy in a 6DOF workspace. An infrared marker tracking system was first used to validate a calibration algorithm which associates the motion phantom coordinate frame to the camera frame. The 6DOF positions of the mobile robotic system in this space were then tracked and recorded independently by an optical surface tracking system after a cranial phantom was rigidly fixed to the moveable platform of the robotic stage. The calibration methodology was first employed, followed by a comprehensive 6DOF trajectory evaluation, which spanned a full range of positions and orientations in a 20 × 20 × 16 mm and 5° × 5° × 5° workspace. The intended input motions were compared to the calibrated 6DOF measured points. Results: The technique found the accuracy of the infrared (IR) marker tracking system to have maximal root-mean-square error (RMSE) values of 0.18, 0.25, 0.07 mm, 0.05°, 0.05°, and 0.09° in left–right (LR), superior–inferior (SI), anterior–posterior (AP), pitch, roll, and yaw, respectively, comparing the intended 6DOF position and the measured position by the IR camera. Similarly, the 6DOF RMSE discrepancy for the HD optical surface tracker yielded maximal values of 0.46, 0.60, 0.54 mm, 0.06°, 0.11°, and 0.08° in LR, SI, AP, pitch, roll, and yaw, respectively, over the same 6DOF evaluative workspace. An earlier generation 3D optical surface tracking unit was observed to have worse tracking capabilities than both the IR camera unit and the newer 3D surface tracking system with maximal RMSE of 0.69, 0.74, 0.47 mm, 0.28°, 0.19°, and 0.18°, in LR, SI, AP, pitch, roll, and yaw, respectively, in the same 6DOF evaluation space. Conclusions: The proposed technique was found to be effective at evaluating the performance of 6DOF patient tracking systems. All observed optical tracking systems were found to exhibit tracking capabilities at the sub-millimeter and subdegree level within a 6DOF workspace.
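
    Once the phantom and camera coordinate frames have been associated by the calibration step, the reported per-axis accuracy figures reduce to a per-axis RMSE between intended and measured 6DOF poses. The array layout below is an assumption made for this sketch.

```python
import numpy as np

def per_axis_rmse(intended, measured):
    """Per-axis RMSE between intended and measured 6DOF poses.

    Both arrays are N x 6: columns are LR, SI, AP translations (mm)
    followed by pitch, roll, yaw rotations (degrees), expressed in a
    common (calibrated) coordinate frame.
    """
    intended = np.asarray(intended, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return np.sqrt(np.mean((measured - intended) ** 2, axis=0))
```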

  9. Motion perception: behavior and neural substrate.

    PubMed

    Mather, George

    2011-05-01

    Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. Copyright © 2010 John Wiley & Sons, Ltd.

  10. Continuous Quantitative Measurements on a Linear Air Track

    ERIC Educational Resources Information Center

    Vogel, Eric

    1973-01-01

    Describes the construction and operational procedures of a spark-timing apparatus which is designed to record the back and forth motion of one or two carts on linear air tracks. Applications to measurements of velocity, acceleration, simple harmonic motion, and collision problems are illustrated. (CC)

  11. Application of an automatic cloud tracking technique to Meteosat water vapor and infrared observations

    NASA Technical Reports Server (NTRS)

    Endlich, R. M.; Wolf, D. E.

    1980-01-01

    The automatic cloud tracking system was applied to METEOSAT 6.7 micrometers water vapor measurements to learn whether the system can track the motions of water vapor patterns. Data for the midlatitudes, subtropics, and tropics were selected from a sequence of METEOSAT pictures for 25 April 1978. Trackable features in the water vapor patterns were identified using a clustering technique and the features were tracked by two different methods. In flat (low contrast) water vapor fields, the automatic motion computations were not reliable, but in areas where the water vapor fields contained small scale structure (such as in the vicinity of active weather phenomena) the computations were successful. Cloud motions were computed using METEOSAT infrared observations (including tropical convective systems and midlatitude jet stream cirrus).

  12. Development of a liver respiratory motion simulator to investigate magnetic tracking for abdominal interventions

    NASA Astrophysics Data System (ADS)

    Cleary, Kevin R.; Banovac, Filip; Levy, Elliot; Tanaka, Daigo

    2002-05-01

    We have designed and constructed a liver respiratory motion simulator as a first step in demonstrating the feasibility of using a new magnetic tracking system to follow the movement of internal organs. The simulator consists of a dummy torso, a synthetic liver, a linear motion platform, a graphical user interface for image overlay, and a magnetic tracking system along with magnetically tracked instruments. While optical tracking systems are commonly used in commercial image-guided surgery systems for the brain and spine, they are limited to procedures in which a line of sight can be maintained between the tracking system and the instruments which are being tracked. Magnetic tracking systems have been proposed for image-guided surgery applications, but most currently available magnetically tracked sensors are too small to be embedded in the body. The magnetic tracking system employed here, the AURORA from Northern Digital, can use sensors as small as 0.9 mm in diameter by 8 mm in length. This makes it possible to embed these sensors in catheters and thin needles. The catheters can then be wedged in a vein in an internal organ of interest so that tracking the position of the catheter gives a good estimate of the position of the internal organ. Alternatively, a needle with an embedded sensor could be placed near the area of interest.

  13. Atrioventricular junction (AVJ) motion tracking: a software tool with ITK/VTK/Qt.

    PubMed

    Pengdong Xiao; Shuang Leng; Xiaodan Zhao; Hua Zou; Ru San Tan; Wong, Philip; Liang Zhong

    2016-08-01

    The quantitative measurement of atrioventricular junction (AVJ) motion is an important index of ventricular function over the cardiac cycle, including systole and diastole. In this paper, a software tool that can conduct AVJ motion tracking from cardiovascular magnetic resonance (CMR) images is presented, built with the Insight Segmentation and Registration Toolkit (ITK), the Visualization Toolkit (VTK), and Qt. The software tool is written in C++ using the Visual Studio Community 2013 integrated development environment (IDE), which contains both an editor and a Microsoft compiler. The software package has been successfully implemented. From this software engineering practice, it is concluded that ITK, VTK, and Qt are convenient toolkits for implementing automatic image analysis functions for CMR images, such as the quantitative measurement of motion by visual tracking.

  14. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingram, S; Rao, A; Wendt, R

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
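
    A hedged sketch of the two-view geometry underlying such camera tracking, using OpenCV's essential-matrix estimation (API per recent OpenCV versions). This is a generic formulation, not necessarily the authors' algorithm, and the translation is recovered only up to scale.

```python
import cv2

def relative_pose(pts_prev, pts_curr, K):
    """Frame-to-frame camera rotation/translation from tracked surface points.

    pts_prev, pts_curr : N x 2 float arrays of corresponding image points
    K                  : 3 x 3 camera intrinsic matrix
    Returns (R, t): rotation matrix and unit translation direction
    (the absolute scale is not observable from two views alone).
    """
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
    return R, t
```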

  15. A comparison of gantry-mounted x-ray-based real-time target tracking methods.

    PubMed

    Montanaro, Tim; Nguyen, Doan Trang; Keall, Paul J; Booth, Jeremy; Caillet, Vincent; Eade, Thomas; Haddad, Carol; Shieh, Chun-Chien

    2018-03-01

    Most modern radiotherapy machines are built with a 2D kV imaging system. Combining this imaging system with a 2D-3D inference method would allow for a ready-made option for real-time 3D tumor tracking. This work investigates and compares the accuracy of four existing 2D-3D inference methods using both motion traces inferred from external surrogates and traces measured internally from implanted beacons. Tumor motion data from 160 fractions (46 thoracic/abdominal patients) of Synchrony traces (inferred traces), and 28 fractions (7 lung patients) of Calypso traces (internal traces) from the LIGHT SABR trial (NCT02514512) were used in this study. The motion traces were used as the ground truth. The ground truth trajectories were used in silico to generate 2D positions projected on the kV detector. These 2D traces were then passed to the 2D-3D inference methods: interdimensional correlation, Gaussian probability density function (PDF), arbitrary-shape PDF, and the Kalman filter. The inferred 3D positions were compared with the ground truth to determine tracking errors. The relationships between tracking error and motion magnitude, interdimensional correlation, and breathing periodicity index (BPI) were also investigated. Larger tracking errors were observed from the Calypso traces, with RMS and 95th percentile 3D errors of 0.84-1.25 mm and 1.72-2.64 mm, compared to 0.45-0.68 mm and 0.74-1.13 mm from the Synchrony traces. The Gaussian PDF method was found to be the most accurate, followed by the Kalman filter, the interdimensional correlation method, and the arbitrary-shape PDF method. Tracking error was found to strongly and positively correlate with motion magnitude for both the Synchrony and Calypso traces and for all four methods. Interdimensional correlation and BPI were found to negatively correlate with tracking error only for the Synchrony traces. The Synchrony traces exhibited higher interdimensional correlation than the Calypso traces, especially in the anterior-posterior direction. Inferred traces often exhibit higher interdimensional correlation, which is not a true representation of thoracic/abdominal motion and may lead to underestimation of kV-based tracking errors. The use of internal traces acquired from systems such as Calypso is advised for future kV-based tracking studies. The Gaussian PDF method is the most accurate 2D-3D inference method for tracking thoracic/abdominal targets. Motion magnitude has significant impact on 2D-3D inference error, and should be considered when estimating kV-based tracking error. © 2018 American Association of Physicists in Medicine.
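
    As one illustration of 2D-3D inference, a generic constant-velocity Kalman filter can fuse sequential 2D kV projections into a 3D estimate; under a parallel-beam assumption each frame observes only the projection of the 3D position given by a gantry-angle-dependent 2x3 matrix. The noise settings and interfaces below are assumptions for this sketch and do not reproduce the specific Kalman method compared in the study.

```python
import numpy as np

def kalman_2d3d(measurements, proj_mats, dt, q=1.0, r=0.25):
    """Constant-velocity Kalman filter inferring 3D target position from
    sequential 2D kV projections (parallel-beam approximation).

    measurements : list of 2-vectors, projected target position per frame (mm)
    proj_mats    : list of 2x3 projection matrices (one per frame, gantry-angle
                   dependent), so that z_k = P_k @ position_k + noise
    Returns the filtered 3D positions (state starts at the origin for simplicity).
    """
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                   # constant-velocity model
    Q = q * np.eye(6)                            # process noise (tuning)
    R = r * np.eye(2)                            # measurement noise (tuning)
    x = np.zeros(6)
    P = np.eye(6) * 100.0
    out = []
    for z, Pm in zip(measurements, proj_mats):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the 2D projection
        H = np.hstack([Pm, np.zeros((2, 3))])    # observe projected position only
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z) - H @ x)
        P = (np.eye(6) - K @ H) @ P
        out.append(x[:3].copy())
    return out
```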

  16. Image-based tracking: a new emerging standard

    NASA Astrophysics Data System (ADS)

    Antonisse, Jim; Randall, Scott

    2012-06-01

    Automated moving object detection and tracking are increasingly viewed as solutions to the enormous data volumes resulting from emerging wide-area persistent surveillance systems. In a previous paper we described a Motion Imagery Standards Board (MISB) initiative to help address this problem: the specification of a micro-architecture for the automatic extraction of motion indicators and tracks. This paper reports on the development of an extended specification of the plug-and-play tracking micro-architecture, on its status as an emerging standard across DoD, the Intelligence Community, and NATO.

  17. 3D Tracking of Diatom Motion in Turbulent Flow

    NASA Astrophysics Data System (ADS)

    Variano, E. A.; Brandt, L.; Sardina, G.; Ardekani, M.; Pujara, N.; Ayers, S.; Du Clos, K.; Karp-Boss, L.; Jumars, P. A.

    2016-02-01

    We present laboratory measurements of single-celled and chain-forming diatom motion in a stirred turbulence tank. The overarching goal is to explore whether diatoms track flow with fidelity (passive tracers) or whether the effects of cell density and shape result in biased trajectories that alter settling velocities. Diatom trajectories are recorded in 3D using a stereoscopic, calibrated tracking tool. Turbulence is created in a novel stirred tank, designed to create motions that match those found in the ocean surface mixed layer at scales less than 10 cm. The data are analyzed for evidence of enhanced particle clustering, an indicator of turbulently altered settling rates.

  18. A system for learning statistical motion patterns.

    PubMed

    Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve

    2006-09-01

    Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
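
    A learned motion pattern represented as a chain of Gaussians supports a simple anomaly score: each trajectory point is scored by the log-likelihood of its best-matching Gaussian in the chain, and the per-point scores are averaged. The nearest-Gaussian scoring rule and averaging below are assumptions made for illustration; a detection threshold would be chosen from training data.

```python
import numpy as np
from scipy.stats import multivariate_normal

def anomaly_score(trajectory, pattern):
    """Negative mean log-likelihood of a trajectory under a learned motion
    pattern represented as a chain of Gaussian distributions.

    trajectory : N x 2 array of (x, y) points
    pattern    : list of (mean, covariance) pairs along the pattern
    Each point is scored against its best-matching Gaussian in the chain.
    """
    score = 0.0
    for p in np.asarray(trajectory, dtype=float):
        ll = max(multivariate_normal.logpdf(p, mean=m, cov=c) for m, c in pattern)
        score += -ll
    return score / len(trajectory)
```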

  19. Robust Sliding Mode Control Based on GA Optimization and CMAC Compensation for Lower Limb Exoskeleton

    PubMed Central

    Long, Yi; Du, Zhi-jiang; Wang, Wei-dong; Dong, Wei

    2016-01-01

    A lower limb assistive exoskeleton is designed to help operators walk or carry payloads. The exoskeleton is required to shadow human motion intent accurately and compliantly to prevent incoordination. If the user's intention is estimated accurately, a precise position control strategy will improve collaboration between the user and the exoskeleton. In this paper, a hybrid position control scheme, combining sliding mode control (SMC) with a cerebellar model articulation controller (CMAC) neural network, is proposed to control the exoskeleton to react appropriately to human motion intent. A genetic algorithm (GA) is utilized to determine the optimal sliding surface and the sliding control law to improve performance of SMC. The proposed control strategy (SMC_GA_CMAC) is compared with three other types of approaches, that is, conventional SMC without optimization, optimal SMC with GA (SMC_GA), and SMC with CMAC compensation (SMC_CMAC), all of which are employed to track the desired joint angular position which is deduced from Clinical Gait Analysis (CGA) data. Position tracking performance is investigated with cosimulation using ADAMS and MATLAB/SIMULINK in two cases, of which the first case is without disturbances while the second case is with a bounded disturbance. The cosimulation results show the effectiveness of the proposed control strategy which can be employed in similar exoskeleton systems. PMID:27069353
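
    The sliding-mode component can be sketched for a single joint as follows; the sliding surface s = de + λe and a saturated switching term are standard, but all gains and the simplified joint model here are illustrative assumptions (in the paper the surface and control law are tuned by a GA, and model error is compensated by a CMAC network).

```python
import numpy as np

def smc_torque(q, dq, q_des, dq_des, ddq_des, lam=5.0, K=20.0, phi=0.05,
               inertia=1.0, gravity_comp=0.0):
    """One-step sliding-mode control torque for a single joint (sketch).

    The sliding surface is s = de + lam * e on the tracking error e = q_des - q.
    A saturated switching term (boundary layer phi) limits chattering.
    The model terms (inertia, gravity_comp) are placeholders.
    """
    e, de = q_des - q, dq_des - dq
    s = de + lam * e
    u_eq = inertia * (ddq_des + lam * de) + gravity_comp   # equivalent control
    u_sw = K * np.clip(s / phi, -1.0, 1.0)                 # saturated switching term
    return u_eq + u_sw
```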

  20. Integrating optical finger motion tracking with surface touch events.

    PubMed

    MacRitchie, Jennifer; McPherson, Andrew P

    2015-01-01

    This paper presents a method of integrating two contrasting sensor systems for studying human interaction with a mechanical system, using piano performance as the case study. Piano technique requires both precise small-scale motion of fingers on the key surfaces and planned large-scale movement of the hands and arms. Where studies of performance often focus on one of these scales in isolation, this paper investigates the relationship between them. Two sensor systems were installed on an acoustic grand piano: a monocular high-speed camera tracking the position of painted markers on the hands, and capacitive touch sensors attached to the key surfaces, which measure the location of finger-key contacts. This paper highlights a method of fusing the data from these systems, including temporal and spatial alignment, segmentation into notes and automatic fingering annotation. Three case studies demonstrate the utility of the multi-sensor data: analysis of finger flexion or extension based on touch and camera marker location, timing analysis of finger-key contact preceding and following key presses, and characterization of individual finger movements in the transitions between successive key presses. Piano performance is the focus of this paper, but the sensor method could equally apply to other fine motor control scenarios, with applications to human-computer interaction.

  1. Integrating optical finger motion tracking with surface touch events

    PubMed Central

    MacRitchie, Jennifer; McPherson, Andrew P.

    2015-01-01

    This paper presents a method of integrating two contrasting sensor systems for studying human interaction with a mechanical system, using piano performance as the case study. Piano technique requires both precise small-scale motion of fingers on the key surfaces and planned large-scale movement of the hands and arms. Where studies of performance often focus on one of these scales in isolation, this paper investigates the relationship between them. Two sensor systems were installed on an acoustic grand piano: a monocular high-speed camera tracking the position of painted markers on the hands, and capacitive touch sensors attached to the key surfaces, which measure the location of finger-key contacts. This paper highlights a method of fusing the data from these systems, including temporal and spatial alignment, segmentation into notes and automatic fingering annotation. Three case studies demonstrate the utility of the multi-sensor data: analysis of finger flexion or extension based on touch and camera marker location, timing analysis of finger-key contact preceding and following key presses, and characterization of individual finger movements in the transitions between successive key presses. Piano performance is the focus of this paper, but the sensor method could equally apply to other fine motor control scenarios, with applications to human-computer interaction. PMID:26082732

  2. Robust Sliding Mode Control Based on GA Optimization and CMAC Compensation for Lower Limb Exoskeleton.

    PubMed

    Long, Yi; Du, Zhi-Jiang; Wang, Wei-Dong; Dong, Wei

    2016-01-01

    A lower limb assistive exoskeleton is designed to help operators walk or carry payloads. The exoskeleton is required to shadow human motion intent accurately and compliantly to prevent incoordination. If the user's intention is estimated accurately, a precise position control strategy will improve collaboration between the user and the exoskeleton. In this paper, a hybrid position control scheme, combining sliding mode control (SMC) with a cerebellar model articulation controller (CMAC) neural network, is proposed to control the exoskeleton to react appropriately to human motion intent. A genetic algorithm (GA) is utilized to determine the optimal sliding surface and the sliding control law to improve performance of SMC. The proposed control strategy (SMC_GA_CMAC) is compared with three other types of approaches, that is, conventional SMC without optimization, optimal SMC with GA (SMC_GA), and SMC with CMAC compensation (SMC_CMAC), all of which are employed to track the desired joint angular position which is deduced from Clinical Gait Analysis (CGA) data. Position tracking performance is investigated with cosimulation using ADAMS and MATLAB/SIMULINK in two cases, of which the first case is without disturbances while the second case is with a bounded disturbance. The cosimulation results show the effectiveness of the proposed control strategy which can be employed in similar exoskeleton systems.

  3. Real Time Target Tracking in a Phantom Using Ultrasonic Imaging

    NASA Astrophysics Data System (ADS)

    Xiao, X.; Corner, G.; Huang, Z.

    In this paper we present a real-time ultrasound image guidance method suitable for tracking the motion of tumors. A 2D ultrasound-based motion tracking system was evaluated. A robot was used to control the focused ultrasound and position it at the target segmented from a real-time ultrasound video. Tracking accuracy and precision were investigated using a lesion-mimicking phantom. Experiments were conducted, and the results show the efficiency of the image guidance algorithm. This work could serve as the foundation for combining real-time ultrasound tracking with MRI thermometry monitoring for non-invasive surgery.

  4. The effect of visual-motion time-delays on pilot performance in a simulated pursuit tracking task

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1977-01-01

    An experimental study was made to determine the effect on pilot performance of time delays in the visual and motion feedback loops of a simulated pursuit tracking task. Three major interrelated factors were identified: task difficulty either in the form of airplane handling qualities or target frequency, the amount and type of motion cues, and time delay itself. In general, the greater the task difficulty, the smaller the time delay that could exist without degrading pilot performance. Conversely, the greater the motion fidelity, the greater the time delay that could be tolerated. The effect of motion was, however, pilot dependent.

  5. The Application of Leap Motion in Astronaut Virtual Training

    NASA Astrophysics Data System (ADS)

    Qingchao, Xie; Jiangang, Chao

    2017-03-01

    With the development of computer vision, virtual reality has been applied to astronaut virtual training. As an advanced optical device for hand tracking, Leap Motion provides precise and fluid tracking of the hands, making it suitable as a gesture input device in astronaut virtual training. This paper builds an astronaut virtual training system based on Leap Motion and establishes a mathematical model of hand occlusion. Finally, the ability of Leap Motion to handle occlusion is analysed. A virtual assembly simulation platform was developed for astronaut training, in which occluded gestures influence the recognition process. The experimental results can guide astronaut virtual training.

  6. On-track test of tilt control strategies for less motion sickness on tilting trains

    NASA Astrophysics Data System (ADS)

    Persson, Rickard; Kufver, Björn; Berg, Mats

    2012-07-01

    Carbody tilting is today a mature and inexpensive technology that permits higher train speeds in horizontal curves, thus shortening travel time. However, tilting trains run a greater risk of causing motion sickness than non-tilting ones. It is likely that the difference in motions between the two train types contributes to the observed difference in risk of motion sickness. Decreasing the risk of motion sickness has until now been equivalent to increasing the discomfort related to quasi-static lateral acceleration. However, there is a difference in time perception between discomfort caused by quasi-static quantities and motion sickness, which opens up new solutions. One proposed strategy is to let the local track conditions influence the tilt and give each curve its own optimised tilt angle. This is made possible by new tilt algorithms, storing track data and using a positioning system to select the appropriate data. The present paper reports on on-track tests involving more than 100 test subjects onboard a tilting train. A technical approach is taken, evaluating the effectiveness of the new tilt algorithms and the different requirements on quasi-static lateral acceleration and lateral jerk in relative terms. The evaluation verifies that the rms values important for motion sickness can be influenced without changing the requirements on quasi-static lateral acceleration and lateral jerk. The evaluation shows that reducing the motion quantities assumed to be related to motion sickness also leads to a reduction in experienced motion sickness. However, a limitation of applicability is found, as the lowest risk of motion sickness was not recorded for the test case with motions closest to those of a non-tilting train. An optimal level of tilt, different from no tilt at all, is obtained. This non-linear relation has been observed by other researchers in laboratory tests.

  7. Dynamic tumor tracking using the Elekta Agility MLC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fast, Martin F., E-mail: martin.fast@icr.ac.uk; Nill, Simeon, E-mail: simeon.nill@icr.ac.uk; Bedford, James L.

    2014-11-01

    Purpose: To evaluate the performance of the Elekta Agility multileaf collimator (MLC) for dynamic real-time tumor tracking. Methods: The authors have developed new control software that interfaces to the Agility MLC to dynamically program the movement of individual leaves, the dynamic leaf guides (DLGs), and the Y collimators (“jaws”) based on the actual target trajectory. A motion platform was used to perform dynamic tracking experiments with sinusoidal trajectories. The actual target positions reported by the motion platform at 20, 30, or 40 Hz were used as shift vectors for the MLC in the beam's-eye view. The system latency of the MLC (i.e., the average latency comprising target device reporting latencies and MLC adjustment latency) and the geometric tracking accuracy were extracted from a sequence of MV portal images acquired during irradiation for the following treatment scenarios: leaf-only motion, jaw + leaf motion, and DLG + leaf motion. Results: The portal imager measurements indicated a clear dependence of the system latency on the target position reporting frequency. Deducting the effect of the target frequency, the leaf adjustment latency was measured to be 38 ± 3 ms for a maximum target speed v of 13 mm/s. The jaw + leaf adjustment latency was 53 ± 3 ms at a similar speed. The system latency at a target position frequency of 30 Hz was in the range of 56–61 ms for the leaves (v ≤ 31 mm/s), 71–78 ms for the jaw + leaf motion (v ≤ 25 mm/s), and 58–72 ms for the DLG + leaf motion (v ≤ 59 mm/s). The tracking accuracy showed a similar dependency on the target position frequency and the maximum target speed. For the leaves, the root-mean-squared error (RMSE) was between 0.6–1.5 mm depending on the maximum target speed. For the jaw + leaf (DLG + leaf) motion, the RMSE was between 0.7–1.5 mm (1.9–3.4 mm). Conclusions: The authors have measured the latency and geometric accuracy of the Agility MLC, facilitating its future use for clinical tracking applications.
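
    Given the target trajectory and the leaf-aperture trajectory extracted from the portal images, one simple way to estimate a system latency is as the lag that maximizes their cross-correlation. This pure-delay assumption and the helper below are illustrative, not the authors' measurement procedure.

```python
import numpy as np

def estimate_latency(target_pos, mlc_pos, frame_interval):
    """Estimate system latency (s) as the lag maximizing the cross-correlation
    between the target trajectory and the MLC aperture trajectory extracted
    from portal images sampled every `frame_interval` seconds.
    """
    a = np.asarray(target_pos, dtype=float) - np.mean(target_pos)
    b = np.asarray(mlc_pos, dtype=float) - np.mean(mlc_pos)
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)   # positive lag: MLC lags the target
    return lag * frame_interval
```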

  8. Eye Tracking of Occluded Self-Moved Targets: Role of Haptic Feedback and Hand-Target Dynamics.

    PubMed

    Danion, Frederic; Mathew, James; Flanagan, J Randall

    2017-01-01

    Previous studies on smooth pursuit eye movements have shown that humans can continue to track the position of their hand, or a target controlled by the hand, after it is occluded, thereby demonstrating that arm motor commands contribute to the prediction of target motion driving pursuit eye movements. Here, we investigated this predictive mechanism by manipulating both the complexity of the hand-target mapping and the provision of haptic feedback. Two hand-target mappings were used, either a rigid (simple) one in which hand and target motion matched perfectly or a nonrigid (complex) one in which the target behaved as a mass attached to the hand by means of a spring. Target animation was obtained by asking participants to oscillate a lightweight robotic device that provided (or not) haptic feedback consistent with the target dynamics. Results showed that as long as 7 s after target occlusion, smooth pursuit continued to be the main contributor to total eye displacement (∼60%). However, the accuracy of eye-tracking varied substantially across experimental conditions. In general, eye-tracking was less accurate under the nonrigid mapping, as reflected by higher positional and velocity errors. Interestingly, haptic feedback helped to reduce the detrimental effects of target occlusion when participants used the nonrigid mapping, but not when they used the rigid one. Overall, we conclude that the ability to maintain smooth pursuit in the absence of visual information can extend to complex hand-target mappings, but the provision of haptic feedback is critical for the maintenance of accurate eye-tracking performance.

  9. Eye Tracking of Occluded Self-Moved Targets: Role of Haptic Feedback and Hand-Target Dynamics

    PubMed Central

    Mathew, James

    2017-01-01

    Abstract Previous studies on smooth pursuit eye movements have shown that humans can continue to track the position of their hand, or a target controlled by the hand, after it is occluded, thereby demonstrating that arm motor commands contribute to the prediction of target motion driving pursuit eye movements. Here, we investigated this predictive mechanism by manipulating both the complexity of the hand-target mapping and the provision of haptic feedback. Two hand-target mappings were used, either a rigid (simple) one in which hand and target motion matched perfectly or a nonrigid (complex) one in which the target behaved as a mass attached to the hand by means of a spring. Target animation was obtained by asking participants to oscillate a lightweight robotic device that provided (or not) haptic feedback consistent with the target dynamics. Results showed that as long as 7 s after target occlusion, smooth pursuit continued to be the main contributor to total eye displacement (∼60%). However, the accuracy of eye-tracking varied substantially across experimental conditions. In general, eye-tracking was less accurate under the nonrigid mapping, as reflected by higher positional and velocity errors. Interestingly, haptic feedback helped to reduce the detrimental effects of target occlusion when participants used the nonrigid mapping, but not when they used the rigid one. Overall, we conclude that the ability to maintain smooth pursuit in the absence of visual information can extend to complex hand-target mappings, but the provision of haptic feedback is critical for the maintenance of accurate eye-tracking performance. PMID:28680964

  10. Hand gesture recognition in confined spaces with partial observability and occultation constraints

    NASA Astrophysics Data System (ADS)

    Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen

    2016-05-01

    Human activity detection and recognition capabilities have broad applications for military and homeland security. These tasks are very complicated, however, especially when multiple persons are performing concurrent activities in confined spaces that impose significant obstruction, occultation, and observability uncertainty. In this paper, our primary contribution is to present a dedicated taxonomy and kinematic ontology that are developed for in-vehicle group human activities (IVGA). Secondly, we describe a set of hand-observable patterns that represents certain IVGA examples. Thirdly, we propose two classifiers for hand gesture recognition and compare their performance individually and jointly. Finally, we present a variant of Hidden Markov Model for Bayesian tracking, recognition, and annotation of hand motions, which enables spatiotemporal inference to human group activity perception and understanding. To validate our approach, synthetic (graphical data from virtual environment) and real physical environment video imagery are employed to verify the performance of these hand gesture classifiers, while measuring their efficiency and effectiveness based on the proposed Hidden Markov Model for tracking and interpreting dynamic spatiotemporal IVGA scenarios.

  11. Tracking without perceiving: a dissociation between eye movements and motion perception.

    PubMed

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.

  12. Tracking Without Perceiving: A Dissociation Between Eye Movements and Motion Perception

    PubMed Central

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-01-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept. PMID:21189353

  13. A Novel Kalman Filter for Human Motion Tracking With an Inertial-Based Dynamic Inclinometer.

    PubMed

    Ligorio, Gabriele; Sabatini, Angelo M

    2015-08-01

    This study describes the design and development of a linear Kalman filter to create an inertial-based inclinometer targeted at dynamic motion conditions. The estimation of the body attitude (i.e., the inclination with respect to the vertical) was treated as a source separation problem to discriminate gravity and body acceleration from the specific force measured by a triaxial accelerometer. The sensor fusion between triaxial gyroscope and triaxial accelerometer data was performed using a linear Kalman filter. Wrist-worn inertial measurement unit data from ten participants were acquired while performing two dynamic tasks: a 60-s sequence of seven manual activities and 90 s of walking at natural speed. Stereophotogrammetric data were used as a reference. A statistical analysis was performed to assess the significance of the accuracy improvement over state-of-the-art approaches. The proposed method achieved, on average, a root mean square attitude error of 3.6° and 1.8° in the manual activities and locomotion tasks, respectively. The statistical analysis showed that, when compared to a few competing methods, the proposed method improved the attitude estimation accuracy. A novel Kalman filter for inertial-based attitude estimation was presented in this study. A significant accuracy improvement was achieved over state-of-the-art approaches, due to a filter design that better matched the basic optimality assumptions of Kalman filtering. Human motion tracking is the main application field of the proposed method. Accurately discriminating the two components present in the triaxial accelerometer signal is well suited for studying both the rotational and the linear body kinematics.
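
    A generic single-axis version of such a filter tracks inclination and gyroscope bias, predicting with the gyro rate and updating with the accelerometer-derived angle. The state choice and noise values below are assumptions for illustration and do not reproduce the paper's source-separation formulation.

```python
import numpy as np

def inclinometer_kf(gyro_rate, accel_angle, dt, q=0.01, r=0.1):
    """Linear Kalman filter fusing gyroscope rate and accelerometer-derived
    inclination for a single axis.

    State: [angle, gyro_bias]. The gyro rate drives the prediction; the
    inclination computed from the accelerometer (valid when body acceleration
    is small relative to gravity) provides the measurement update.
    """
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    B = np.array([dt, 0.0])
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([accel_angle[0], 0.0])
    P = np.eye(2)
    out = []
    for w, z in zip(gyro_rate, accel_angle):
        # predict with the gyro rate
        x = F @ x + B * w
        P = F @ P @ F.T + Q
        # update with the accelerometer inclination
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out
```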

  14. Real-time intra-fraction-motion tracking using the treatment couch: a feasibility study

    NASA Astrophysics Data System (ADS)

    D'Souza, Warren D.; Naqvi, Shahid A.; Yu, Cedric X.

    2005-09-01

    Significant differences between planned and delivered treatments may occur due to respiration-induced tumour motion, leading to underdosing of parts of the tumour and overdosing of parts of the surrounding critical structures. Existing methods proposed to counter tumour motion include breath-holds, gating and MLC-based tracking. Breath-holds and gating techniques increase treatment time considerably, whereas MLC-based tracking is limited to two dimensions. We present an alternative solution in which a robotic couch moves in real time in response to organ motion. To demonstrate proof-of-principle, we constructed a miniature adaptive couch model consisting of two movable platforms that simulate tumour motion and couch motion, respectively. These platforms were connected via an electronic feedback loop so that the bottom platform responded to the motion of the top platform. We tested our model with a seven-field step-and-shoot delivery case in which we performed three film-based experiments: (1) static geometry, (2) phantom-only motion and (3) phantom motion with simulated couch motion. Our measurements demonstrate that the miniature couch was able to compensate for phantom motion to the extent that the dose distributions were practically indistinguishable from those in static geometry. Motivated by this initial success, we investigated a real-time couch compensation system consisting of a stereoscopic infra-red camera system interfaced to a robotic couch known as the Hexapod™, which responds in real time to any change in position detected by the cameras. Optical reflectors placed on a solid water phantom were used as surrogates for motion. We tested the effectiveness of couch-based motion compensation for fixed-field and dynamic arc delivery cases. Due to hardware limitations, we performed film-based experiments (1), (2) and (3), with the robotic couch at a phantom motion period and dose rate of 16 s and 100 MU min-1, respectively. Analysis of film measurements showed near-equivalent dose distributions (<=2 mm agreement of corresponding isodose lines) for static geometry and motion-synchronized real-time robotic couch tracking-based radiation delivery.
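
    The feedback idea can be illustrated with a toy one-dimensional simulation, assuming a delayed position measurement and a first-order couch response; the numbers below (latency, time constant, breathing period) are made up for illustration and are not taken from the study.

```python
import numpy as np

def simulate_couch_tracking(t, target_pos, latency=0.1, tau=0.05):
    """Toy 1-D couch-compensation loop (illustrative only).

    The couch is commanded to the negative of the target position measured
    `latency` seconds earlier and responds as a first-order system with time
    constant `tau`. The residual motion seen by the beam is target + couch.
    """
    dt = t[1] - t[0]
    delay = int(round(latency / dt))
    couch = np.zeros_like(target_pos)
    for k in range(1, len(t)):
        cmd = -target_pos[max(k - delay, 0)]           # delayed measurement
        couch[k] = couch[k - 1] + dt / tau * (cmd - couch[k - 1])
    return target_pos + couch

t = np.arange(0.0, 20.0, 0.01)
target = 10.0 * np.sin(2 * np.pi * t / 4.0)            # 4 s breathing, 10 mm
residual = simulate_couch_tracking(t, target)
print("peak residual motion: %.1f mm" % np.max(np.abs(residual)))
```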

  15. Development of haptic system for surgical robot

    NASA Astrophysics Data System (ADS)

    Gang, Han Gyeol; Park, Jiong Min; Choi, Seung-Bok; Sohn, Jung Woo

    2017-04-01

    In this paper, a new type of haptic system for surgical robot applications is proposed and its performance is evaluated experimentally. The proposed haptic system consists of an effective master device and a precision slave robot. The master device has 3-DOF rotational motion, matching that of the human wrist. It has a lightweight structure with a gyro sensor for position measurement and three small-sized MR brakes for repulsive torque generation. The slave robot achieves 3-DOF rotational motion using servomotors and a five-bar linkage, and a torque sensor is used to measure resistive torque. It has been experimentally demonstrated that the proposed haptic system performs well in tracking control of the desired position and repulsive torque. It can be concluded that the proposed haptic system can be effectively applied to surgical robot systems in the field.

  16. An experimental comparison of conventional two-bank and novel four-bank dynamic MLC tracking.

    PubMed

    Davies, G A; Clowes, P; McQuaid, D; Evans, P M; Webb, S; Poludniowski, G

    2013-03-07

    The AccuLeaf mMLC, featuring four multileaf-collimator (MLC) banks, has been used for the first time for an experimental comparison of conventional two-bank with novel four-bank dynamic MLC tracking of a two-dimensional sinusoidal respiratory motion. This comparison was performed for a square aperture and for three conformal treatment apertures from clinical radiotherapy lung cancer patients. The system latency of this prototype tracking system was evaluated and found to be 1.0 s, and the frequency at which MLC positions could be updated was 1 Hz; accurate MLC tracking of irregular patient motion would therefore be difficult with the system in its current form. The MLC leaf velocity required for two-bank-MLC and four-bank-MLC tracking was evaluated for the apertures studied, and a substantial decrease was found in the maximum MLC velocity required when four banks were used for tracking rather than two. A dosimetric comparison of the two techniques was also performed, and minimal difference was found between two-bank-MLC and four-bank-MLC tracking. The use of four MLC banks for dynamic MLC tracking is shown to be potentially advantageous for increasing delivery efficiency compared with two-bank-MLC tracking, where difficulties are encountered if large leaf shifts are required to track motion perpendicular to the direction of leaf travel.

  17. SU-D-207-05: Real-Time Intrafractional Motion Tracking During VMAT Delivery Using a Conventional Elekta CBCT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Yang-Kyun; Sharp, Gregory C.; Gierga, David P.

    2015-06-15

    Purpose: Real-time kV projection streaming capability has recently become available for Elekta XVI version 5.0. This study aims to investigate the feasibility and accuracy of real-time fiducial marker tracking during CBCT acquisition, with or without simultaneous VMAT delivery, using a conventional Elekta linear accelerator. Methods: A client computer was connected to an on-board kV imaging system computer and received and processed projection images immediately after image acquisition. In-house marker tracking software based on FFT normalized cross-correlation was developed and installed on the client computer. Three gold fiducial markers of 3 mm length were implanted in a pelvis-shaped phantom of 36 cm width. The phantom was placed on a programmable motion platform oscillating in the anterior-posterior and superior-inferior directions simultaneously. The marker motion was tracked in real time for (1) a kV-only CBCT scan with the treatment beam off and (2) a kV CBCT scan during a 6-MV VMAT delivery. The exposure parameters per projection were 120 kVp and 1.6 mAs. Tracking accuracy was assessed by comparing superior-inferior positions between the programmed and tracked trajectories. Results: The projection images were successfully transferred to the client computer at a frequency of about 5 Hz. In the kV-only scan, highly accurate marker tracking was achieved over the entire range of cone-beam projection angles (detection rate / tracking error were 100.0% / 0.6±0.5 mm). In the kV-VMAT scan, MV scatter degraded image quality, particularly for lateral projections passing through the thickest part of the phantom (kV source angles of 70°-110° and 250°-290°), resulting in a reduced detection rate (90.5%). If the lateral projections are excluded, tracking performance was comparable to the kV-only case (detection rate / tracking error were 100.0% / 0.8±0.5 mm). Conclusion: Our phantom study demonstrated promising results for real-time motion tracking using a conventional Elekta linear accelerator. MV-scatter suppression is needed to improve tracking accuracy during MV delivery. This research is funded by a Motion Management Research Grant from Elekta.
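
    A minimal sketch of template-based marker localization by normalized cross-correlation is shown below, using scikit-image's match_template as a stand-in for the in-house FFT implementation described above; the search-window logic, names, and threshold idea are assumptions.

```python
import numpy as np
from skimage.feature import match_template   # normalized cross-correlation

def track_marker(projection, template, search_center, search_half=40):
    """Locate a fiducial-marker template in one kV projection.

    A search window around the previous position keeps the correlation cheap
    and reduces false matches from MV scatter elsewhere in the image.
    Returns the (row, col) of the best match and its correlation score; a low
    score can be used to flag a missed detection.
    """
    r0, c0 = search_center
    r_lo, c_lo = max(r0 - search_half, 0), max(c0 - search_half, 0)
    window = projection[r_lo:r0 + search_half, c_lo:c0 + search_half]
    ncc = match_template(window, template)
    peak = np.unravel_index(np.argmax(ncc), ncc.shape)
    score = ncc[peak]
    # convert from correlation-map coordinates back to projection coordinates
    row = r_lo + peak[0] + template.shape[0] // 2
    col = c_lo + peak[1] + template.shape[1] // 2
    return (row, col), score
```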

  18. An automated data exploitation system for airborne sensors

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2014-06-01

    Advanced wide-area persistent surveillance (WAPS) sensor systems on manned or unmanned airborne vehicles are essential for wide-area urban security monitoring in order to protect our people and our warfighters from terrorist attacks. Currently, human (imagery) analysts process huge data collections from full motion video (FMV) for data exploitation and analysis (real-time and forensic), providing slow and inaccurate results. An Automated Data Exploitation System (ADES) is urgently needed. In this paper, we present a recently developed ADES for airborne vehicles under heavy urban background clutter conditions. This system includes four processes: (1) fast image registration, stabilization, and mosaicking; (2) advanced non-linear morphological moving target detection; (3) robust multiple-target (vehicles, dismounts, and humans) tracking (up to 100 target tracks); and (4) moving or static target/object recognition (super-resolution). Test results with real FMV data indicate that our ADES can reliably detect, track, and recognize multiple vehicles under heavy urban background clutter. Furthermore, our example shows that ADES, as a baseline platform, can provide a capability for vehicle abnormal-behavior detection to help imagery analysts quickly trace down potential threats and crimes.

  19. Figure–ground discrimination behavior in Drosophila. I. Spatial organization of wing-steering responses

    PubMed Central

    Fox, Jessica L.; Aptekar, Jacob W.; Zolotova, Nadezhda M.; Shoemaker, Patrick A.; Frye, Mark A.

    2014-01-01

    The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input. PMID:24198267

  20. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
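
    A toy aggregation layer in the spirit of the abstraction described above might look as follows; the class, method names, and callback convention are purely illustrative and are not the authors' API.

```python
import time
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float
    source: str
    timestamp: float

class MetaTrackerHub:
    """Serve named tracked objects from whichever registered system last saw them.

    Each tracking system registers a poll function returning {object_name: (x, y, z)}
    in its own frame, plus an optional transform into a shared world frame, so an
    object can move between tracking volumes transparently.
    """
    def __init__(self):
        self._sources = {}    # source name -> (poll_fn, to_world_fn)
        self._objects = {}    # object name -> latest Pose

    def register_source(self, name, poll_fn, to_world=lambda p: p):
        self._sources[name] = (poll_fn, to_world)

    def update(self):
        for src, (poll, to_world) in self._sources.items():
            for obj_name, xyz in poll().items():
                x, y, z = to_world(xyz)
                self._objects[obj_name] = Pose(x, y, z, src, time.time())

    def get(self, obj_name):
        return self._objects.get(obj_name)   # single API for all systems
```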

  1. Antimicrobial Susceptibility Test with Plasmonic Imaging and Tracking of Single Bacterial Motions on Nanometer Scale.

    PubMed

    Syal, Karan; Iriya, Rafael; Yang, Yunze; Yu, Hui; Wang, Shaopeng; Haydel, Shelley E; Chen, Hong-Yuan; Tao, Nongjian

    2016-01-26

    Antimicrobial susceptibility tests (ASTs) are important for confirming susceptibility to empirical antibiotics and detecting resistance in bacterial isolates. Currently, most ASTs performed in clinical microbiology laboratories are based on bacterial culturing, which take days to complete for slowly growing microorganisms. A faster AST will reduce morbidity and mortality rates and help healthcare providers administer narrow spectrum antibiotics at the earliest possible treatment stage. We report the development of a nonculture-based AST using a plasmonic imaging and tracking (PIT) technology. We track the motion of individual bacterial cells tethered to a surface with nanometer (nm) precision and correlate the phenotypic motion with bacterial metabolism and antibiotic action. We show that antibiotic action significantly slows down bacterial motion, which can be quantified for development of a rapid phenotypic-based AST.

  2. SU-F-303-11: Implementation and Applications of Rapid, SIFT-Based Cine MR Image Binning and Region Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazur, T; Wang, Y; Fischer-Valuck, B

    2015-06-15

    Purpose: To develop a novel and rapid, SIFT-based algorithm for assessing feature motion on cine MR images acquired during MRI-guided radiotherapy treatments. In particular, we apply SIFT descriptors toward both partitioning cine images into respiratory states and tracking regions across frames. Methods: Among a training set of images acquired during a fraction, we densely assign SIFT descriptors to pixels within the images. We cluster these descriptors across all frames in order to produce a dictionary of trackable features. Associating the best-matching descriptors at every frame among the training images to these features, we construct motion traces for the features. We use these traces to define respiratory bins for sorting images in order to facilitate robust pixel-by-pixel tracking. Instead of applying conventional methods for identifying pixel correspondences across frames we utilize a recently-developed algorithm that derives correspondences via a matching objective for SIFT descriptors. Results: We apply these methods to a collection of lung, abdominal, and breast patients. We evaluate the procedure for respiratory binning using target sites exhibiting high-amplitude motion among 20 lung and abdominal patients. In particular, we investigate whether these methods yield minimal variation between images within a bin by perturbing the resulting image distributions among bins. Moreover, we compare the motion between averaged images across respiratory states to 4DCT data for these patients. We evaluate the algorithm for obtaining pixel correspondences between frames by tracking contours among a set of breast patients. As an initial case, we track easily-identifiable edges of lumpectomy cavities that show minimal motion over treatment. Conclusions: These SIFT-based methods reliably extract motion information from cine MR images acquired during patient treatments. While we performed our analysis retrospectively, the algorithm lends itself to prospective motion assessment. Applications of these methods include motion assessment, identifying treatment windows for gating, and determining optimal margins for treatment.
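
    A rough sparse-keypoint analogue of the described dense procedure is sketched below with OpenCV SIFT and k-means clustering: it builds a small feature dictionary, derives a one-dimensional motion trace, and bins frames by trace amplitude. Parameter choices, names, and the binning rule are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def respiratory_bins(frames, n_features=20, n_bins=4):
    """Assign each cine frame to a respiratory bin from a SIFT-derived trace.

    frames : list of 2-D uint8 arrays (cine MR magnitude images)
    """
    sift = cv2.SIFT_create()
    points, descriptors, frame_idx = [], [], []
    for i, img in enumerate(frames):
        kps, descs = sift.detectAndCompute(img, None)
        if descs is None:
            continue
        points += [kp.pt for kp in kps]            # (x, y) keypoint locations
        descriptors.append(descs)
        frame_idx += [i] * len(kps)
    descriptors = np.vstack(descriptors)
    points = np.array(points)
    frame_idx = np.array(frame_idx)

    # Cluster descriptors into a small dictionary of trackable features
    labels = KMeans(n_clusters=n_features, n_init=4).fit_predict(descriptors)

    # Motion trace: mean row coordinate of the most populated feature cluster
    trace = np.full(len(frames), np.nan)
    sel = labels == np.bincount(labels).argmax()
    for i in range(len(frames)):
        m = sel & (frame_idx == i)
        if m.any():
            trace[i] = points[m, 1].mean()

    # Amplitude binning of the trace into respiratory states
    edges = np.nanpercentile(trace, np.linspace(0, 100, n_bins + 1))
    return np.clip(np.digitize(trace, edges[1:-1]), 0, n_bins - 1)
```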

  3. A Novel Method for Tracking Individuals of Fruit Fly Swarms Flying in a Laboratory Flight Arena.

    PubMed

    Cheng, Xi En; Qian, Zhi-Ming; Wang, Shuo Hong; Jiang, Nan; Guo, Aike; Chen, Yan Qiu

    2015-01-01

    The growing interest in studying the social behaviours of swarming fruit flies, Drosophila melanogaster, has heightened the need for tools that provide quantitative motion data. To achieve such a goal, multi-camera three-dimensional tracking technology is the key experimental gateway. We have developed a novel system for tracking hundreds of fruit flies flying in a confined cubic flight arena. In addition to the proposed tracking algorithm, this work offers contributions in three further aspects: body detection, orientation estimation, and data validation. To demonstrate the opportunities that the proposed system offers for generating high-throughput quantitative motion data, we conducted experiments on five experimental configurations. We also performed quantitative analysis of the kinematics, spatial structure, and motion patterns of fruit fly swarms. We found that there exists an asymptotic distance between fruit flies in swarms as the population density increases. Further, we discovered evidence of a repulsive response when the distance between fruit flies approached the asymptotic distance. Overall, the proposed tracking system presents a powerful method for studying the flight behaviours of fruit flies in a three-dimensional environment.

  4. Three-Dimensional High-Resolution Optical/X-Ray Stereoscopic Tracking Velocimetry

    NASA Technical Reports Server (NTRS)

    Cha, Soyoung S.; Ramachandran, Narayanan

    2004-01-01

    Measurement of three-dimensional (3-D) three-component velocity fields is of great importance in a variety of research and industrial applications for understanding materials processing, fluid physics, and strain/displacement measurements. The 3-D experiments in these fields most likely inhibit the use of conventional techniques, which are based only on planar and optically-transparent-field observation. Here, we briefly review the current status of 3-D diagnostics for motion/velocity detection, for both optical and x-ray systems. As an initial step for providing 3-D capabilities, we have developed stereoscopic tracking velocimetry (STV) to measure 3-D flow/deformation through optical observation. The STV is advantageous in system simplicity, for continually observing 3-D phenomena in near real-time. In an effort to enhance the data processing through automation and to avoid the confusion in tracking numerous markers or particles, artificial neural networks are employed to incorporate human intelligence. Our initial optical investigations have proven the STV to be a very viable candidate for reliably measuring 3-D flow motions. With previous activities focused on improving the processing efficiency, overall accuracy, and automation based on the optical system, the current effort is directed to the concurrent expansion to the x-ray system for broader experimental applications.

  5. Three-Dimensional High-Resolution Optical/X-Ray Stereoscopic Tracking Velocimetry

    NASA Technical Reports Server (NTRS)

    Cha, Soyoung S.; Ramachandran, Naryanan

    2005-01-01

    Measurement of three-dimensional (3-D) three-component velocity fields is of great importance in a variety of research and industrial applications for understanding materials processing, fluid physics, and strain/displacement measurements. The 3-D experiments in these fields most likely inhibit the use of conventional techniques, which are based only on planar and optically-transparent-field observation. Here, we briefly review the current status of 3-D diagnostics for motion/velocity detection, for both optical and x-ray systems. As an initial step for providing 3-D capabilities, we have developed stereoscopic tracking velocimetry (STV) to measure 3-D flow/deformation through optical observation. The STV is advantageous in system simplicity, for continually observing 3-D phenomena in near real-time. In an effort to enhance the data processing through automation and to avoid the confusion in tracking numerous markers or particles, artificial neural networks are employed to incorporate human intelligence. Our initial optical investigations have proven the STV to be a very viable candidate for reliably measuring 3-D flow motions. With previous activities focused on improving the processing efficiency, overall accuracy, and automation based on the optical system, the current effort is directed to the concurrent expansion to the x-ray system for broader experimental applications.

  6. Position Tracking During Human Walking Using an Integrated Wearable Sensing System.

    PubMed

    Zizzo, Giulio; Ren, Lei

    2017-12-10

    Progress has been made enabling expensive, high-end inertial measurement units (IMUs) to be used as tracking sensors. However, the cost of these IMUs is prohibitive to their widespread use, and hence the potential of low-cost IMUs is investigated in this study. A wearable low-cost sensing system consisting of IMUs and ultrasound sensors was developed. Core to this system is an extended Kalman filter (EKF), which provides both zero-velocity updates (ZUPTs) and Heuristic Drift Reduction (HDR). The IMU data was combined with ultrasound range measurements to improve accuracy. When a map of the environment was available, a particle filter was used to impose constraints on the possible user motions. The system was therefore composed of three subsystems: IMUs, ultrasound sensors, and a particle filter. A Vicon motion capture system was used to provide ground truth information, enabling validation of the sensing system. Using only the IMU, the system showed loop misclosure errors of 1% with a maximum error of 4-5% during walking. The addition of the ultrasound sensors resulted in a 15% reduction in the total accumulated error. Lastly, the particle filter was capable of providing noticeable corrections, which could keep the tracking error below 2% after the first few steps.
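
    The zero-velocity-update idea at the core of such pedestrian tracking can be sketched as follows, using a simple threshold stance detector and a hard velocity reset in place of the full EKF pseudo-measurement; the thresholds and function names are illustrative assumptions.

```python
import numpy as np

def zupt_detect(accel, gyro, g=9.81, acc_tol=0.4, gyro_tol=0.3):
    """Simple stance-phase detector for ZUPT-aided inertial tracking.

    A sample counts as 'zero velocity' when the specific-force magnitude is
    close to gravity and the angular rate is small. Practical detectors use
    windowed likelihood-ratio tests; this threshold version is illustrative.
    """
    acc_norm = np.linalg.norm(accel, axis=1)
    gyro_norm = np.linalg.norm(gyro, axis=1)
    return (np.abs(acc_norm - g) < acc_tol) & (gyro_norm < gyro_tol)

def integrate_with_zupt(accel_nav, zupt, dt):
    """Integrate navigation-frame acceleration, resetting velocity at stance.

    accel_nav : (N,3) gravity-compensated acceleration in the navigation frame
    zupt      : (N,) boolean stance-phase flags from zupt_detect
    In a full EKF the ZUPT is a v = 0 pseudo-measurement that also corrects
    position and attitude error; here it is a hard reset for brevity.
    """
    vel = np.zeros((len(accel_nav), 3))
    pos = np.zeros((len(accel_nav), 3))
    for k in range(1, len(accel_nav)):
        vel[k] = vel[k - 1] + accel_nav[k] * dt
        if zupt[k]:
            vel[k] = 0.0
        pos[k] = pos[k - 1] + vel[k] * dt
    return pos
```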

  7. Sliding Mode Control of Real-Time PNU Vehicle Driving Simulator and Its Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Min Cheol; Park, Min Kyu; Yoo, Wan Suk; Son, Kwon; Han, Myung Chul

    This paper introduces an economical and effective full-scale driving simulator for the study of human sensibility and the development of new vehicle parts and their control. Robust real-time control that accurately reproduces various vehicle motions is a difficult task because the motion platform is a nonlinear, complex system. This study proposes a sliding mode controller with a perturbation compensator using an observer-based fuzzy adaptive network (FAN). The control algorithm is designed to solve the chattering problem of sliding mode control and to select adequate fuzzy parameters for the perturbation compensator. To evaluate the trajectory control performance of the proposed approach, tracking control of the developed simulator, named PNUVDS, is experimentally carried out. The driving performance of the simulator is then evaluated using the perception and sensibility of several drivers under various driving conditions.

  8. Design of energy harvesting systems for harnessing vibrational motion from human and vehicular motion

    NASA Astrophysics Data System (ADS)

    Wickenheiser, Adam; Garcia, Ephrahim

    2010-04-01

    In much of the vibration-based energy harvesting literature, devices are modeled, designed, and tested for dissipating energy across a resistive load at a single base excitation frequency. This paper presents several practical scenarios germane to tracking, sensing, and wireless communication on humans and land vehicles. Measured vibrational data from these platforms are used to provide a time-varying, broadband input to the energy harvesting system. Optimal power considerations are given for several circuit topologies, including a passive rectifier circuit and active, switching methods. Under various size and mass constraints, the optimal design is presented for two scenarios: walking and idling a car. The frequency response functions are given alongside time histories of the power harvested using the experimental base accelerations recorded. The issues involved in designing an energy harvester for practical (i.e. time-varying, non-sinusoidal) applications are discussed.
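
    A toy base-excited harvester model, with an equivalent electrical damping ratio standing in for the resistive load, shows how a measured acceleration record can be turned into an average-power estimate; the parameter values and names below are placeholders, not those of the paper.

```python
import numpy as np

def harvested_power(accel_base, dt, fn=5.0, zeta_m=0.02, zeta_e=0.05, m=0.01):
    """Average power extracted by a linear base-excited harvester (toy model).

    The harvester is a mass-spring-damper with mechanical damping ratio zeta_m
    and an equivalent electrical damping ratio zeta_e standing in for the
    resistive load. Input is a measured base-acceleration record (m/s^2),
    e.g. from walking or an idling car, sampled at interval dt.
    """
    wn = 2 * np.pi * fn
    c_e = 2 * zeta_e * m * wn               # equivalent electrical damping
    c_t = 2 * (zeta_m + zeta_e) * m * wn    # total damping
    z = zdot = 0.0                          # relative displacement / velocity
    p = np.zeros(len(accel_base))
    for k, a in enumerate(accel_base):
        zddot = -a - (c_t / m) * zdot - wn**2 * z   # m z'' + c z' + k z = -m a
        zdot += zddot * dt                          # semi-implicit Euler step
        z += zdot * dt
        p[k] = c_e * zdot**2                        # instantaneous electrical power
    return p.mean()
```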

  9. Illusory bending of a rigidly moving line segment: effects of image motion and smooth pursuit eye movements.

    PubMed

    Thaler, Lore; Todd, James T; Spering, Miriam; Gegenfurtner, Karl R

    2007-04-20

    Four experiments in which observers judged the apparent "rubberiness" of a line segment undergoing different types of rigid motion are reported. The results reveal that observers perceive illusory bending when the motion involves certain combinations of translational and rotational components and that the illusion is maximized when these components are presented at a frequency of approximately 3 Hz with a relative phase angle of approximately 120 degrees. Smooth pursuit eye movements can amplify or attenuate the illusion, which is consistent with other results reported in the literature that show effects of eye movements on perceived image motion. The illusion is unaffected by background motion that is in counterphase with the motion of the line segment but is significantly attenuated by background motion that is in-phase. This is consistent with the idea that human observers integrate motion signals within a local frame of reference, and it provides strong evidence that visual persistency cannot be the sole cause of the illusion as was suggested by J. R. Pomerantz (1983). An analysis of the motion patterns suggests that the illusory bending motion may be due to an inability of observers to accurately track the motions of features whose image displacements undergo rapid simultaneous changes in both space and time. A measure of these changes is presented, which is highly correlated with observers' numerical ratings of rubberiness.

  10. A Simulation Study of a Radiofrequency Localization System for Tracking Patient Motion in Radiotherapy.

    PubMed

    Ostyn, Mark; Kim, Siyong; Yeo, Woon-Hong

    2016-04-13

    One of the most widely used tools in cancer treatment is external beam radiotherapy. However, the major risk involved in radiotherapy is excess radiation dose to healthy tissue, exacerbated by patient motion. Here, we present a simulation study of a potential radiofrequency (RF) localization system designed to track intrafraction motion (target motion during the radiation treatment). This system includes skin-wearable RF beacons and an external tracking system. We develop an analytical model for direction of arrival measurement with radio frequencies (GHz range) for use in a localization estimate. We use a Monte Carlo simulation to investigate the relationship between a localization estimate and angular resolution of sensors (signal receivers) in a simulated room. The results indicate that the external sensor needs an angular resolution of about 0.03 degrees to achieve millimeter-level localization accuracy in a treatment room. This fundamental study of a novel RF localization system offers the groundwork to design a radiotherapy-compatible patient positioning system for active motion compensation.
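
    The Monte Carlo relationship between angular resolution and localization error can be reproduced in miniature with two bearing-only sensors and a single beacon; the geometry and noise model below are simplified assumptions rather than the paper's setup.

```python
import numpy as np

def triangulate(p1, p2, th1, th2):
    """Intersect two bearing lines from sensors at p1 and p2 (2-D)."""
    d1 = np.array([np.cos(th1), np.sin(th1)])
    d2 = np.array([np.cos(th2), np.sin(th2)])
    t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t[0] * d1

def rms_localization_error(sigma_deg, n_trials=10000):
    """RMS beacon-position error for two bearing sensors with angular noise."""
    rng = np.random.default_rng(0)
    p1, p2 = np.array([0.0, 0.0]), np.array([4.0, 0.0])   # sensor positions [m]
    beacon = np.array([2.0, 2.0])                         # true beacon position [m]
    th1 = np.arctan2(beacon[1] - p1[1], beacon[0] - p1[0])
    th2 = np.arctan2(beacon[1] - p2[1], beacon[0] - p2[0])
    noise = np.radians(sigma_deg) * rng.standard_normal((n_trials, 2))
    errs = [np.linalg.norm(triangulate(p1, p2, th1 + a, th2 + b) - beacon)
            for a, b in noise]
    return np.sqrt(np.mean(np.square(errs)))

for sigma in (0.03, 0.1, 0.3):   # candidate angular resolutions [deg]
    print(f"sigma = {sigma:.2f} deg -> RMS error "
          f"{1e3 * rms_localization_error(sigma):.1f} mm")
```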

  11. The Influence of Tactual Seat-motion Cues on Training and Performance in a Roll-axis Compensatory Tracking Task Setting

    DTIC Science & Technology

    2008-05-01

    AFRL-RH-WP-SR-2009-0002. The tracking task used a simulated vehicle having aircraft-like dynamics; a centrally located compensatory display, subtending about nine degrees, provided visual roll error.

  12. Feature point based 3D tracking of multiple fish from multi-view images

    PubMed Central

    Qian, Zhi-Ming

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly. PMID:28665966

  13. Feature point based 3D tracking of multiple fish from multi-view images.

    PubMed

    Qian, Zhi-Ming; Chen, Yan Qiu

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly.

  14. Fiducial marker-based correction for involuntary motion in weight-bearing C-arm CT scanning of knees. Part I. Numerical model-based optimization

    PubMed Central

    Choi, Jang-Hwan; Fahrig, Rebecca; Keil, Andreas; Besier, Thor F.; Pal, Saikat; McWalter, Emily J.; Beaupré, Gary S.; Maier, Andreas

    2013-01-01

    Purpose: Human subjects in standing positions are apt to show much more involuntary motion than in supine positions. The authors aimed to simulate a complicated realistic lower body movement using the four-dimensional (4D) digital extended cardiac-torso (XCAT) phantom. The authors also investigated fiducial marker-based motion compensation methods in two-dimensional (2D) and three-dimensional (3D) space. The level of involuntary movement-induced artifacts and image quality improvement were investigated after applying each method. Methods: An optical tracking system with eight cameras and seven retroreflective markers enabled us to track involuntary motion of the lower body of nine healthy subjects holding a squat position at 60° of flexion. The XCAT-based knee model was developed using the 4D XCAT phantom and the optical tracking data acquired at 120 Hz. The authors divided the lower body in the XCAT into six parts and applied unique affine transforms to each so that the motion (6 degrees of freedom) could be synchronized with the optical markers’ location at each time frame. The control points of the XCAT were tessellated into triangles and 248 projection images were created based on intersections of each ray and monochromatic absorption. The tracking data sets with the largest motion (Subject 2) and the smallest motion (Subject 5) among the nine data sets were used to animate the XCAT knee model. The authors defined eight skin control points well distributed around the knees as pseudo-fiducial markers which functioned as a reference in motion correction. Motion compensation was done in the following ways: (1) simple projection shifting in 2D, (2) deformable projection warping in 2D, and (3) rigid body warping in 3D. Graphics hardware accelerated filtered backprojection was implemented and combined with the three correction methods in order to speed up the simulation process. Correction fidelity was evaluated as a function of number of markers used (4–12) and marker distribution in three scenarios. Results: Average optical-based translational motion for the nine subjects was 2.14 mm (±0.69 mm) and 2.29 mm (±0.63 mm) for the right and left knee, respectively. In the representative central slices of Subject 2, the authors observed 20.30%, 18.30%, and 22.02% improvements in the structural similarity (SSIM) index with 2D shifting, 2D warping, and 3D warping, respectively. The performance of 2D warping improved as the number of markers increased up to 12 while 2D shifting and 3D warping were insensitive to the number of markers used. The minimum required number of markers for 2D shifting, 2D warping, and 3D warping was 4–6, 12, and 8, respectively. An even distribution of markers over the entire field of view provided robust performance for all three correction methods. Conclusions: The authors were able to simulate subject-specific realistic knee movement in weight-bearing positions. This study indicates that involuntary motion can seriously degrade the image quality. The proposed three methods were evaluated with the numerical knee model; 3D warping was shown to outperform the 2D methods. The methods are shown to significantly reduce motion artifacts if an appropriate marker setup is chosen. PMID:24007156

  15. Fiducial marker-based correction for involuntary motion in weight-bearing C-arm CT scanning of knees. Part I. Numerical model-based optimization.

    PubMed

    Choi, Jang-Hwan; Fahrig, Rebecca; Keil, Andreas; Besier, Thor F; Pal, Saikat; McWalter, Emily J; Beaupré, Gary S; Maier, Andreas

    2013-09-01

    Human subjects in standing positions are apt to show much more involuntary motion than in supine positions. The authors aimed to simulate a complicated realistic lower body movement using the four-dimensional (4D) digital extended cardiac-torso (XCAT) phantom. The authors also investigated fiducial marker-based motion compensation methods in two-dimensional (2D) and three-dimensional (3D) space. The level of involuntary movement-induced artifacts and image quality improvement were investigated after applying each method. An optical tracking system with eight cameras and seven retroreflective markers enabled us to track involuntary motion of the lower body of nine healthy subjects holding a squat position at 60° of flexion. The XCAT-based knee model was developed using the 4D XCAT phantom and the optical tracking data acquired at 120 Hz. The authors divided the lower body in the XCAT into six parts and applied unique affine transforms to each so that the motion (6 degrees of freedom) could be synchronized with the optical markers' location at each time frame. The control points of the XCAT were tessellated into triangles and 248 projection images were created based on intersections of each ray and monochromatic absorption. The tracking data sets with the largest motion (Subject 2) and the smallest motion (Subject 5) among the nine data sets were used to animate the XCAT knee model. The authors defined eight skin control points well distributed around the knees as pseudo-fiducial markers which functioned as a reference in motion correction. Motion compensation was done in the following ways: (1) simple projection shifting in 2D, (2) deformable projection warping in 2D, and (3) rigid body warping in 3D. Graphics hardware accelerated filtered backprojection was implemented and combined with the three correction methods in order to speed up the simulation process. Correction fidelity was evaluated as a function of number of markers used (4-12) and marker distribution in three scenarios. Average optical-based translational motion for the nine subjects was 2.14 mm (± 0.69 mm) and 2.29 mm (± 0.63 mm) for the right and left knee, respectively. In the representative central slices of Subject 2, the authors observed 20.30%, 18.30%, and 22.02% improvements in the structural similarity (SSIM) index with 2D shifting, 2D warping, and 3D warping, respectively. The performance of 2D warping improved as the number of markers increased up to 12 while 2D shifting and 3D warping were insensitive to the number of markers used. The minimum required number of markers for 2D shifting, 2D warping, and 3D warping was 4-6, 12, and 8, respectively. An even distribution of markers over the entire field of view provided robust performance for all three correction methods. The authors were able to simulate subject-specific realistic knee movement in weight-bearing positions. This study indicates that involuntary motion can seriously degrade the image quality. The proposed three methods were evaluated with the numerical knee model; 3D warping was shown to outperform the 2D methods. The methods are shown to significantly reduce motion artifacts if an appropriate marker setup is chosen.
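
    The simplest of the three strategies, 2D projection shifting, can be sketched as a translation of each projection by the mean fiducial-marker displacement before reconstruction; the helper below is illustrative only and omits the 2D and 3D warping variants.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def shift_correct_projection(projection, markers_detected, markers_reference):
    """Simple 2-D projection shifting for marker-based motion correction.

    markers_detected / markers_reference : (M,2) arrays of marker positions
    (row, col) in the current projection and in a motion-free reference.
    The projection is translated by the mean marker displacement; the warping
    variants would instead fit an affine or rigid transform to the markers.
    """
    displacement = np.mean(np.asarray(markers_reference)
                           - np.asarray(markers_detected), axis=0)
    return nd_shift(projection, displacement, order=1, mode="nearest")
```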

  16. Brownian motion of boomerang colloidal particles.

    PubMed

    Chakrabarty, Ayan; Konya, Andrew; Wang, Feng; Selinger, Jonathan V; Sun, Kai; Wei, Qi-Huo

    2013-10-18

    We investigate the Brownian motion of boomerang colloidal particles confined between two glass plates. Our experimental observations show that the mean displacements are biased towards the center of hydrodynamic stress (CoH), and that the mean-square displacements exhibit a crossover from short-time faster to long-time slower diffusion with the short-time diffusion coefficients dependent on the points used for tracking. A model based on Langevin theory elucidates that these behaviors are ascribed to the superposition of two diffusive modes: the ellipsoidal motion of the CoH and the rotational motion of the tracking point with respect to the CoH.
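
    The crossover analysis rests on the time-averaged mean-square displacement of a chosen tracking point; a minimal computation is sketched below (the trajectory format and lag range are assumptions).

```python
import numpy as np

def mean_square_displacement(xy, dt, max_lag=None):
    """Time-averaged MSD of a 2-D trajectory of the chosen tracking point.

    xy : (N,2) positions of the tracked point (e.g., the CoH or an arm tip)
    Returns lag times and MSD values; plotting MSD against lag time on
    log-log axes reveals the short-time/long-time diffusion crossover.
    """
    n = len(xy)
    max_lag = max_lag or n // 4
    lags = np.arange(1, max_lag)
    msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag])**2, axis=1))
                    for lag in lags])
    return lags * dt, msd
```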

  17. Brownian Motion of Boomerang Colloidal Particles

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Ayan; Konya, Andrew; Wang, Feng; Selinger, Jonathan V.; Sun, Kai; Wei, Qi-Huo

    2013-10-01

    We investigate the Brownian motion of boomerang colloidal particles confined between two glass plates. Our experimental observations show that the mean displacements are biased towards the center of hydrodynamic stress (CoH), and that the mean-square displacements exhibit a crossover from short-time faster to long-time slower diffusion with the short-time diffusion coefficients dependent on the points used for tracking. A model based on Langevin theory elucidates that these behaviors are ascribed to the superposition of two diffusive modes: the ellipsoidal motion of the CoH and the rotational motion of the tracking point with respect to the CoH.

  18. Image sequence analysis workstation for multipoint motion analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing, and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  19. Measurement of body joint angles for physical therapy based on mean shift tracking using two low cost Kinect images.

    PubMed

    Chen, Y C; Lee, H J; Lin, K H

    2015-08-01

    Range of motion (ROM) is commonly used to assess a patient's joint function in physical therapy. Because motion capture systems are generally very expensive, physical therapists mostly use simple rulers to measure patients' joint angles in clinical diagnosis, which suffers from low accuracy, low reliability, and subjectivity. In this study we used color and depth image features from two low-cost Microsoft Kinect units to reconstruct 3D joint positions and then calculated movable joint angles to assess the ROM. A Gaussian background model is first used to segment the human body from the depth images. The 3D coordinates of the joints are reconstructed from both color and depth images. To track the location of the joints throughout the sequence more precisely, we adopt the mean shift algorithm to find the center of the voxels on each joint. The two Kinect units are placed three meters away from each other, facing the subject. The movable joint angles and the motion data are calculated from the joint positions frame by frame. To verify the results of our system, we take the results from a VICON motion capture system as the gold standard. Our 150 test results showed that the deviation of movable joint angles between those obtained by VICON and by our system is about 4 to 8 degrees in six different upper limb exercises, which is acceptable in a clinical environment.
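
    Once 3D joint positions are reconstructed, a movable joint angle reduces to the angle between two limb-segment vectors; a minimal helper is sketched below (the point naming and the ROM comment are illustrative, not the authors' code).

```python
import numpy as np

def joint_angle(p_proximal, p_joint, p_distal):
    """Angle (degrees) at a joint defined by three reconstructed 3-D points.

    Example: shoulder-elbow-wrist positions give the elbow flexion angle.
    Positions are assumed to be expressed in a common world frame after the
    two-camera reconstruction and mean-shift refinement described above.
    """
    u = np.asarray(p_proximal) - np.asarray(p_joint)
    v = np.asarray(p_distal) - np.asarray(p_joint)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# ROM over a recorded exercise: max(angles) - min(angles) for each joint.
```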

  20. Potential benefits of dosimetric VMAT tracking verified with 3D film measurements.

    PubMed

    Crijns, Wouter; Defraene, Gilles; Van Herck, Hans; Depuydt, Tom; Haustermans, Karin; Maes, Frederik; Van den Heuvel, Frank

    2016-05-01

    The aim was to evaluate three different plan adaptation strategies using 3D film-stack dose measurements of both focal boost and hypofractionated prostate VMAT treatments. The adaptation strategies (a couch shift, geometric tracking, and dosimetric tracking) were applied for three realistic intrafraction prostate motions. A focal boost (35 × 2.2 and 35 × 2.7 Gy) and a hypofractionated (5 × 7.25 Gy) prostate VMAT plan were created for a heterogeneous phantom that allows for internal prostate motion. For these plans, geometric tracking and dosimetric tracking were evaluated by ionization chamber (IC) point dose measurements (zero-D) and measurements using a stack of EBT3 films (3D). The geometric tracking applied translations, rotations, and scaling of the MLC aperture in response to realistic prostate motions. The dosimetric tracking additionally corrected the monitor units to resolve variations due to differences in depth, tissue heterogeneity, and MLC aperture. The tracking was based on the positions of four fiducial points only. The film measurements were compared to the gold standard (i.e., IC measurements) and the planned dose distribution. Additionally, the 3D measurements were converted to dose volume histograms, tumor control probability, and normal tissue complication probability parameters (DVH/TCP/NTCP) as a direct estimate of the clinical relevance of the proposed tracking. Compared to the planned dose distribution, measurements without prostate motion and tracking already showed a reduced homogeneity of the dose distribution. Adding prostate motion further blurs the DVHs for all treatment approaches. The clinical practice (no tracking) delivered the dose distribution inside the PTV but off target (CTV), resulting in boost dose errors up to 10%. The geometric and dosimetric tracking corrected the dose distribution's position. Moreover, the dosimetric tracking could achieve the planned boost DVH, but not the DVH of the more homogeneously irradiated prostate. A drawback of both the geometric and dosimetric tracking was a reduced MLC blocking caused by the rotational component of the MLC aperture corrections. Because of the CTV-to-PTV margins used and the high doses in the considered fractionation schemes, the TCP differed less than 0.02 from the planned value for all targets and all correction methods. The rectal NTCP constraints, however, could not be realized using any of these methods. The geometric and dosimetric tracking use only limited input, but they deposit the dose distribution with higher geometric accuracy than the clinical practice; the latter case has boost dose errors up to 10%. The increased accuracy has a modest impact [Δ(NT)CP < 0.02] because of the applied margins and the high dose levels used. To allow further margin reduction, tracking methods are vital. The proposed methodology could further be improved by implementing a rotational correction using collimator rotations.

  1. Siamese convolutional networks for tracking the spine motion

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Sui, Xiubao; Sun, Yicheng; Liu, Chengwei; Hu, Yong

    2017-09-01

    Deep learning models have demonstrated great success in various computer vision tasks such as image classification and object tracking. However, tracking the lumbar spine with digitalized video fluoroscopic imaging (DVFI), which can quantitatively analyze the motion of the spine to diagnose lumbar instability, has not yet been well developed due to the lack of a steady and robust tracking method. In this paper, we propose a novel visual tracking algorithm for lumbar vertebra motion based on a Siamese convolutional neural network (CNN) model. We train a fully convolutional neural network offline to learn generic image features. The network is trained to learn a similarity function that compares the labeled target in the first frame with candidate patches in the current frame. The similarity function returns a high score if the two images depict the same object. Once learned, the similarity function is used to track a previously unseen object without any online adaptation. In the current frame, the tracker evaluates candidate rotated patches sampled around the previous frame's target position and presents a rotated bounding box to locate the predicted target precisely. Results indicate that the proposed tracking method can detect the lumbar vertebra steadily and robustly. Even for images with low contrast and cluttered backgrounds, the presented tracker can still achieve good tracking performance. Further, the proposed algorithm operates at high speed for real-time tracking.
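
    The similarity-function idea can be sketched in PyTorch as a shared convolutional encoder whose exemplar embedding is cross-correlated with the search-region embedding, as in fully convolutional Siamese trackers; the architecture below is a toy stand-in, not the authors' network, and the rotated-patch sampling is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEmbed(nn.Module):
    """Small shared convolutional encoder (stand-in for an offline-trained net)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def track_frame(model, exemplar, search):
    """Score a search region against the labelled exemplar patch.

    exemplar : (1,1,h,w) template of the target vertebra from frame 1
    search   : (1,1,H,W) region around the previous target position
    The exemplar embedding is used as a correlation kernel over the search
    embedding; the response-map peak gives the new target location in
    feature-map coordinates (up-scaling back to pixels is omitted here).
    """
    with torch.no_grad():
        z = model(exemplar)          # (1, C, hz, wz)
        x = model(search)            # (1, C, hx, wx)
        score = F.conv2d(x, z)       # cross-correlation response map
    peak = torch.nonzero(score[0, 0] == score.max())[0]
    return tuple(peak.tolist()), score
```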

  2. Three-dimensional finite element modelling of muscle forces during mastication.

    PubMed

    Röhrle, Oliver; Pullan, Andrew J

    2007-01-01

    This paper presents a three-dimensional finite element model of human mastication. Specifically, an anatomically realistic model of the masseter muscles and associated bones is used to investigate the dynamics of chewing. A motion capture system is used to track the jaw motion of a subject chewing standard foods. The three-dimensional nonlinear deformation of the masseter muscles is calculated via the finite element method, using the jaw motion data as boundary conditions. Motion-driven muscle activation patterns and a transversely isotropic material law, defined in a muscle-fibre coordinate system, are used in the calculations. Time-force relationships are presented and analysed with respect to different tasks during mastication, e.g. opening, closing, and biting, and are also compared to a more traditional one-dimensional model. The results strongly suggest that, due to the complex arrangement of muscle force directions, modelling skeletal muscles as conventional one-dimensional lines of action might introduce a significant source of error.

  3. Layered motion segmentation and depth ordering by tracking edges.

    PubMed

    Smith, Paul; Drummond, Tom; Cipolla, Roberto

    2004-04-01

    This paper presents a new Bayesian framework for motion segmentation--dividing a frame from an image sequence into layers representing different moving objects--by tracking edges between frames. Edges are found using the Canny edge detector, and the Expectation-Maximization algorithm is then used to fit motion models to these edges and also to calculate the probabilities of the edges obeying each motion model. The edges are also used to segment the image into regions of similar color. The most likely labeling for these regions is then calculated by using the edge probabilities, in association with a Markov Random Field-style prior. The identification of the relative depth ordering of the different motion layers is also determined, as an integral part of the process. An efficient implementation of this framework is presented for segmenting two motions (foreground and background) using two frames. It is then demonstrated how, by tracking the edges into further frames, the probabilities may be accumulated to provide an even more accurate and robust estimate, and segment an entire sequence. Further extensions are then presented to address the segmentation of more than two motions. Here, a hierarchical method of initializing the Expectation-Maximization algorithm is described, and it is demonstrated that the Minimum Description Length principle may be used to automatically select the best number of motion layers. The results from over 30 sequences (demonstrating both two and three motions) are presented and discussed.
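
    The core E-step/M-step alternation can be illustrated for the simplest case of two translational motion layers fitted to edge-point displacements; the paper's affine models, Canny edge input, and MRF prior are omitted, so treat this only as a sketch of the EM machinery.

```python
import numpy as np

def em_two_motions(disp, n_iter=30, sigma=1.0):
    """Fit two translational motion layers to edge-point displacements by EM.

    disp : (N,2) per-point displacement vectors between two frames.
    Returns the two motion estimates and the per-point layer probabilities,
    which could then feed a region-labelling stage.
    """
    rng = np.random.default_rng(0)
    motions = disp[rng.choice(len(disp), 2, replace=False)]    # initialization
    for _ in range(n_iter):
        # E-step: responsibility of each layer for each point (Gaussian residuals)
        d2 = np.stack([np.sum((disp - m)**2, axis=1) for m in motions], axis=1)
        lik = np.exp(-d2 / (2 * sigma**2))
        resp = lik / np.maximum(lik.sum(axis=1, keepdims=True), 1e-12)
        # M-step: weighted mean displacement per layer
        motions = (resp.T @ disp) / resp.sum(axis=0)[:, None]
    return motions, resp
```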

  4. Real-time method for motion-compensated MR thermometry and MRgHIFU treatment in abdominal organs.

    PubMed

    Celicanin, Zarko; Auboiroux, Vincent; Bieri, Oliver; Petrusca, Lorena; Santini, Francesco; Viallon, Magalie; Scheffler, Klaus; Salomir, Rares

    2014-10-01

    Magnetic resonance-guided high-intensity focused ultrasound is considered to be a promising treatment for localized cancer in abdominal organs such as liver, pancreas, or kidney. Abdominal motion, anatomical arrangement, and required sustained sonication are the main challenges. MR acquisition consisted of thermometry performed with segmented gradient-recalled echo echo-planar imaging, and a segment-based one-dimensional MR navigator parallel to the main axis of motion to track the organ motion. This tracking information was used in real-time for: (i) prospective motion correction of MR thermometry and (ii) HIFU focal point position lock-on target. Ex vivo experiments were performed on a sheep liver and a turkey pectoral muscle using a motion demonstrator, while in vivo experiments were conducted on two sheep liver. Prospective motion correction of MR thermometry yielded good signal-to-noise ratio (range, 25 to 35) and low geometric distortion due to the use of segmented EPI. HIFU focal point lock-on target yielded isotropic in-plane thermal build-up. The feasibility of in vivo intercostal liver treatment was demonstrated in sheep. The presented method demonstrated in moving phantoms and breathing sheep accurate motion-compensated MR thermometry and precise HIFU focal point lock-on target using only real-time pencil-beam navigator tracking information, making it applicable without any pretreatment data acquisition or organ motion modeling. Copyright © 2013 Wiley Periodicals, Inc.

  5. Time-domain prefilter design for enhanced tracking and vibration suppression in machine motion control

    NASA Astrophysics Data System (ADS)

    Cole, Matthew O. T.; Shinonawanik, Praween; Wongratanaphisan, Theeraphong

    2018-05-01

    Structural flexibility can impact negatively on machine motion control systems by causing unmeasured positioning errors and vibration at locations where accurate motion is important for task execution. To compensate for these effects, command signal prefiltering may be applied. In this paper, a new FIR prefilter design method is described that combines finite-time vibration cancellation with dynamic compensation properties. The time-domain formulation exploits the relation between tracking error and the moment values of the prefilter impulse response function. Optimal design solutions for filters having minimum H2 norm are derived and evaluated. The control approach does not require additional actuation or sensing and can be effective even without complete and accurate models of the machine dynamics. Results from implementation and testing on an experimental high-speed manipulator having a Delta robot architecture with directionally compliant end-effector are presented. The results show the importance of prefilter moment values for tracking performance and confirm that the proposed method can achieve significant reductions in both peak and RMS tracking error, as well as settling time, for complex motion patterns.
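
    As a simple member of the same family of command prefilters, a two-impulse zero-vibration (ZV) input shaper cancels a single lightly damped mode in finite time; this is the standard textbook shaper, not the moment-constrained H2-optimal design of the paper, and the parameter values in the usage line are made up.

```python
import numpy as np

def zv_shaper(fn, zeta, dt):
    """Two-impulse zero-vibration (ZV) input shaper as a simple FIR prefilter.

    fn, zeta : natural frequency [Hz] and damping ratio of the dominant
               flexible mode; dt : controller sample time [s].
    Convolving the reference trajectory with the returned taps cancels the
    residual vibration of that mode after the second impulse.
    """
    wd = 2 * np.pi * fn * np.sqrt(1 - zeta**2)        # damped frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)             # impulse amplitudes
    times = np.array([0.0, np.pi / wd])               # impulse times
    taps = np.zeros(int(round(times[-1] / dt)) + 1)
    for a, t in zip(amps, times):
        taps[int(round(t / dt))] += a
    return taps

# usage: shaped = np.convolve(reference_trajectory, zv_shaper(12.0, 0.03, 0.001))
```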

  6. Steady-State Pursuit Is Driven by Object Motion Rather Than the Vector Average of Local Motions

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, B. R.; Lorenceau, J. D.; Ahumada, Al (Technical Monitor)

    1997-01-01

    We have previously shown that humans can pursue the motion of objects whose trajectories can be recovered only by spatio-temporal integration of local motion signals. We now explore the integration rule used to derive the target-motion signal driving pursuit. We measured the pursuit response of 4 observers (2 naive) to the motion of a line-figure diamond viewed through two vertical bar apertures (0.2 cd/square m). The corners were always occluded so that only four line segments (93 cd/square m) were visible behind the occluding foreground (38 cd/square m). The diamond was flattened (40 & 140 degree vertex angles) such that vector averaging of the local normal motions and veridical integration (e.g. IOC) yield very different predictions, analogous to using a Type II plaid. The diamond moved along Lissajous-figure trajectories (Ax = Ay = 2 degrees; TFx = 0.8 Hz; TFy = 0.4 Hz). We presented only 1.25 cycles and used 6 different randomly interleaved initial relative phases to minimize the role of predictive strategies. Observers were instructed to track the diamond and reported that its motion was always coherent (unlike type II plaids). Saccade-free portions of the horizontal and vertical eye-position traces sampled at 240 Hz were fit by separate sinusoids. Pursuit gain with respect to the diamond averaged 0.7 across subjects and directions. The ratio of the mean vertical to horizontal amplitude of the pursuit response was 1.7 +/- 0.7 averaged across subjects (1 SD). This is close to the prediction of 1.0 from veridical motion-integration rules, but far from 7.7 predicted by vector averaging and infinity predicted by segment- or terminator-tracking strategies. Because there is no retinal motion which directly corresponds to the diamond's motion, steady-state pursuit of our "virtual" diamond is not closed-loop in the traditional sense. Thus, accurate pursuit is unlikely to result simply from local retinal negative feedback. We conclude that the signal driving steady-state pursuit is not the vector average of local motion signals, but rather a more veridical estimate of object motion, derived in extrastriate cortical areas beyond V1, perhaps MT or MST.
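
    The two candidate integration rules can be compared numerically: each component constrains only the velocity projection along its normal, so the intersection of constraints (IOC) solves those constraints jointly, while the vector average simply averages the component vectors. The helper below is illustrative; the example directions and speeds are not the stimulus parameters.

```python
import numpy as np

def component_predictions(normals_deg, speeds):
    """Vector-average vs. intersection-of-constraints (IOC) object velocity.

    normals_deg : directions (deg) of the normal motion of each component
    speeds      : measured normal speeds of each component
    Each component constrains the object velocity v through v . n_i = s_i;
    IOC solves these constraints jointly, the vector average does not.
    """
    n = np.array([[np.cos(np.radians(a)), np.sin(np.radians(a))]
                  for a in normals_deg])
    s = np.asarray(speeds, dtype=float)
    vec_avg = np.mean(n * s[:, None], axis=0)
    v_ioc, *_ = np.linalg.lstsq(n, s, rcond=None)
    return vec_avg, v_ioc

# Two near-vertical component normals (illustrative values only)
print(component_predictions([70.0, 110.0], [1.0, 1.0]))
```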

  7. Gesture-controlled interfaces for self-service machines and other applications

    NASA Technical Reports Server (NTRS)

    Cohen, Charles J. (Inventor); Jacobus, Charles J. (Inventor); Paul, George (Inventor); Beach, Glenn (Inventor); Foulk, Gene (Inventor); Obermark, Jay (Inventor); Cavell, Brook (Inventor)

    2004-01-01

    A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measurements are used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.

  8. Motion and ranging sensor system for through-the-wall surveillance system

    NASA Astrophysics Data System (ADS)

    Black, Jeffrey D.

    2002-08-01

    A portable Through-the-Wall Surveillance System is being developed for law enforcement, counter-terrorism, and military use. The Motion and Ranging Sensor is a radar that operates in a frequency band that allows for surveillance penetration of most non-metallic walls. Changes in the sensed radar returns are analyzed to detect the human motion that would typically be present during a hostage or barricaded suspect scenario. The system consists of a Sensor Unit, a handheld Remote Display Unit, and an optional laptop computer Command Display Console. All units are battery powered and a wireless link provides command and data communication between units. The Sensor Unit is deployed close to the wall or door through which the surveillance is to occur. After deploying the sensor the operator may move freely as required by the scenario. Up to five Sensor Units may be deployed at a single location. A software upgrade to the Command Display Console is also being developed. This software upgrade will combine the motion detected by multiple Sensor Units and determine and track the location of detected motion in two dimensions.

  9. Comparison of different detection methods for persistent multiple hypothesis tracking in wide area motion imagery

    NASA Astrophysics Data System (ADS)

    Hartung, Christine; Spraul, Raphael; Schuchert, Tobias

    2017-10-01

    Wide area motion imagery (WAMI) acquired by an airborne multicamera sensor enables continuous monitoring of large urban areas. Each image can cover regions of several square kilometers and contain thousands of vehicles. Reliable vehicle tracking in this imagery is an important prerequisite for surveillance tasks, but remains challenging due to low frame rate and small object size. Most WAMI tracking approaches rely on moving object detections generated by frame differencing or background subtraction. These detection methods fail when objects slow down or stop. Recent approaches for persistent tracking compensate for missing motion detections by combining a detection-based tracker with a second tracker based on appearance or local context. In order to avoid the additional complexity introduced by combining two trackers, we employ an alternative single-tracker framework that is based on multiple hypothesis tracking and recovers missing motion detections with a classifier-based detector. We integrate an appearance-based similarity measure, merge handling, vehicle-collision tests, and clutter handling to adapt the approach to the specific context of WAMI tracking. We apply the tracking framework on a region of interest of the publicly available WPAFB 2009 dataset for quantitative evaluation; a comparison to other persistent WAMI trackers demonstrates state-of-the-art performance of the proposed approach. Furthermore, we analyze in detail the impact of different object detection methods and detector settings on the quality of the output tracking results. For this purpose, we choose four different motion-based detection methods that vary in detection performance and computation time to generate the input detections. As detector parameters can be adjusted to achieve different precision and recall performance, we combine each detection method with different detector settings that yield (1) high precision and low recall, (2) high recall and low precision, and (3) best f-score. Comparing the tracking performance achieved with all generated sets of input detections allows us to quantify the sensitivity of the tracker to different types of detector errors and to derive recommendations for detector and parameter choice.
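
    For reference, the detector operating points discussed above reduce to the usual precision/recall/f-score arithmetic; the short sketch below (with made-up match counts) shows the computation used to pick, for example, the "best f-score" setting.

      def precision_recall_f1(true_pos, false_pos, false_neg):
          precision = true_pos / (true_pos + false_pos)
          recall = true_pos / (true_pos + false_neg)
          f1 = 2 * precision * recall / (precision + recall)
          return precision, recall, f1

      # Hypothetical counts for one detector setting on a WAMI region of interest.
      p, r, f = precision_recall_f1(true_pos=900, false_pos=150, false_neg=300)
      print(f"precision={p:.2f}  recall={r:.2f}  f-score={f:.2f}")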

  10. SU-G-JeP1-06: Correlation of Lung Tumor Motion with Tumor Location Using Electromagnetic Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muccigrosso, D; Maughan, N; Parikh, P

    Purpose: It is well known that lung tumors move with respiration. However, most measurements of lung tumor motion have studied long treatment times with intermittent imaging; those populations may not necessarily represent conventional LINAC patients. We summarized the correlation between tumor motion and location in a multi-institutional trial with electromagnetic tracking, and identified the patient cohort that would most benefit from respiratory gating. Methods: Continuous electromagnetic transponder data (Varian Medical, Seattle, WA) of lung tumor motion was collected from 14 patients (214 total fractions) across 3 institutions during external beam radiation therapy in a prospective clinical trial (NCT01396551). External interventions from the clinician, such as couch shifts, instructed breath-holds, and acquisition pauses, were manually removed from the 10 Hz tracking data according to recorded notes. The average three-dimensional displacement from the breathing cycle's end-expiratory to end-inspiratory phases (peak-to-peak distance) of the transponders' isocenter was calculated for each patient's treatment. A weighted average of each isocenter was used to assess the effects of location on motion. A total of 14 patients were included in this analysis, grouped by their transponders' location in the lung: upper, medial, and lower. Results: 8 patients had transponders in the upper lung, and 3 patients each in the medial lobe and lower lung. The weighted average ± standard deviation of all peak-to-peak distances for each group was: 1.04 ± 0.39 cm in the lower lung, 0.56 ± 0.14 cm in the medial lung, and 0.30 ± 0.06 cm in the upper lung. Conclusion: Tumors in the lower lung are most susceptible to excessive motion and daily variation, and would benefit most from continuous motion tracking and gating. Those in the medial lobe might be at moderate risk. The upper lobes have limited motion. These results can guide different motion management strategies between lung tumor locations. This is part of an NIH-funded prospective clinical trial (NCT01396551), using an electromagnetic transponder tracking system and additional funding from Varian Medical (Seattle, WA).
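
    The motion metric reported above is essentially a per-cycle 3D peak-to-peak displacement followed by a weighted average across fractions; the Python sketch below shows that arithmetic on hypothetical transponder-isocenter data (it is not the trial's analysis code).

      import numpy as np

      def mean_peak_to_peak(positions, cycle_pairs):
          """Mean 3D displacement between end-expiration/end-inspiration sample pairs (cm)."""
          return float(np.mean([np.linalg.norm(positions[i_insp] - positions[i_exp])
                                for i_exp, i_insp in cycle_pairs]))

      # Hypothetical 10 Hz isocenter samples (cm) and two breathing-cycle index pairs.
      positions = np.array([[0.00, 0.0, 0.0], [0.20, 0.1, 1.0],
                            [0.05, 0.0, 0.1], [0.25, 0.1, 1.1]])
      cycle_pairs = [(0, 1), (2, 3)]
      print(f"mean peak-to-peak: {mean_peak_to_peak(positions, cycle_pairs):.2f} cm")

      # Weighted average across fractions, weighting by the number of usable cycles.
      fraction_means = [1.1, 0.9, 1.0]      # cm (placeholder values)
      cycles_per_fraction = [40, 55, 47]
      print(f"weighted mean: {np.average(fraction_means, weights=cycles_per_fraction):.2f} cm")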

  11. WE-DE-BRA-11: A Study of Motion Tracking Accuracy of Robotic Radiosurgery Using a Novel CCD Camera Based End-To-End Test System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, L; M Yang, Y; Nelson, B

    Purpose: A novel end-to-end test system using a CCD camera and a scintillator-based phantom (XRV-124, Logos Systems Int’l) capable of measuring the beam-by-beam delivery accuracy of robotic radiosurgery (CyberKnife) was developed and reported in our previous work. This work investigates its application in assessing the motion tracking (Synchrony) accuracy of CyberKnife. Methods: A QA plan with anterior and lateral beams (with 4 different collimator sizes) was created (Multiplan v5.3) for the XRV-124 phantom. The phantom was placed on a motion platform (superior and inferior movement), and the plans were delivered on the CyberKnife M6 system using four motion patterns: static, sine wave, sine with 15° phase shift, and a patient breathing pattern composed of 2 cm maximum motion with a 4 second breathing cycle. Under integral recording mode, the time-averaged beam vectors (X, Y, Z) were measured by the phantom and compared with static delivery. In dynamic recording mode, the beam spots were recorded at a rate of 10 frames/second. The beam vector deviation from the average position was evaluated against the various breathing patterns. Results: The average beam positions of the six deliveries with no motion and three deliveries with Synchrony tracking on ideal motion (sine wave without phase shift) all agree within −0.03±0.00 mm, 0.10±0.04 mm, and 0.04±0.03 mm in the X, Y, and Z directions. Radiation beam width (FWHM) variations are within ±0.03 mm. Dynamic video recording showed sub-millimeter tracking stability for both regular and irregular breathing patterns; however, tracking errors of up to 3.5 mm were observed when a 15 degree phase shift was introduced. Conclusion: The XRV-124 system is able to provide 3D and 4D targeting accuracy for CyberKnife delivery with Synchrony. The experimental results showed sub-millimeter delivery in the phantom with excellent correlation between target and breathing motion. The accuracy was degraded when irregular motion and phase shift were introduced.

  12. SU-G-JeP4-12: Real-Time Organ Motion Monitoring Using Ultrasound and KV Fluoroscopy During Lung SBRT Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omari, E; Tai, A; Li, X

    Purpose: Real-time ultrasound monitoring during SBRT is advantageous in understanding and identifying motion irregularities which may cause geometric misses. In this work, we propose to utilize real-time ultrasound to track the diaphragm in conjunction with periodic kV fluoroscopy to monitor motion of the tumor or landmarks during SBRT delivery. Methods: Transabdominal ultrasound (TAUS) B-mode images were collected from 10 healthy volunteers using the Clarity Autoscan system (Elekta). The autoscan transducer, which has a center frequency of 5 MHz, was utilized for the scans. The acquired images were contoured using the Clarity Automatic Fusion and Contouring workstation software. Five-minute monitoring sessions were observed and recorded. The position correlation between tumor and diaphragm could be established with kV fluoroscopy periodically acquired during treatment with Elekta XVI. We acquired data using a tissue-mimicking ultrasound phantom with embedded spheres placed on a motion stand, imaged with ultrasound and kV fluoroscopy. MIM software was utilized for image fusion. Correlation of diaphragm and target motion was also validated using 4D-MRI and 4D-CBCT. Results: The diaphragm was visualized as a hyperechoic region on the TAUS B-mode images. Volunteer set-up can be adjusted such that the TAUS probe will not interfere with treatment beams. A segment of the diaphragm was contoured and selected as our tracking structure. Successful monitoring sessions of the diaphragm were recorded. For some volunteers, diaphragm motion over 2 times larger than the initial motion was observed during tracking. For the phantom study, we were able to register the 2D kV fluoroscopy with the US images for position comparison. Conclusion: We demonstrated the feasibility of tracking the diaphragm using real-time ultrasound. Real-time tracking can help identify irregularities in the respiratory motion, which is correlated with tumor motion. We also showed the feasibility of acquiring 2D kV fluoroscopy and registering the images with ultrasound.

  13. MR-based motion correction for PET imaging using wired active MR microcoils in simultaneous PET-MR: Phantom study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Chuan; Brady, Thomas J.; El Fakhri, Georges

    2014-04-15

    Purpose: Artifacts caused by head motion present a major challenge in brain positron emission tomography (PET) imaging. The authors investigated the feasibility of using wired active MR microcoils to track head motion and incorporate the measured rigid motion fields into iterative PET reconstruction. Methods: Several wired active MR microcoils and a dedicated MR coil-tracking sequence were developed. The microcoils were attached to the outer surface of an anthropomorphic 18F-filled Hoffman phantom to mimic a brain PET scan. Complex rotation/translation motion of the phantom was induced by a balloon, which was connected to a ventilator. PET list-mode and MR tracking data were acquired simultaneously on a PET-MR scanner. The acquired dynamic PET data were reconstructed iteratively with and without motion correction. Additionally, static phantom data were acquired and used as the gold standard. Results: Motion artifacts in PET images were effectively removed by wired active MR microcoil based motion correction. Motion correction yielded an activity concentration bias ranging from −0.6% to 3.4% as compared to a bias ranging from −25.0% to 16.6% if no motion correction was applied. The contrast recovery values were improved by 37%–156% with motion correction as compared to no motion correction. The image correlation (mean ± standard deviation) between the motion corrected (uncorrected) images of 20 independent noise realizations and static reference was R2 = 0.978 ± 0.007 (0.588 ± 0.010, respectively). Conclusions: Wired active MR microcoil based motion correction significantly improves brain PET quantitative accuracy and image contrast.

  14. MR-based motion correction for PET imaging using wired active MR microcoils in simultaneous PET-MR: Phantom study1

    PubMed Central

    Huang, Chuan; Ackerman, Jerome L.; Petibon, Yoann; Brady, Thomas J.; El Fakhri, Georges; Ouyang, Jinsong

    2014-01-01

    Purpose: Artifacts caused by head motion present a major challenge in brain positron emission tomography (PET) imaging. The authors investigated the feasibility of using wired active MR microcoils to track head motion and incorporate the measured rigid motion fields into iterative PET reconstruction. Methods: Several wired active MR microcoils and a dedicated MR coil-tracking sequence were developed. The microcoils were attached to the outer surface of an anthropomorphic 18F-filled Hoffman phantom to mimic a brain PET scan. Complex rotation/translation motion of the phantom was induced by a balloon, which was connected to a ventilator. PET list-mode and MR tracking data were acquired simultaneously on a PET-MR scanner. The acquired dynamic PET data were reconstructed iteratively with and without motion correction. Additionally, static phantom data were acquired and used as the gold standard. Results: Motion artifacts in PET images were effectively removed by wired active MR microcoil based motion correction. Motion correction yielded an activity concentration bias ranging from −0.6% to 3.4% as compared to a bias ranging from −25.0% to 16.6% if no motion correction was applied. The contrast recovery values were improved by 37%–156% with motion correction as compared to no motion correction. The image correlation (mean ± standard deviation) between the motion corrected (uncorrected) images of 20 independent noise realizations and static reference was R2 = 0.978 ± 0.007 (0.588 ± 0.010, respectively). Conclusions: Wired active MR microcoil based motion correction significantly improves brain PET quantitative accuracy and image contrast. PMID:24694141

  15. Fusing human and machine skills for remote robotic operations

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S.; Kim, Won S.; Venema, Steven C.; Bejczy, Antal K.

    1991-01-01

    The question of how computer assists can improve teleoperator trajectory tracking during both free and force-constrained motions is addressed. Computer graphics techniques which enable the human operator to both visualize and predict detailed 3D trajectories in real-time are reported. Man-machine interactive control procedures for better management of manipulator contact forces and positioning are also described. It is found that collectively, these novel advanced teleoperations techniques both enhance system performance and significantly reduce control problems long associated with teleoperations under time delay. Ongoing robotic simulations of the 1984 space shuttle Solar Maximum EVA Repair Mission are briefly described.

  16. Pursuit tracking and higher levels of skill development in the human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1981-01-01

    A model of the human pilot is offered for pursuit tracking tasks; the model encompasses an existing model for compensatory tracking. The central hypothesis in the development of this model states that those primary structural elements in the compensatory model responsible for the pilot's equalization capabilities remain intact in the pursuit model. In this latter case, effective low-frequency inversion of the controlled-element dynamics occurs by feeding-forward derived input rate through the equalization dynamics, with low-frequency phase droop minimized. The sharp reduction in low-frequency phase lag beyond that associated with the disappearance of phase droop is seen to accompany relatively low-gain feedback of vehicle output. The results of some recent motion cue research are discussed and interpreted in terms of the compensatory-pursuit display dichotomy. Tracking with input preview is discussed in a qualitative way. In terms of the model, preview is shown to demand no fundamental changes in structure or equalization and to allow the pilot to eliminate the effective time delays that accrue in the inversion of the controlled-element dynamics. Precognitive behavior is discussed, and a model that encompasses all the levels of skill development outlined in the successive organizations of perception theory is finally proposed.

  17. An efficient fully unsupervised video object segmentation scheme using an adaptive neural-network classifier architecture.

    PubMed

    Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S

    2003-01-01

    In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/ body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).

  18. Surrogate: A Body-Dexterous Mobile Manipulation Robot with a Tracked Base

    NASA Technical Reports Server (NTRS)

    Hebert, Paul (Inventor); Borders, James W. (Inventor); Hudson, Nicolas H. (Inventor); Kennedy, Brett A. (Inventor); Ma, Jeremy C. (Inventor); Bergh, Charles F. (Inventor)

    2018-01-01

    Robotics platforms in accordance with various embodiments of the invention can be utilized to implement highly dexterous robots capable of whole body motion. Robotics platforms in accordance with one embodiment of the invention include: a memory containing a whole body motion application; a spine, where the spine has seven degrees of freedom and comprises a spine actuator and three spine elbow joints that each include two spine joint actuators; at least one limb, where the at least one limb comprises a limb actuator and three limb elbow joints that each include two limb joint actuators; a tracked base; a connecting structure that connects the at least one limb to the spine; a second connecting structure that connects the spine to the tracked base; wherein the processor is configured by the whole body motion application to move the at least one limb and the spine to perform whole body motion.

  19. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    NASA Astrophysics Data System (ADS)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty frame per second operation and progressive scanning minimizes motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.

  20. Development and clinical evaluation of a simple optical method to detect and measure patient external motion.

    PubMed

    Barbés, Benigno; Azcona, Juan Diego; Prieto, Elena; de Foronda, José Manuel; García, Marina; Burguete, Javier

    2015-09-08

    A simple and independent system to detect and measure the position of a number of points in space was devised and implemented. Its application aimed to detect patient motion during radiotherapy treatments, alert of out-of-tolerance motion, and record the trajectories for subsequent studies. The system obtains the 3D position of points in space through their projections in 2D images recorded by two cameras. It tracks black dots on a white sticker placed on the surface of the moving object. The system was tested with linear displacements of a phantom, circular trajectories of a rotating disk, oscillations of an in-house phantom, and oscillations of a 4D phantom. It was also used to track 461 trajectories of points on the surface of patients during their radiotherapy treatments. Trajectories of several points were reproduced with accuracy better than 0.3 mm in the three spatial directions. The system was able to follow periodic motion with amplitudes lower than 0.5 mm, to follow trajectories of rotating points at speeds up to 11.5 cm/s, and to track accurately the motion of a respiratory phantom. The technique has been used to track the motion of patients during radiotherapy and to analyze that motion. The method is flexible. Its installation and calibration are simple and quick. It is easy to use and can be implemented at a very affordable price. Data collection does not involve any discomfort to the patient and does not delay the treatment, so the system can be used routinely in all treatments. It has an accuracy similar to that of other, more sophisticated, commercially available systems. It is suitable to implement a gating system or any other application requiring motion detection, such as 4D CT, MRI or PET.
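
    The core geometric step, recovering a 3D point from its projections in two calibrated cameras, can be written as a small linear least-squares (DLT) problem. The Python sketch below is an assumed-form illustration with synthetic projection matrices; the paper's actual calibration and dot-detection steps are not reproduced.

      import numpy as np

      def triangulate(P1, P2, uv1, uv2):
          """3D point whose projections are uv1 in camera 1 and uv2 in camera 2 (DLT)."""
          u1, v1 = uv1
          u2, v2 = uv2
          A = np.vstack([
              u1 * P1[2] - P1[0],
              v1 * P1[2] - P1[1],
              u2 * P2[2] - P2[0],
              v2 * P2[2] - P2[1],
          ])
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]

      # Hypothetical 3x4 projection matrices: camera 2 is translated 20 cm along x.
      P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

      point = np.array([0.05, 0.02, 1.0, 1.0])           # known 3D point (homogeneous, m)
      uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
      uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
      print("recovered 3D point (m):", np.round(triangulate(P1, P2, uv1, uv2), 3))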

  1. Minimization of Retinal Slip Cannot Explain Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    Existing models assume that pursuit attempts a direct minimization of retinal image motion or "slip" (e.g., Robinson et al., 1986; Krauzlis & Lisberger, 1989). Using occluded line-figure stimuli, we have previously shown that humans can accurately pursue stimuli for which perfect tracking does not zero retinal slip (Neurologic ARCO). These findings are inconsistent with the standard control strategy of matching eye motion to a target-motion signal reconstructed by adding retinal slip and eye motion, but consistent with a visual front-end which estimates target motion via a global spatio-temporal integration for pursuit and perception. Another possible explanation is that pursuit simply attempts to minimize slip perpendicular to the segments (and neglects parallel "sliding" motion). To resolve this, 4 observers (3 naive) were asked to pursue the center of 2 types of stimuli with identical velocity-space descriptions and matched motion energy. The line-figure "diamond" stimulus was viewed through 2 invisible 3 deg-wide vertical apertures (38 cd/m2, equal to the background) such that only the sinusoidal motion of 4 oblique line segments (44 cd/m2) was visible. The "cross" was identical except that the segments exchanged positions. Two trajectories (8's and infinity's) with 4 possible initial directions were randomly interleaved (1.25 cycles, 2.5 s period, Ax = Ay = 1.4 deg). In 91% of trials, the diamond appeared rigid. Correspondingly, pursuit was vigorous (mean Hgain: 0.74) with a V/H aspect ratio of approximately 1 (mean: 0.9). Despite a valid rigid solution, the cross appeared rigid in only 8% of trials. Correspondingly, pursuit was weaker (mean Hgain: 0.38) with an incorrect aspect ratio (mean: 1.5). If pursuit were just minimizing perpendicular slip, performance would be the same in both conditions.

  2. Influence of ultrasound speckle tracking strategies for motion and strain estimation.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Aja-Fernández, Santiago

    2016-08-01

    Speckle Tracking is one of the most prominent techniques used to estimate the regional movement of the heart based on ultrasound acquisitions. Many different approaches have been proposed, proving their suitability to obtain quantitative and qualitative information regarding myocardial deformation, motion and function assessment. New proposals to improve the basic algorithm usually focus on one of these three steps: (1) the similarity measure between images and the speckle model; (2) the transformation model, i.e. the type of motion considered between images; (3) the optimization strategies, such as the use of different optimization techniques in the transformation step or the inclusion of structural information. While many contributions have shown their good performance independently, it is not always clear how they perform when integrated in a whole pipeline. Every step will have a degree of influence over the following and hence over the final result. Thus, a Speckle Tracking pipeline must be analyzed as a whole when developing novel methods, since improvements in a particular step might be undermined by the choices taken in further steps. This work presents two main contributions: (1) We provide a complete analysis of the influence of the different steps in a Speckle Tracking pipeline on the motion and strain estimation accuracy. (2) The study proposes a methodology for the analysis of Speckle Tracking systems specifically designed to provide an easy and systematic way to include other strategies. We close the analysis with some conclusions and recommendations that can be used as a guide to the degree of influence of the speckle models, transformation models, interpolation schemes, and optimization strategies on the estimation of motion features. They can further be used to evaluate and design new strategies within a Speckle Tracking system. Copyright © 2016 Elsevier B.V. All rights reserved.
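
    As a concrete, deliberately minimal instance of one point in the design space analysed above, the Python sketch below uses normalized cross-correlation as the similarity measure and pure block translation as the transformation model; real speckle-tracking pipelines add subpixel interpolation, regularization, and more elaborate optimizers. The images here are random synthetic frames with a known shift.

      import numpy as np

      def ncc(a, b):
          """Normalized cross-correlation between two equally sized patches."""
          a = a - a.mean()
          b = b - b.mean()
          denom = np.linalg.norm(a) * np.linalg.norm(b)
          return float((a * b).sum() / denom) if denom > 0 else 0.0

      def block_match(frame0, frame1, center, block=8, search=4):
          """Translation of the block around `center` that maximizes NCC in frame1."""
          r, c = center
          ref = frame0[r - block:r + block, c - block:c + block]
          best, best_shift = -np.inf, (0, 0)
          for dr in range(-search, search + 1):
              for dc in range(-search, search + 1):
                  cand = frame1[r + dr - block:r + dr + block, c + dc - block:c + dc + block]
                  score = ncc(ref, cand)
                  if score > best:
                      best, best_shift = score, (dr, dc)
          return best_shift

      rng = np.random.default_rng(0)
      frame0 = rng.random((64, 64))
      frame1 = np.roll(frame0, shift=(2, -1), axis=(0, 1))   # known ground-truth motion
      print("estimated (row, col) shift:", block_match(frame0, frame1, center=(32, 32)))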

  3. Real-time ultrasound-tagging to track the 2D motion of the common carotid artery wall in vivo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zahnd, Guillaume, E-mail: g.zahnd@erasmusmc.nl; Salles, Sébastien; Liebgott, Hervé

    2015-02-15

    Purpose: Tracking the motion of biological tissues represents an important issue in the field of medical ultrasound imaging. However, the longitudinal component of the motion (i.e., perpendicular to the beam axis) remains more challenging to extract due to the rather coarse resolution cell of ultrasound scanners along this direction. The aim of this study is to introduce a real-time beamforming strategy dedicated to acquiring tagged images featuring a distinct pattern, with the objective of easing the tracking. Methods: Under the conditions of the Fraunhofer approximation, a specific apodization function was applied to the received raw channel data, in real-time during image acquisition, in order to introduce a periodic oscillation pattern along the longitudinal direction of the radio frequency signal. Analytic signals were then extracted from the tagged images, and subpixel motion tracking of the intima–media complex was subsequently performed offline, by means of a previously introduced bidimensional analytic phase-based estimator. Results: The authors’ framework was applied in vivo on the common carotid artery of 20 young healthy volunteers and 6 elderly patients with high atherosclerosis risk. Cine-loops of tagged images were acquired during three cardiac cycles. Evaluated against reference trajectories manually generated by three experienced analysts, the mean absolute tracking error was 98 ± 84 μm and 55 ± 44 μm in the longitudinal and axial directions, respectively. These errors corresponded to 28% ± 23% and 13% ± 9% of the longitudinal and axial amplitude of the assessed motion, respectively. Conclusions: The proposed framework enables tagged ultrasound images of in vivo tissues to be acquired in real-time. Such an unconventional beamforming strategy contributes to improved tracking accuracy and could potentially benefit the interpretation and diagnosis of biomedical images.

  4. SU-G-JeP1-09: Evaluation of Transperineal Ultrasound Imaging as a Potential Solution for Target Tracking During Ablative Body Radiotherapy for Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Najafi, M; Han, B; Hancock, S

    Purpose: Prostate SABR is emerging as a clinically viable, potentially cost-effective alternative to prostate IMRT, but its adoption is contingent on providing solutions for accurate tracking during beam delivery. Our goal is to evaluate the performance of the Clarity Autoscan ultrasound monitoring system for intra-fractional prostate motion tracking in both phantoms and in vivo. Methods: In-vivo evaluation was performed under an IRB protocol to allow data collection in prostate patients treated with VMAT, whereby the prostate was imaged through the acoustic window of the perineum. The probe was placed before kV imaging, and real-time tracking was started and continued until the end of treatment. Initial absolute 3D positions of fiducials were estimated from kV images. Fiducial positions in MV images subsequently acquired during beam delivery were compared with predicted positions based on Clarity-estimated motion. Results: Phantom studies with motion amplitudes of ±1.5, ±3, and ±6 mm in the lateral direction and ±2 mm in the longitudinal direction resulted in tracking errors of −0.03 ± 0.3, −0.04 ± 0.6, and −0.2 ± 0.9 mm, respectively, in the lateral direction and −0.05 ± 0.30 mm in the longitudinal direction. In the phantom, measured and predicted fiducial positions in MV images were within 0.1 ± 0.6 mm. Four patients consented to participate in the study and data were acquired over a total of 140 fractions. MV imaging tracking was possible about 75% of the time (due to occlusion of fiducials) compared to 100% with Clarity. The overall range of motion estimated by Clarity was 0 to 4.0 mm. The in-vivo fiducial localization error was 1.2 ± 1.0 mm compared to 1.8 ± 1.9 mm if not taking Clarity-estimated motion into account. Conclusion: Real-time transperineal ultrasound tracking reduces uncertainty in prostate position due to intrafractional motion. Research was supported by Elekta.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crijns, Wouter, E-mail: wouter.crijns@uzleuven.be; Depuydt, Tom; Haustermans, Karin

    Purpose: To evaluate three different plan adaptation strategies using 3D film-stack dose measurements of both focal boost and hypofractionated prostate VMAT treatments. The adaptation strategies (a couch shift, geometric tracking, and dosimetric tracking) were applied for three realistic intrafraction prostate motions. Methods: A focal boost (35 × 2.2 and 35 × 2.7 Gy) and a hypofractionated (5 × 7.25 Gy) prostate VMAT plan were created for a heterogeneous phantom that allows for internal prostate motion. For these plans, geometric tracking and dosimetric tracking were evaluated by ionization chamber (IC) point dose measurements (0D) and measurements using a stack of EBT3 films (3D). The geometric tracking applied translations, rotations, and scaling of the MLC aperture in response to realistic prostate motions. The dosimetric tracking additionally corrected the monitor units to resolve variations due to differences in depth, tissue heterogeneity, and MLC aperture. The tracking was based on the positions of four fiducial points only. The film measurements were compared to the gold standard (i.e., the IC measurements) and the planned dose distribution. Additionally, the 3D measurements were converted to dose volume histograms, tumor control probability, and normal tissue complication probability parameters (DVH/TCP/NTCP) as a direct estimate of the clinical relevance of the proposed tracking. Results: Compared to the planned dose distribution, measurements without prostate motion and tracking already showed a reduced homogeneity of the dose distribution. Adding prostate motion further blurs the DVHs for all treatment approaches. The clinical practice (no tracking) delivered the dose distribution inside the PTV but off target (CTV), resulting in boost dose errors up to 10%. The geometric and dosimetric tracking corrected the dose distribution’s position. Moreover, the dosimetric tracking could achieve the planned boost DVH, but not the DVH of the more homogeneously irradiated prostate. A drawback of both the geometric and dosimetric tracking was a reduced MLC blocking caused by the rotational component of the MLC aperture corrections. Because of the CTV-to-PTV margins used and the high doses in the considered fractionation schemes, the TCP differed less than 0.02 from the planned value for all targets and all correction methods. The rectal NTCP constraints, however, could not be realized using any of these methods. Conclusions: The geometric and dosimetric tracking use only limited input, but they deposit the dose distribution with higher geometric accuracy than the clinical practice. The latter case has boost dose errors up to 10%. The increased accuracy has a modest impact [Δ(NT)CP < 0.02] because of the applied margins and the high dose levels used. To allow further margin reduction, tracking methods are vital. The proposed methodology could be further improved by implementing a rotational correction using collimator rotations.
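
    The geometric-tracking step described above, deriving a correction from four fiducial positions, can be illustrated by a similarity (Procrustes/Umeyama) fit that recovers rotation, translation, and isotropic scaling. This is a hedged sketch only: how such a transform is then mapped onto the MLC aperture is not shown, and the fiducial coordinates are placeholders.

      import numpy as np

      def similarity_transform(planned, observed):
          """Return (scale, R, t) such that observed ~ scale * R @ planned + t."""
          mu_p, mu_o = planned.mean(axis=0), observed.mean(axis=0)
          P, O = planned - mu_p, observed - mu_o
          U, S, Vt = np.linalg.svd(O.T @ P)
          D = np.eye(3)
          D[2, 2] = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
          R = U @ D @ Vt
          scale = np.trace(np.diag(S) @ D) / (P ** 2).sum()
          t = mu_o - scale * R @ mu_p
          return scale, R, t

      # Hypothetical fiducials (mm): planned positions and positions after a small
      # prostate rotation plus translation.
      planned = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], dtype=float)
      angle = np.deg2rad(3)
      Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
                     [np.sin(angle), np.cos(angle), 0],
                     [0, 0, 1]])
      observed = planned @ Rz.T + np.array([1.5, -0.5, 2.0])

      scale, R, t = similarity_transform(planned, observed)
      print("scale:", round(scale, 3), " translation (mm):", np.round(t, 2))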

  6. Real-time video analysis for retail stores

    NASA Astrophysics Data System (ADS)

    Hassan, Ehtesham; Maurya, Avinash K.

    2015-03-01

    With the advancement in video processing technologies, we can capture subtle human responses in a retail store environment which play a decisive role in store management. In this paper, we present a novel surveillance-video-based analytic system for retail stores targeting localized and global traffic estimates. Development of an intelligent system for human traffic estimation in real-life settings poses a challenging problem because of the variation and noise involved. In this direction, we begin with a novel human tracking system based on an intelligent combination of motion-based and image-level object detection. We demonstrate the initial evaluation of this approach on an available standard dataset, yielding promising results. Exact traffic estimation in a retail store requires correct separation of customers from service providers. We present a role-based human classification framework using a Gaussian mixture model for this task. A novel feature descriptor named graded colour histogram is defined for object representation. Using our role-based human classification and tracking system, we have defined a novel computationally efficient framework for two types of analytics generation, i.e., region-specific people count and dwell-time estimation. This system has been extensively evaluated and tested on four hours of real-life video captured from a retail store.

  7. Real-time motion compensation for EM bronchoscope tracking with smooth output - ex-vivo validation

    NASA Astrophysics Data System (ADS)

    Reichl, Tobias; Gergel, Ingmar; Menzel, Manuela; Hautmann, Hubert; Wegner, Ingmar; Meinzer, Hans-Peter; Navab, Nassir

    2012-02-01

    Navigated bronchoscopy provides benefits for endoscopists and patients, but accurate tracking information is needed. We present a novel real-time approach for bronchoscope tracking combining electromagnetic (EM) tracking, airway segmentation, and a continuous model of output. We augment a previously published approach by including segmentation information in the tracking optimization instead of image similarity. Thus, the new approach is feasible in real-time. Since the true bronchoscope trajectory is continuous, the output is modeled using splines and the control points are optimized with respect to displacement from EM tracking measurements and spatial relation to segmented airways. Accuracy of the proposed method and its components is evaluated on a ventilated porcine ex-vivo lung with respect to ground truth data acquired from a human expert. We demonstrate the robustness of the output of the proposed method against added artificial noise in the input data. Smoothness in terms of inter-frame distance is shown to remain below 2 mm, even when up to 5 mm of Gaussian noise are added to the input. The approach is shown to be easily extensible to include other measures like image similarity.
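
    The continuous-output idea above, replacing jittery per-frame EM poses with a smooth, continuous trajectory, can be sketched as follows. The smoothing-spline fit stands in for the paper's control-point optimization, the segmentation-distance term is omitted, and the path and noise level are synthetic assumptions.

      import numpy as np
      from scipy.interpolate import splprep, splev

      rng = np.random.default_rng(1)
      u_true = np.linspace(0, 1, 200)
      # Hypothetical "true" bronchoscope path (mm) plus EM measurement noise.
      path = np.stack([50 * u_true, 10 * np.sin(4 * np.pi * u_true), 5 * u_true ** 2])
      measured = path + rng.normal(scale=1.0, size=path.shape)

      # Smoothing spline through the noisy samples; `s` (roughly n_points times the
      # per-point noise variance summed over x, y, z) trades smoothness against
      # fidelity to the raw EM measurements.
      tck, u = splprep(list(measured), s=3 * measured.shape[1])
      smoothed = np.array(splev(u, tck))

      rms = lambda d: float(np.sqrt(np.mean(np.sum(d ** 2, axis=0))))
      print(f"RMS error raw: {rms(measured - path):.2f} mm, smoothed: {rms(smoothed - path):.2f} mm")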

  8. Feedback attitude sliding mode regulation control of spacecraft using arm motion

    NASA Astrophysics Data System (ADS)

    Shi, Ye; Liang, Bin; Xu, Dong; Wang, Xueqian; Xu, Wenfu

    2013-09-01

    The problem of spacecraft attitude regulation based on the reaction of arm motion has attracted extensive attention from both engineering and academic fields. Most solutions to the manipulator’s motion tracking problem achieve only asymptotic stabilization performance, so these controllers cannot realize precise attitude regulation because of the existence of non-holonomic constraints. Thus, sliding mode control algorithms are adopted to stabilize the tracking error with a zero transient process. Due to the switching effects of the variable structure controller, once the tracking error reaches the designed hyper-plane, it will be restricted to this plane permanently even in the presence of external disturbances. Thus, precise attitude regulation can be achieved. Furthermore, taking the non-zero initial tracking errors and the chattering phenomenon into consideration, saturation functions are used to replace sign functions to smooth the control torques. The relations between the upper bounds of the tracking errors and the controller parameters are derived to reveal the physical characteristics of the controller. Mathematical models of a free-floating space manipulator are established and simulations are conducted in the end. The results show that the spacecraft’s attitude can be regulated to the desired position by using the proposed algorithm; the steady-state error is 0.0002 rad. In addition, the joint tracking trajectory is smooth, and the joint tracking errors converge to zero quickly with a satisfactory continuous joint control input. The proposed research provides a feasible solution for spacecraft attitude regulation by using arm motion, and improves the precision of spacecraft attitude regulation.
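
    The chattering-reduction idea, replacing the sign function with a saturation function inside a boundary layer, is illustrated by the single-joint sketch below. It uses a double integrator with a bounded disturbance, not the paper's free-floating spacecraft dynamics, and the gains and disturbance are assumptions.

      import numpy as np

      def sat(x):
          """Saturation function used in place of sign() to smooth the control torque."""
          return np.clip(x, -1.0, 1.0)

      def simulate(k=5.0, lam=4.0, phi=0.05, dt=0.001, T=5.0):
          q, dq = 0.5, 0.0                   # joint angle (rad) and rate; desired trajectory is zero
          for step in range(int(T / dt)):
              e, de = q, dq                  # tracking error and its rate
              s = de + lam * e               # sliding variable
              u = -k * sat(s / phi)          # boundary-layer sliding-mode control
              ddq = u + 0.2 * np.sin(10 * step * dt)   # plant plus bounded disturbance
              dq += ddq * dt
              q += dq * dt
          return q

      print(f"final tracking error: {abs(simulate()):.4f} rad")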

  9. Integrating motion, illumination, and structure in video sequences with applications in illumination-invariant tracking.

    PubMed

    Xu, Yilei; Roy-Chowdhury, Amit K

    2007-05-01

    In this paper, we present a theory for combining the effects of motion, illumination, 3D structure, albedo, and camera parameters in a sequence of images obtained by a perspective camera. We show that the set of all Lambertian reflectance functions of a moving object, at any position, illuminated by arbitrarily distant light sources, lies "close" to a bilinear subspace consisting of nine illumination variables and six motion variables. This result implies that, given an arbitrary video sequence, it is possible to recover the 3D structure, motion, and illumination conditions simultaneously using the bilinear subspace formulation. The derivation builds upon existing work on linear subspace representations of reflectance by generalizing it to moving objects. Lighting can change slowly or suddenly, locally or globally, and can originate from a combination of point and extended sources. We experimentally compare the results of our theory with ground truth data and also provide results on real data by using video sequences of a 3D face and the entire human body with various combinations of motion and illumination directions. We also show results of our theory in estimating 3D motion and illumination model parameters from a video sequence.

  10. Predictive Compensator Optimization for Head Tracking Lag in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Adelstein, Barnard D.; Jung, Jae Y.; Ellis, Stephen R.

    2001-01-01

    We examined the perceptual impact of plant noise parameterization for Kalman Filter predictive compensation of time delays intrinsic to head tracked virtual environments (VEs). Subjects were tested in their ability to discriminate between the VE system's minimum latency and conditions in which artificially added latency was then predictively compensated back to the system minimum. Two head tracking predictors were parameterized off-line according to cost functions that minimized prediction errors in (1) rotation, and (2) rotation projected into translational displacement with emphasis on higher frequency human operator noise. These predictors were compared with a parameterization obtained from the VE literature for cost function (1). Results from 12 subjects showed that both parameterization type and amount of compensated latency affected discrimination. Analysis of the head motion used in the parameterizations and the subsequent discriminability results suggest that higher frequency predictor artifacts are contributory cues for discriminating the presence of predictive compensation.
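
    A minimal sketch of the kind of predictive compensation studied here: a constant-velocity Kalman filter on a single head-orientation axis whose state is extrapolated forward by the display latency. The plant-noise value q stands in for the parameterizations compared in the study, and the sampling rate, latency, and head-motion trace are placeholders.

      import numpy as np

      def kalman_predict_ahead(measurements, dt=1 / 120, latency=0.050, q=500.0, r=0.01):
          F = np.array([[1.0, dt], [0.0, 1.0]])                              # constant-velocity model
          Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]])  # plant-noise covariance
          H = np.array([[1.0, 0.0]])
          x, P = np.zeros(2), np.eye(2)
          predictions = []
          for z in measurements:
              x, P = F @ x, F @ P @ F.T + Q                  # time update
              S = (H @ P @ H.T).item() + r
              K = (P @ H.T) / S                              # Kalman gain, shape (2, 1)
              x = x + K[:, 0] * (z - x[0])                   # measurement update
              P = (np.eye(2) - K @ H) @ P
              predictions.append(x[0] + latency * x[1])      # extrapolate by the latency
          return np.array(predictions)

      # Hypothetical head-yaw trace (deg) at 120 Hz with sensor noise.
      t = np.arange(0, 4, 1 / 120)
      yaw = 20 * np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.default_rng(3).normal(size=t.size)
      pred = kalman_predict_ahead(yaw)

      lag = 6   # 50 ms at 120 Hz
      rms = lambda e: float(np.sqrt(np.mean(e ** 2)))
      print("RMS pose error, stale pose    :", round(rms(yaw[lag:] - yaw[:-lag]), 2), "deg")
      print("RMS pose error, predicted pose:", round(rms(yaw[lag:] - pred[:-lag]), 2), "deg")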

  11. User-assisted video segmentation system for visual communication

    NASA Astrophysics Data System (ADS)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic tracking of the feature points, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems, and allows a higher level of flexibility of the method. First, the precise feature points can be found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. At last, contour formation is used to extract the object, plus a point insertion process to provide the feature points for the next frame's tracking.
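
    The two automated phases, eigenvalue-based feature selection and motion-based point tracking, are illustrated below with OpenCV stand-ins (Shi-Tomasi corners and pyramidal Lucas-Kanade flow). This is an assumed pairing rather than the paper's exact implementation, and the frames are synthetic.

      import cv2
      import numpy as np

      # Two synthetic grayscale frames: a bright square shifted by (3, 2) pixels.
      frame0 = np.zeros((120, 160), np.uint8)
      frame1 = np.zeros((120, 160), np.uint8)
      cv2.rectangle(frame0, (40, 30), (90, 80), 255, -1)
      cv2.rectangle(frame1, (43, 32), (93, 82), 255, -1)

      # Phase 1: eigenvalue-based (Shi-Tomasi) feature selection; user assistance
      # would normally restrict this to a region of interest.
      pts0 = cv2.goodFeaturesToTrack(frame0, maxCorners=20, qualityLevel=0.01, minDistance=5)

      # Phase 2: carry the selected points into the next frame with pyramidal LK flow.
      pts1, status, err = cv2.calcOpticalFlowPyrLK(frame0, frame1, pts0, None)

      ok = status.ravel() == 1
      shift = (pts1[ok] - pts0[ok]).reshape(-1, 2).mean(axis=0)
      print("mean estimated point displacement (x, y):", np.round(shift, 1))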

  12. Trans-dimensional MCMC methods for fully automatic motion analysis in tagged MRI.

    PubMed

    Smal, Ihor; Carranza-Herrezuelo, Noemí; Klein, Stefan; Niessen, Wiro; Meijering, Erik

    2011-01-01

    Tagged magnetic resonance imaging (tMRI) is a well-known noninvasive method allowing quantitative analysis of regional heart dynamics. Its clinical use has so far been limited, in part due to the lack of robustness and accuracy of existing tag tracking algorithms in dealing with low (and intrinsically time-varying) image quality. In this paper, we propose a novel probabilistic method for tag tracking, implemented by means of Bayesian particle filtering and a trans-dimensional Markov chain Monte Carlo (MCMC) approach, which efficiently combines information about the imaging process and tag appearance with prior knowledge about the heart dynamics obtained by means of non-rigid image registration. Experiments using synthetic image data (with ground truth) and real data (with expert manual annotation) from preclinical (small animal) and clinical (human) studies confirm that the proposed method yields higher consistency, accuracy, and intrinsic tag reliability assessment in comparison with other frequently used tag tracking methods.

  13. Bilevel shared control for teleoperators

    NASA Technical Reports Server (NTRS)

    Hayati, Samad A. (Inventor); Venkataraman, Subramanian T. (Inventor)

    1992-01-01

    A shared system is disclosed for robot control, including integration of the human and autonomous input modalities for improved control. Autonomously planned motion trajectories are modified by a teleoperator to track unmodelled target motions, while nominal teleoperator motions are modified through compliance to autonomously accommodate geometric errors. A hierarchical shared system intelligently shares control over a remote robot between the autonomous and teleoperative portions of an overall control system. The architecture is hierarchical and consists of two levels: the top level represents the task level, while the bottom represents the execution level. In space applications, the performance of pure teleoperation systems depends significantly on the communication time delays between the local and the remote sites. Selection/mixing matrices are provided with entries which reflect how each input modality's signals are weighted. The shared control minimizes the detrimental effects caused by these time delays between Earth and space.

  14. Assessing Upper Extremity Motor Function in Practice of Virtual Activities of Daily Living

    PubMed Central

    Adams, Richard J.; Lichter, Matthew D.; Krepkovich, Eileen T.; Ellington, Allison; White, Marga; Diamond, Paul T.

    2015-01-01

    A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An Unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user’s avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman’s rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs. PMID:25265612
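
    The validation statistic quoted above is a Spearman rank correlation between an aggregated kinematic metric and the time-based WMFT score; the two fourteen-element lists below are made-up placeholder values used only to show the call, not study data.

      from scipy.stats import spearmanr

      # Placeholder values only (one per hypothetical patient); not study data.
      vota_smoothness = [0.42, 0.55, 0.31, 0.60, 0.48, 0.39, 0.52,
                         0.44, 0.58, 0.35, 0.50, 0.46, 0.62, 0.41]
      wmft_time_s = [38.0, 25.0, 55.0, 21.0, 30.0, 47.0, 28.0,
                     36.0, 24.0, 50.0, 29.0, 33.0, 20.0, 40.0]

      rho, p_value = spearmanr(vota_smoothness, wmft_time_s)
      print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")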

  15. Assessing upper extremity motor function in practice of virtual activities of daily living.

    PubMed

    Adams, Richard J; Lichter, Matthew D; Krepkovich, Eileen T; Ellington, Allison; White, Marga; Diamond, Paul T

    2015-03-01

    A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user's avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman's rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs.

  16. Three-Dimensional Tracking of Interfacial Hopping Diffusion

    NASA Astrophysics Data System (ADS)

    Wang, Dapeng; Wu, Haichao; Schwartz, Daniel K.

    2017-12-01

    Theoretical predictions have suggested that molecular motion at interfaces—which influences processes including heterogeneous catalysis, (bio)chemical sensing, lubrication and adhesion, and nanomaterial self-assembly—may be dominated by hypothetical "hops" through the adjacent liquid phase, where a diffusing molecule readsorbs after a given hop according to a probabilistic "sticking coefficient." Here, we use three-dimensional (3D) single-molecule tracking to explicitly visualize this process for human serum albumin at solid-liquid interfaces that exert varying electrostatic interactions on the biomacromolecule. Following desorption from the interface, a molecule experiences multiple unproductive surface encounters before readsorption. An average of approximately seven surface collisions is required for the repulsive surfaces, decreasing to approximately two and a half for surfaces that are more attractive. The hops themselves are also influenced by long-range interactions, with increased electrostatic repulsion causing hops of longer duration and distance. These findings explicitly demonstrate that interfacial diffusion is dominated by biased 3D Brownian motion involving bulk-surface coupling and that it can be controlled by influencing short- and long-range adsorbate-surface interactions.

  17. CME Research and Space Weather Support for the SECCHI Experiments on the STEREO Mission

    DTIC Science & Technology

    2014-01-14

    Corbett, ed., Cambridge Univ. Press (2010); Kahler, S.W. and D.F. Webb, "Tracking Nonradial Motions and Azimuthal Expansions of Interplanetary CME ... Imaging and In-situ Data from LASCO, STEREO and SMEI", Bull. AAS, 41(2), p. 855, 2009.

  18. Time-Lapse and Slow-Motion Tracking of Temperature Changes: Response Time of a Thermometer

    ERIC Educational Resources Information Center

    Moggio, L.; Onorato, P.; Gratton, L. M.; Oss, S.

    2017-01-01

    We propose the use of a smartphone based time-lapse and slow-motion video techniques together with tracking analysis as valuable tools for investigating thermal processes such as the response time of a thermometer. The two simple experimental activities presented here, suitable also for high school and undergraduate students, allow one to measure…

  19. Possibilities and Implications of Using a Motion-Tracking System in Physical Education

    ERIC Educational Resources Information Center

    Chow, Jia Yi; Tan, Clara Wee Keat; Lee, Miriam Chang Yi; Button, Chris

    2014-01-01

    Advances in technology have created new opportunities for enhanced delivery of teaching to improve the acquisition of game skills in physical education (PE). The availability of a motion-tracking system (i.e. the A-Eye), which determines positional information of students in a practice context, might offer a suitable technology to support…

  20. Tracking and Motion Analysis of Crack Propagations in Crystals for Molecular Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsap, L V; Duchaineau, M; Goldgof, D B

    2001-05-14

    This paper presents a quantitative analysis for a discovery in molecular dynamics. Recent simulations have shown that velocities of crack propagations in crystals under certain conditions can become supersonic, which is contrary to classical physics. In this research, the authors present a framework for tracking and motion analysis of crack propagations in crystals. It includes line segment extraction based on Canny edge maps, feature selection based on physical properties, and subsequent tracking of primary and secondary wavefronts. This tracking is completely automated; it runs in real time on three 834-image sequences using forty 250 MHz processors. Results supporting physical observations are presented in terms of both feature tracking and velocity analysis.
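
    The first stage named above, line-segment extraction from Canny edge maps, can be sketched with OpenCV as below; the parameters and the synthetic "crack front" image are illustrative assumptions, and the subsequent feature selection and wavefront tracking are not shown.

      import cv2
      import numpy as np

      # Synthetic frame containing one bright, slightly noisy crack-front line.
      frame = np.full((200, 300), 20, np.uint8)
      cv2.line(frame, (30, 160), (270, 40), 230, 2)
      frame = cv2.add(frame, np.random.randint(0, 15, frame.shape, dtype=np.uint8))

      edges = cv2.Canny(frame, threshold1=50, threshold2=150)
      segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                                 minLineLength=40, maxLineGap=10)

      print("number of extracted segments:", 0 if segments is None else len(segments))
      if segments is not None:
          x1, y1, x2, y2 = segments[0][0]
          print(f"first segment endpoints: ({x1},{y1}) -> ({x2},{y2})")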

  1. TH-CD-207A-03: A Surface Deformation Driven Respiratory Model for Organ Motion Tracking in Lung Cancer Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, H; Zhen, X; Zhou, L

    Purpose: To propose and validate a novel real-time surface-mesh-based internal organ and external surface motion and deformation tracking method for lung cancer radiotherapy. Methods: Deformation vector fields (DVFs), which characterize the internal and external motion, are obtained by registering the internal organ and tumor contours and external surface meshes to a reference phase of the 4D CT images using a recently developed local-topology-preserving non-rigid point matching algorithm (TOP). A composite matrix is constructed by combining the estimated internal and external DVFs. Principal component analysis (PCA) is then applied to the composite matrix to extract the principal motion characteristics and finally yield the respiratory motion model parameters, which correlate the internal and external motion and deformation. The accuracy of the respiratory motion model is evaluated using a 4D NURBS-based cardiac-torso (NCAT) synthetic phantom and three lung cancer cases. The center-of-mass (COM) difference is used to measure the tumor motion tracking accuracy, and the Dice coefficient (DC), percent error (PE), and Hausdorff distance (HD) are used to measure the agreement between the predicted and ground-truth tumor shape. Results: The mean COM difference is 0.84 ± 0.49 mm and 0.50 ± 0.47 mm for the phantom and patient data, respectively. The mean DC, PE, and HD are 0.93 ± 0.01, 0.13 ± 0.03, and 1.24 ± 0.34 voxels for the phantom, and 0.91 ± 0.04, 0.17 ± 0.07, and 3.93 ± 2.12 voxels for the three lung cancer patients, respectively. Conclusions: We have proposed and validated a real-time surface-mesh-based organ motion and deformation tracking method with internal-external motion modeling. The preliminary results obtained on a synthetic 4D NCAT phantom and 4D CT images from three lung cancer cases show that the proposed method is reliable and accurate in tracking both the tumor motion trajectory and deformation, and can serve as a potential tool for real-time organ motion and deformation monitoring in lung cancer radiotherapy. This work is supported in part by a grant from VARIAN MEDICAL SYSTEMS INC, the National Natural Science Foundation of China (no. 81428019 and no. 81301940), the Guangdong Natural Science Foundation (2015A030313302), and the 2015 Pearl River S&T Nova Program of Guangzhou (201506010096).
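
    The two shape-agreement metrics quoted above, the center-of-mass (COM) difference and the Dice coefficient, are shown below on toy 3D masks; this is only a sketch of the metric definitions, not the authors' evaluation code.

      import numpy as np

      def center_of_mass(mask):
          """Voxel-space centroid of a binary mask."""
          return np.array(np.nonzero(mask)).mean(axis=1)

      def dice(mask_a, mask_b):
          """Dice overlap coefficient between two binary masks."""
          intersection = np.logical_and(mask_a, mask_b).sum()
          return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

      # Toy example: a cubic "tumor" and a prediction shifted by one voxel.
      truth = np.zeros((40, 40, 40), bool)
      pred = np.zeros_like(truth)
      truth[15:25, 15:25, 15:25] = True
      pred[16:26, 15:25, 15:25] = True

      com_diff = np.linalg.norm(center_of_mass(pred) - center_of_mass(truth))
      print(f"COM difference: {com_diff:.2f} voxels, Dice: {dice(pred, truth):.3f}")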

  2. A Novel Method for Tracking Individuals of Fruit Fly Swarms Flying in a Laboratory Flight Arena

    PubMed Central

    Cheng, Xi En; Qian, Zhi-Ming; Wang, Shuo Hong; Jiang, Nan; Guo, Aike; Chen, Yan Qiu

    2015-01-01

    The growing interest in studying social behaviours of swarming fruit flies, Drosophila melanogaster, has heightened the need for developing tools that provide quantitative motion data. To achieve such a goal, multi-camera three-dimensional tracking technology is the key experimental gateway. We have developed a novel tracking system for tracking hundreds of fruit flies flying in a confined cubic flight arena. In addition to the proposed tracking algorithm, this work offers additional contributions in three aspects: body detection, orientation estimation, and data validation. To demonstrate the opportunities that the proposed system offers for generating high-throughput quantitative motion data, we conducted experiments with five experimental configurations. We also performed quantitative analysis on the kinematics, the spatial structure, and the motion patterns of fruit fly swarms. We found that there exists an asymptotic distance between fruit flies in swarms as the population density increases. Further, we discovered evidence for a repulsive response when the distance between fruit flies approached the asymptotic distance. Overall, the proposed tracking system presents a powerful method for studying flight behaviours of fruit flies in a three-dimensional environment. PMID:26083385

  3. Experimental measurements of motion cue effects on STOL approach tasks

    NASA Technical Reports Server (NTRS)

    Ringland, R. F.; Stapleford, R. L.

    1972-01-01

    An experimental program to investigate the effects of motion cues on STOL approach is presented. The simulator used was the Six-Degrees-of-Freedom Motion Simulator (S.01) at Ames Research Center of NASA which has ±2.7 m travel longitudinally and laterally and ±2.5 m travel vertically. Three major experiments, characterized as tracking tasks, were conducted under fixed and moving base conditions: (1) A simulated IFR approach of the Augmentor Wing Jet STOL Research Aircraft (AWJSRA), (2) a simulated VFR task with the same aircraft, and (3) a single-axis task having only linear acceleration as the motion cue. Tracking performance was measured in terms of the variances of several motion variables, pilot vehicle describing functions, and pilot commentary.

  4. Real-time monitoring and visualization of the multi-dimensional motion of an anisotropic nanoparticle

    NASA Astrophysics Data System (ADS)

    Go, Gi-Hyun; Heo, Seungjin; Cho, Jong-Hoi; Yoo, Yang-Seok; Kim, Minkwan; Park, Chung-Hyun; Cho, Yong-Hoon

    2017-03-01

    As interest in anisotropic particles has increased in various research fields, methods of tracking such particles have become increasingly desirable. Here, we present a new and intuitive method to monitor the Brownian motion of a nanowire, which can construct and visualize multi-dimensional motion of a nanowire confined in an optical trap, using a dual particle tracking system. We measured the isolated angular fluctuations and translational motion of the nanowire in the optical trap, and determined its physical properties, such as stiffness and torque constants, depending on laser power and polarization direction. This has wide implications in nanoscience and nanotechnology with levitated anisotropic nanoparticles.

  5. Brownian Motion of Boomerang Colloidal Particles

    NASA Astrophysics Data System (ADS)

    Wei, Qi-Huo; Konya, Andrew; Wang, Feng; Selinger, Jonathan V.; Sun, Kai; Chakrabarty, Ayan

    2014-03-01

    We present experimental and theoretical studies on the Brownian motion of boomerang colloidal particles confined between two glass plates. Our experimental observations show that the mean displacements are biased towards the center of hydrodynamic stress (CoH), and that the mean-square displacements exhibit a crossover from short-time faster to long-time slower diffusion with the short-time diffusion coefficients dependent on the points used for tracking. A model based on Langevin theory elucidates that these behaviors are ascribed to the superposition of two diffusive modes: the ellipsoidal motion of the CoH and the rotational motion of the tracking point with respect to the CoH.
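
    The short-time/long-time diffusion crossover described above is typically read off a mean-square-displacement (MSD) curve. The sketch below shows a generic MSD calculation on a synthetic random walk; it is an illustration of the analysis style, not the authors' data or code.

    ```python
    # Generic mean-square-displacement (MSD) computation for a tracked point.
    import numpy as np

    def mean_square_displacement(xy, max_lag):
        """xy: (T, 2) tracked positions; returns MSD for lags 1..max_lag."""
        msd = np.empty(max_lag)
        for lag in range(1, max_lag + 1):
            disp = xy[lag:] - xy[:-lag]
            msd[lag - 1] = np.mean(np.sum(disp**2, axis=1))
        return msd

    rng = np.random.default_rng(1)
    track = np.cumsum(rng.normal(scale=0.05, size=(2000, 2)), axis=0)  # synthetic walk
    print(mean_square_displacement(track, max_lag=10))
    ```

    Because the abstract notes that the short-time diffusion coefficient depends on the tracking point, such an MSD would be computed separately for each candidate point (for example, the CoH versus a geometric landmark on the particle).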

  6. Turbulence characterization by studying laser beam wandering in a differential tracking motion setup

    NASA Astrophysics Data System (ADS)

    Pérez, Darío G.; Zunino, Luciano; Gulich, Damián; Funes, Gustavo; Garavaglia, Mario

    2009-09-01

    The Differential Image Motion Monitor (DIMM) is a standard and widely used instrument for astronomical seeing measurements. The seeing values are estimated from the variance of the differential image motion over two equal small pupils some distance apart. The twin pupils are usually cut in a mask on the entrance pupil of the telescope. As a differential method, it has the advantage of being immune to tracking errors, eliminating erratic motion of the telescope. The Differential Laser Tracking Motion (DLTM) method introduced here is inspired by the same idea. Two identical laser beams are propagated through a path of air in turbulent motion; at the end of the path their wander is registered by two position-sensitive detectors at 800 samples per second. The time series generated from the difference between the two laser-beam centroid coordinates are then analyzed using multifractal detrended fluctuation analysis. Measurements were performed in the laboratory with synthetic turbulence, changing the relative separation of the beams for different turbulent regimes. The dependence on these parameters and the robustness of our estimators are compared with the non-differential method. This method is an improvement over previous approaches that study beam wandering.
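
    The core of the differential idea is simply that common-mode motion cancels in the difference of the two centroid time series. A minimal sketch, assuming the two (T, 2) centroid traces have already been extracted from the position-sensitive detectors:

    ```python
    # Differential beam-wander statistic: common-mode jitter (e.g., mount motion)
    # cancels in the difference of the two centroid time series.
    import numpy as np

    def differential_wander_variance(centroids_a, centroids_b):
        """centroids_a, centroids_b: (T, 2) centroid coordinates of the two beams."""
        diff = centroids_a - centroids_b   # common-path contributions cancel
        return diff.var(axis=0)            # per-axis variance of the differential wander
    ```

    The multifractal detrended fluctuation analysis described in the abstract would then be applied to the same differential time series rather than only to its variance.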

  7. Development of a four-axis moving phantom for patient-specific QA of surrogate signal-based tracking IMRT.

    PubMed

    Mukumoto, Nobutaka; Nakamura, Mitsuhiro; Yamada, Masahiro; Takahashi, Kunio; Akimoto, Mami; Miyabe, Yuki; Yokota, Kenji; Kaneko, Shuji; Nakamura, Akira; Itasaka, Satoshi; Matsuo, Yukinori; Mizowaki, Takashi; Kokubo, Masaki; Hiraoka, Masahiro

    2016-12-01

    The purposes of this study were two-fold: first, to develop a four-axis moving phantom for patient-specific quality assurance (QA) in surrogate signal-based dynamic tumor-tracking intensity-modulated radiotherapy (DTT-IMRT), and second, to evaluate the accuracy of the moving phantom and perform patient-specific dosimetric QA of the surrogate signal-based DTT-IMRT. The four-axis moving phantom comprised three orthogonal linear actuators for target motion and a fourth one for surrogate motion. The positional accuracy was verified using four laser displacement gauges under static conditions (±40 mm displacements along each axis) and moving conditions [eight regular sinusoidal and fourth-power-of-sinusoidal patterns with peak-to-peak motion ranges (H) of 10-80 mm and a breathing period (T) of 4 s, and three irregular respiratory patterns with H of 1.4-2.5 mm in the left-right, 7.7-11.6 mm in the superior-inferior, and 3.1-4.2 mm in the anterior-posterior directions for the target motion, and 4.8-14.5 mm in the anterior-posterior direction for the surrogate motion, and T of 3.9-4.9 s]. Furthermore, perpendicularity, defined as the vector angle between any two axes, was measured using an optical measurement system. The reproducibility of the uncertainties in DTT-IMRT was then evaluated. Respiratory motions from 20 patients acquired in advance were reproduced and compared three-dimensionally with the originals. Furthermore, patient-specific dosimetric QAs of DTT-IMRT were performed for ten pancreatic cancer patients. The doses delivered to Gafchromic films under tracking and moving conditions were compared with those delivered under static conditions without dose normalization. Positional errors of the moving phantom under static and moving conditions were within 0.05 mm. The perpendicularity of the moving phantom was within 0.2° of 90°. The differences in prediction errors between the original and reproduced respiratory motions were -0.1 ± 0.1 mm for the lateral direction, -0.1 ± 0.2 mm for the superior-inferior direction, and -0.1 ± 0.1 mm for the anterior-posterior direction. The dosimetric accuracy showed significant improvements, of 92.9% ± 4.0% with tracking versus 69.8% ± 7.4% without tracking, in the passing rates of γ with the criterion of 3%/1 mm (p < 0.001). Although the dosimetric accuracy of IMRT without tracking showed a significant negative correlation with the 3D motion range of the target (r = - 0.59, p < 0.05), there was no significant correlation for DTT-IMRT (r = 0.03, p = 0.464). The developed four-axis moving phantom had sufficient accuracy to reproduce patient respiratory motions, allowing patient-specific QA of the surrogate signal-based DTT-IMRT under realistic conditions. Although IMRT without tracking decreased the dosimetric accuracy as the target motion increased, the DTT-IMRT achieved high dosimetric accuracy.

  8. Spatial Updating and the Maintenance of Visual Constancy

    PubMed Central

    Klier, Eliana M.; Angelaki, Dora E.

    2008-01-01

    Spatial updating is the means by which we keep track of the locations of objects in space even as we move. Four decades of research have shown that humans and non-human primates can take the amplitude and direction of intervening movements into account, including saccades (both head-fixed and head-free), pursuit, whole-body rotations and translations. At the neuronal level, spatial updating is thought to be maintained by receptive field locations that shift with changes in gaze, and evidence for such shifts has been shown in several cortical areas. These regions receive information about the intervening movement from several sources including motor efference copies when a voluntary movement is made and vestibular/somatosensory signals when the body is in motion. Many of these updating signals arise from brainstem regions that monitor our ongoing movements and subsequently transmit this information to the cortex via pathways that likely include the thalamus. Several issues of debate include (1) the relative contribution of extra-retinal sensory and efference copy signals to spatial updating, (2) the source of an updating signal for real life, three-dimensional motion that cannot arise from brain areas encoding only two-dimensional commands, and (3) the reference frames used by the brain to integrate updating signals from various sources. This review highlights the relevant spatial updating studies and provides a summary of the field today. We find that spatial constancy is maintained by a highly evolved neural mechanism that keeps track of our movements, transmits this information to relevant brain regions, and then uses this information to change the way in which single neurons respond. In this way, we are able to keep track of relevant objects in the outside world and interact with them in meaningful ways. PMID:18786618

  9. Developing Articulated Human Models from Laser Scan Data for Use as Avatars in Real-Time Networked Virtual Environments

    DTIC Science & Technology

    2001-09-01

    structure model, motion model, physical model, and possibly many other characteristics depending on the application [Ref. 4]. While the film industry has...applications. The film industry relies on this technology almost exclusively, as it is highly reliable under controlled conditions. Since optical tracking...Wavefront. Maya has been used extensively in the film industry to provide lifelike animation, and is adept at handling 3D objects [Ref. 27]. Maya can

  10. Design and Implementation of the MARG Human Body Motion Tracking System

    DTIC Science & Technology

    2004-10-01

    OPTOTRAK from Northern Digital Inc. is a typical example of a marker-based system [10]. Another is the...technique called tunneling is used to overcome this problem. Tunneling is a software solution that runs on the end point routers/computers and allows...multicast packets to traverse the network by putting them into unicast packets. MUTUP overcomes the tunneling problem using shared memory in the

  11. Air Base Defense: Different Times Call for Different Methods

    DTIC Science & Technology

    2006-12-01

    small explosives in an attempt to drive U.S. forces from their territories. Many of these attacks were successful, resulting in the loss of human ...value. As the war rages on in Iraq, Matthew Levitt argues that the U.S. cannot afford to be distracted by the situation there, as terrorists may...serious and more difficult to defend. Air bases typically employ infrared and thermal imagers, security sentries, canine patrols and motion-tracking

  12. Real-time subpixel-accuracy tracking of single mitochondria in neurons reveals heterogeneous mitochondrial motion.

    PubMed

    Alsina, Adolfo; Lai, Wu Ming; Wong, Wai Kin; Qin, Xianan; Zhang, Min; Park, Hyokeun

    2017-11-04

    Mitochondria are essential for cellular survival and function. In neurons, mitochondria are transported to various subcellular regions as needed. Thus, defects in the axonal transport of mitochondria are related to the pathogenesis of neurodegenerative diseases, and the movement of mitochondria has been the subject of intense research. However, the inability to accurately track mitochondria with subpixel accuracy has hindered this research. Here, we report an automated method for tracking mitochondria based on the center of fluorescence. This tracking method, which is accurate to approximately one-tenth of a pixel, uses the centroid of an individual mitochondrion and provides information regarding the distance traveled between consecutive imaging frames, instantaneous speed, net distance traveled, and average speed. Importantly, this new tracking method enables researchers to observe both directed motion and undirected movement (i.e., in which the mitochondrion moves randomly within a small region, following a sub-diffusive motion). This method significantly improves our ability to analyze the movement of mitochondria and sheds light on the dynamic features of mitochondrial movement. Copyright © 2017 Elsevier Inc. All rights reserved.
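
    The center-of-fluorescence principle behind the sub-pixel accuracy is an intensity-weighted centroid. The sketch below is a generic illustration, not the published implementation; the background-subtraction scheme and the assumption of one mitochondrion per region of interest are simplifications.

    ```python
    # Intensity-weighted centroid ("center of fluorescence") with sub-pixel output.
    import numpy as np

    def fluorescence_centroid(image, background=0.0):
        """Return (row, col) centroid of one fluorescent object in a 2D ROI."""
        weights = np.clip(image.astype(float) - background, 0.0, None)
        total = weights.sum()
        if total == 0:
            raise ValueError("no signal above background")
        rows, cols = np.indices(image.shape)
        return (rows * weights).sum() / total, (cols * weights).sum() / total

    def track(frames, background=0.0):
        """frames: iterable of 2D ROI arrays; returns (T, 2) centroid positions."""
        return np.array([fluorescence_centroid(f, background) for f in frames])
    ```

    From consecutive centroids, the quantities listed in the abstract (frame-to-frame displacement, instantaneous speed, net distance and average speed) follow directly once the pixel size and frame interval are known.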

  13. Investigating the effect of poly-l-lactic acid nanoparticles carrying hypericin on the flow-biased diffusive motion of HeLa cell organelles.

    PubMed

    Penjweini, Rozhin; Deville, Sarah; Haji Maghsoudi, Omid; Notelaers, Kristof; Ethirajan, Anitha; Ameloot, Marcel

    2017-07-19

    In this study, we investigate in human cervical epithelial HeLa cells the intracellular dynamics and the mutual interaction with the organelles of the poly-l-lactic acid nanoparticles (PLLA NPs) carrying the naturally occurring hydrophobic photosensitizer hypericin. Temporal and spatiotemporal image correlation spectroscopy was used for the assessment of the intracellular diffusion and directed motion of the nanocarriers by tracking the hypericin fluorescence. Using image cross-correlation spectroscopy and specific fluorescent labelling of endosomes, lysosomes and mitochondria, the NPs dynamics in association with the cell organelles was studied. Static colocalization experiments were interpreted according to the Manders' overlap coefficient. Nanoparticles associate with a small fraction of the whole-organelle population. The organelles moving with NPs exhibit higher directed motion compared to those moving without them. The rate of the directed motion drops substantially after the application of nocodazole. The random component of the organelle motions is not influenced by the NPs. Image correlation and cross-correlation spectroscopy are most appropriate to unravel the motion of the PLLA nanocarrier and to demonstrate that the rate of the directed motion of organelles is influenced by their interaction with the nanocarriers. Not all PLLA-hypericin NPs are associated with organelles. © 2017 Royal Pharmaceutical Society.

  14. Global Plate Motions Relative to the Hotspots since 48 Ma B.P. from Simultaneous Inversion of Hotspot Tracks in the Pacific, Indian, and Atlantic Oceans Constrained to Consistency with Known Relative Plate Motions

    NASA Astrophysics Data System (ADS)

    Gordon, R. G.; Koivisto, E. A. L.

    2016-12-01

    A fundamental problem of global tectonics and paleomagnetism is determining what part of apparent polar wander is due to plate motion and what part is due to true polar wander. One approach for separating these is available if global hotspots can be used as a reference frame approximately fixed with respect to the deep mantle. Some other workers have used a hotspot reference based only on tracks in the Atlantic and Indian Oceans, and some have used reference frames with moving hotspots and many adjustable parameters. In sharp contrast to the assumptions made in these other works, our recent results demonstrate that there is no significant motion between the Pacific and Indo-Atlantic hotspots since 48 Ma B.P. (lower bound of zero and upper bound of 8-13 mm/yr [Koivisto et al., 2014]). Corrected methodologies combined with cumulative improvements in the age progression along the hotspot tracks, the geomagnetic reversal time scale, and relative plate reconstructions lead to significantly lower rates of motion between hotspots than found in prior studies. Building on our prior results, here we present a globally self-consistent estimate of plate motions relative to the hotspots for the past 48 million years from inversions to fit simultaneously the tracks of the Hawaiian, Louisville, Tristan da Cunha, Réunion, and Iceland hotspots constrained to consistency with known relative plate motions. Each finite rotation is estimated for an age corresponding to a key magnetic anomaly used in plate reconstructions. The new set of plate reconstructions presented here provides a firm basis for estimating absolute plate motions for the past 48 million years and, in particular, can be used to separate paleomagnetically determined apparent polar wander into the part due to plate motion and the part due to true polar wander. Implications for true polar wander since the age of the Hawaiian-Emperor Bend will be discussed.

  15. The new approach for infrared target tracking based on the particle filter algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Hang; Han, Hong-xia

    2011-08-01

    Target tracking against complex backgrounds in infrared image sequences is a hot research field. It provides an important basis for applications such as video monitoring, precision guidance, video compression and human-computer interaction. As a typical algorithm in the tracking framework based on filtering and data association, the particle filter, with its non-parametric estimation characteristics, can handle nonlinear and non-Gaussian problems and has therefore been widely used. Various proposal densities allow the particle filter to remain valid when the target is occluded or to recover from tracking failure, but capturing changes in the state space requires a sufficient number of particles, and this number grows exponentially with the state dimension, leading to a large computational load. In this paper, the particle filter and mean shift algorithms are combined. To address the deficiencies of the classic mean shift tracker, which is easily trapped in local minima and unable to reach the global optimum against complex backgrounds, we extend the classic mean shift tracking framework from two perspectives: adaptive multi-feature fusion and combination with the particle filter framework. Based on the first perspective, we propose an improved mean shift infrared target tracking algorithm based on multi-feature fusion. After analyzing the infrared characteristics of the target, the algorithm first extracts gray-level and edge features and then guides both features with the target motion information, yielding motion-guided grayscale and motion-guided edge features. A new adaptive fusion mechanism then integrates these two features into the mean shift tracking framework. Finally, an automatic target model updating strategy is designed to further improve tracking performance. Experimental results show that the algorithm compensates for the heavy computation of the particle filter and effectively overcomes the tendency of mean shift to converge to local extrema rather than the global maximum. Because gray-level and motion information are fused, the approach also suppresses background interference, ultimately improving the stability and real-time performance of target tracking.
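
    For readers unfamiliar with the particle filter framework the abstract builds on, the sketch below shows a generic bootstrap particle filter step for a 2D target position. It is not the paper's fused mean shift algorithm; the motion and measurement noise values are arbitrary, and the point of the sketch is to show why the particle count (and hence the computation) grows with the state dimension.

    ```python
    # Generic bootstrap particle filter step for a 2D position (illustration only).
    import numpy as np

    rng = np.random.default_rng(42)

    def particle_filter_step(particles, weights, measurement,
                             motion_std=2.0, meas_std=5.0):
        """particles: (N, 2) positions; weights: (N,); measurement: (2,)."""
        # 1. Predict: propagate particles with a random-walk motion model.
        particles = particles + rng.normal(scale=motion_std, size=particles.shape)
        # 2. Update: weight particles by a Gaussian likelihood of the measurement.
        d2 = np.sum((particles - measurement) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / meas_std**2)
        weights = weights / weights.sum()
        # 3. Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights**2) < len(weights) / 2:
            idx = rng.choice(len(weights), size=len(weights), p=weights)
            particles = particles[idx]
            weights = np.full(len(weights), 1.0 / len(weights))
        estimate = np.average(particles, axis=0, weights=weights)
        return particles, weights, estimate
    ```

    The abstract's combined approach layers adaptively fused gray-level and motion-guided edge features and mean shift refinement on top of this kind of framework, which is how it avoids the particle filter's heavy computational cost.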

  16. The influence of ship motion on manual control skills

    NASA Technical Reports Server (NTRS)

    Mcleod, P.; Poulton, C.; Duross, H.; Lewis, W.

    1981-01-01

    The effects of ship motion on a range of typical manual control skills were examined on the Warren Spring ship motion simulator driven in heave, pitch, and roll by signals taken from the frigate HMS Avenger at 13 m/s (25 knots) into a force 4 wind. The motion produced a vertical r.m.s. acceleration of 0.024g, mostly between 0.1 and 0.3 Hz, with comparatively little pitch or roll. A task involving unsupported arm movements was seriously affected by the motion; a pursuit tracking task showed a reliable decrement although it was still performed reasonably well (pressure and free moving tracking controls were affected equally by the motion); a digit keying task requiring ballistic hand movements was unaffected. There was no evidence that these effects were caused by sea sickness. The differing response to motion of the different tasks, from virtual destruction to no effect, suggests that a major benefit could come from an attempt to design the man/control interface onboard ship around motion resistant tasks.

  17. Organ motion due to respiration: the state of the art and applications in interventional radiology and radiation oncology

    NASA Astrophysics Data System (ADS)

    Cleary, Kevin R.; Mulcahy, Maureen; Piyasena, Rohan; Zhou, Tong; Dieterich, Sonja; Xu, Sheng; Banovac, Filip; Wong, Kenneth H.

    2005-04-01

    Tracking organ motion due to respiration is important for precision treatments in interventional radiology and radiation oncology, among other areas. In interventional radiology, the ability to track and compensate for organ motion could lead to more precise biopsies for applications such as lung cancer screening. In radiation oncology, image-guided treatment of tumors is becoming technically possible, and the management of organ motion then becomes a major issue. This paper will review the state-of-the-art in respiratory motion and present two related clinical applications. Respiratory motion is an important topic for future work in image-guided surgery and medical robotics. Issues include how organs move due to respiration, how much they move, how the motion can be compensated for, and what clinical applications can benefit from respiratory motion compensation. Technology that can be applied for this purpose is now becoming available, and as that technology evolves, the subject will become an increasingly interesting and clinically valuable topic of research.

  18. Accuracy and Precision of a Custom Camera-Based System for 2-D and 3-D Motion Tracking during Speech and Nonspeech Motor Tasks

    ERIC Educational Resources Information Center

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable…

  19. An experimental study to investigate the effects of a motion tracking electromagnetic sensor during EEG data acquisition.

    PubMed

    Bashashati, Ali; Noureddin, Borna; Ward, Rabab K; Lawrence, Peter D; Birch, Gary E

    2006-03-01

    A power spectral analysis study was conducted to investigate the effects of using an electromagnetic motion tracking sensor on an electroencephalogram (EEG) recording system. The results showed that the sensors do not generate any consistent frequency component(s) in the power spectrum of the EEG in the frequencies of interest (0.1-55 Hz).
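
    A minimal sketch of this kind of power spectral check, using Welch's method on a synthetic signal; the sampling rate and segment length are assumptions, and the band of interest (0.1-55 Hz) is taken from the abstract.

    ```python
    # Power spectral density of an EEG channel over the band of interest.
    import numpy as np
    from scipy.signal import welch

    fs = 250.0                                  # assumed EEG sampling rate (Hz)
    rng = np.random.default_rng(0)
    eeg = rng.normal(size=int(60 * fs))         # placeholder for a recorded channel

    freqs, psd = welch(eeg, fs=fs, nperseg=1024)
    band = (freqs >= 0.1) & (freqs <= 55.0)     # frequencies of interest
    print(freqs[band][np.argmax(psd[band])])    # location of the largest spectral peak
    ```

    Comparing such spectra recorded with the electromagnetic sensor on and off is the essence of the test reported above: a sensor artifact would show up as a consistent extra peak in the EEG spectrum.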

  20. The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate.

    PubMed

    Trache, Tudor; Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas

    2014-12-01

    We compared 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns, focusing on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, the area strain showed the best correlation with the LVEF. The numerical agreement of 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area and standard 2D longitudinal strain. Regional wall-motion abnormalities were similarly detected by 2D and 3D speckle tracking. 2DST of triplane datasets showed similar results to those of conventional 2D datasets. 2D and 3D speckle tracking similarly detect normal and pathological wall-motion patterns. Limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values.

  1. Real-time tracking of liver motion and deformation using a flexible needle

    PubMed Central

    Lei, Peng; Moeslein, Fred; Wood, Bradford J.

    2012-01-01

    Purpose A real-time 3D image guidance system is needed to facilitate treatment of liver masses using radiofrequency ablation, for example. This study investigates the feasibility and accuracy of using an electromagnetically tracked flexible needle inserted into the liver to track liver motion and deformation. Methods This proof-of-principle study was conducted both ex vivo and in vivo with a CT scanner taking the place of an electromagnetic tracking system as the spatial tracker. Deformations of excised livers were artificially created by altering the shape of the stage on which the excised livers rested. Free breathing or controlled ventilation created deformations of live swine livers. The positions of the needle and test targets were determined through CT scans. The shape of the needle was reconstructed using data simulating multiple embedded electromagnetic sensors. Displacement of liver tissues in the vicinity of the needle was derived from the change in the reconstructed shape of the needle. Results The needle shape was successfully reconstructed with tracking information of two on-needle points. Within 30 mm of the needle, the registration error of implanted test targets was 2.4 ± 1.0 mm ex vivo and 2.8 ± 1.5 mm in vivo. Conclusion A practical approach was developed to measure the motion and deformation of the liver in real time within a region of interest. The approach relies on redesigning the often-used seeker needle to include embedded electromagnetic tracking sensors. With the nonrigid motion and deformation information of the tracked needle, a single- or multimodality 3D image of the intraprocedural liver, now clinically obtained with some delay, can be updated continuously to monitor intraprocedural changes in hepatic anatomy. This capability may be useful in radiofrequency ablation and other percutaneous ablative procedures. PMID:20700662

  2. SU-G-JeP1-14: Respiratory Motion Tracking Using Kinect V2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silverstein, E; Snyder, M

    Purpose: To investigate the capability and accuracy of the Kinect v2 camera for tracking respiratory motion, for use as a tool during 4DCT or in combination with motion management during radiotherapy treatments. Methods: Utilizing the depth sensor on the Kinect as well as code written in C#, the respiratory motion of a patient was tracked by recording the depth (distance) values obtained at several points on the patient. Respiratory traces were also obtained using Varian's RPM system, which traces the movement of a proprietary marker placed on the patient's abdomen, as well as an Anzai belt, which utilizes a pressure sensor to track respiratory motion. With the Kinect mounted 60 cm above the patient and pointing straight down, 11 breathing cycles were recorded with each system simultaneously. Relative displacement values during this time period were saved to file. While RPM and the Kinect give displacement values in distance units, the Anzai system has arbitrary units. As such, displacements for all three systems are displayed relative to the maximum value over the time interval for that system. Additional analysis was performed between RPM and Kinect for absolute displacement values. Results: Analysis of the data from all three systems indicates the relative motion obtained from the Kinect is both accurate and in sync with the data from RPM and Anzai. The absolute displacement data from RPM and Kinect show similar displacement values throughout the acquisition except for the depth obtained from the Kinect during maximum exhalation (largest distance from Kinect). Conclusion: By simply utilizing the depth data of specific points on a patient obtained from the Kinect, respiratory motion can be tracked and visualized with accuracy comparable to that of the Varian RPM and Anzai belt.
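
    The depth-ROI idea is straightforward to sketch. The following is not the authors' C# code: it assumes the depth frames are already available as arrays (in mm) and simply averages a small abdominal patch per frame to obtain a relative respiratory trace, normalized to its maximum as in the comparison above.

    ```python
    # Relative respiratory displacement from a sequence of depth frames.
    import numpy as np

    def respiration_trace(depth_frames, roi):
        """depth_frames: (T, H, W) depth images in mm; roi = (r0, r1, c0, c1)."""
        r0, r1, c0, c1 = roi
        patch = depth_frames[:, r0:r1, c0:c1].reshape(len(depth_frames), -1)
        mean_depth = patch.mean(axis=1)          # average depth of the ROI per frame
        disp = mean_depth - mean_depth.min()     # relative abdominal displacement
        return disp / disp.max() if disp.max() > 0 else disp
    ```

    Averaging over a patch rather than a single pixel reduces per-pixel depth noise; the abstract notes that the largest disagreement with RPM occurred near maximum exhalation, where the abdomen is farthest from the Kinect.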

  3. Relative tracking control of constellation satellites considering inter-satellite link

    NASA Astrophysics Data System (ADS)

    Fakoor, M.; Amozegary, F.; Bakhtiari, M.; Daneshjou, K.

    2017-11-01

    In this article, two main issues related to the large-scale relative motion of satellites in a constellation, namely the dynamics and control problems, are investigated in order to establish the inter-satellite link (ISL). Regarding the dynamics problem, a detailed and effective analytical solution is first provided for satellite relative motion considering perturbations; the direct geometric method utilizing spherical coordinates is employed to obtain this solution. Simulations show that the geometric method calculates the relative motion of the satellites with high accuracy, so the proposed analytical solution is applicable and effective. Regarding the control problem, a relative tracking control system between two satellites is designed in order to establish a communication link, utilizing the analytical solution for the relative motion of the satellites with respect to the reference trajectory. A sliding mode control approach is employed to develop the relative tracking control system for body-to-body and payload-to-payload tracking control, and its efficiency is compared with PID and LQR controllers. Two types of payload-to-payload tracking control, with and without payload degrees of freedom, are designed, and the one suitable for practical ISL applications is identified. Finally, a fuzzy controller is utilized to eliminate chattering in the sliding mode control input.

  4. Low bandwidth eye tracker for scanning laser ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Harvey, Zachary G.; Dubra, Alfredo; Cahill, Nathan D.; Lopez Alarcon, Sonia

    2012-02-01

    The incorporation of adaptive optics to scanning ophthalmoscopes (AOSOs) has allowed for in vivo, noninvasive imaging of the human rod and cone photoreceptor mosaics. Light safety restrictions and power limitations of the current low-coherence light sources available for imaging result in each individual raw image having a low signal to noise ratio (SNR). To date, the only approach used to increase the SNR has been to collect a large number of raw images (N > 50), to register them to remove the distortions due to involuntary eye motion, and then to average them. The large amplitude of involuntary eye motion with respect to the AOSO field of view (FOV) dictates that an even larger number of images need to be collected at each retinal location to ensure adequate SNR over the feature of interest. Compensating for eye motion during image acquisition to keep the feature of interest within the FOV could reduce the number of raw frames required per retinal feature, and thereby significantly reduce the imaging time, storage requirements, post-processing times and, more importantly, the subject's exposure to light. In this paper, we present a particular implementation of an AOSO, termed the adaptive optics scanning light ophthalmoscope (AOSLO), equipped with a simple eye tracking system capable of compensating for eye drift by estimating the eye motion from the raw frames and by using a tip-tilt mirror to compensate for it in a closed loop. Multiple control strategies were evaluated to minimize the image distortion introduced by the tracker itself. Also, linear, quadratic and Kalman filter motion prediction algorithms were implemented and tested using both simulated motion (sinusoidal motion with varying frequencies) and human subjects. The residual displacement of the retinal features was used to compare the performance of the different correction strategies and prediction methods.
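
    As a simple stand-in for the prediction algorithms mentioned above, the sketch below implements a one-dimensional constant-velocity Kalman predictor that returns the one-step-ahead position estimate for each incoming sample. The process and measurement noise values are assumed tuning parameters, not those used in the paper.

    ```python
    # One-dimensional constant-velocity Kalman predictor (illustrative tuning values).
    import numpy as np

    def kalman_predict_track(measurements, dt, q=1.0, r=0.25):
        """measurements: 1D array of position samples; returns one-step-ahead predictions."""
        F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
        H = np.array([[1.0, 0.0]])              # only position is observed
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        R = np.array([[r]])
        x = np.array([[measurements[0]], [0.0]])
        P = np.eye(2)
        predictions = []
        for z in measurements:
            # Predict the next state, record it, then correct with the new measurement.
            x, P = F @ x, F @ P @ F.T + Q
            predictions.append(float(x[0, 0]))
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([[z]]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        return np.array(predictions)
    ```

    The residual displacement between predicted and subsequently observed motion is the kind of quantity the abstract uses to compare the different correction strategies and prediction methods.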

  5. Gait recognition based on Gabor wavelets and modified gait energy image for human identification

    NASA Astrophysics Data System (ADS)

    Huang, Deng-Yuan; Lin, Ta-Wei; Hu, Wu-Chih; Cheng, Chih-Hsiang

    2013-10-01

    This paper proposes a method for recognizing human identity using gait features based on Gabor wavelets and modified gait energy images (GEIs). Identity recognition by gait generally involves gait representation, extraction, and classification. In this work, a modified GEI convolved with an ensemble of Gabor wavelets is proposed as a gait feature. Principal component analysis is then used to project the Gabor-wavelet-based gait features into a lower-dimension feature space for subsequent classification. Finally, support vector machine classifiers based on a radial basis function kernel are trained and utilized to recognize human identity. The major contributions of this paper are as follows: (1) the consideration of the shadow effect to yield a more complete segmentation of gait silhouettes; (2) the utilization of motion estimation to track people when walkers overlap; and (3) the derivation of modified GEIs to extract more useful gait information. Extensive performance evaluation shows a great improvement of recognition accuracy due to the use of shadow removal, motion estimation, and gait representation using the modified GEIs and Gabor wavelets.
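
    A compressed sketch of the GEI-based pipeline follows (the Gabor filtering, shadow removal and tracking stages are omitted): average size-normalized binary silhouettes into a gait energy image, reduce the dimension with PCA and classify with an RBF-kernel SVM. The data layout and the number of retained components are assumptions, not values from the paper.

    ```python
    # Gait energy image (GEI) + PCA + RBF-SVM pipeline, heavily simplified.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def gait_energy_image(silhouettes):
        """silhouettes: (T, H, W) binary, size-normalized, centered frames."""
        return np.mean(silhouettes.astype(float), axis=0)   # pixel-wise average

    def train_gait_classifier(sequences, labels, n_components=50):
        """sequences: list of (T, H, W) silhouette stacks; labels: subject IDs."""
        geis = np.stack([gait_energy_image(s).ravel() for s in sequences])
        model = make_pipeline(PCA(n_components=n_components),
                              SVC(kernel="rbf", gamma="scale"))
        model.fit(geis, labels)
        return model
    ```

    In the paper, the modified GEIs are additionally convolved with an ensemble of Gabor wavelets before PCA, one of the ingredients the abstract credits for the improved recognition accuracy.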

  6. Dynamics of a railway vehicle on a laterally disturbed track

    NASA Astrophysics Data System (ADS)

    Christiansen, Lasse Engbo; True, Hans

    2018-02-01

    In this article a theoretical investigation of the dynamics of a railway bogie running on a tangent track with a periodic disturbance of the lateral track geometry is presented. The dynamics is computed for two values of the speed of the vehicle in combination with different values of the wavelength and amplitude of the disturbance. Depending on the combinations of the speed, the wavelength and the amplitude, straight line forward motion, different modes of symmetric or asymmetric periodic oscillations or aperiodic motions, which are presumably chaotic, are found. Statistical methods are applied for the investigation. In the case of sinusoidal oscillations they provide information about the phase shift between the different variables and the amplitudes of the oscillations. In the case of an aperiodic motion the statistical measures indicate some non-smooth transitions.

  7. Unification of automatic target tracking and automatic target recognition

    NASA Astrophysics Data System (ADS)

    Schachter, Bruce J.

    2014-06-01

    The subject being addressed is how an automatic target tracker (ATT) and an automatic target recognizer (ATR) can be fused together so tightly and so well that their distinctiveness becomes lost in the merger. This has historically not been the case outside of biology and a few academic papers. The biological model of ATT∪ATR arises from dynamic patterns of activity distributed across many neural circuits and structures (including retina). The information that the brain receives from the eyes is "old news" at the time that it receives it. The eyes and brain forecast a tracked object's future position, rather than relying on received retinal position. Anticipation of the next moment - building up a consistent perception - is accomplished under difficult conditions: motion (eyes, head, body, scene background, target) and processing limitations (neural noise, delays, eye jitter, distractions). Not only does the human vision system surmount these problems, but it has innate mechanisms to exploit motion in support of target detection and classification. Biological vision doesn't normally operate on snapshots. Feature extraction, detection and recognition are spatiotemporal. When vision is viewed as a spatiotemporal process, target detection, recognition, tracking, event detection and activity recognition do not seem as distinct as they are in current ATT and ATR designs. They appear as similar mechanisms taking place at varying time scales. A framework is provided for unifying ATT and ATR.

  8. The use of vestibular models for design and evaluation of flight simulator motion

    NASA Technical Reports Server (NTRS)

    Bussolari, Steven R.; Young, Laurence R.; Lee, Alfred T.

    1989-01-01

    Quantitative models for the dynamics of the human vestibular system are applied to the design and evaluation of flight simulator platform motion. An optimal simulator motion control algorithm is generated to minimize the vector difference between perceived spatial orientation estimated in flight and in simulation. The motion controller has been implemented on the Vertical Motion Simulator at NASA Ames Research Center and evaluated experimentally through measurement of pilot performance and subjective rating during VTOL aircraft simulation. In general, pilot performance in a longitudinal tracking task (formation flight) did not appear to be sensitive to variations in platform motion condition as long as motion was present. However, pilot assessment of motion fidelity, by means of a rating scale designed for this purpose, was sensitive to motion controller design. Platform motion generated with the optimal motion controller was found to be generally equivalent to that generated by conventional linear crossfeed washout. The vestibular models are used to evaluate the motion fidelity of transport category aircraft (Boeing 727) simulation in a pilot performance and simulator acceptability study at the Man-Vehicle Systems Research Facility at NASA Ames Research Center. Eighteen airline pilots, currently flying B-727, were given a series of flight scenarios in the simulator under various conditions of simulator motion. The scenarios were chosen to reflect the flight maneuvers that these pilots might expect to be given during a routine pilot proficiency check. Pilot performance and subjective ratings of simulator fidelity were relatively insensitive to the motion condition, despite large differences in the amplitude of motion provided. This lack of sensitivity may be explained by means of the vestibular models, which predict little difference in the modeled motion sensations of the pilots when different motion conditions are imposed.

  9. Improved Visual Cognition through Stroboscopic Training

    PubMed Central

    Appelbaum, L. Gregory; Schroeder, Julia E.; Cain, Matthew S.; Mitroff, Stephen R.

    2011-01-01

    Humans have a remarkable capacity to learn and adapt, but surprisingly little research has demonstrated generalized learning in which new skills and strategies can be used flexibly across a range of tasks and contexts. In the present work we examined whether generalized learning could result from visual–motor training under stroboscopic visual conditions. Individuals were assigned to either an experimental condition that trained with stroboscopic eyewear or to a control condition that underwent identical training with non-stroboscopic eyewear. The training consisted of multiple sessions of athletic activities during which participants performed simple drills such as throwing and catching. To determine if training led to generalized benefits, we used computerized measures to assess perceptual and cognitive abilities on a variety of tasks before and after training. Computer-based assessments included measures of visual sensitivity (central and peripheral motion coherence thresholds), transient spatial attention (a useful field of view – dual task paradigm), and sustained attention (multiple-object tracking). Results revealed that stroboscopic training led to significantly greater re-test improvement in central visual field motion sensitivity and transient attention abilities. No training benefits were observed for peripheral motion sensitivity or peripheral transient attention abilities, nor were benefits seen for sustained attention during multiple-object tracking. These findings suggest that stroboscopic training can effectively improve some, but not all aspects of visual perception and attention. PMID:22059078

  10. MO-FG-BRA-07: Intrafractional Motion Effect Can Be Minimized in Tomotherapy Stereotactic Body Radiotherapy (SBRT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, A; Chang, S; Matney, J

    2016-06-15

    Purpose: Tomotherapy has unique challenges in handling intrafractional motion compared to conventional LINAC. In this study, we analyzed the impact of intrafractional motion on cumulative dosimetry using actual patient motion data and investigated real time jaw/MLC compensation approaches to minimize the motion-induced dose discrepancy in Tomotherapy SBRT treatment. Methods: Intrafractional motion data recorded in two CyberKnife lung treatment cases through fiducial tracking and two LINAC prostate cases through Calypso tracking were used in this study. For each treatment site, one representative case has an average motion (6mm) and one has a large motion (10mm for lung and 15mm for prostate). The cases were re-planned on Tomotherapy for SBRT. Each case was planned with 3 different jaw settings: 1cm static, 2.5cm dynamic, and 5cm dynamic. 4D dose accumulation software was developed to compute dose with the recorded motions and theoretically compensate motions by modifying original jaw and MLC to track the trajectory of the tumor. Results: PTV coverage in Tomotherapy SBRT for patients with intrafractional motion depends on motion type, amplitude and plan settings. For the prostate patient with large motion, PTV coverage changed from 97.2% (motion-free) to 47.1% (target motion-included), 96.6% to 58.5% and 96.3% to 97.8% for the 1cm static jaw, 2.5cm dynamic jaw and 5cm dynamic jaw setting, respectively. For the lung patient with large motion, PTV coverage discrepancies showed a similar trend of change. When the jaw and MLC compensation program was engaged, the motion compromised PTV coverage was recovered back to >95% for all cases and plans. All organs at risk (OAR) were spared with < 5% increase from original motion-free plans. Conclusion: Tomotherapy SBRT is less motion-impacted when 5cm dynamic jaw is used. Once the motion pattern is known, the jaw and MLC compensation program can largely minimize the compromised target coverage and OAR sparing.

  11. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation.

    PubMed

    Kim, Young-Keun; Kim, Kyung-Soo

    2014-10-01

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.
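
    As a generic illustration of how a 6-DOF pose can be recovered from a handful of measured points (this is not claimed to be the sensor's actual processing), the sketch below estimates the rigid rotation and translation between two sets of corresponding 3D points using the SVD-based Kabsch method.

    ```python
    # Rigid 6-DOF pose (rotation + translation) from corresponding 3D points.
    import numpy as np

    def rigid_transform(p_ref, p_cur):
        """p_ref, p_cur: (N, 3) corresponding points, N >= 3 and not collinear.
        Returns R, t such that p_cur ≈ p_ref @ R.T + t."""
        c_ref, c_cur = p_ref.mean(axis=0), p_cur.mean(axis=0)
        H = (p_ref - c_ref).T @ (p_cur - c_cur)      # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                     # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = c_cur - R @ c_ref
        return R, t
    ```

    With four laser beam points measured on the load in a reference pose and in the current pose, such a fit yields the full relative 6-DOF motion: the translation comes from t and the roll, pitch and yaw angles can be extracted from R.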

  12. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Young-Keun, E-mail: ykkim@handong.edu; Kim, Kyung-Soo

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  13. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation

    NASA Astrophysics Data System (ADS)

    Kim, Young-Keun; Kim, Kyung-Soo

    2014-10-01

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  14. Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?

    PubMed Central

    Wouda, Frank J.; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H.

    2016-01-01

    Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible with an average joint position error of approximately 7 cm, and average joint angle error of 7∘. Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses was also investigated, where nearest neighbor search showed better performance for such disturbances. PMID:27983676

  15. Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?

    PubMed

    Wouda, Frank J; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H

    2016-12-15

    Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible with an average joint position error of approximately 7 cm, and average joint angle error of 7 ∘ . Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses was also investigated, where nearest neighbor search showed better performance for such disturbances.

  16. Motion tracking in the liver: Validation of a method based on 4D ultrasound using a nonrigid registration technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vijayan, Sinara, E-mail: sinara.vijayan@ntnu.no; Klein, Stefan; Hofstad, Erlend Fagertun

    Purpose: Treatments like radiotherapy and focused ultrasound in the abdomen require accurate motion tracking, in order to optimize dosage delivery to the target and minimize damage to critical structures and healthy tissues around the target. 4D ultrasound is a promising modality for motion tracking during such treatments. In this study, the authors evaluate the accuracy of motion tracking in the liver based on deformable registration of 4D ultrasound images. Methods: The offline analysis was performed using a nonrigid registration algorithm that was specifically designed for motion estimation from dynamic imaging data. The method registers the entire 4D image data sequence in a groupwise optimization fashion, thus avoiding a bias toward a specifically chosen reference time point. Three healthy volunteers were scanned over several breathing cycles (12 s) from three different positions and angles on the abdomen; a total of nine 4D scans for the three volunteers. Well-defined anatomic landmarks were manually annotated in all 96 time frames for assessment of the automatic algorithm. The error of the automatic motion estimation method was compared with interobserver variability. The authors also performed experiments to investigate the influence of parameters defining the deformation field flexibility and evaluated how well the method performed with a lower temporal resolution in order to establish the minimum frame rate required for accurate motion estimation. Results: The registration method estimated liver motion with an error of 1 mm (75% percentile over all datasets), which was lower than the interobserver variability of 1.4 mm. The results were only slightly dependent on the degrees of freedom of the deformation model. The registration error increased to 2.8 mm with an eight times lower temporal resolution. Conclusions: The authors conclude that the methodology was able to accurately track the motion of the liver in the 4D ultrasound data. The authors believe that the method has potential in interventions on moving abdominal organs such as MR or ultrasound guided focused ultrasound therapy and radiotherapy, provided the method is enabled to run in real time. The data and the annotations used for this study are made publicly available for those who would like to test other methods on 4D liver ultrasound data.

  17. A Kinect based intelligent e-rehabilitation system in physical therapy.

    PubMed

    Gal, Norbert; Andrei, Diana; Nemeş, Dan Ion; Nădăşan, Emanuela; Stoicu-Tivadar, Vasile

    2015-01-01

    This paper presents an intelligent e-rehabilitation system based on the Kinect and a fuzzy inference system. The Kinect detects the posture and motion of the patient, while the fuzzy inference system interprets the acquired data on the cognitive level. The system is capable of assessing the initial posture and motion ranges of 20 joints. Using angles to describe the motion of the joints, exercise patterns can be developed for each patient. Using these exercise descriptors, the fuzzy inference system can track the patient and deliver real-time feedback to maximize the efficiency of the rehabilitation. The first laboratory tests confirm the utility of this system for initial posture detection, motion-range assessment and exercise tracking.
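
    The joint-angle description used for the exercise patterns can be illustrated with a few lines of geometry: the angle at a joint is taken between the two limb segments that meet there, computed from the 3D joint positions the Kinect skeleton provides. The example coordinates below are made up.

    ```python
    # Joint angle (in degrees) from three 3D skeleton points.
    import numpy as np

    def joint_angle(parent, joint, child):
        """Angle at `joint` between the segments joint->parent and joint->child."""
        u = np.asarray(parent, float) - np.asarray(joint, float)
        v = np.asarray(child, float) - np.asarray(joint, float)
        cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

    # Example: elbow angle from shoulder, elbow and wrist positions (metres).
    print(joint_angle([0.20, 1.40, 2.0], [0.25, 1.15, 2.0], [0.45, 1.05, 2.0]))
    ```

    Exercise descriptors of the kind described above would then be ranges of such angles over time, against which the fuzzy inference system grades the patient's movement and generates feedback.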

  18. Robust tracking of respiratory rate in high-dynamic range scenes using mobile thermal imaging

    PubMed Central

    Cho, Youngjun; Julier, Simon J.; Marquardt, Nicolai; Bianchi-Berthouze, Nadia

    2017-01-01

    The ability to monitor the respiratory rate, one of the vital signs, is extremely important for the medical treatment, healthcare and fitness sectors. In many situations, mobile methods, which allow users to undertake everyday activities, are required. However, current monitoring systems can be obtrusive, requiring users to wear respiration belts or nasal probes. Alternatively, contactless digital image sensor based remote-photoplethysmography (PPG) can be used. However, remote PPG requires an ambient source of light, and does not work properly in dark places or under varying lighting conditions. Recent advances in thermographic systems have shrunk their size, weight and cost, to the point where it is possible to create smart-phone based respiration rate monitoring devices that are not affected by lighting conditions. However, mobile thermal imaging is challenged in scenes with high thermal dynamic ranges (e.g. due to the different environmental temperature distributions indoors and outdoors). This challenge is further amplified by general problems such as motion artifacts and low spatial resolution, leading to unreliable breathing signals. In this paper, we propose a novel and robust approach for respiration tracking which compensates for the negative effects of variations in the ambient temperature and motion artifacts and can accurately extract breathing rates in highly dynamic thermal scenes. The approach is based on tracking the nostril of the user and using local temperature variations to infer inhalation and exhalation cycles. It has three main contributions. The first is a novel Optimal Quantization technique which adaptively constructs a color mapping of absolute temperature to improve segmentation, classification and tracking. The second is the Thermal Gradient Flow method that computes thermal gradient magnitude maps to enhance the accuracy of the nostril region tracking. Finally, we introduce the Thermal Voxel method to increase the reliability of the captured respiration signals compared to the traditional averaging method. We demonstrate the extreme robustness of our system to track the nostril-region and measure the respiratory rate by evaluating it during controlled respiration exercises in high thermal dynamic scenes (e.g. strong correlation (r = 0.9987) with the ground truth from the respiration-belt sensor). We also demonstrate how our algorithm outperformed standard algorithms in settings with different amounts of environmental thermal changes and human motion. We open the tracked ROI sequences of the datasets collected for these studies (i.e. under both controlled and unconstrained real-world settings) to the community to foster work in this area. PMID:29082079
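
    Downstream of the nostril-ROI tracking, a respiratory rate can be read from the ROI temperature trace. The sketch below is a simple spectral-peak estimate on a synthetic trace, not the paper's Thermal Voxel method; the frame rate and breathing band are assumptions.

    ```python
    # Respiratory rate (breaths per minute) from a tracked-ROI temperature trace.
    import numpy as np

    def respiratory_rate_bpm(temperature_trace, fs, band=(0.1, 0.85)):
        """temperature_trace: mean ROI temperature per frame; fs: frame rate in Hz."""
        x = temperature_trace - np.mean(temperature_trace)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        power = np.abs(np.fft.rfft(x)) ** 2
        mask = (freqs >= band[0]) & (freqs <= band[1])   # plausible breathing band
        return 60.0 * freqs[mask][np.argmax(power[mask])]

    fs = 8.7                                     # assumed thermal-camera frame rate (Hz)
    t = np.arange(0, 60, 1 / fs)
    trace = 0.3 * np.sin(2 * np.pi * 0.25 * t)   # synthetic ~15 breaths/min signal
    print(respiratory_rate_bpm(trace, fs))
    ```

    The paper's contributions (Optimal Quantization, Thermal Gradient Flow and Thermal Voxel integration) are aimed at making the input trace to a step like this reliable despite ambient temperature changes and subject motion.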

  19. Robust tracking of respiratory rate in high-dynamic range scenes using mobile thermal imaging.

    PubMed

    Cho, Youngjun; Julier, Simon J; Marquardt, Nicolai; Bianchi-Berthouze, Nadia

    2017-10-01

    The ability to monitor the respiratory rate, one of the vital signs, is extremely important for the medical treatment, healthcare and fitness sectors. In many situations, mobile methods, which allow users to undertake everyday activities, are required. However, current monitoring systems can be obtrusive, requiring users to wear respiration belts or nasal probes. Alternatively, contactless digital image sensor based remote-photoplethysmography (PPG) can be used. However, remote PPG requires an ambient source of light, and does not work properly in dark places or under varying lighting conditions. Recent advances in thermographic systems have shrunk their size, weight and cost, to the point where it is possible to create smart-phone based respiration rate monitoring devices that are not affected by lighting conditions. However, mobile thermal imaging is challenged in scenes with high thermal dynamic ranges (e.g. due to the different environmental temperature distributions indoors and outdoors). This challenge is further amplified by general problems such as motion artifacts and low spatial resolution, leading to unreliable breathing signals. In this paper, we propose a novel and robust approach for respiration tracking which compensates for the negative effects of variations in the ambient temperature and motion artifacts and can accurately extract breathing rates in highly dynamic thermal scenes. The approach is based on tracking the nostril of the user and using local temperature variations to infer inhalation and exhalation cycles. It has three main contributions. The first is a novel Optimal Quantization technique which adaptively constructs a color mapping of absolute temperature to improve segmentation, classification and tracking. The second is the Thermal Gradient Flow method that computes thermal gradient magnitude maps to enhance the accuracy of the nostril region tracking. Finally, we introduce the Thermal Voxel method to increase the reliability of the captured respiration signals compared to the traditional averaging method. We demonstrate the extreme robustness of our system to track the nostril-region and measure the respiratory rate by evaluating it during controlled respiration exercises in high thermal dynamic scenes (e.g. strong correlation (r = 0.9987) with the ground truth from the respiration-belt sensor). We also demonstrate how our algorithm outperformed standard algorithms in settings with different amounts of environmental thermal changes and human motion. We open the tracked ROI sequences of the datasets collected for these studies (i.e. under both controlled and unconstrained real-world settings) to the community to foster work in this area.

  20. Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.

    PubMed

    Souto, David; Kerzel, Dirk

    2013-02-06

    Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects with rotational and translational motion that was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. When these objects were compared with a condition in which there was no rotational component, we essentially obtained benefits of congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually-driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired by a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.
