Sample records for real-time 3D tracking

  1. A Protocol for Real-time 3D Single Particle Tracking.

    PubMed

    Hou, Shangguo; Welsher, Kevin

    2018-01-03

    Real-time three-dimensional single particle tracking (RT-3D-SPT) has the potential to shed light on fast, 3D processes in cellular systems. Although various RT-3D-SPT methods have been put forward in recent years, tracking high speed 3D diffusing particles at low photon count rates remains a challenge. Moreover, RT-3D-SPT setups are generally complex and difficult to implement, limiting their widespread application to biological problems. This protocol presents a RT-3D-SPT system named 3D Dynamic Photon Localization Tracking (3D-DyPLoT), which can track particles with high diffusive speed (up to 20 µm²/s) at low photon count rates (down to 10 kHz). 3D-DyPLoT employs a 2D electro-optic deflector (2D-EOD) and a tunable acoustic gradient (TAG) lens to drive a single focused laser spot dynamically in 3D. Combined with an optimized position estimation algorithm, 3D-DyPLoT can lock onto single particles with high tracking speed and high localization precision. Owing to the single excitation and single detection path layout, 3D-DyPLoT is robust and easy to set up. This protocol discusses how to build 3D-DyPLoT step by step. First, the optical layout is described. Next, the system is calibrated and optimized by raster scanning a 190 nm fluorescent bead with the piezoelectric nanopositioner. Finally, to demonstrate real-time 3D tracking ability, 110 nm fluorescent beads are tracked in water.
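
The photon-limited position estimation in this record can be illustrated with a toy estimator (this is not the paper's optimized algorithm, and every number below is made up): treat the known laser scan positions as samples and take the photon-count-weighted centroid.

```python
import numpy as np

def photon_weighted_centroid(scan_xy, counts):
    """Estimate emitter position as the photon-count-weighted
    centroid of the known laser scan positions."""
    scan_xy = np.asarray(scan_xy, dtype=float)
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    if total == 0:
        raise ValueError("no photons detected")
    return (scan_xy * counts[:, None]).sum(axis=0) / total

# 5x5 scan grid centred on the origin, emitter near (0.1, -0.2) um
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
grid = np.column_stack([xs.ravel(), ys.ravel()])
true_pos = np.array([0.1, -0.2])
# Poisson photon counts falling off with distance from the emitter
rate = np.exp(-np.sum((grid - true_pos) ** 2, axis=1) / (2 * 0.3 ** 2))
counts = np.random.default_rng(0).poisson(50 * rate)
est = photon_weighted_centroid(grid, counts)
```

Despite Poisson noise in the counts, the weighted centroid lands close to the true emitter position; a real tracker would feed such estimates to a feedback loop driving the scan centre.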

  2. Real-Time 3D Tracking and Reconstruction on Mobile Phones.

    PubMed

    Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D

    2015-05-01

    We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Due to the region based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use a level set based pose estimation but completely avoid the, typically required, explicit computation of a global distance. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor which helps us resolve the tracking ambiguities inherent to region based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variational regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm and we show that, similar to tracking, the integration of per voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.

  3. Strain measurement of abdominal aortic aneurysm with real-time 3D ultrasound speckle tracking.

    PubMed

    Bihari, P; Shelke, A; Nwe, T H; Mularczyk, M; Nelson, K; Schmandra, T; Knez, P; Schmitz-Rixen, T

    2013-04-01

    Abdominal aortic aneurysm rupture is caused by mechanical vascular tissue failure. Although mechanical properties within the aneurysm vary, currently available ultrasound methods assess only one cross-sectional segment of the aorta. This study aims to establish real-time 3-dimensional (3D) speckle tracking ultrasound to explore local displacement and strain parameters of the whole abdominal aortic aneurysm. Validation was performed on a silicone aneurysm model, perfused in a pulsatile artificial circulatory system. Wall motion of the silicone model was measured simultaneously with a commercial real-time 3D speckle tracking ultrasound system and either with laser-scan micrometry or with video photogrammetry. After validation, 3D ultrasound data were collected from abdominal aortic aneurysms of five patients and displacement and strain parameters were analysed. Displacement parameters measured in vitro by 3D ultrasound and laser scan micrometer or video analysis were significantly correlated at pulse pressures between 40 and 80 mmHg. Strong local differences in displacement and strain were identified within the aortic aneurysms of patients. Local wall strain of the whole abdominal aortic aneurysm can be analysed in vivo with real-time 3D ultrasound speckle tracking imaging, offering the prospect of individual non-invasive rupture risk analysis of abdominal aortic aneurysms.
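
In the simplest one-dimensional view, the strain parameters reported in such studies reduce to relative length change between tracked wall points. A minimal sketch (engineering strain between two speckle-tracked points only, not the full 3D strain tensor a clinical system computes):

```python
import math

def segmental_strain(p0_a, p0_b, p_a, p_b):
    """Engineering strain of the wall segment between two tracked
    speckle points: (L - L0) / L0, with L0 the reference length."""
    L0 = math.dist(p0_a, p0_b)   # segment length at reference phase
    L = math.dist(p_a, p_b)      # segment length at current phase
    return (L - L0) / L0

# a 10 mm wall segment stretches to 11 mm during the pulse cycle
eps = segmental_strain((0, 0, 0), (10, 0, 0), (0, 0, 0), (11, 0, 0))
```

Here `eps` is 0.1, i.e. 10 % strain; mapping such values over the whole aneurysm surface is what reveals the local differences the study describes.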

  4. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As features for the rigid registration, they may choose either internal liver vessels or the inferior vena cava; since the latter is especially useful in patients with diffuse liver disease, the authors newly propose its use. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration.

  5. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real-time using 3D digital image correlation, with two examples for medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the integer-pixel search computations. Experiments were carried out and the results indicated that the new method improved the computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system was aimed at orthognathic surgery navigation in order to track the maxilla segment after LeFort I osteotomy. Experiments showed that noise for a static point was at the level of 10⁻³ mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand during finger stretching exercises, indicating great potential for tracking muscle and skin movements.
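
Recovering a rigid object's 6-degree-of-freedom pose from matched marker positions, as this record describes, is classically done with a least-squares fit (the Kabsch/Umeyama algorithm). A sketch with toy marker coordinates, not the authors' DIC pipeline:

```python
import numpy as np

def rigid_pose(src, dst):
    """Least-squares rigid transform (R, t) mapping marker set `src`
    onto `dst` (Kabsch/Umeyama, no scaling)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# four markers on a rigid body, rotated 90 degrees about z and shifted
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = src @ Rz.T + np.array([5.0, 2.0, 1.0])
R, t = rigid_pose(src, dst)
```

With noise-free markers the fit recovers the rotation and translation exactly; with noisy marker positions it returns the least-squares optimum.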

  6. Sequence design and software environment for real-time navigation of a wireless ferromagnetic device using MRI system and single echo 3D tracking.

    PubMed

    Chanu, A; Aboussouan, E; Tamaz, S; Martel, S

    2006-01-01

    Software architecture for the navigation of a ferromagnetic untethered device in a 1D and 2D phantom environment is briefly described. Navigation is achieved using the real-time capabilities of a Siemens 1.5 T Avanto MRI system coupled with a dedicated software environment and a specially developed 3D tracking pulse sequence. Real-time control of the magnetic core is executed through the implementation of a simple PID controller. 1D and 2D experimental results are presented.
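
The "simple PID controller" mentioned here can be sketched in a few lines; the gains, time step, and the toy first-order plant below are arbitrary illustrative choices, not values from the paper:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# steer a toy 1D "device" toward position x = 1.0
pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.1)
x = 0.0
for _ in range(200):
    x += 0.1 * pid.step(1.0, x)   # plant: velocity proportional to command
```

In the actual system the measured position comes from the 3D tracking pulse sequence and the command sets the MRI gradient fields, but the control loop has this shape.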

  7. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brix, Lau, E-mail: lau.brix@stab.rm.dk; Ringgaard, Steffen; Sørensen, Thomas Sangild

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions.
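
Step (3) of this method is normalized cross-correlation template matching over a template library. A brute-force 2D sketch with synthetic patches (a real implementation matches resliced MRI templates, not random arrays, and uses a far faster search):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def best_match(image, templates):
    """Exhaustively find (template index, row, col) maximising NCC."""
    best_score, best_loc = -2.0, None
    for ti, t in enumerate(templates):
        th, tw = t.shape
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                score = ncc(image[r:r + th, c:c + tw], t)
                if score > best_score:
                    best_score, best_loc = score, (ti, r, c)
    return best_loc

rng = np.random.default_rng(1)
templates = [rng.random((8, 8)) for _ in range(3)]
image = rng.random((32, 32))
image[10:18, 5:13] = templates[2]        # paste template 2 at (10, 5)
```

Because each template corresponds to a known through-plane offset, the winning template index gives the out-of-plane coordinate while (row, col) gives the in-plane position.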

  8. Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features

    NASA Astrophysics Data System (ADS)

    Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique

    2011-12-01

    We propose a new multi-target tracking approach, which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates an object class label to each moving region (e.g. person, vehicle), a parallelepiped model and visual reliability measures of its attributes. These reliability measures allow to properly weight the contribution of noisy, erroneous or false data in order to better maintain the integrity of the object dynamics model. Then, a new multi-target tracking algorithm uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. For achieving this characteristic, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using video surveillance benchmarks publicly accessible. The obtained performance is real time and the results are competitive compared with other tracking algorithms, with minimal (or null) reconfiguration effort between different videos.

  9. PRIMAS: a real-time 3D motion-analysis system

    NASA Astrophysics Data System (ADS)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
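
The two-camera 3D reconstruction step in systems like this is standard linear (DLT) triangulation from calibrated projection matrices. A sketch with toy camera matrices, not PRIMAS calibration data:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker from two calibrated
    views with 3x4 projection matrices P1, P2 and pixel coords x1, x2."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector of A, homogeneous point
    return X[:3] / X[3]

# two toy pinhole cameras separated by a unit baseline
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X = triangulate(P1, P2, x1, x2)
```

Running this per matched marker pair at every frame is what yields the 100 Hz 3D marker trajectories the record describes.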

  10. Automatic respiration tracking for radiotherapy using optical 3D camera

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real-time, with no patient contact or radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) treatments for accurate patient repositioning and respiration management. However, applications of O3D imaging techniques to image-guided radiotherapy have been clinically challenged by body deformation, pathological and anatomical variations among individual patients, extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight change, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within any treatment session due to voluntary/involuntary physiologic processes (e.g. respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of marker or positioning device, etc. Presently, a viable solution is lacking for in-vivo tracking of target motion and anatomy changes during beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas in the 3D surface to track surface motion. The configuration of these marks or areas may change with time, which makes quantifying and interpreting the respiration patterns inconsistent. To meet the challenge of performing real-time respiration tracking using O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on linear dimensionality reduction via PCA (principal component analysis). The optical 3D image sequence is decomposed with principal component analysis into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigenspace spanned by eigenvectors).
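
The PCA decomposition of a surface-motion sequence can be sketched directly with an SVD; the synthetic "breathing" signal below stands in for real O3D surface frames:

```python
import numpy as np

def motion_modes(frames, n_modes=2):
    """PCA of a (T, N) sequence of flattened surface frames: returns the
    dominant orthogonal motion patterns and per-frame scores on them."""
    frames = np.asarray(frames, float)
    centered = frames - frames.mean(axis=0)       # remove mean surface
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    modes = Vt[:n_modes]              # orthogonal motion eigenvectors
    scores = centered @ modes.T       # low-dimensional trajectory
    return modes, scores

# synthetic respiration: one fixed spatial pattern oscillating in time
t = np.linspace(0, 4 * np.pi, 100)
pattern = np.sin(np.linspace(0, np.pi, 50))   # fixed surface deformation
frames = np.outer(np.sin(t), pattern)         # (100 frames, 50 samples)
modes, scores = motion_modes(frames, n_modes=1)
```

For this rank-one input the first eigenvector recovers the spatial pattern and the score trace recovers the breathing waveform, which is exactly the low-dimensional respiration signal the proposed approach extracts.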

  11. Management of three-dimensional intrafraction motion through real-time DMLC tracking.

    PubMed

    Sawant, Amit; Venkat, Raghu; Srivastava, Vikram; Carlson, David; Povzner, Sergey; Cattell, Herb; Keall, Paul

    2008-05-01

    Tumor tracking using a dynamic multileaf collimator (DMLC) represents a promising approach for intrafraction motion management in thoracic and abdominal cancer radiotherapy. In this work, we develop, empirically demonstrate, and characterize a novel 3D tracking algorithm for real-time, conformal, intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT)-based radiation delivery to targets moving in three dimensions. The algorithm obtains real-time information of target location from an independent position monitoring system and dynamically calculates MLC leaf positions to account for changes in target position. Initial studies were performed to evaluate the geometric accuracy of DMLC tracking of 3D target motion. In addition, dosimetric studies were performed on a clinical linac to evaluate the impact of real-time DMLC tracking for conformal, step-and-shoot (S-IMRT), dynamic (D-IMRT), and VMAT deliveries to a moving target. The efficiency of conformal and IMRT delivery in the presence of tracking was determined. Results show that submillimeter geometric accuracy in all three dimensions is achievable with DMLC tracking. Significant dosimetric improvements were observed in the presence of tracking for conformal and IMRT deliveries to moving targets. A gamma index evaluation with a 3%-3 mm criterion showed that deliveries without DMLC tracking exhibit between 1.7 (S-IMRT) and 4.8 (D-IMRT) times more dose points that fail the evaluation compared to corresponding deliveries with tracking. The efficiency of IMRT delivery, as measured in the lab, was observed to be significantly lower in case of tracking target motion perpendicular to MLC leaf travel compared to motion parallel to leaf travel. Nevertheless, these early results indicate that accurate, real-time DMLC tracking of 3D tumor motion is feasible and can potentially result in significant geometric and dosimetric advantages leading to more effective management of intrafraction motion.
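
The core leaf-adaptation idea, translating the MLC aperture to follow the target, can be caricatured as below. This is a deliberately crude sketch with made-up geometry: a real DMLC tracking algorithm also respects leaf-velocity limits, interdigitation constraints, and the planned fluence, none of which are modeled here.

```python
def adapt_leaves(left, right, dx, dy, leaf_width, closed=0.0):
    """Crude MLC aperture shift. dx is target motion parallel to leaf
    travel: added to every leaf-tip position. dy, perpendicular, is
    approximated by shifting the aperture a whole number of leaf rows;
    rows shifted in from outside the field are set closed."""
    shift = round(dy / leaf_width)
    n = len(left)

    def row(seq, i):
        j = i - shift
        return seq[j] if 0 <= j < n else closed

    new_left = [row(left, i) + dx for i in range(n)]
    new_right = [row(right, i) + dx for i in range(n)]
    return new_left, new_right

# 3 leaf pairs; target moves 0.5 cm along leaf travel and one row across
nl, nr = adapt_leaves([-1.0, -2.0, -1.0], [1.0, 2.0, 1.0],
                      dx=0.5, dy=1.0, leaf_width=1.0)
```

The quantization of `dy` to whole leaf rows is one reason the record reports lower delivery efficiency for motion perpendicular to leaf travel than for motion parallel to it.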

  12. Use of MV and kV imager correlation for maintaining continuous real-time 3D internal marker tracking during beam interruptions

    NASA Astrophysics Data System (ADS)

    Wiersma, R. D.; Riaz, N.; Dieterich, Sonja; Suh, Yelin; Xing, L.

    2009-01-01

    The integration of onboard kV imaging together with a MV electronic portal imaging device (EPID) on linear accelerators (LINAC) can provide an easy-to-implement real-time 3D organ position monitoring solution for treatment delivery. Currently, real-time MV-kV tracking has only been demonstrated with simultaneous imaging by both MV and kV imaging devices. However, modalities such as step-and-shoot IMRT (SS-IMRT), which inherently contain MV beam interruptions, can lead to loss of target information necessary for 3D localization. Additionally, continuous kV imaging throughout the treatment delivery can lead to high levels of imaging dose to the patient. This work demonstrates for the first time how full 3D target tracking can be maintained even in the presence of such beam interruption, or MV/kV beam interleave, by use of a relatively simple correlation model together with MV-kV tracking. A moving correlation model was constructed using both present and prior positions of the marker in the available MV or kV image to compute the position of the marker on the interrupted imager. A commercially available radiotherapy system, equipped with both MV and kV imaging devices, was used to deliver typical SS-IMRT lung treatment plans to a 4D phantom containing internally embedded metallic markers. To simulate actual lung tumor motion, previously recorded 4D lung patient motion data were used. Lung tumor motion data of five separate patients were inputted into the 4D phantom, and typical SS-IMRT lung plans were delivered to simulate actual clinical deliveries. Application of the correlation model to SS-IMRT lung treatment deliveries was found to be an effective solution for maintaining continuous 3D tracking during 'step' beam interruptions. For deliveries involving five or more gantry angles with 50 or more fields per plan, the positional errors were found to have ≤1 mm root mean squared error (RMSE) in all three spatial directions.
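
One plausible reading of the "moving correlation model" is a sliding-window linear fit relating the marker coordinate seen on the live imager to the coordinate last seen on the interrupted one. The class below is a guess at that structure for illustration, not the authors' actual model:

```python
from collections import deque
import numpy as np

class MovingCorrelation:
    """Sliding-window linear model y ~ a*x + b between the marker
    coordinate on the live imager (x) and the coordinate on the
    interrupted imager (y), refit from the most recent paired samples."""
    def __init__(self, window=20):
        self.pairs = deque(maxlen=window)

    def update(self, x, y):          # called while both imagers run
        self.pairs.append((x, y))

    def predict(self, x):            # called during a beam interruption
        xs, ys = (np.array(v) for v in zip(*self.pairs))
        a, b = np.polyfit(xs, ys, 1)
        return a * x + b

# calibrate on synthetic paired positions, then bridge an interruption
mc = MovingCorrelation(window=20)
for xv in np.linspace(0.0, 1.0, 10):
    mc.update(xv, 2.0 * xv + 1.0)
```

The deque makes the model "moving": old samples age out, so the fit tracks slow drifts in the correlation between the two views.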

  13. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV kV imaging

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wiersma, R. D.; Mao, W.; Luxton, G.; Xing, L.

    2008-12-01

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ~0.5 mm for the normal adult breathing pattern to ~1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz.

  14. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging.

    PubMed

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-12-21

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from approximately 0.5 mm for the normal adult breathing pattern to approximately 1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz.
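
The stereoscopic MV-kV triangulation described here can be sketched as the midpoint of the common perpendicular between the two back-projected imaging rays. The geometry below is a toy setup, not a calibrated LINAC:

```python
import numpy as np

def ray_midpoint(o1, d1, o2, d2):
    """Midpoint of the common perpendicular between two imaging rays
    (origin o, direction d): the triangulated marker position."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b            # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return ((o1 + s * d1) + (o2 + t * d2)) / 2

# near-orthogonal MV and kV rays both passing through the marker
marker = np.array([1.0, 2.0, 3.0])
o_mv = np.array([0.0, -100.0, 0.0])
o_kv = np.array([-100.0, 0.0, 0.0])
p = ray_midpoint(o_mv, marker - o_mv, o_kv, marker - o_kv)
```

With noisy detector readings the two rays no longer intersect, and the midpoint of their closest approach is the natural estimate; the ball-bearing phantom calibration supplies the pixel-to-ray mapping each imager needs.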

  15. Handheld real-time volumetric 3-D gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Haefner, Andrew; Barnowski, Ross; Luke, Paul; Amman, Mark; Vetter, Kai

    2017-06-01

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but it also provides important contextual information of the scene which once acquired can be reviewed and further analyzed subsequently. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real-time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real-time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.

  16. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    PubMed

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach that allows tracking of deformable structures in 3D ultrasound sequences. Our method obtains the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation under different ultrasound shortcomings, including speckle noise, large shadows and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges.

  17. 3D deformable organ model based liver motion tracking in ultrasound videos

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong

    2013-03-01

    This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling, where we generate a personalized 3D organ model from high quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we resolve the candidate that best matches the 3D US images according to vessel centerline and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to respiration phase and is fitted to the image by considering the positions of the vessels. The tumor's 3D positions are then inferred based on respiration phase. Testing our method on real patient data, we found the 3D position accuracy to be within 3.79 mm and the processing time to be 5.4 ms during tracking.

  18. Real-time visual tracking of less textured three-dimensional objects on mobile platforms

    NASA Astrophysics Data System (ADS)

    Seo, Byung-Kuk; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2012-12-01

    Natural feature-based approaches are still challenging for mobile applications (e.g., mobile augmented reality), because they are feasible only in limited environments such as highly textured and planar scenes/objects, and they need powerful mobile hardware for fast and reliable tracking. In many cases where conventional approaches are not effective, three-dimensional (3-D) knowledge of target scenes would be beneficial. We present a well-established framework for real-time visual tracking of less textured 3-D objects on mobile platforms. Our framework is based on model-based tracking that efficiently exploits partially known 3-D scene knowledge such as object models and a background's distinctive geometric or photometric knowledge. Moreover, we elaborate on implementation in order to make it suitable for real-time vision processing on mobile hardware. The performance of the framework is tested and evaluated on recent commercially available smartphones, and its feasibility is shown by real-time demonstrations.

  19. A non-disruptive technology for robust 3D tool tracking for ultrasound-guided interventions.

    PubMed

    Mung, Jay; Vignon, Francois; Jain, Ameet

    2011-01-01

    In the past decade, ultrasound (US) has become the preferred modality for a number of interventional procedures, offering excellent soft tissue visualization. The main limitation, however, is poor visualization of surgical tools. A new method is proposed for robust 3D tracking and US image enhancement of surgical tools under US guidance. Small US sensors are mounted on existing surgical tools. As the imager emits acoustic energy, the electrical signal from the sensor is analyzed to reconstruct its 3D coordinates. These coordinates can then be used for 3D surgical navigation, similar to current-day tracking systems. A system with real-time 3D tool tracking and image enhancement was implemented on a commercial ultrasound scanner and 3D probe. Extensive water tank experiments with a tracked 0.2 mm sensor show robust performance in a wide range of imaging conditions and tool positions/orientations. The 3D tracking accuracy was 0.36 ± 0.16 mm throughout the imaging volume of 55° × 27° × 150 mm. Additionally, the tool was successfully tracked inside a beating heart phantom. This paper proposes an image enhancement and tool tracking technology with sub-mm accuracy for US-guided interventions. The technology is non-disruptive, both in terms of existing clinical workflow and commercial considerations, showing promise for large-scale clinical impact.
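
A simplified picture of how a mounted sensor can be localized from the transmit beams (a 2D, single-firing caricature; the actual reconstruction from the full 3D probe is more sophisticated): the beam received with the highest amplitude gives the direction, and the one-way time of flight gives the range.

```python
import math

def sensor_position(beam_angles_deg, amplitudes, arrival_time_s, c=1540.0):
    """Locate an in-body US sensor. The transmit beam received with the
    highest amplitude gives the direction; the one-way time of flight
    gives the range (c ~ speed of sound in soft tissue, m/s).
    Returns (lateral, axial) coordinates in metres, probe at origin."""
    i = max(range(len(amplitudes)), key=amplitudes.__getitem__)
    theta = math.radians(beam_angles_deg[i])
    r = c * arrival_time_s
    return r * math.sin(theta), r * math.cos(theta)

# sensor 60 mm from the probe, hit hardest by the +10 degree beam
x, z = sensor_position([-10.0, 0.0, 10.0], [0.1, 0.3, 0.9], 60e-3 / 1540.0)
```

Interpolating across the amplitudes of neighbouring beams, rather than picking the single strongest one, is one way such a system could reach the sub-millimetre accuracy reported above.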

  20. Speeding up 3D speckle tracking using PatchMatch

    NASA Astrophysics Data System (ADS)

    Zontak, Maria; O'Donnell, Matthew

    2016-03-01

    Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4-D (3-D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlating a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame rates, however, this search region remains large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To mitigate this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions [1]. This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch [2], a powerful method for finding correspondences between images. Here we adapt PatchMatch to 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successful matches among neighboring voxels. We show that: 1) the inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand; 2) for typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.
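The two ingredients the abstract names (random sampling of the search region plus propagation of good matches between neighbors) can be illustrated on 1-D signals. The sketch below is a toy rendering of the PatchMatch idea, not the authors' 3-D RF implementation; the function names and the SSD cost are illustrative choices.

```python
import random

def ssd(a, b):
    # Sum-of-squared-differences matching cost between two patches.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def patchmatch_1d(src, dst, k=3, iters=4, seed=0):
    """PatchMatch on 1-D signals: for every k-sample patch of `src`,
    find a well-matching offset into `dst` without exhaustive search."""
    rng = random.Random(seed)
    n, m = len(src) - k + 1, len(dst) - k + 1
    # 1) Random initialisation of the offset field.
    off = [rng.randrange(m) for _ in range(n)]
    cost = [ssd(src[i:i + k], dst[off[i]:off[i] + k]) for i in range(n)]
    for it in range(iters):
        step = 1 if it % 2 == 0 else -1            # serpentine scan order
        order = range(n) if step == 1 else range(n - 1, -1, -1)
        for i in order:
            # 2) Propagation: inherit the neighbour's offset, shifted by one.
            j = i - step
            if 0 <= j < n and 0 <= off[j] + step < m:
                c = ssd(src[i:i + k], dst[off[j] + step:off[j] + step + k])
                if c < cost[i]:
                    off[i], cost[i] = off[j] + step, c
            # 3) Random search around the current best, radius halving each probe.
            r = m
            while r >= 1:
                cand = off[i] + rng.randint(-r, r)
                if 0 <= cand < m:
                    c = ssd(src[i:i + k], dst[cand:cand + k])
                    if c < cost[i]:
                        off[i], cost[i] = cand, c
                r //= 2
    return off, cost
```

Per position this costs O(iters · log m) patch comparisons instead of the O(m) of an exact search, which is the source of the reported speedup.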

  1. SU-E-J-240: Development of a Novel 4D MRI Sequence for Real-Time Liver Tumor Tracking During Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhuang, L; Burmeister, J; Ye, Y

    2015-06-15

    Purpose: To develop a novel 4D MRI technique feasible for real-time liver tumor tracking during radiotherapy. Methods: A volunteer underwent an abdominal 2D fast EPI coronal scan on a 3.0T MRI scanner (Siemens Inc., Germany). An optimal set of parameters was determined based on image quality and scan time. A total of 23 slices were scanned to cover the whole liver in the test scan. For each scan position, the 2D images were retrospectively sorted into multiple phases based on a breathing signal extracted from the images. The 2D slices with the same phase number were then stacked to form one 3D image, and the multiple phases of 3D images formed the 4D MRI sequence representing one breathing cycle. Results: The optimal set of scan parameters was: TR = 57 ms, TE = 19 ms, FOV read = 320 mm and flip angle = 30°, which resulted in a total scan time of 14 s for 200 frames (FMs) per slice and an image resolution of (2.5 mm, 2.5 mm, 5.0 mm) in the three directions. Ten phases of 3D images were generated, each of which had 23 slices. Based on our test scan, only 100 FMs were necessary for the phase sorting process, which may lower the scan time to 7 s/100 FMs/slice. For example, only 5 slices/35 s are necessary for a 4D MRI scan covering a liver tumor of size ≤ 2 cm, opening the possibility of tumor trajectory tracking every 35 s during treatment. Conclusion: The novel 4D MRI technique we developed can reconstruct a 4D liver MRI sequence representing one breathing cycle (7 s/slice) without an external monitor. This technique can potentially be used for real-time liver tumor tracking during radiotherapy.
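The retrospective sorting step can be sketched in a few lines. This is an illustrative reconstruction of the described bookkeeping, assuming each 2-D frame already carries a breathing-phase value in [0, 1) extracted from the images; the function and variable names are hypothetical.

```python
def build_4d_sequence(frames_by_slice, phases_by_slice, n_phases=10):
    """Retrospective phase sorting: bin each slice position's 2D frames
    by breathing phase, then stack same-phase slices into 3D volumes.
    frames_by_slice[s] is the time series acquired at slice position s;
    phases_by_slice[s][t] is frame t's breathing phase in [0, 1)."""
    n_slices = len(frames_by_slice)
    volumes = [[None] * n_slices for _ in range(n_phases)]
    for s in range(n_slices):
        for frame, phase in zip(frames_by_slice[s], phases_by_slice[s]):
            p = min(int(phase * n_phases), n_phases - 1)
            if volumes[p][s] is None:   # keep the first frame seen per (phase, slice)
                volumes[p][s] = frame
    # volumes[p] is one 3D stack; the n_phases stacks form the 4D sequence.
    return volumes
```

In practice a quality criterion (not just "first seen") would pick the representative frame per (phase, slice) bin, and missing bins would need gap filling.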

  2. Ames Lab 101: Real-Time 3D Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Song

    2010-08-02

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  3. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2017-12-22

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  4. Lagrangian 3D tracking of fluorescent microscopic objects in motion

    NASA Astrophysics Data System (ADS)

    Darnige, T.; Figueroa-Morales, N.; Bohec, P.; Lindner, A.; Clément, E.

    2017-05-01

    We describe the development of a tracking device, mounted on an epifluorescence inverted microscope, suited to obtaining time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing that determines the displacement of an x-y mechanical stage needed to keep the chosen object at a fixed position in the observation frame. The z displacement is handled by refocusing: a piezo mover is driven to keep the moving fluorescent object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of Hertz. This device is particularly well adapted to obtaining trajectories of motile micro-organisms in microfluidic devices, with or without flow.
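The x-y feedback loop described here can be caricatured as a proportional controller: each cycle, measure the object's pixel offset from the target position and command the stage to cancel a fraction of it. The gain and pixel calibration below are hypothetical values, not the authors' settings; the z axis works analogously with the error supplied by a focus measure instead of a pixel offset.

```python
def stage_correction(obj_px, target_px, um_per_px, gain=0.7):
    """One x-y feedback step: command a stage move (in micrometres)
    that cancels a fraction `gain` of the measured pixel offset,
    pulling the tracked object back toward the target pixel."""
    dx = (obj_px[0] - target_px[0]) * um_per_px * gain
    dy = (obj_px[1] - target_px[1]) * um_per_px * gain
    return dx, dy
```

A gain below 1 trades settling speed for robustness to noisy centroid estimates, which matters when the loop runs at tens of Hertz.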

  5. Lagrangian 3D tracking of fluorescent microscopic objects in motion.

    PubMed

    Darnige, T; Figueroa-Morales, N; Bohec, P; Lindner, A; Clément, E

    2017-05-01

    We describe the development of a tracking device, mounted on an epifluorescence inverted microscope, suited to obtaining time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in microfluidic devices. The system is based on real-time image processing that determines the displacement of an x-y mechanical stage needed to keep the chosen object at a fixed position in the observation frame. The z displacement is handled by refocusing: a piezo mover is driven to keep the moving fluorescent object in focus. Track coordinates of the object with respect to the microfluidic device, as well as images of the object, are obtained at a frequency of several tens of Hertz. This device is particularly well adapted to obtaining trajectories of motile micro-organisms in microfluidic devices, with or without flow.

  6. Design and Performance Evaluation on Ultra-Wideband Time-Of-Arrival 3D Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Dusl, John

    2012-01-01

    A three-dimensional (3D) Ultra-Wideband (UWB) Time-of-Arrival (TOA) tracking system has been studied at NASA Johnson Space Center (JSC) to provide tracking capability inside International Space Station (ISS) modules for various applications. One application is to locate and report positions where crew members may have experienced elevated carbon-dioxide levels and felt unwell. Accurately locating those places in a multipath-intensive environment like the ISS modules requires a robust real-time location system (RTLS) that can provide the required accuracy and update rate. A 3D UWB TOA tracking system with two-way ranging has been proposed and studied. The designed system will be tested in the Wireless Habitat Testbed, which simulates the ISS module environment. In this presentation, we discuss the 3D TOA tracking algorithm and the performance evaluation based on different tracking baseline configurations. The simulation results show that two configurations of the tracking baseline are feasible. With a 100-picosecond standard deviation (STD) of the TOA estimates, an average tracking error of 0.2392 feet (about 7 centimeters) can be achieved for the "Twisted Rectangle" configuration, and an average tracking error of 0.9183 feet (about 28 centimeters) for the "Slightly-Twisted Top Rectangle" configuration. The tracking accuracy can be further improved by improving the STD of the TOA estimates: with a 10-picosecond STD, an average tracking error of 0.0239 feet (less than 1 centimeter) can be achieved for the "Twisted Rectangle" configuration.
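The geometric core of TOA tracking is multilateration: each anchor-to-tag range defines a sphere, and the position is their intersection. The sketch below is the standard linearised solution (subtracting the first range equation cancels the quadratic term), not NASA's specific algorithm, and it assumes noise-free ranges from four non-coplanar anchors.

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 linear system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def toa_position(anchors, ranges):
    """Linearised multilateration: subtracting the first range equation
    |x - a_0|^2 = r_0^2 from the others cancels |x|^2 and leaves a
    linear system in the unknown position x."""
    a0, r0 = anchors[0], ranges[0]
    n0 = sum(v * v for v in a0)
    A, b = [], []
    for ai, ri in zip(anchors[1:], ranges[1:]):
        A.append([2.0 * (ai[k] - a0[k]) for k in range(3)])
        b.append(r0 ** 2 - ri ** 2 + sum(v * v for v in ai) - n0)
    return solve3(A, b)
```

With noisy TOA estimates one would use more than four anchors and a least-squares solve; the baseline geometry (the "Twisted Rectangle" configurations above) controls how range noise maps into position error.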

  7. Multiview 3-D Echocardiography Fusion with Breath-Hold Position Tracking Using an Optical Tracking System.

    PubMed

    Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; McNulty, Alexander; Biamonte, Marina; He, Allen; Noga, Michelle; Boulanger, Pierre; Becher, Harald

    2016-08-01

    Recent advances in echocardiography allow real-time 3-D dynamic image acquisition of the heart. However, one of the major limitations of 3-D echocardiography is its limited field of view, which results in acquisitions insufficient to cover the whole geometry of the heart. This study proposes a novel approach: fusing multiple 3-D echocardiography images using an optical tracking system that incorporates breath-hold position tracking to verify that the heart remains at the same position during different acquisitions. In six healthy male volunteers, 18 pairs of apical/parasternal 3-D ultrasound data sets were acquired during a single breath-hold as well as in subsequent breath-holds. The proposed method yielded a field-of-view improvement of 35.4 ± 12.5%. To improve the quality of the fused image, a wavelet-based fusion algorithm was developed that computes pixelwise likelihood values for overlapping voxels from multiple image views. The proposed wavelet-based fusion approach yielded significant improvements in contrast (66.46 ± 21.68%), contrast-to-noise ratio (49.92 ± 28.71%), signal-to-noise ratio (57.59 ± 47.85%) and feature count (13.06 ± 7.44%) in comparison to individual views. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  8. Tracked 3D ultrasound in radio-frequency liver ablation

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Fichtinger, Gabor; Taylor, Russell H.; Choti, Michael A.

    2003-05-01

    Recent studies have shown that radio-frequency (RF) ablation is a simple, safe and potentially effective treatment for selected patients with liver metastases. Despite recent therapeutic advancements, however, intra-procedural target localization and precise, consistent placement of the tissue ablator device remain unsolved problems. Various imaging modalities, including ultrasound (US) and computed tomography (CT), have been tried as guidance modalities. Transcutaneous US imaging, due to its real-time nature, may be beneficial, but it often fails to adequately visualize the tumor. Intraoperative or laparoscopic US, on the other hand, provides improved visualization and target imaging. This paper describes a system for computer-assisted RF ablation of liver tumors that combines navigational tracking of a conventional imaging ultrasound probe, to produce 3D ultrasound imaging, with a tracked RF ablation device supported by a passive mechanical arm and spatially registered to the ultrasound volume.

  9. Real-time 3D motion tracking for small animal brain PET

    NASA Astrophysics Data System (ADS)

    Kyme, A. Z.; Zhou, V. W.; Meikle, S. R.; Fulton, R. R.

    2008-05-01

    High-resolution positron emission tomography (PET) imaging of conscious, unrestrained laboratory animals presents many challenges. Some form of motion correction will normally be necessary to avoid motion artefacts in the reconstruction. The aim of the current work was to develop and evaluate a motion tracking system potentially suitable for use in small animal PET. This system is based on the commercially available stereo-optical MicronTracker S60 which we have integrated with a Siemens Focus-220 microPET scanner. We present measured performance limits of the tracker and the technical details of our implementation, including calibration and synchronization of the system. A phantom study demonstrating motion tracking and correction was also performed. The system can be calibrated with sub-millimetre accuracy, and small lightweight markers can be constructed to provide accurate 3D motion data. A marked reduction in motion artefacts was demonstrated in the phantom study. The techniques and results described here represent a step towards a practical method for rigid-body motion correction in small animal PET. There is scope to achieve further improvements in the accuracy of synchronization and pose measurements in future work.

  10. Feasibility study: real-time 3-D ultrasound imaging of the brain.

    PubMed

    Smith, Stephen W; Chu, Kengyeh; Idriss, Salim F; Ivancevich, Nikolas M; Light, Edward D; Wolf, Patrick D

    2004-10-01

    We tested the feasibility of real-time, 3-D ultrasound (US) imaging in the brain. The 3-D scanner uses a matrix phased-array transducer of 512 transmit channels and 256 receive channels operating at 2.5 MHz with a 15-mm diameter footprint. The real-time system scans a 65 degrees pyramid, producing up to 30 volumetric scans per second, and features up to five image planes as well as 3-D rendering, 3-D pulsed-wave and color Doppler. In a human subject, the real-time 3-D scans produced simultaneous transcranial horizontal (axial), coronal and sagittal image planes and real-time volume-rendered images of the gross anatomy of the brain. In a transcranial sheep model, we obtained real-time 3-D color flow Doppler scans and perfusion images using bolus injection of contrast agents into the internal carotid artery.

  11. Real-Time Tracking of Knee Adduction Moment in Patients with Knee Osteoarthritis

    PubMed Central

    Kang, Sang Hoon; Lee, Song Joo; Zhang, Li-Qun

    2014-01-01

    Background: The external knee adduction moment (EKAM) is closely associated with the presence, progression, and severity of knee osteoarthritis (OA). However, there is a lack of convenient and practical methods to estimate and track the EKAM of patients with knee OA in real time for clinical evaluation and gait training, especially outside of gait laboratories. New Method: A real-time EKAM estimation method was developed and applied to track and investigate the EKAM and other knee moments during stepping on an elliptical trainer in both healthy subjects and a patient with knee OA. Results: Substantial changes were observed in the EKAM and other knee moments during stepping in the patient with knee OA. Comparison with Existing Method(s): This is the first study to develop and test the feasibility of a real-time EKAM tracking method in patients with knee OA using 3-D inverse dynamics. Conclusions: The study provides an accurate and practical method to evaluate in real time the critical EKAM associated with knee OA, which is expected to help diagnose and evaluate patients with knee OA and to provide them with real-time EKAM-feedback rehabilitation training. PMID:24361759
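For intuition about the quantity being tracked: the external knee moment is the cross product of the lever arm (knee centre to centre of pressure) with the ground reaction force, and the frontal-plane component of that moment is the adduction moment. The sketch below is a heavily simplified single-force-plate estimate with an assumed axis convention (x anterior-posterior, y medial-lateral, z vertical), not the paper's full real-time 3-D inverse dynamics.

```python
def frontal_plane_knee_moment(knee_center, cop, grf):
    """Simplified external knee moment: M = r x F, with r the lever arm
    from knee centre to centre of pressure and F the ground reaction
    force. Returns the x (anterior-posterior) component of M, i.e. the
    frontal-plane (adduction) moment under the assumed axis convention."""
    r = [c - k for c, k in zip(cop, knee_center)]
    return r[1] * grf[2] - r[2] * grf[1]   # x component of r x F
```

A full inverse-dynamics pipeline would add segment inertial terms and propagate joint reactions from the foot upward; this term nonetheless dominates the EKAM during stance.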

  12. Electrically tunable lens speeds up 3D orbital tracking

    PubMed Central

    Annibale, Paolo; Dvornikov, Alexander; Gratton, Enrico

    2015-01-01

    3D orbital particle tracking is a versatile and effective microscopy technique that allows following fast-moving fluorescent objects within living cells and reconstructing complex 3D shapes using laser scanning microscopes. We demonstrate notable improvements in the range, speed and accuracy of 3D orbital particle tracking by replacing the commonly used piezoelectric stage with an Electrically Tunable Lens (ETL), which eliminates mechanical movement of the objective lens. This allowed tracking and reconstructing the shape of structures extending 500 microns in the axial direction. Using the ETL, we tracked fluorescently labeled genomic loci within the nucleus of living cells at high speed, with an unprecedented temporal resolution of 8 ms, using a 1.42 NA oil-immersion objective. The presented technology is cost-effective and allows easy upgrade of scanning microscopes for fast 3D orbital tracking. PMID:26114037

  13. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper addresses the tracking of vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movement of the model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. These are the results of the corresponding 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line-segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm achieves desirable performance even under severe occlusion and clutter.
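The building block of the proposed similarity metric is the distance from an image point to a projected model edge treated as a finite segment, not an infinite line. A minimal 2-D version (names illustrative, not the paper's code) clamps the projection parameter so endpoints are handled correctly:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from image point p to the line segment ab, clamping the
    projection parameter t to [0, 1] so that points beyond either
    endpoint measure to the nearest endpoint."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    apx, apy = p[0] - a[0], p[1] - a[1]
    denom = abx * abx + aby * aby
    t = 0.0 if denom == 0.0 else max(0.0, min(1.0, (apx * abx + apy * aby) / denom))
    cx, cy = a[0] + t * abx, a[1] + t * aby   # closest point on the segment
    return math.hypot(p[0] - cx, p[1] - cy)
```

Summing such distances over sampled edge points, for a given pose hypothesis, yields a score that the pose refinement can minimise.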

  14. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    PubMed

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generating 3D biped character animation that can react to dynamic environments in real time. Our approach uses an inverted pendulum model to adjust, online, the desired motion trajectory derived from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers, whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method that computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and the dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transitions, and motion adaptation to different limb sizes can be generated without difficulty.
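The velocity-driven idea can be reduced to one line per joint: torque proportional to the error between desired and current angular velocity, so only one gain per joint needs tuning (unlike the stiffness/damping pair of a PD position controller). This is an illustrative reading of the abstract, with hypothetical names and gains:

```python
def velocity_driven_torques(omega_desired, omega_current, k_v):
    """Velocity-driven tracking: each joint torque is proportional to
    the error between the desired and current joint angular velocity,
    tau_i = k_v[i] * (omega_desired[i] - omega_current[i])."""
    return [k * (wd - w) for k, wd, w in zip(k_v, omega_desired, omega_current)]
```

The resulting torques would be fed to the dynamics simulator each step, with the desired velocities coming from the pendulum-adjusted reference trajectory.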

  15. The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate.

    PubMed

    Trache, Tudor; Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas

    2014-12-01

    We compared 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns, focusing on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The 3D global, regional and segmental strain values were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, area strain showed the best correlation with the LVEF. The numerical agreement of the 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area strain and standard 2D longitudinal strain. Regional wall-motion abnormalities were detected similarly by 2D and 3D speckle tracking, and 2D speckle tracking of triplane datasets showed results similar to those of conventional 2D datasets. 2D and 3D speckle tracking detect normal and pathological wall-motion patterns similarly well; limited image quality, however, has a significant impact on the agreement between 3D and 2D numerical strain values.

  16. A Review on Real-Time 3D Ultrasound Imaging Technology

    PubMed Central

    Zeng, Zhaozheng

    2017-01-01

    Real-time three-dimensional (3D) ultrasound (US) has attracted increasing attention in medical research because it provides interactive feedback that helps clinicians acquire high-quality images as well as timely spatial information about the scanned area, and it is therefore necessary in intraoperative ultrasound examinations. Many publications have addressed real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, however, a review of how to design an interactive system with appropriate processing algorithms has been missing, resulting in a lack of systematic understanding of the relevant technology. In this article, previous and recent work on designing a real-time or near real-time 3D ultrasound imaging system is reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail. PMID:28459067

  17. A Review on Real-Time 3D Ultrasound Imaging Technology.

    PubMed

    Huang, Qinghua; Zeng, Zhaozheng

    2017-01-01

    Real-time three-dimensional (3D) ultrasound (US) has attracted increasing attention in medical research because it provides interactive feedback that helps clinicians acquire high-quality images as well as timely spatial information about the scanned area, and it is therefore necessary in intraoperative ultrasound examinations. Many publications have addressed real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, however, a review of how to design an interactive system with appropriate processing algorithms has been missing, resulting in a lack of systematic understanding of the relevant technology. In this article, previous and recent work on designing a real-time or near real-time 3D ultrasound imaging system is reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail.

  18. Remote gaze tracking system for 3D environments.

    PubMed

    Liu, Congcong; Herrup, Karl; Shi, Bertram E

    2017-07-01

    Eye tracking systems are typically divided into two categories: remote and mobile. Remote systems, where the eye tracker is located near the object being viewed by the subject, have the advantage of being less intrusive, but are typically used for tracking gaze points on fixed two-dimensional (2D) computer screens. Mobile systems such as eye tracking glasses, where the eye tracker is attached to the subject, are more intrusive, but are better suited for cases where subjects are viewing objects in the three-dimensional (3D) environment. In this paper, we describe how remote gaze tracking systems developed for 2D computer screens can be used to track gaze points in a 3D environment. The system is non-intrusive and compensates for small head movements by the user, so that the head need not be stabilized by a chin rest or bite bar. The system maps the 3D gaze points of the user onto 2D images from a scene camera that is also located remotely from the subject. Measurement results from this system indicate that it is able to estimate gaze points in the scene camera image to within one degree over a wide range of head positions.
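The final mapping step (a 3-D gaze point expressed in the scene camera's frame landing on a 2-D image pixel) is an ordinary pinhole projection. The sketch below assumes an ideal, distortion-free camera with hypothetical intrinsics; the paper's system additionally has to estimate the gaze point and compensate head pose before this step.

```python
def project_to_scene(gaze_3d, focal_px, principal_pt):
    """Pinhole projection of a 3-D gaze point (camera frame, z forward,
    metres) into scene-camera pixel coordinates."""
    x, y, z = gaze_3d
    return (focal_px * x / z + principal_pt[0],
            focal_px * y / z + principal_pt[1])
```

At one degree of angular error and a focal length of 500 px, the corresponding pixel error in the scene image is roughly 500 · tan(1°) ≈ 9 px.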

  19. Real-time depth camera tracking with geometrically stable weight algorithm

    NASA Astrophysics Data System (ADS)

    Fu, Xingyin; Zhu, Feng; Qi, Feng; Wang, Mingming

    2017-03-01

    We present an approach for real-time camera tracking with a depth stream. Existing methods are prone to drift in scenes without sufficient geometric information. First, we propose a new weighting method for the iterative closest point (ICP) algorithm commonly used in real-time dense mapping and tracking systems. By detecting uncertainty in the pose and increasing the weight of points that constrain unstable transformations, our system achieves accurate and robust trajectory estimation. Our pipeline can be fully parallelized on the GPU and incorporated seamlessly into current real-time depth camera tracking systems. Second, we compare state-of-the-art weighting algorithms and propose a weight degradation algorithm matched to the measurement characteristics of a consumer depth camera. Third, we use Nvidia Kepler shuffle instructions during warp and block reductions to improve the efficiency of our system. Results on the public TUM RGB-D benchmark demonstrate that our camera tracking system achieves state-of-the-art results in both accuracy and efficiency.
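A toy rendering of the up-weighting idea, restricted to the translation part of ICP: the translation direction least constrained by the surface normals is the smallest eigenvector of N = Σ nnᵀ, and points whose normals align with that weak direction are the ones that stabilise the solve. This is an illustrative simplification under assumed names, not the paper's full 6-DoF GPU formulation.

```python
import math

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 linear system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def stability_weights(normals, eps=1e-3, iters=30):
    """Find the translation direction least constrained by the surface
    normals (smallest eigenvector of N = sum n n^T, via inverse
    iteration) and weight each point by how strongly its normal
    constrains that weak direction."""
    N = [[sum(n[i] * n[j] for n in normals) + (eps if i == j else 0.0)
          for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        v = solve3(N, v)                 # inverse iteration step
        s = math.sqrt(sum(x * x for x in v))
        v = [x / s for x in v]
    return [abs(sum(ni * vi for ni, vi in zip(n, v))) for n in normals]
```

In a corridor-like scene where most normals face two walls, the few floor/ceiling points receive high weight, which is exactly the behaviour that suppresses drift along the under-constrained axis.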

  20. Three-Dimensional Rotation, Twist and Torsion Analyses Using Real-Time 3D Speckle Tracking Imaging: Feasibility, Reproducibility, and Normal Ranges in Pediatric Population.

    PubMed

    Zhang, Li; Zhang, Jing; Han, Wei; Gao, Jun; He, Lin; Yang, Yali; Yin, Ping; Xie, Mingxing; Ge, Shuping

    2016-01-01

    The specific aim of this study was to evaluate the feasibility, reproducibility and maturational changes of LV rotation, twist and torsion variables measured by real-time 3D speckle-tracking echocardiography (RT3DSTE) in children. A prospective study was conducted in 347 consecutive healthy subjects (181 males/156 females, mean age 7.12 ± 5.3 years, range from birth to 18 years) using real-time 3D echocardiography (3DE). The LV rotation, twist and torsion measurements were made offline using TomTec software. Manual landmark selection and endocardial border editing were performed in 3 planes (apical 2-, 4-, and 3-chamber views), and semi-automated tracking yielded the LV rotation, twist and torsion measurements. LV rotation, twist and torsion analysis by RT3DSTE was feasible in 307 of 347 subjects (88.5%). There was no correlation between rotation or twist and age, height, weight, BSA or heart rate. However, there was a statistically significant but very modest correlation between LV torsion and age (R² = 0.036, P < 0.001). Normal ranges were defined for rotation and twist in this cohort, and for torsion for each age group. The intra-observer and inter-observer variabilities for apical and basal rotation, twist and torsion ranged from 7.3% ± 3.8% to 12.3% ± 8.8% and from 8.8% ± 4.6% to 15.7% ± 10.1%, respectively. We conclude that analysis of LV rotation, twist and torsion by RT3DSTE is feasible and reproducible in the pediatric population. There is no maturational change in rotation or twist, but torsion decreases with age in this cohort. Further refinement is warranted to validate the utility of this new methodology for more sensitive and quantitative evaluation of congenital and acquired heart diseases in children.
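For readers outside echocardiography, the conventional definitions connect the three variables: twist is the apical-minus-basal rotation, and torsion is twist normalised by LV length, which is why torsion (unlike twist) can show an age trend as the ventricle grows. The sketch uses these standard definitions, not the specific TomTec output.

```python
def lv_twist_and_torsion(apical_rotation_deg, basal_rotation_deg, lv_length_cm):
    """Twist (deg) = apical minus basal rotation (apex and base rotate
    in opposite directions); torsion (deg/cm) = twist normalised by the
    base-to-apex LV length."""
    twist = apical_rotation_deg - basal_rotation_deg
    return twist, twist / lv_length_cm
```

For example, 12° of apical rotation against -5° of basal rotation in an 8.5 cm ventricle gives 17° of twist and 2 deg/cm of torsion.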

  1. Passive Markers for Tracking Surgical Instruments in Real-Time 3-D Ultrasound Imaging

    PubMed Central

    Stoll, Jeffrey; Ren, Hongliang; Dupont, Pierre E.

    2013-01-01

    A family of passive echogenic markers is presented by which the position and orientation of a surgical instrument can be determined in a 3-D ultrasound volume, using simple image processing. Markers are attached near the distal end of the instrument so that they appear in the ultrasound volume along with the instrument tip. They are detected and measured within the ultrasound image, thus requiring no external tracking device. This approach facilitates imaging instruments and tissue simultaneously in ultrasound-guided interventions. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing. Design principles for marker shapes are presented that ensure imaging system and measurement uniqueness constraints are met. An error analysis is included that can be used to guide marker design and which also establishes a lower bound on measurement uncertainty. Finally, examples of marker measurement and tracking algorithms are presented along with experimental validation of the concepts. PMID:22042148

  2. MO-FG-BRD-04: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MR Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Low, D.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques. Learn about emerging techniques in the field of real-time tracking. Distinguish between the advantages and disadvantages of different tracking modalities. Understand the role of real-time tracking techniques within the clinical delivery workflow.

  3. MO-FG-BRD-02: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MV Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berbeco, R.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques. Learn about emerging techniques in the field of real-time tracking. Distinguish between the advantages and disadvantages of different tracking modalities. Understand the role of real-time tracking techniques within the clinical delivery workflow.

  4. MO-FG-BRD-03: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: EM Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keall, P.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques; learn about emerging techniques in the field of real-time tracking; distinguish between the advantages and disadvantages of different tracking modalities; understand the role of real-time tracking techniques within the clinical delivery workflow.

  5. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography

    PubMed Central

    Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.

    2016-01-01

    Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 × 15 mm achieved diffraction-limited imaging over a lateral tracking range of ±2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800

  6. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system.

    PubMed

    Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Because the patient is not immobilized, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D to 3D and 3D to orthogonal 2D slices registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5 s with an accuracy of 1.41 mm (r.m.s.) and 3.84 mm (max). The 3D to slices method yielded a success rate of 88.9% in 2.3 s with an accuracy of 1.37 mm (r.m.s.) and 4.3 mm (max).
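
Registration approaches like the one above rank candidate alignments with an image similarity measure. A minimal sketch of normalized cross-correlation between two volumes, assuming NumPy arrays; the function name `ncc` is illustrative and this is not the paper's actual implementation:

```python
import numpy as np

def ncc(fixed, moving):
    """Normalized cross-correlation between two same-shape volumes.

    Returns a score in [-1, 1]; 1 indicates a perfect linear match.
    Registration pipelines evaluate such a measure for many candidate
    transforms of the moving volume and keep the best-scoring one.
    """
    f = fixed.astype(float).ravel() - fixed.mean()
    m = moving.astype(float).ravel() - moving.mean()
    denom = np.linalg.norm(f) * np.linalg.norm(m)
    return float(f @ m / denom) if denom > 0 else 0.0
```

Because the score is invariant to linear intensity changes, it tolerates the gain differences common between ultrasound acquisitions; its limited capture range under large transforms is exactly why the paper adds a global search over a probe movement model.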

  7. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O. Serdar; Alatan, A. Aydın

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, with accurate pose estimates needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape index map data of a 3-D point cloud, significantly increases 2-D, as well as 3-D, tracking performance. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes.
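
Kalman-filter fusion of this kind alternates a motion-model prediction with a measurement update. A minimal sketch of one cycle for a constant-velocity 3D position filter, assuming NumPy; this is a deliberate simplification of the paper's EKF (a full pose filter also carries orientation), and the noise parameters `q` and `r` are illustrative:

```python
import numpy as np

def kf_step(x, P, z, dt=1.0 / 30, q=1e-3, r=1e-2):
    """One predict/update cycle of a constant-velocity Kalman filter.

    State x = [px, py, pz, vx, vy, vz]; P is its 6x6 covariance.
    z is a fused 3D position measurement (e.g. from vision + depth).
    """
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                    # constant-velocity dynamics
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
    Q, R = q * np.eye(6), r * np.eye(3)
    x = F @ x                                     # predict state
    P = F @ P @ F.T + Q                           # predict covariance
    y = z - H @ x                                 # innovation
    S = H @ P @ H.T + R                           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ y                                 # corrected state
    P = (np.eye(6) - K @ H) @ P                   # corrected covariance
    return x, P
```

Feeding the filter a stream of noisy positions smooths jitter while the velocity state lets it coast briefly through dropped measurements.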

  8. Real-time Awake Animal Motion Tracking System for SPECT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon

    Enhancements have been made in the development of a real-time optical pose measurement and tracking system that provides 3D position and orientation data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained small animals. Three optical cameras with infrared (IR) illumination view the head movements of an animal enclosed in a transparent burrow. Markers placed on the head provide landmark points for image segmentation. Strobed IR LEDs are synchronized to the cameras and illuminate the markers to prevent motion blur for each set of images. The system using the three cameras automatically segments the markers, detects missing data, rejects false reflections, performs trinocular marker correspondence, and calculates the 3D pose of the animal's head. Improvements have been made in methods for segmentation, tracking, and 3D calculation to give higher speed and more accurate measurements during a scan. The optical hardware has been installed within a Siemens MicroCAT II small animal scanner at Johns Hopkins without requiring functional changes to the scanner operation. The system has undergone testing using both phantoms and live mice and has been characterized in terms of speed, accuracy, robustness, and reliability. Experimental data showing these motion tracking results are given.

  9. Multisensor fusion for 3D target tracking using track-before-detect particle filter

    NASA Astrophysics Data System (ADS)

    Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.

    2015-05-01

    This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particle states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters, which are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
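
The projection-and-weighting step can be sketched as follows, assuming NumPy, pinhole projection matrices, and a toy image likelihood proportional to pixel intensity under the particle's projected location; all names and the likelihood model are illustrative, not the authors' implementation:

```python
import numpy as np

def projective_pf_update(particles, weights, images, cam_projections, sigma=2.0):
    """One measurement update of a projective particle filter.

    particles: (N, 3) array of 3D position hypotheses.
    images: list of 2D intensity images, one per sensor.
    cam_projections: list of 3x4 camera projection matrices.
    The joint likelihood is the product of per-sensor (marginal)
    likelihoods evaluated at each particle's projected pixel.
    """
    n = len(particles)
    homog = np.hstack([particles, np.ones((n, 1))])    # homogeneous coords
    log_like = np.zeros(n)
    for img, P in zip(images, cam_projections):
        uvw = homog @ P.T                              # project to image plane
        uv = uvw[:, :2] / uvw[:, 2:3]
        u = np.clip(uv[:, 0].astype(int), 0, img.shape[1] - 1)
        v = np.clip(uv[:, 1].astype(int), 0, img.shape[0] - 1)
        # brighter pixels under a particle -> higher marginal likelihood
        log_like += img[v, u] / (sigma ** 2)
    w = weights * np.exp(log_like - log_like.max())    # joint update, stable
    return w / w.sum()
```

Because weighting happens on raw pixel values rather than detections, dim targets that would fail a per-frame detection threshold can still accumulate evidence across sensors and frames, which is the essence of track-before-detect.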

  10. Towards real-time MRI-guided 3D localization of deforming targets for non-invasive cardiac radiosurgery

    NASA Astrophysics Data System (ADS)

    Ipsen, S.; Blanck, O.; Lowther, N. J.; Liney, G. P.; Rai, R.; Bode, F.; Dunst, J.; Schweikard, A.; Keall, P. J.

    2016-11-01

    Radiosurgery to the pulmonary vein antrum in the left atrium (LA) has recently been proposed for non-invasive treatment of atrial fibrillation (AF). Precise real-time target localization during treatment is necessary due to complex respiratory and cardiac motion and high radiation doses. To determine the 3D position of the LA for motion compensation during radiosurgery, a tracking method based on orthogonal real-time MRI planes was developed for AF treatments with an MRI-guided radiotherapy system. Four healthy volunteers underwent cardiac MRI of the LA. Contractile motion was quantified on 3D LA models derived from 4D scans with 10 phases acquired in end-exhalation. Three localization strategies were developed and tested retrospectively on 2D real-time scans (sagittal, temporal resolution 100 ms, free breathing). The best-performing method was then used to measure 3D target positions in 2D-2D orthogonal planes (sagittal-coronal, temporal resolution 200-252 ms, free breathing) in 20 configurations of a digital phantom and in the volunteer data. The 3D target localization accuracy was quantified in the phantom and qualitatively assessed in the real data. Mean cardiac contraction was  ⩽  3.9 mm between maximum dilation and contraction but anisotropic. A template matching approach with two distinct template phases and ECG-based selection yielded the highest 2D accuracy of 1.2 mm. 3D target localization showed a mean error of 3.2 mm in the customized digital phantoms. Our algorithms were successfully applied to the 2D-2D volunteer data in which we measured a mean 3D LA motion extent of 16.5 mm (SI), 5.8 mm (AP) and 3.1 mm (LR). Real-time target localization on orthogonal MRI planes was successfully implemented for highly deformable targets treated in cardiac radiosurgery. The developed method measures target shifts caused by respiration and cardiac contraction. If the detected motion can be compensated accordingly, an MRI-guided radiotherapy

  11. 3D imaging of particle tracks in Solid State Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2009-04-01

    Inhalation of radon gas (222Rn) and associated ionizing decay products is known to cause lung cancer in humans. In the U.K., it has been suggested that 3 to 5% of total lung cancer deaths can be linked to elevated radon concentrations in the home and/or workplace. Radon monitoring in buildings is therefore routinely undertaken in areas of known risk. Indeed, some organisations, such as the Radon Council in the UK and the Environmental Protection Agency in the USA, advocate a 'to test is best' policy. Radon gas occurs naturally, emanating from the decay of 238U in rock and soils. Its concentration can be measured using CR-39 plastic detectors, which conventionally are assessed by 2D image analysis of the surface; however, there can be some variation in outcomes/readings even in closely spaced detectors. A number of radon measurement methods are currently in use (for example, activated carbon and electrets) but the most widely used are CR-39 solid state nuclear track-etch detectors (SSNTDs). In this technique, heavily ionizing alpha particles leave tracks in the form of radiation damage (via interaction between alpha particles and the atoms making up the CR-39 polymer). 3D imaging of the tracks has the potential to provide information relating to angle and energy of alpha particles, but this could be time consuming. Here we describe a new method for rapid high resolution 3D imaging of SSNTDs. A 'LEXT' OLS3100 confocal laser scanning microscope was used in confocal mode to successfully obtain 3D image data on four CR-39 plastic detectors. 3D visualisation and image analysis enabled characterisation of track features. This method may provide a means of rapid and detailed 3D analysis of SSNTDs. Keywords: Radon; SSNTDs; confocal laser scanning microscope; 3D imaging; LEXT

  12. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how a user gazes in the 3D direction in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for NVidia 3D Vision® for use with desktop stereoscopic displays. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
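
One common geometric way to place a gaze point in virtual 3D space is to intersect the two eyes' gaze rays; since measured rays rarely intersect exactly, the midpoint of their shortest connecting segment is a standard estimate. A sketch of that idea, assuming NumPy (this illustrates the generic construction, not the paper's specific optimized method):

```python
import numpy as np

def gaze_point_3d(e_l, d_l, e_r, d_r):
    """Estimate the 3D gaze point from two gaze rays.

    e_l, e_r: left/right eye positions; d_l, d_r: gaze directions.
    Returns the midpoint of the shortest segment between the rays,
    or None when the rays are (near-)parallel and vergence is undefined.
    """
    u = d_l / np.linalg.norm(d_l)
    v = d_r / np.linalg.norm(d_r)
    w0 = e_l - e_r
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w0, v @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:            # parallel rays: no vergence point
        return None
    s = (b * e - c * d) / denom       # parameter along the left ray
    t = (a * e - b * d) / denom       # parameter along the right ray
    return 0.5 * ((e_l + s * u) + (e_r + t * v))
```

With exact vergence (both rays through one point) the midpoint recovers that point; with noisy eye-tracker directions it degrades gracefully, and the Z estimate is the most noise-sensitive, consistent with the larger Z error reported above.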

  13. TH-AB-202-05: BEST IN PHYSICS (JOINT IMAGING-THERAPY): First Online Ultrasound-Guided MLC Tracking for Real-Time Motion Compensation in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ipsen, S; Bruder, R; Schweikard, A

    Purpose: While MLC tracking has been successfully used for motion compensation of moving targets, current real-time target localization methods rely on correlation models with x-ray imaging or implanted electromagnetic transponders rather than direct target visualization. In contrast, ultrasound imaging yields volumetric data in real-time (4D) without ionizing radiation. We report the first results of online 4D ultrasound-guided MLC tracking in a phantom. Methods: A real-time tracking framework was installed on a 4D ultrasound station (Vivid7 dimension, GE) and used to detect a 2mm spherical lead marker inside a water tank. The volumetric frame rate was 21.3Hz (47ms). The marker was rigidly attached to a motion stage programmed to reproduce nine tumor trajectories (five prostate, four lung). The 3D marker position from ultrasound was used for real-time MLC aperture adaption. The tracking system latency was measured and compensated by prediction for lung trajectories. To measure geometric accuracy, anterior and lateral conformal fields with 10cm circular aperture were delivered for each trajectory. The tracking error was measured as the difference between marker position and MLC aperture in continuous portal imaging. For dosimetric evaluation, 358° VMAT fields were delivered to a biplanar diode array dosimeter using the same trajectories. Dose measurements with and without MLC tracking were compared to a static reference dose using a 3%/3 mm γ-test. Results: The tracking system latency was 170ms. The mean root-mean-square tracking error was 1.01mm (0.75mm prostate, 1.33mm lung). Tracking reduced the mean γ-failure rate from 13.9% to 4.6% for prostate and from 21.8% to 0.6% for lung with high-modulation VMAT plans and from 5% (prostate) and 18% (lung) to 0% with low modulation. Conclusion: Real-time ultrasound tracking was successfully integrated with MLC tracking for the first time and showed similar accuracy and latency as other methods while holding the

  14. MO-FG-BRD-01: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: Introduction and KV Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahimian, B.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques; learn about emerging techniques in the field of real-time tracking; distinguish between the advantages and disadvantages of different tracking modalities; understand the role of real-time tracking techniques within the clinical delivery workflow.

  15. A comparison of gantry-mounted x-ray-based real-time target tracking methods.

    PubMed

    Montanaro, Tim; Nguyen, Doan Trang; Keall, Paul J; Booth, Jeremy; Caillet, Vincent; Eade, Thomas; Haddad, Carol; Shieh, Chun-Chien

    2018-03-01

    Most modern radiotherapy machines are built with a 2D kV imaging system. Combining this imaging system with a 2D-3D inference method would allow for a ready-made option for real-time 3D tumor tracking. This work investigates and compares the accuracy of four existing 2D-3D inference methods using both motion traces inferred from external surrogates and measured internally from implanted beacons. Tumor motion data from 160 fractions (46 thoracic/abdominal patients) of Synchrony traces (inferred traces), and 28 fractions (7 lung patients) of Calypso traces (internal traces) from the LIGHT SABR trial (NCT02514512) were used in this study. The motion traces were used as the ground truth. The ground truth trajectories were used in silico to generate 2D positions projected on the kV detector. These 2D traces were then passed to the 2D-3D inference methods: interdimensional correlation, Gaussian probability density function (PDF), arbitrary-shape PDF, and the Kalman filter. The inferred 3D positions were compared with the ground truth to determine tracking errors. The relationships between tracking error and motion magnitude, interdimensional correlation, and breathing periodicity index (BPI) were also investigated. Larger tracking errors were observed from the Calypso traces, with RMS and 95th percentile 3D errors of 0.84-1.25 mm and 1.72-2.64 mm, compared to 0.45-0.68 mm and 0.74-1.13 mm from the Synchrony traces. The Gaussian PDF method was found to be the most accurate, followed by the Kalman filter, the interdimensional correlation method, and the arbitrary-shape PDF method. Tracking error was found to strongly and positively correlate with motion magnitude for both the Synchrony and Calypso traces and for all four methods. Interdimensional correlation and BPI were found to negatively correlate with tracking error only for the Synchrony traces. The Synchrony traces exhibited higher interdimensional correlation than the Calypso traces especially in the anterior
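
The RMS and 95th-percentile 3D errors quoted above are straightforward summaries of per-time-point position differences. A minimal sketch, assuming NumPy arrays of estimated and ground-truth positions (the function name `tracking_errors` is illustrative):

```python
import numpy as np

def tracking_errors(est, truth):
    """RMS and 95th-percentile 3D tracking error.

    est, truth: (N, 3) arrays of estimated and ground-truth target
    positions sampled over time; errors are Euclidean distances.
    """
    e = np.linalg.norm(est - truth, axis=1)          # per-sample 3D error
    rms = float(np.sqrt(np.mean(e ** 2)))
    p95 = float(np.percentile(e, 95))
    return rms, p95
```

Reporting the 95th percentile alongside the RMS is useful here because tracking errors grow with motion magnitude, so the tail of the error distribution matters clinically, not just its average.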

  16. SU-D-207-05: Real-Time Intrafractional Motion Tracking During VMAT Delivery Using a Conventional Elekta CBCT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Yang-Kyun; Sharp, Gregory C.; Gierga, David P.

    2015-06-15

    Purpose: Real-time kV projection streaming capability has become recently available for Elekta XVI version 5.0. This study aims to investigate the feasibility and accuracy of real-time fiducial marker tracking during CBCT acquisition with or without simultaneous VMAT delivery using a conventional Elekta linear accelerator. Methods: A client computer was connected to an on-board kV imaging system computer, and receives and processes projection images immediately after image acquisition. In-house marker tracking software based on FFT normalized cross-correlation was developed and installed in the client computer. Three gold fiducial markers with 3 mm length were implanted in a pelvis-shaped phantom with 36 cm width. The phantom was placed on a programmable motion platform oscillating in anterior-posterior and superior-inferior directions simultaneously. The marker motion was tracked in real-time for (1) a kV-only CBCT scan with treatment beam off and (2) a kV CBCT scan during a 6-MV VMAT delivery. The exposure parameters per projection were 120 kVp and 1.6 mAs. Tracking accuracy was assessed by comparing superior-inferior positions between the programmed and tracked trajectories. Results: The projection images were successfully transferred to the client computer at a frequency of about 5 Hz. In the kV-only scan, highly accurate marker tracking was achieved over the entire range of cone-beam projection angles (detection rate / tracking error were 100.0% / 0.6±0.5 mm). In the kV-VMAT scan, MV-scatter degraded image quality, particularly for lateral projections passing through the thickest part of the phantom (kV source angle ranging 70°-110° and 250°-290°), resulting in a reduced detection rate (90.5%). If the lateral projections are excluded, tracking performance was comparable to the kV-only case (detection rate / tracking error were 100.0% / 0.8±0.5 mm). Conclusion: Our phantom study demonstrated a promising result for real-time motion tracking using a
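
Template matching by cross-correlation is cheap to compute in the Fourier domain via the convolution theorem, which is what makes per-projection marker localization feasible at streaming rates. A minimal sketch assuming NumPy; zero-meaning the inputs approximates the normalized correlation used by the in-house software, and all names are illustrative:

```python
import numpy as np

def locate_marker(frame, template):
    """Find a template's position in a frame via FFT cross-correlation.

    frame, template: 2D arrays, template smaller than frame.
    Returns (row, col) of the best match's top-left corner.
    """
    f = frame - frame.mean()          # zero-mean to suppress background bias
    t = template - template.mean()
    F = np.fft.rfft2(f)
    # zero-pad the template to frame size; conj gives correlation,
    # not convolution, under the convolution theorem
    T = np.fft.rfft2(t, s=frame.shape)
    corr = np.fft.irfft2(F * np.conj(T), s=frame.shape)
    return np.unravel_index(np.argmax(corr), corr.shape)
```

The FFT route costs O(N log N) per projection regardless of template size, versus O(N·M) for direct sliding-window correlation, which matters at the roughly 5 Hz streaming rate reported above.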

  17. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  18. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm²). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  19. Real-time structured light intraoral 3D measurement pipeline

    NASA Astrophysics Data System (ADS)

    Gheorghe, Radu; Tchouprakov, Andrei; Sokolov, Roman

    2013-02-01

    Computer aided design and manufacturing (CAD/CAM) is increasingly becoming a standard feature and service provided to patients in dentist offices and denture manufacturing laboratories. Although the quality of the tools and data has slowly improved in recent years, due to various surface measurement challenges, practical, accurate, in vivo, real-time 3D high-quality data acquisition and processing still needs improvement. Advances in GPU computational power have allowed near real-time 3D intraoral in vivo scanning of a patient's teeth. In this paper we explore, from a real-time perspective, a hardware-software-GPU solution that addresses all of the aforementioned requirements. Moreover, we exemplify and quantify the hard and soft deadlines required by such a system and illustrate how they are supported in our implementation.

  20. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  1. Dense 3D Face Alignment from 2D Video for Real-Time Use

    PubMed Central

    Jeni, László A.; Cohn, Jeffrey F.; Kanade, Takeo

    2018-01-01

    To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person’s face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of landmarks and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction, extension to multi-view reconstruction, temporal integration for videos and 3D head-pose estimation. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org. PMID:29731533

  2. Large holographic 3D display for real-time computer-generated holography

    NASA Astrophysics Data System (ADS)

    Häussler, R.; Leister, N.; Stolle, H.

    2017-06-01

    SeeReal's concept of real-time holography is based on Sub-Hologram encoding and tracked Viewing Windows. This solution leads to significant reduction of pixel count and computation effort compared to conventional holography concepts. Since the first presentation of the concept, improved full-color holographic displays were built with dedicated components. The hologram is encoded on a spatial light modulator that is a sandwich of a phase-modulating and an amplitude-modulating liquid-crystal display and that modulates amplitude and phase of light. Further components are based on holographic optical elements for light collimation and focusing which are exposed in photopolymer films. Camera photographs show that only the depth region on which the focus of the camera lens is set is in focus while the other depth regions are out of focus. These photographs demonstrate that the 3D scene is reconstructed in depth and that accommodation of the eye lenses is supported. Hence, the display is a solution to overcome the accommodation-convergence conflict that is inherent for stereoscopic 3D displays. The main components, progress and results of the holographic display with 300 mm x 200 mm active area are described. Furthermore, photographs of holographic reconstructed 3D scenes are shown.

  3. Snapshot 3D tracking of insulin granules in live cells

    NASA Astrophysics Data System (ADS)

    Wang, Xiaolei; Huang, Xiang; Gdor, Itay; Daddysman, Matthew; Yi, Hannah; Selewa, Alan; Haunold, Theresa; Hereld, Mark; Scherer, Norbert F.

    2018-02-01

    Rapid and accurate volumetric imaging remains a challenge, yet has the potential to enhance understanding of cell function. We developed and used a multifocal microscope (MFM) for 3D snapshot imaging to allow 3D tracking of insulin granules labeled with mCherry in MIN6 cells. MFM employs a special diffractive optical element (DOE) to simultaneously image multiple focal planes. This simultaneous acquisition of information determines the 3D location of single objects at a speed limited only by the array detector's frame rate. We validated the accuracy of MFM imaging/tracking with fluorescence beads; the 3D positions and trajectories of single fluorescence beads can be determined accurately over a wide range of spatial and temporal scales. The 3D positions and trajectories of single insulin granules in a 3.2 µm deep volume were determined with image processing that combines 3D deconvolution, shift correction, and finally tracking using the Imaris software package. We find that the motion of the granules is superdiffusive, but less so in 3D than in 2D for cells grown on coverslip surfaces, suggesting an anisotropy in the cytoskeleton (e.g. microtubules and actin).
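
"Superdiffusive" means the mean-squared displacement grows faster than linearly in lag time, MSD(τ) ∝ τ^α with α > 1 (α = 1 is ordinary diffusion, α = 2 ballistic motion). A minimal sketch of estimating α from a trajectory by a log-log fit, assuming NumPy; this illustrates the standard analysis, not the Imaris-based pipeline used in the paper:

```python
import numpy as np

def msd_exponent(traj, max_lag=20):
    """Estimate the anomalous-diffusion exponent alpha of a trajectory.

    traj: (T, d) array of positions over time, d = 2 or 3.
    Computes the time-averaged MSD for lags 1..max_lag and fits
    MSD(tau) ~ tau**alpha on a log-log scale.
    """
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1))
                    for l in lags])
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)  # slope = alpha
    return float(alpha)
```

Comparing α computed from full 3D trajectories against α from their 2D (xy) projections is one way to expose the anisotropy the authors describe: projection discards the axial component, so the two exponents differ when motion is directionally biased.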

  4. Registration of clinical volumes to beams-eye-view images for real-time tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryant, Jonathan H.; Rottmann, Joerg; Lewis, John H.

    2014-12-15

Purpose: The authors combine the registration of 2D beam’s eye view (BEV) images and 3D planning computed tomography (CT) images, with relative, markerless tumor tracking to provide automatic absolute tracking of physician defined volumes such as the gross tumor volume (GTV). Methods: During treatment of lung SBRT cases, BEV images were continuously acquired with an electronic portal imaging device (EPID) operating in cine mode. For absolute registration of physician-defined volumes, an intensity based 2D/3D registration to the planning CT was performed using the end-of-exhale (EoE) phase of the four dimensional computed tomography (4DCT). The volume was converted from Hounsfield units into electron density by a calibration curve and digitally reconstructed radiographs (DRRs) were generated for each beam geometry. Using normalized cross correlation between the DRR and an EoE BEV image, the best in-plane rigid transformation was found. The transformation was applied to physician-defined contours in the planning CT, mapping them into the EPID image domain. A robust multiregion method of relative markerless lung tumor tracking quantified deviations from the EoE position. Results: The success of 2D/3D registration was demonstrated at the EoE breathing phase. By registering at this phase and then employing a separate technique for relative tracking, the authors are able to successfully track target volumes in the BEV images throughout the entire treatment delivery. Conclusions: Through the combination of EPID/4DCT registration and relative tracking, a necessary step toward the clinical implementation of BEV tracking has been completed. The knowledge of tumor volumes relative to the treatment field is important for future applications like real-time motion management, adaptive radiotherapy, and delivered dose calculations.
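The core of the absolute registration step is the normalized cross correlation (NCC) between the DRR and the EoE BEV image. A minimal sketch of an exhaustive integer-shift NCC search follows (hypothetical helper names; the paper's registration also optimizes rotation and works at sub-pixel resolution):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_inplane_shift(drr, bev, search=5):
    """Exhaustively try integer in-plane shifts of the DRR against the
    BEV image and return the (dy, dx) shift maximizing NCC."""
    best, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(drr, dy, axis=0), dx, axis=1)
            score = ncc(shifted, bev)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift, best
```

The NCC score is insensitive to global brightness and contrast differences between DRR and EPID imagery, which is why it is a common similarity metric for this kind of 2D/3D registration.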

  5. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.

    PubMed

    Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-08-23

Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems and beyond. In this paper, real-time visualization of 3D point cloud data from a VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. After preprocessing, fast ground segmentation, Euclidean clustering of outliers, View Feature Histogram (VFH) feature extraction, object model construction, and matching against a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. Experimental results on three kinds of scenes, under partial target occlusion and interference, different moving speeds and different trajectories, show that the Kalman filter has the advantage of high efficiency, while the adaptive particle filter has the advantages of high robustness and high precision. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
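The Kalman-filter branch of the comparison can be illustrated with a textbook constant-velocity filter over the measured 3D centroid of the sphere; the parameter values below are illustrative, not the authors' tuning:

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 3D constant-velocity Kalman filter smoothing a stream of
    noisy centroid measurements (illustrative sketch)."""

    def __init__(self, dt=0.1, q=1e-2, r=1e-1):
        self.x = np.zeros(6)                     # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)          # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
        self.Q = q * np.eye(6)                   # process noise
        self.R = r * np.eye(3)                   # measurement noise

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the measured centroid z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]
```

Each `step` call costs only a handful of small matrix products, which is the "high efficiency" the abstract attributes to the Kalman filter relative to the particle filter.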

  6. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor

    PubMed Central

    Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-01-01

Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems and beyond. In this paper, real-time visualization of 3D point cloud data from a VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. After preprocessing, fast ground segmentation, Euclidean clustering of outliers, View Feature Histogram (VFH) feature extraction, object model construction, and matching against a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. Experimental results on three kinds of scenes, under partial target occlusion and interference, different moving speeds and different trajectories, show that the Kalman filter has the advantage of high efficiency, while the adaptive particle filter has the advantages of high robustness and high precision. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields. PMID:28832520

  7. MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.

    PubMed

    Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram

    2015-11-01

We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state-of-the-art methods, which produce only point-based 3D models on the phone or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25 Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise-aware photoconsistency error metric. This is followed by efficient key-frame selection and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼1.5 cm error. We qualitatively compare to a state-of-the-art point-based mobile phone method, demonstrating an order of magnitude faster scanning times and fully connected surface models.
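The volumetric fusion step "akin to KinectFusion" maintains, per voxel, a running weighted average of truncated signed distances (TSDF) to the observed surface. A minimal one-ray sketch under simplified camera geometry (hypothetical helper, not the MobileFusion implementation):

```python
import numpy as np

def fuse_depth(tsdf, weight, voxel_z, depth, trunc=0.1):
    """Fuse one depth measurement into a column of voxels along a ray,
    using the running weighted TSDF average of KinectFusion-style
    pipelines (one ray, unit camera, for illustration)."""
    sdf = depth - voxel_z                       # signed distance from voxel to surface
    valid = sdf > -trunc                        # skip voxels far behind the surface
    d = np.clip(sdf, -trunc, trunc) / trunc     # truncate and normalise to [-1, 1]
    new_w = weight + valid                      # each valid observation adds weight 1
    fused = np.where(valid, (tsdf * weight + d) / np.maximum(new_w, 1.0), tsdf)
    return fused, new_w
```

The reconstructed surface is the zero crossing of the fused TSDF; repeated noisy depth measurements average out, which is what makes the fused model far cleaner than any single depth map.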

  8. A Biocompatible Near-Infrared 3D Tracking System*

    PubMed Central

    Decker, Ryan S.; Shademan, Azad; Opfermann, Justin D.; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel

    2017-01-01

    A fundamental challenge in soft-tissue surgery is that target tissue moves and deforms, becomes occluded by blood or other tissue, and is difficult to differentiate from surrounding tissue. We developed small biocompatible near-infrared fluorescent (NIRF) markers with a novel fused plenoptic and NIR camera tracking system, enabling 3D tracking of tools and target tissue while overcoming blood and tissue occlusion in the uncontrolled, rapidly changing surgical environment. In this work, we present the tracking system and marker design and compare tracking accuracies to standard optical tracking methods using robotic experiments. At speeds of 1 mm/s, we observe tracking accuracies of 1.61 mm, degrading only to 1.71 mm when the markers are covered in blood and tissue. PMID:28129145

  9. Accelerating volumetric cine MRI (VC-MRI) using undersampling for real-time 3D target localization/tracking in radiation therapy: a feasibility study

    NASA Astrophysics Data System (ADS)

    Harris, Wendy; Yin, Fang-Fang; Wang, Chunhao; Zhang, You; Cai, Jing; Ren, Lei

    2018-01-01

Purpose. To accelerate volumetric cine MRI (VC-MRI) using undersampled 2D-cine MRI to provide real-time 3D guidance for gating/target tracking in radiotherapy. Methods. 4D-MRI is acquired during patient simulation. One phase of the prior 4D-MRI is selected as the prior images, designated as MRIprior. The on-board VC-MRI at each time-step is considered a deformation of the MRIprior. The deformation field map is represented as a linear combination of the motion components extracted by principal component analysis from the prior 4D-MRI. The weighting coefficients of the motion components are solved by matching the corresponding 2D-slice of the VC-MRI with the on-board undersampled 2D-cine MRI acquired. Undersampled Cartesian and radial k-space acquisition strategies were investigated. The effects of k-space sampling percentage (SP) and distribution, tumor sizes and noise on the VC-MRI estimation were studied. The VC-MRI estimation was evaluated using XCAT simulation of lung cancer patients and data from liver cancer patients. Volume percent difference (VPD) and Center of Mass Shift (COMS) of the tumor volumes and tumor tracking errors were calculated. Results. For XCAT, VPD/COMS were 11.93 ± 2.37%/0.90 ± 0.27 mm and 11.53 ± 1.47%/0.85 ± 0.20 mm among all scenarios with Cartesian sampling (SP = 10%) and radial sampling (21 spokes, SP = 5.2%), respectively. When tumor size decreased, a higher sampling rate achieved more accurate VC-MRI than a lower sampling rate. VC-MRI was robust against noise levels up to SNR = 20. For patient data, the tumor tracking errors in the superior-inferior, anterior-posterior and lateral (LAT) directions were 0.46 ± 0.20 mm, 0.56 ± 0.17 mm and 0.23 ± 0.16 mm, respectively, for Cartesian-based sampling with SP = 20% and 0.60 ± 0.19 mm, 0.56 ± 0.22 mm and 0.42 ± 0.15 mm, respectively, for
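The key estimation step, solving the weighting coefficients of the PCA motion components so that one slice of the deformed prior matches the acquired 2D-cine image, reduces (once linearized) to a small regularized least-squares problem. A sketch under that linearization (the paper solves the matching iteratively on undersampled k-space data):

```python
import numpy as np

def solve_motion_weights(slice_basis, slice_diff, reg=1e-3):
    """Solve PCA motion-component weights by regularized least squares
    (linearized sketch of the slice-matching step; names are illustrative).

    slice_basis : (n_pixels, n_components) linearized effect of each motion
                  component on the matched 2D slice
    slice_diff  : (n_pixels,) acquired slice minus the prior's slice
    """
    A, y = slice_basis, slice_diff
    # Tikhonov-regularized normal equations keep the small solve well conditioned
    return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y)
```

Because only a handful of principal motion components are kept, the solve involves a tiny matrix regardless of image size, which is what makes per-time-step estimation feasible in real time.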

  10. Real-time 3-D ultrafast ultrasound quasi-static elastography in vivo

    PubMed Central

    Papadacci, Clement; Bunting, Ethan A.; Konofagou, Elisa E.

    2017-01-01

Ultrasound elastography, a technique used to assess the mechanical properties of soft tissue, is of major interest in the detection of breast cancer, as tumors are stiffer than the surrounding tissue. Techniques such as quasi-static ultrasound elastography have been developed to assess the strain distribution in soft tissues in two dimensions using quasi-static compression. However, because tumors can have very heterogeneous shapes, a three-dimensional approach is necessary to measure tumor volume accurately and to remove operator dependency. To address this issue, several 3-D quasi-static elastographic approaches have been proposed. However, all of these approaches suffer from long acquisition times, making real-time operation impossible and creating artifacts. The long acquisition time comes both from the use of focused ultrasound emissions and from the fact that the volume is built as a stack of two-dimensional images acquired by mechanically translating an ultrasonic array. Acquiring volumes at high volume rates is thus crucial for real-time operation with a simple freehand compression and for avoiding the signal decorrelation caused by hand motion or natural motion such as respiration. In this study we developed, for the first time, a 3-D ultrafast ultrasound quasi-static elastography method to estimate the 3-D axial strain distribution in vivo in real time. Acquisitions were performed with a 2-D matrix array probe of 256 elements (16-by-16). 100 plane waves were emitted at a volume rate of 100 volumes/s during a continuous motorized compression. 3-D B-mode volumes and 3-D cumulative axial strain volumes were estimated in a two-layer gelatin phantom with different stiffnesses, in a stiff inclusion embedded in a soft gelatin phantom, in a soft inclusion embedded in a stiff gelatin phantom, and in an ex vivo canine liver before and after high-intensity focused ultrasound (HIFU) ablation. In each case, we were able to
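Quasi-static elastography estimates axial strain as the spatial gradient of the tissue displacement between pre- and post-compression acquisitions. A minimal 1-D sketch using windowed cross correlation and integer-sample displacements (real pipelines operate on 3-D RF volumes with sub-sample interpolation):

```python
import numpy as np

def axial_strain(pre, post, window=16, search=4):
    """Estimate axial displacement between pre- and post-compression RF
    lines by windowed cross correlation, then differentiate to obtain
    strain (1-D illustration of the quasi-static approach)."""
    n = len(pre)
    centers, disp = [], []
    for start in range(0, n - window - search, window):
        ref = pre[start:start + window]
        best, best_d = -np.inf, 0
        for d in range(search + 1):                 # candidate displacements
            seg = post[start + d:start + d + window]
            score = float(np.dot(ref - ref.mean(), seg - seg.mean()))
            if score > best:
                best, best_d = score, d
        centers.append(start + window // 2)
        disp.append(best_d)
    centers = np.array(centers, float)
    # strain is the depth derivative of the displacement profile
    return centers, np.gradient(np.array(disp, float), centers)
```

Stiff regions compress less, so they show up as low-strain areas in the resulting map; the paper computes the analogous cumulative axial strain over whole volumes at 100 volumes/s.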

  11. MO-FG-BRD-00: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques; learn about emerging techniques in the field of real-time tracking; distinguish between the advantages and disadvantages of different tracking modalities; understand the role of real-time tracking techniques within the clinical delivery work-flow.

  12. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  13. On the dynamics of jellyfish locomotion via 3D particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Piper, Matthew; Kim, Jin-Tae; Chamorro, Leonardo P.

    2016-11-01

    The dynamics of jellyfish (Aurelia aurita) locomotion is experimentally studied via 3D particle tracking velocimetry. 3D locations of the bell tip are tracked over 1.5 cycles to describe the jellyfish path. Multiple positions of the jellyfish bell margin are initially tracked in 2D from four independent planes and individually projected in 3D based on the jellyfish path and geometrical properties of the setup. A cubic spline interpolation and the exponentially weighted moving average are used to estimate derived quantities, including velocity and acceleration of the jellyfish locomotion. We will discuss distinctive features of the jellyfish 3D motion at various swimming phases, and will provide insight on the 3D contraction and relaxation in terms of the locomotion, the steadiness of the bell margin eccentricity, and local Reynolds number based on the instantaneous mean diameter of the bell.
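Differentiating noisy tracked positions amplifies noise, which is why the authors smooth before computing velocity and acceleration. The sketch below keeps the exponentially weighted moving average but replaces the cubic-spline derivative with plain finite differences to stay dependency-free:

```python
import numpy as np

def ewma(x, alpha=0.5):
    """Exponentially weighted moving average along the first axis."""
    out = np.empty_like(np.asarray(x, float))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i - 1]
    return out

def kinematics(t, pos, alpha=0.5):
    """Smooth a tracked 3D trajectory with an EWMA and differentiate it
    to get velocity and acceleration (finite-difference sketch; the
    paper combines a cubic spline with the EWMA)."""
    p = ewma(np.asarray(pos, float), alpha)
    v = np.gradient(p, t, axis=0)        # first derivative: velocity
    a = np.gradient(v, t, axis=0)        # second derivative: acceleration
    return p, v, a
```

The EWMA introduces a small, roughly constant lag, so slopes and curvatures of the smoothed path remain faithful after the initial transient.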

  14. Integrated bronchoscopic video tracking and 3D CT registration for virtual bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Helferty, James P.; Padfield, Dirk R.

    2003-05-01

    Lung cancer assessment involves an initial evaluation of 3D CT image data followed by interventional bronchoscopy. The physician, with only a mental image inferred from the 3D CT data, must guide the bronchoscope through the bronchial tree to sites of interest. Unfortunately, this procedure depends heavily on the physician's ability to mentally reconstruct the 3D position of the bronchoscope within the airways. In order to assist physicians in performing biopsies of interest, we have developed a method that integrates live bronchoscopic video tracking and 3D CT registration. The proposed method is integrated into a system we have been devising for virtual-bronchoscopic analysis and guidance for lung-cancer assessment. Previously, the system relied on a method that only used registration of the live bronchoscopic video to corresponding virtual endoluminal views derived from the 3D CT data. This procedure only performs the registration at manually selected sites; it does not draw upon the motion information inherent in the bronchoscopic video. Further, the registration procedure is slow. The proposed method has the following advantages: (1) it tracks the 3D motion of the bronchoscope using the bronchoscopic video; (2) it uses the tracked 3D trajectory of the bronchoscope to assist in locating sites in the 3D CT "virtual world" to perform the registration. In addition, the method incorporates techniques to: (1) detect and exclude corrupted video frames (to help make the video tracking more robust); (2) accelerate the computation of the many 3D virtual endoluminal renderings (thus, speeding up the registration process). We have tested the integrated tracking-registration method on a human airway-tree phantom and on real human data.

  15. CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor

    2004-05-01

Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and in conjunction with other modalities like CT or MRI. Three-dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, the current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo, exist today. Although these systems have been found useful in real clinical settings, it is difficult to augment their functionality and integrate them into versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image-guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system along with other functionalities (e.g. dual-view visualization, registration, real-time tracking, segmentation) to rapidly create their medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is being fully integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex vivo animal models.

  16. 3D Printed "Earable" Smart Devices for Real-Time Detection of Core Body Temperature.

    PubMed

    Ota, Hiroki; Chao, Minghan; Gao, Yuji; Wu, Eric; Tai, Li-Chia; Chen, Kevin; Matsuoka, Yasutomo; Iwai, Kosuke; Fahad, Hossain M; Gao, Wei; Nyein, Hnin Yin Yin; Lin, Liwei; Javey, Ali

    2017-07-28

    Real-time detection of basic physiological parameters such as blood pressure and heart rate is an important target in wearable smart devices for healthcare. Among these, the core body temperature is one of the most important basic medical indicators of fever, insomnia, fatigue, metabolic functionality, and depression. However, traditional wearable temperature sensors are based upon the measurement of skin temperature, which can vary dramatically from the true core body temperature. Here, we demonstrate a three-dimensional (3D) printed wearable "earable" smart device that is designed to be worn on the ear to track core body temperature from the tympanic membrane (i.e., ear drum) based on an infrared sensor. The device is fully integrated with data processing circuits and a wireless module for standalone functionality. Using this smart earable device, we demonstrate that the core body temperature can be accurately monitored regardless of the environment and activity of the user. In addition, a microphone and actuator are also integrated so that the device can also function as a bone conduction hearing aid. Using 3D printing as the fabrication method enables the device to be customized for the wearer for more personalized healthcare. This smart device provides an important advance in realizing personalized health care by enabling real-time monitoring of one of the most important medical parameters, core body temperature, employed in preliminary medical screening tests.

  17. A real-time moment-tensor inversion system (GRiD-MT-3D) using 3-D Green's functions

    NASA Astrophysics Data System (ADS)

    Nagao, A.; Furumura, T.; Tsuruoka, H.

    2016-12-01

We developed a real-time moment-tensor inversion system using 3-D Green's functions (GRiD-MT-3D) by improving the current system (GRiD-MT; Tsuruoka et al., 2009), which uses 1-D Green's functions for periods longer than 20 s. Our moment-tensor inversion is applied to the real-time monitoring of earthquakes occurring beneath the Kanto basin area. The basin, which consists of thick sediment layers, lies on the complex subduction of the Philippine Sea Plate and the Pacific Plate, which can significantly affect seismic wave propagation. We compute 3-D Green's functions using finite-difference-method (FDM) simulations with a 3-D velocity model based on the Japan Integrated Velocity Structure Model (Koketsu et al., 2012) that includes the crust, mantle, and subducting plates. The 3-D FDM simulations are computed over a volume of 468 km by 432 km by 120 km in the EW, NS, and depth directions, respectively, discretized into 0.25 km grids. Given that the minimum S-wave velocity of the sedimentary layer is 0.5 km/s, the simulations can compute seismograms up to 0.5 Hz. We calculate Green's functions between 24,700 sources, distributed every 0.1° in the horizontal direction and every 9 km in depth, and 13 F-net stations. To compute this large number of Green's functions, we used the EIC parallel computer of ERI. The reciprocity theorem, which switches the source and station positions, is used to reduce the total computation cost. It took 156 hours to compute all the Green's functions. Results show that at long periods (T>15 s), only small differences are observed between the 3-D and 1-D Green's functions, as indicated by high correlation coefficients of 0.9 between the waveforms. However, at shorter periods (T<10 s), the differences become larger and the correlation coefficients drop to 0.5. The 3-D heterogeneous structure especially affects the Green's functions for ray paths that cross complex geological

  18. Improved image guidance technique for minimally invasive mitral valve repair using real-time tracked 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry

    2016-03-01

In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. However, these techniques often complicate interventions by requiring additional steps to manually define and initialize the virtual models. Furthermore, overlaying virtual elements onto real-time image data can also obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics achieved performance comparable to previous work on augmented virtuality techniques, and a considerable improvement over standard-of-care ultrasound guidance.

  19. Ultra-Wideband Time-Difference-of-Arrival High Resolution 3D Proximity Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Dekome, Kent; Dusl, John

    2010-01-01

This paper describes a research and development effort for a prototype ultra-wideband (UWB) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being studied for use in tracking of lunar/Mars rovers and astronauts during early exploration missions when satellite navigation systems are not available. UWB impulse radio (UWB-IR) technology is exploited in the design and implementation of the prototype location and tracking system. A three-dimensional (3D) proximity tracking prototype design using commercially available UWB products is proposed to implement the Time-Difference-Of-Arrival (TDOA) tracking methodology in this research effort. The TDOA tracking algorithm is utilized for location estimation in the prototype system, not only to exploit the precise time resolution possible with UWB signals, but also to eliminate the need for synchronization between the transmitter and the receiver. Simulations show that the TDOA algorithm can achieve fine tracking resolution with low-noise TDOA estimates for close-in tracking. Field tests demonstrated that this prototype UWB TDOA High Resolution 3D Proximity Tracking System is feasible for providing positioning-awareness information in a 3D space to a robotic control system. This 3D tracking system was developed for a robotic control system in a facility called the "Moonyard" at Honeywell Defense & System in Arizona under a Space Act Agreement.
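Given the measured TDOAs (equivalently, range differences relative to a reference receiver), the transmitter position is the solution of a small nonlinear least-squares problem. A Gauss-Newton sketch is below; the abstract does not describe NASA's actual estimator, so this is illustrative only:

```python
import numpy as np

def tdoa_locate(anchors, tdoa_m, x0=None, iters=20):
    """Gauss-Newton solver for TDOA localization. tdoa_m[i] is the
    measured range difference |x - a_{i+1}| - |x - a_0| in metres
    (time difference already multiplied by the speed of light)."""
    a = np.asarray(anchors, float)
    x = a.mean(axis=0) if x0 is None else np.asarray(x0, float)
    for _ in range(iters):
        d = np.linalg.norm(a - x, axis=1)                 # range to each anchor
        res = (d[1:] - d[0]) - tdoa_m                     # residual per TDOA
        # Jacobian: unit vector to anchor i minus unit vector to anchor 0
        J = (x - a[1:]) / d[1:, None] - (x - a[0]) / d[0]
        x = x - np.linalg.lstsq(J, res, rcond=None)[0]    # Gauss-Newton step
    return x
```

Because only range *differences* appear, the transmitter clock drops out of the equations entirely, which is the synchronization-free property the abstract highlights.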

  20. Real-time 3-D space numerical shake prediction for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, these methods assume that the wave propagates on the 2-D surface of the earth. In fact, since the seismic wave propagates in the 3-D sphere of the earth, 2-D modeling of wave propagation results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate real-time ground motion precisely, and that overprediction is alleviated when using the 3-D space model.

  1. The evaluation of single-view and multi-view fusion 3D echocardiography using image-driven segmentation and tracking.

    PubMed

    Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary

    2011-08-01

Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor quality of acquired images, which usually contain missing anatomical information, speckle noise, and a limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced, in which multiple conventional single-view RT3DE images are acquired with small probe movements and fused together after alignment. This multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates the image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit the image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Real-time 3-D contrast-enhanced transcranial ultrasound and aberration correction.

    PubMed

    Ivancevich, Nikolas M; Pinton, Gianmarco F; Nicoletto, Heather A; Bennett, Ellen; Laskowitz, Daniel T; Smith, Stephen W

    2008-09-01

    Contrast-enhanced (CE) transcranial ultrasound (US) and reconstructed 3-D transcranial ultrasound have shown advantages over traditional methods in a variety of cerebrovascular diseases. We present the results from a novel ultrasound technique, namely real-time 3-D contrast-enhanced transcranial ultrasound. Using real-time 3-D (RT3D) ultrasound and microbubble contrast agent, we scanned 17 healthy volunteers via a single temporal window and nine via the suboccipital window and report our detection rates for the major cerebral vessels. In 71% of subjects, both of our observers identified the ipsilateral circle of Willis from the temporal window, and in 59% we imaged the entire circle of Willis. From the suboccipital window, both observers detected the entire vertebrobasilar circulation in 22% of subjects, and in 44%, the basilar artery. After performing phase aberration correction on one subject, we were able to increase the diagnostic value of the scan, detecting a vessel not present in the uncorrected scan. These preliminary results suggest that RT3D CE transcranial US and RT3D CE transcranial US with phase aberration correction have the potential to greatly impact the field of neurosonology.

  3. Real-Time 3D Contrast-Enhanced Transcranial Ultrasound and Aberration Correction

    PubMed Central

    Ivancevich, Nikolas M.; Pinton, Gianmarco F.; Nicoletto, Heather A.; Bennett, Ellen; Laskowitz, Daniel T.; Smith, Stephen W.

    2008-01-01

    Contrast-enhanced (CE) transcranial ultrasound (US) and reconstructed 3D transcranial ultrasound have shown advantages over traditional methods in a variety of cerebrovascular diseases. We present the results from a novel ultrasound technique, namely real-time 3D contrast-enhanced transcranial ultrasound. Using real-time 3D (RT3D) ultrasound and micro-bubble contrast agent, we scanned 17 healthy volunteers via a single temporal window and 9 via the sub-occipital window and report our detection rates for the major cerebral vessels. In 71% of subjects, both of our observers identified the ipsilateral circle of Willis from the temporal window, and in 59% we imaged the entire circle of Willis. From the sub-occipital window, both observers detected the entire vertebrobasilar circulation in 22% of subjects, and in 44% the basilar artery. After performing phase aberration correction on one subject, we were able to increase the diagnostic value of the scan, detecting a vessel not present in the uncorrected scan. These preliminary results suggest that RT3D CE transcranial US and RT3D CE transcranial US with phase aberration correction have the potential to greatly impact the field of neurosonology. PMID:18395321

  4. Probing the benefits of real-time tracking during cancer care

    PubMed Central

    Patel, Rupa A.; Klasnja, Predrag; Hartzler, Andrea; Unruh, Kenton T.; Pratt, Wanda

    2012-01-01

    People with cancer experience many unanticipated symptoms and struggle to communicate them to clinicians. Although researchers have developed patient-reported outcome (PRO) tools to address this problem, such tools capture retrospective data intended for clinicians to review. In contrast, real-time tracking tools with visible results for patients could improve health outcomes and communication with clinicians, while also enhancing patients’ symptom management. To understand potential benefits of such tools, we studied the tracking behaviors of 25 women with breast cancer. We provided 10 of these participants with a real-time tracking tool that served as a “technology probe” to uncover behaviors and benefits from voluntary use. Our findings showed that while patients’ tracking behaviors without a tool were fragmented and sporadic, these behaviors with a tool were more consistent. Participants also used tracked data to see patterns among symptoms, feel psychosocial comfort, and improve symptom communication with clinicians. We conclude with design implications for future real-time tracking tools. PMID:23304413

  5. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in the x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which should facilitate the analysis of dynamic motion during facial animations. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
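The accuracy figures above reduce to per-landmark Euclidean distances between the manually digitised and automatically tracked coordinate sets. A minimal sketch of that comparison (array names and shapes are illustrative, not taken from the paper):

```python
import numpy as np

def landmark_errors(manual, tracked):
    """Euclidean distance between manually digitised and automatically
    tracked 3D landmarks, given as (N, 3) arrays of x, y, z coordinates
    in mm. Returns the mean and maximum per-landmark distance."""
    d = np.linalg.norm(manual - tracked, axis=1)
    return d.mean(), d.max()
```

For example, a tracked set offset from the manual set by 0.3 mm in x and 0.4 mm in z would give a mean distance of 0.5 mm for every landmark.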

  6. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly, because 3D video data contains depth as well as color information. Our goal is to explore a different region of the 3D compression design space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~13 ms per 3D video frame).
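The first scheme described above can be sketched in a few lines. This is a hypothetical reconstruction, not the TEEVE code: color depth is reduced by dropping low-order bits of each channel, then the quantized color and the depth map are packed together and deflated with zlib.

```python
import zlib
import numpy as np

def quantize_color(frame, bits=4):
    """Color reduction: keep only the top `bits` bits of each 8-bit channel."""
    shift = 8 - bits
    return (frame >> shift) << shift

def compress_3d_frame(color, depth, bits=4):
    """Sketch of scheme 1: quantize the color image, then zlib-compress
    the color and depth (uint16) buffers together as one payload."""
    q = quantize_color(color, bits)
    payload = q.tobytes() + depth.tobytes()
    return zlib.compress(payload), q.shape, depth.shape

def decompress_3d_frame(blob, color_shape, depth_shape):
    """Inverse of compress_3d_frame: inflate and split the two buffers."""
    raw = zlib.decompress(blob)
    n_color = int(np.prod(color_shape))            # uint8 -> 1 byte/sample
    color = np.frombuffer(raw[:n_color], dtype=np.uint8).reshape(color_shape)
    depth = np.frombuffer(raw[n_color:], dtype=np.uint16).reshape(depth_shape)
    return color, depth
```

The decompressed color equals the quantized (not the original) color, which is exactly the lossy/lossless split the scheme describes: lossy color reduction, lossless deflate.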

  7. Feature point based 3D tracking of multiple fish from multi-view images

    PubMed Central

    Qian, Zhi-Ming

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly. PMID:28665966

  8. Feature point based 3D tracking of multiple fish from multi-view images.

    PubMed

    Qian, Zhi-Ming; Chen, Yan Qiu

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly.

  9. SU-G-JeP3-08: Robotic System for Ultrasound Tracking in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhlemann, I; Graduate School for Computing in Medicine and Life Sciences, University of Luebeck; Jauer, P

Purpose: For safe and accurate real-time tracking of tumors for IGRT using 4D ultrasound, it is necessary to make use of novel, high-end force-sensitive lightweight robots designed for human-machine interaction. Such a robot will be integrated into an existing robotized ultrasound system for non-invasive 4D live tracking, using a newly developed real-time control and communication framework. Methods: The new KUKA LBR iiwa robot is used for robotized ultrasound real-time tumor tracking. Besides more precise probe contact pressure detection, this robot provides an additional 7th link, enhancing the dexterity of the kinematics and the mounted transducer. Several integrated, certified safety features create a safe environment for the patients during treatment. However, to remotely control the robot for the ultrasound application, a real-time control and communication framework had to be developed. Based on a client/server concept, client-side control commands are received and processed by a central server unit and are implemented by a client module running directly on the robot's controller. Several special functionalities for robotized ultrasound applications are integrated, and the robot can now be used for real-time control of the image quality by adjusting the transducer position and contact pressure. The framework was evaluated with respect to overall real-time capability for communication and processing of three different standard commands. Results: Due to inherent, certified safety modules, the new robot ensures a safe environment for patients during tumor tracking. Furthermore, the developed framework shows overall real-time capability with a maximum average latency of 3.6 ms (minimum 2.5 ms; 5000 trials). Conclusion: The novel KUKA LBR iiwa robot will advance the current robotized ultrasound tracking system with important features. With the developed framework, it is now possible to remotely control this robot and use it for robotized ultrasound tracking.

  10. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, such as fluid dynamics in microfluidic devices, bacterial taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when they are close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate exceeding 94% with only 1% false detections.
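The circle Hough transform underlying such detectors is easy to sketch for a single known radius: every edge pixel votes for all candidate centers one radius away from it, and peaks in the vote accumulator mark ring centers. This is only the textbook kernel, not the paper's algorithm, which additionally handles unknown radii, ring clusters and noise:

```python
import numpy as np

def hough_circle_centers(edge_points, radius, shape, n_angles=64):
    """Minimal circle Hough transform for a known radius. Each edge
    point (y, x) lies `radius` away from the true center, so it votes
    along a circle of that radius around itself; the accumulator peak
    is the detected ring center."""
    acc = np.zeros(shape, dtype=np.int32)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)      # accumulate votes
    peak = np.unravel_index(np.argmax(acc), shape)
    return peak, acc
```

Feeding in a synthetic ring of edge points recovers its center to within a pixel, which is the basic operation the paper speeds up and hardens against occlusion and overlap.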

  11. Real-time optical holographic tracking of multiple objects

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Liu, Hua-Kuang

    1989-01-01

    A coherent optical correlation technique for real-time simultaneous tracking of several different objects making independent movements is described, and experimental results are presented. An evaluation of this system compared with digital computing systems is made. The real-time processing capability is obtained through the use of a liquid crystal television spatial light modulator and a dichromated gelatin multifocus hololens. A coded reference beam is utilized in the separation of the output correlation plane associated with each input target so that independent tracking can be achieved.

  12. Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.

    2007-03-01

    In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.

  13. Real-time 3D transesophageal echocardiography for the evaluation of rheumatic mitral stenosis.

    PubMed

    Schlosshan, Dominik; Aggarwal, Gunjan; Mathur, Gita; Allan, Roger; Cranney, Greg

    2011-06-01

The aims of this study were: 1) to assess the feasibility and reliability of performing mitral valve area (MVA) measurements in patients with rheumatic mitral valve stenosis (RhMS) using real-time 3-dimensional transesophageal echocardiography (3DTEE) planimetry (MVA(3D)); 2) to compare MVA(3D) with conventional techniques: 2-dimensional (2D) planimetry (MVA(2D)), pressure half-time (MVA(PHT)), and continuity equation (MVA(CON)); and 3) to evaluate the degree of mitral commissural fusion. 3DTEE is a novel technique that provides excellent image quality of the mitral valve. Real-time 3DTEE is a relatively recent enhancement of this technique. To date, there have been no feasibility studies investigating the utility of real-time 3DTEE in the assessment of RhMS. Forty-three consecutive patients referred for echocardiographic evaluation of RhMS and suitability for percutaneous mitral valvuloplasty were assessed using 2D transthoracic echocardiography and real-time 3DTEE. MVA(3D), MVA(2D), MVA(PHT), MVA(CON), and the degree of commissural fusion were evaluated. MVA(3D) assessment was possible in 41 patients (95%). MVA(3D) measurements were significantly lower compared with MVA(2D) (mean difference: -0.16 ± 0.22; n=25, p<0.005) and MVA(PHT) (mean difference: -0.23 ± 0.28 cm(2); n=39, p<0.0001) but marginally greater than MVA(CON) (mean difference: 0.05 ± 0.22 cm(2); n=24, p=0.82). MVA(3D) demonstrated best agreement with MVA(CON) (intraclass correlation coefficient [ICC] 0.83), followed by MVA(2D) (ICC 0.79) and MVA(PHT) (ICC 0.58). Interobserver and intraobserver agreement was excellent for MVA(3D), with ICCs of 0.93 and 0.96, respectively. Excellent commissural evaluation was possible in all patients using 3DTEE. Compared with 3DTEE, underestimation of the degree of commissural fusion using 2D transthoracic echocardiography was observed in 19%, with weak agreement between methods (κ<0.4). MVA planimetry is feasible in the majority of patients with RhMS using 3DTEE.

  14. LayTracks3D: A new approach for meshing general solids using medial axis transform

    DOE PAGES

    Quadros, William Roshan

    2015-08-22

This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near-cube-shaped elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  15. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive real-time 3D graphical display. In a program, 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array-processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL and runs on Windows, Linux, and Macintosh.
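The "purely computational program" style is easy to illustrate. The sketch below keeps only the numerics of a classic first VPython exercise (a ball bouncing under gravity); in actual VPython the ball would be a `sphere` object and the loop would call `rate(100)` while the Visual thread renders each frame. The constants are illustrative:

```python
# Computational core of a typical VPython exercise, minus the graphics.
# In VPython: ball = sphere(pos=vector(0, 1, 0)) and the loop calls rate(100).
dt = 0.01          # time step (s)
g = -9.8           # gravitational acceleration (m/s^2)
pos, vel = 1.0, 0.0        # height (m) and vertical velocity (m/s)
trajectory = []
for _ in range(300):
    vel += g * dt          # update velocity first (semi-implicit Euler)
    pos += vel * dt        # then position
    if pos < 0:            # elastic bounce off the floor
        pos, vel = -pos, -vel
    trajectory.append(pos)
```

The same loop, with `ball.pos.y = pos` inside it, is all a student writes; the Visual module handles rendering, zooming and rotation automatically.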

  16. Dictionary learning-based spatiotemporal regularization for 3D dense speckle tracking

    NASA Astrophysics Data System (ADS)

    Lu, Allen; Zontak, Maria; Parajuli, Nripesh; Stendahl, John C.; Boutagy, Nabil; Eberle, Melissa; O'Donnell, Matthew; Sinusas, Albert J.; Duncan, James S.

    2017-03-01

Speckle tracking is a common method for non-rigid tissue motion analysis in 3D echocardiography, where unique texture patterns are tracked through the cardiac cycle. However, poor tracking often occurs due to inherent ultrasound issues, such as image artifacts and speckle decorrelation; thus regularization is required. Various methods, such as optical flow, elastic registration, and block matching techniques have been proposed to track speckle motion. Such methods typically apply spatial and temporal regularization in a separate manner. In this paper, we propose a joint spatiotemporal regularization method based on an adaptive dictionary representation of the dense 3D+time Lagrangian motion field. Sparse dictionaries have good signal-adaptive and noise-reduction properties; however, they are prone to quantization errors. Our method takes advantage of the desirable noise suppression, while avoiding the undesirable quantization error. The idea is to enforce regularization only on the poorly tracked trajectories. Specifically, our method 1) builds a data-driven 4-dimensional dictionary of Lagrangian displacements using sparse learning, 2) automatically identifies poorly tracked trajectories (outliers) based on sparse reconstruction errors, and 3) performs sparse reconstruction of the outliers only. Our approach can be applied on dense Lagrangian motion fields calculated by any method. We demonstrate the effectiveness of our approach on a baseline block matching speckle tracking and evaluate the performance of the proposed algorithm using tracking and strain accuracy analysis.
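The "reconstruct only the outliers" idea can be sketched with ordinary least squares standing in for sparse coding: project each trajectory onto a dictionary, flag trajectories with large residual, and replace only those with their reconstruction. The dictionary below is given rather than learned, and all names are illustrative:

```python
import numpy as np

def regularize_outliers(trajectories, dictionary, err_thresh):
    """Toy version of outlier-only regularization. `trajectories` is
    (N, T): N motion trajectories of length T. `dictionary` is (T, K):
    K basis trajectories. Trajectories whose least-squares residual
    exceeds `err_thresh` are replaced by their reconstruction; the
    well-tracked ones are left untouched."""
    coeffs, *_ = np.linalg.lstsq(dictionary, trajectories.T, rcond=None)
    recon = (dictionary @ coeffs).T            # (N, T) reconstructions
    residual = np.linalg.norm(trajectories - recon, axis=1)
    outliers = residual > err_thresh
    result = trajectories.copy()
    result[outliers] = recon[outliers]         # fix only the bad ones
    return result, outliers
```

In the paper the dictionary is learned from the data and the coding is sparse; the structure of the pipeline (code, measure residual, reconstruct outliers) is the same.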

  17. Handheld portable real-time tracking and communications device

    DOEpatents

    Wiseman, James M [Albuquerque, NM; Riblett, Jr., Loren E.; Green, Karl L [Albuquerque, NM; Hunter, John A [Albuquerque, NM; Cook, III, Robert N.; Stevens, James R [Arlington, VA

    2012-05-22

Portable handheld real-time tracking and communications devices include a controller module, a communications module with global positioning and mesh-network radio, a data transfer and storage module, and a user interface module, enclosed in a water-resistant enclosure. Real-time tracking and communications devices can be used by protective force, security, and first responder personnel to provide situational awareness, allowing for enhanced coordination and effectiveness in rapid response situations. Such devices communicate with other authorized devices via mobile ad-hoc wireless networks and do not require fixed infrastructure for their operation.

  18. Fast leaf-fitting with generalized underdose/overdose constraints for real-time MLC tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Douglas, E-mail: douglas.moore@utsouthwestern.edu; Sawant, Amit; Ruan, Dan

    2016-01-15

Purpose: Real-time multileaf collimator (MLC) tracking is a promising approach to the management of intrafractional tumor motion during thoracic and abdominal radiotherapy. MLC tracking is typically performed in two steps: transforming a planned MLC aperture in response to patient motion and refitting the leaves to the newly generated aperture. One of the challenges of this approach is the inability to faithfully reproduce the desired motion-adapted aperture. This work presents an optimization-based framework with which to solve this leaf-fitting problem in real time. Methods: This optimization framework is designed to facilitate the determination of leaf positions in real time while accounting for the trade-off between coverage of the PTV and avoidance of organs at risk (OARs). Derived within this framework, an algorithm is presented that can account for general linear transformations of the planned MLC aperture, particularly 3D translations and in-plane rotations. This algorithm, together with the algorithms presented in Sawant et al. [“Management of three-dimensional intrafraction motion through real-time DMLC tracking,” Med. Phys. 35, 2050–2061 (2008)] and Ruan and Keall [Presented at the 2011 IEEE Power Engineering and Automation Conference (PEAM) (2011) (unpublished)], was applied to apertures derived from eight lung intensity modulated radiotherapy plans subjected to six-degree-of-freedom motion traces acquired from lung cancer patients using the kilovoltage intrafraction monitoring system developed at the University of Sydney. A quality-of-fit metric was defined, and each algorithm was evaluated in terms of quality-of-fit and computation time. Results: This algorithm is shown to perform leaf-fittings of apertures, each with 80 leaf pairs, in 0.226 ms on average, as compared to 0.082 and 64.2 ms for the algorithms of Sawant et al. and of Ruan and Keall, respectively. The algorithm shows approximately 12% improvement in quality-of-fit over the algorithm of Sawant et al.
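The underdose/overdose trade-off for a single leaf pair can be illustrated with a brute-force toy version: pick leaf positions on a travel grid that minimize a weighted sum of the desired-but-blocked length (underdose) and the open-but-undesired length (overdose). The paper's algorithm solves this in closed form over all 80 leaf pairs in real time; the grid constraint and names here are illustrative:

```python
import numpy as np

def fit_leaf_pair(desired_open, grid, w_under=1.0, w_over=1.0):
    """Brute-force fit of one MLC leaf pair to a desired open interval
    (a, b). Candidate (left, right) positions are taken from `grid`;
    the cost is w_under * underdose length + w_over * overdose length."""
    a, b = desired_open
    best, best_cost = None, np.inf
    for i, left in enumerate(grid):
        for right in grid[i:]:
            overlap = max(0.0, min(right, b) - max(left, a))
            under = (b - a) - overlap          # desired but blocked
            over = (right - left) - overlap    # delivered but undesired
            cost = w_under * under + w_over * over
            if cost < best_cost:
                best, best_cost = (left, right), cost
    return best, best_cost
```

Raising `w_under` relative to `w_over` biases the fit toward PTV coverage at the expense of opening over OARs, which is exactly the generalized trade-off the framework exposes.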

  19. Real-time 3D measurement based on structured light illumination considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, ShiLing

    2014-12-01

Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. In traditional 3-D measurement systems, where processing time is not a key factor, camera lens distortion correction is performed directly. However, for time-critical high-speed applications, the time-consuming correction algorithm is inappropriate to perform directly during the real-time process. To cope with this issue, here we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. A lookup table (LUT) method is introduced as well for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled after the distortions are eliminated. Moreover, owing to the merit of the LUT, the 3-D reconstruction can be achieved at 92.34 frames per second.
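The LUT idea is simple to sketch: the expensive distortion model is evaluated once per pixel offline, and each frame is then corrected with a single vectorised lookup. The radial model and parameter names below are illustrative, not the paper's calibration:

```python
import numpy as np

def build_undistort_lut(shape, k1, cx, cy):
    """Precompute a pixel-mapping LUT for a simple radial distortion
    model x_d = x_u * (1 + k1 * r^2): for every corrected pixel, store
    the nearest source pixel of the distorted image. Done once offline."""
    ys, xs = np.indices(shape)
    dx, dy = xs - cx, ys - cy
    r2 = (dx * dx + dy * dy) / float(max(shape) ** 2)   # normalised r^2
    scale = 1.0 + k1 * r2
    src_x = np.clip(np.round(cx + dx * scale), 0, shape[1] - 1).astype(int)
    src_y = np.clip(np.round(cy + dy * scale), 0, shape[0] - 1).astype(int)
    return src_y, src_x

def undistort(frame, lut):
    """Real-time step: one vectorised fancy-index per frame."""
    src_y, src_x = lut
    return frame[src_y, src_x]
```

With `k1 = 0` the LUT is the identity map; with nonzero `k1` each frame is remapped at array-indexing speed, which is why the correction no longer limits the frame rate.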

  20. Real-time 3D change detection of IEDs

    NASA Astrophysics Data System (ADS)

    Wathen, Mitch; Link, Norah; Iles, Peter; Jinkerson, John; Mrstik, Paul; Kusevic, Kresimir; Kovats, David

    2012-06-01

Road-side bombs are a real and continuing threat to soldiers in theater. CAE USA recently developed a prototype Volume-based Intelligence Surveillance Reconnaissance (VISR) sensor platform for IED detection. This vehicle-mounted prototype sensor system uses a high-data-rate LiDAR (1.33 million range measurements per second) to generate a 3D mapping of roadways. The mapped data is used as a reference to generate real-time change detection on future trips along the same roadways. The prototype VISR system is briefly described. The focus of this paper is the methodology used to process the 3D LiDAR data, in real time, to detect small changes on and near the roadway ahead of a vehicle traveling at moderate speeds, with sufficient warning to stop the vehicle at a safe distance from the threat. The system relies on accurate navigation equipment to geo-reference the reference run and the change-detection run. Since it was recognized early in the project that detection of small changes could not be achieved with accurate navigation solutions alone, a scene alignment algorithm was developed to register the reference run with the change-detection run prior to applying the change detection algorithm. Good success was achieved in simultaneous real-time processing of scene alignment plus change detection.
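Once the two runs are registered, the change-detection core amounts to comparing occupancy between the reference and current point clouds. A toy voxel-grid version, assuming alignment has already been done (all parameter names are illustrative):

```python
import numpy as np

def voxelize(points, origin, voxel_size, dims):
    """Bin an (N, 3) point cloud into a boolean occupancy grid."""
    idx = np.floor((points - np.asarray(origin)) / voxel_size).astype(int)
    ok = np.all((idx >= 0) & (idx < np.asarray(dims)), axis=1)
    grid = np.zeros(dims, dtype=bool)
    grid[tuple(idx[ok].T)] = True
    return grid

def detect_changes(reference_pts, current_pts, origin, voxel_size, dims):
    """Voxels occupied in the current run but empty in the (already
    registered) reference run -- candidate new objects on the roadway."""
    ref = voxelize(reference_pts, origin, voxel_size, dims)
    cur = voxelize(current_pts, origin, voxel_size, dims)
    return cur & ~ref
```

Real systems add the scene-alignment step described above, plus filtering to reject changes caused by vegetation, sensor noise and residual registration error.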

  1. Investigation on microfluidic particles manipulation by holographic 3D tracking strategies

    NASA Astrophysics Data System (ADS)

    Cacace, Teresa; Paturzo, Melania; Memmolo, Pasquale; Vassalli, Massimo; Fraldi, Massimiliano; Mensitieri, Giuseppe; Ferraro, Pietro

    2017-06-01

We demonstrate a 3D holographic tracking method to investigate particle motion in a microfluidic channel, both while unperturbed and while inducing migration through microfluidic manipulation. Digital holography (DH) in microscopy is a full-field, label-free imaging technique able to provide quantitative phase contrast. The employed 3D tracking method proceeds in steps. First, displacements along the optical axis are assessed by numerical refocusing criteria. In particular, an automatic refocusing method to recover the particles' axial position is implemented, employing a contrast-based refocusing criterion. Then, the transverse position of the in-focus object is evaluated through quantitative phase map segmentation methods and a centroid-based 2D tracking strategy. DH is thus suggested as a powerful approach for the control of particle and biological sample manipulation, as well as a possible aid to the precise design and implementation of advanced lab-on-chip microfluidic devices.
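The contrast-based refocusing criterion can be sketched as a one-dimensional search over the numerically propagated z-stack: score each candidate axial slice with a contrast metric and take the maximum. Intensity variance stands in here for whatever metric the authors use; names are illustrative:

```python
import numpy as np

def contrast(img):
    """Stand-in contrast metric: variance of the intensity image."""
    return float(np.var(img))

def best_focus(stack):
    """Return the index of the axial slice with maximal contrast --
    the numerically refocused plane where the particle is sharpest --
    together with the per-slice scores."""
    scores = [contrast(s) for s in stack]
    return int(np.argmax(scores)), scores
```

The winning index gives the axial (z) coordinate; the transverse (x, y) position is then read off the in-focus slice, e.g. by segmentation and centroid extraction as in the abstract.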

  2. Real-time tracking of liver motion and deformation using a flexible needle

    PubMed Central

    Lei, Peng; Moeslein, Fred; Wood, Bradford J.

    2012-01-01

    Purpose A real-time 3D image guidance system is needed to facilitate treatment of liver masses using radiofrequency ablation, for example. This study investigates the feasibility and accuracy of using an electromagnetically tracked flexible needle inserted into the liver to track liver motion and deformation. Methods This proof-of-principle study was conducted both ex vivo and in vivo with a CT scanner taking the place of an electromagnetic tracking system as the spatial tracker. Deformations of excised livers were artificially created by altering the shape of the stage on which the excised livers rested. Free breathing or controlled ventilation created deformations of live swine livers. The positions of the needle and test targets were determined through CT scans. The shape of the needle was reconstructed using data simulating multiple embedded electromagnetic sensors. Displacement of liver tissues in the vicinity of the needle was derived from the change in the reconstructed shape of the needle. Results The needle shape was successfully reconstructed with tracking information of two on-needle points. Within 30 mm of the needle, the registration error of implanted test targets was 2.4 ± 1.0 mm ex vivo and 2.8 ± 1.5 mm in vivo. Conclusion A practical approach was developed to measure the motion and deformation of the liver in real time within a region of interest. The approach relies on redesigning the often-used seeker needle to include embedded electromagnetic tracking sensors. With the nonrigid motion and deformation information of the tracked needle, a single- or multimodality 3D image of the intraprocedural liver, now clinically obtained with some delay, can be updated continuously to monitor intraprocedural changes in hepatic anatomy. This capability may be useful in radiofrequency ablation and other percutaneous ablative procedures. PMID:20700662

  3. Real-time target tracking and locating system for UAV

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Tang, Linbo; Fu, Huiquan; Li, Maowen

    2017-07-01

In order to achieve real-time target tracking and locating for a UAV, a reliable processing system is built on an embedded platform. Firstly, video imagery is acquired in real time by the electro-optical system on the UAV. When the target information is known, the KCF tracking algorithm is adopted to track the target. Then the servo is controlled to rotate with the target; when the target is in the center of the image, the laser ranging module is activated to obtain the distance between the UAV and the target. Finally, combining the UAV flight parameters obtained from the BeiDou navigation system, a target location algorithm calculates the geodetic coordinates of the target. The results show that the system provides stable real-time target tracking and positioning.
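The final geolocation step reduces to projecting the measured laser range along the gimbal's pointing direction from the UAV's navigation fix. A toy version in a local east-north-up frame (a real system chains the BeiDou position, UAV attitude and gimbal angles through full rotation matrices before converting to geodetic coordinates; names here are illustrative):

```python
import math

def locate_target(uav_enu, azimuth_deg, elevation_deg, range_m):
    """Project the laser range from the UAV position along the gimbal
    azimuth (degrees clockwise from north) and elevation (negative =
    looking down), in a local east-north-up (ENU) frame."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    horiz = range_m * math.cos(el)          # ground-plane component
    east = uav_enu[0] + horiz * math.sin(az)
    north = uav_enu[1] + horiz * math.cos(az)
    up = uav_enu[2] + range_m * math.sin(el)
    return east, north, up
```

For example, a UAV at 100 m altitude looking due east at 45 degrees down with a 141.4 m laser range places the target about 100 m east at ground level.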

  4. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    PubMed

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.

  5. Real-time 3D ultrasound imaging of infant tongue movements during breast-feeding.

    PubMed

    Burton, Pat; Deng, Jing; McDonald, Daren; Fewtrell, Mary S

    2013-09-01

Whether infants use suction or peristaltic tongue movements or a combination to extract milk during breast-feeding is controversial. The aims of this pilot study were 1] to evaluate the feasibility of using 3D ultrasound scanning to visualise infant tongue movements; and 2] to ascertain whether peristaltic tongue movements could be demonstrated during breast-feeding. 15 healthy term infants, aged 2 weeks to 4 months, were scanned during breast-feeding, using a real-time 3D ultrasound system, with a 7 MHz transducer placed sub-mentally. 1] The method proved feasible, with 72% of bi-plane datasets and 56% of real-time 3D datasets providing adequate coverage [>75%] of the infant tongue. 2] Peristaltic tongue movement was observed in 13 of 15 infants [87%] from real-time or reformatted truly mid-sagittal views under 3D guidance. This is the first study to demonstrate the feasibility of using 3D ultrasound to visualise infant tongue movements during breast-feeding. Peristaltic infant tongue movement was present in the majority of infants when the image plane was truly mid-sagittal but was not apparent if the image was slightly off the mid-sagittal plane. This should be considered in studies investigating the relative importance of vacuum and peristalsis for milk transfer. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Confocal fluorometer for diffusion tracking in 3D engineered tissue constructs

    NASA Astrophysics Data System (ADS)

    Daly, D.; Zilioli, A.; Tan, N.; Buttenschoen, K.; Chikkanna, B.; Reynolds, J.; Marsden, B.; Hughes, C.

    2016-03-01

    We present results from the development of a non-contacting instrument, called fScan, based on scanning confocal fluorometry for assessing the diffusion of materials through a tissue matrix. In many areas of healthcare diagnostics and screening, it is now widely accepted that the lack of new quantitative monitoring technologies is a major pinch point in patient diagnostics and in vitro testing. With the increasing need to interpret 3D responses, this commonly requires tracking the diffusion of compounds, pharma-active species and cells through a 3D matrix of tissue. Methods are available, but to support the advances that are currently only promised, this monitoring needs to be real-time, non-invasive, and economical. Current commercial meters tend to be invasive and usually require a sample of the medium to be removed and processed prior to testing, a methodology with a number of significant disadvantages. fScan combines a fiber-based optical arrangement with a compact, free-space optical front end, integrated so that the sample's diffusion can be measured without interference. This architecture is particularly important because of the "wet" nature of the samples. fScan is designed to measure constructs located within standard well plates, and a 2-D motion stage positions the required sample with respect to the measurement system. Results are presented showing how the meter has been used to evaluate the movement of samples through collagen constructs in situ without disturbing their kinetic characteristics. These kinetics were little understood prior to these measurements.

  7. Automated real-time needle-guide tracking for fast 3-T MR-guided transrectal prostate biopsy: a feasibility study.

    PubMed

    Zamecnik, Patrik; Schouten, Martijn G; Krafft, Axel J; Maier, Florian; Schlemmer, Heinz-Peter; Barentsz, Jelle O; Bock, Michael; Fütterer, Jurgen J

    2014-12-01

    To assess the feasibility of automatic needle-guide tracking by using a real-time phase-only cross correlation (POCC) algorithm-based sequence for transrectal 3-T in-bore magnetic resonance (MR)-guided prostate biopsies. This study was approved by the ethics review board, and written informed consent was obtained from all patients. Eleven patients with a prostate-specific antigen level of at least 4 ng/mL (4 μg/L) and at least one transrectal ultrasonography-guided biopsy session with negative findings were enrolled. Regions suspicious for cancer were identified on 3-T multiparametric MR images. During a subsequent MR-guided biopsy, the regions suspicious for cancer were reidentified and targeted by using the POCC-based tracking sequence. Besides testing the general technical feasibility of the biopsy procedure using the POCC-based tracking sequence, the procedure times were measured, and a pathologic analysis of the biopsy cores was performed. Thirty-eight core samples were obtained from 25 regions suspicious for cancer. It was technically feasible to perform the POCC-based biopsies in all regions suspicious for cancer in each patient, with adequate biopsy samples obtained with each biopsy attempt. The median size of the region suspicious for cancer was 8 mm (range, 4-13 mm). In each region suspicious for cancer (median number per patient, two; range, 1-4), a median of one core sample per region was obtained (range, 1-3). The median time for guidance per target was 1.5 minutes (range, 0.7-5 minutes). Nineteen of 38 core biopsy samples contained cancer. This study shows that it is feasible to perform transrectal 3-T MR-guided biopsies by using a POCC algorithm-based real-time tracking sequence. © RSNA, 2014.
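Phase-only cross correlation itself can be sketched with NumPy FFTs: the cross-power spectrum of two images is normalized to unit magnitude so that only phase remains, and its inverse transform is a sharp peak at the translation between them. This is a generic POCC sketch, not the vendor's needle-guide tracking sequence:

```python
import numpy as np

def pocc_shift(a, b):
    """Estimate the integer (row, col) shift of image b relative to a
    via phase-only cross correlation."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real         # delta-like peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```

Because the spectrum is whitened, the correlation peak stays sharp even under intensity changes, which is what makes the method robust enough for real-time tracking of a passive marker.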

  8. A real-time dynamic-MLC control algorithm for delivering IMRT to targets undergoing 2D rigid motion in the beam's eye view.

    PubMed

    McMahon, Ryan; Berbeco, Ross; Nishioka, Seiko; Ishikawa, Masayori; Papiez, Lech

    2008-09-01

    An MLC control algorithm for delivering intensity modulated radiation therapy (IMRT) to targets that are undergoing two-dimensional (2D) rigid motion in the beam's eye view (BEV) is presented. The goal of this method is to deliver 3D-derived fluence maps over a moving patient anatomy. Target motion measured prior to delivery is first used to design a set of planned dynamic-MLC (DMLC) sliding-window leaf trajectories. During actual delivery, the algorithm relies on real-time feedback to compensate for target motion that does not agree with the motion measured during planning. The methodology is based on an existing one-dimensional (1D) algorithm that uses on-the-fly intensity calculations to appropriately adjust the DMLC leaf trajectories in real-time during exposure delivery [McMahon et al., Med. Phys. 34, 3211-3223 (2007)]. To extend the 1D algorithm's application to 2D target motion, a real-time leaf-pair shifting mechanism has been developed. Target motion that is orthogonal to leaf travel is tracked by appropriately shifting the positions of all MLC leaves. The performance of the tracking algorithm was tested for a single beam of a fractionated IMRT treatment, using a clinically derived intensity profile and a 2D target trajectory based on measured patient data. Comparisons were made between 2D tracking, 1D tracking, and no tracking. The impact of the tracking lag time and the frequency of real-time imaging were investigated. A study of the dependence of the algorithm's performance on the level of agreement between the motion measured during planning and delivery was also included. Results demonstrated that tracking both components of the 2D motion (i.e., parallel and orthogonal to leaf travel) results in delivered fluence profiles that are superior to those that track only the component of motion parallel to leaf travel. Tracking lag time effects may lead to relatively large intensity delivery errors compared with the other sources of error investigated.
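The leaf-pair shifting idea can be sketched as follows: the motion component parallel to leaf travel is added to every leaf position, while the orthogonal component is compensated by re-indexing the planned trajectories across leaf pairs in whole-leaf-width steps. This is a simplified illustration under invented leaf geometry, not the published control algorithm:

```python
import numpy as np

def track_2d(left, right, dx, dy, leaf_width=5.0):
    """Shift planned DMLC leaf positions to follow 2-D target motion.

    left/right: planned leaf-bank positions (mm), one value per leaf pair.
    dx: motion parallel to leaf travel (mm) -> added to every position.
    dy: motion orthogonal to leaf travel (mm) -> the whole pattern is
        shifted across leaf pairs, rounded to the nearest leaf width.
    Leaf pairs shifted in from the edge are parked closed at 0.
    """
    shift = int(round(dy / leaf_width))
    new_left = np.roll(np.asarray(left, float) + dx, shift)
    new_right = np.roll(np.asarray(right, float) + dx, shift)
    if shift != 0:
        edge = slice(0, shift) if shift > 0 else slice(len(new_left) + shift, None)
        new_left[edge] = new_right[edge] = 0.0
    return new_left, new_right
```

Rounding `dy` to whole leaf widths is the discretization the text alludes to: orthogonal motion can only be tracked in steps of one leaf pair.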

  9. Nearly automatic motion capture system for tracking octopus arm movements in 3D space.

    PubMed

    Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar

    2009-08-30

    Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects.

  10. Potential benefits of dosimetric VMAT tracking verified with 3D film measurements.

    PubMed

    Crijns, Wouter; Defraene, Gilles; Van Herck, Hans; Depuydt, Tom; Haustermans, Karin; Maes, Frederik; Van den Heuvel, Frank

    2016-05-01

    To evaluate three different plan adaptation strategies using 3D film-stack dose measurements of both focal boost and hypofractionated prostate VMAT treatments. The adaptation strategies (a couch shift, geometric tracking, and dosimetric tracking) were applied for three realistic intrafraction prostate motions. A focal boost (35 × 2.2 and 35 × 2.7 Gy) and a hypofractionated (5 × 7.25 Gy) prostate VMAT plan were created for a heterogeneous phantom that allows for internal prostate motion. For these plans, geometric and dosimetric tracking were evaluated by ionization chamber (IC) point dose measurements (zero-D) and measurements using a stack of EBT3 films (3D). The geometric tracking applied translations, rotations, and scaling of the MLC aperture in response to realistic prostate motions. The dosimetric tracking additionally corrected the monitor units to resolve variations due to differences in depth, tissue heterogeneity, and MLC aperture. The tracking was based on the positions of four fiducial points only. The film measurements were compared to the gold standard (i.e., IC measurements) and the planned dose distribution. Additionally, the 3D measurements were converted to dose volume histograms, tumor control probability, and normal tissue complication probability parameters (DVH/TCP/NTCP) as a direct estimate of the clinical relevance of the proposed tracking. Compared to the planned dose distribution, measurements without prostate motion and tracking already showed reduced homogeneity of the dose distribution. Adding prostate motion further blurs the DVHs for all treatment approaches. The clinical practice (no tracking) delivered the dose distribution inside the PTV but off target (CTV), resulting in boost dose errors up to 10%. The geometric and dosimetric tracking corrected the dose distribution's position. Moreover, the dosimetric tracking could achieve the planned boost DVH, but not the DVH of the more homogeneously irradiated prostate. A drawback

  11. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas, and they knew the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling that he is operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  12. SU-G-BRA-05: Application of a Feature-Based Tracking Algorithm to KV X-Ray Fluoroscopic Images Toward Marker-Less Real-Time Tumor Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, M; Matsuo, Y; Mukumoto, N

    Purpose: To detect target position on kV X-ray fluoroscopic images using a feature-based tracking algorithm, Accelerated-KAZE (AKAZE), for markerless real-time tumor tracking (RTTT). Methods: Twelve lung cancer patients treated with RTTT on the Vero4DRT (Mitsubishi Heavy Industries, Japan, and Brainlab AG, Feldkirchen, Germany) were enrolled in this study. Respiratory tumor movement was greater than 10 mm. Three to five fiducial markers were implanted around the lung tumor transbronchially for each patient. Before beam delivery, external infrared (IR) markers and the fiducial markers were monitored for 20 to 40 s with the IR camera every 16.7 ms and with an orthogonal kV x-ray imaging subsystem every 80 or 160 ms, respectively. Target positions derived from the fiducial markers were determined on the orthogonal kV x-ray images and used as the ground truth in this study. Meanwhile, tracking positions were identified by AKAZE. Among many candidate feature points, AKAZE retained high-quality feature points through sequential cross-checks and distance checks between two consecutive images. These 2D positional data were then converted to 3D positional data by a transformation matrix with a predefined calibration parameter. Root mean square error (RMSE) was calculated to evaluate the difference between 3D tracking and target positions. A total of 393 frames was analyzed. The experiment was conducted on a personal computer with 16 GB RAM and an Intel Core i7-2600 3.4 GHz processor. Results: Reproducibility of the target position during the same respiratory phase was 0.6 ± 0.6 mm (range, 0.1–3.3 mm). Mean ± SD of the RMSEs was 0.3 ± 0.2 mm (range, 0.0–1.0 mm). Median computation time per frame was 179 msec (range, 154–247 msec). Conclusion: AKAZE successfully and quickly detected the target position on kV X-ray fluoroscopic images. Initial results indicate that the differences between 3D tracking and target positions would be clinically acceptable.
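The 2D-to-3D conversion from an orthogonal kV pair can be sketched geometrically: under an idealized parallel-beam geometry, one imager reports (x, z) and the other (y, z), so the 3D position follows directly, and an RMSE against the fiducial-derived ground truth quantifies tracking error. This toy stand-in replaces the study's calibrated transformation matrix, and all names are invented:

```python
import numpy as np

def to_3d(p_ap, p_lat):
    """Fuse two orthogonal 2-D detections into one 3-D position.

    p_ap:  (x, z) seen on the anterior-posterior imager.
    p_lat: (y, z) seen on the lateral imager.
    The z coordinate, visible to both imagers, is averaged.
    """
    x, z1 = p_ap
    y, z2 = p_lat
    return np.array([x, y, 0.5 * (z1 + z2)])

def rmse(tracked, truth):
    """Root mean square 3-D distance between tracked and true positions."""
    d = np.asarray(tracked, float) - np.asarray(truth, float)
    return float(np.sqrt((d ** 2).sum(axis=-1).mean()))
```

A real system maps detector pixels through a projection matrix per imager; the averaging of the redundant coordinate is the same idea in miniature.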

  13. Evaluation of Dose Uncertainty to the Target Associated With Real-Time Tracking Intensity-Modulated Radiation Therapy Using the CyberKnife Synchrony System.

    PubMed

    Iwata, Hiromitsu; Inoue, Mitsuhiro; Shiomi, Hiroya; Murai, Taro; Tatewaki, Koshi; Ohta, Seiji; Okawa, Kohei; Yokota, Naoki; Shibamoto, Yuta

    2016-02-01

    We investigated the dose uncertainty caused by errors in real-time tracking intensity-modulated radiation therapy (IMRT) using the CyberKnife Synchrony Respiratory Tracking System (SRTS). Twenty lung tumors that had been treated with non-IMRT real-time tracking using CyberKnife SRTS were used for this study. After validating the tracking error in each case, we created 40 IMRT plans using 8 different collimator sizes for the 20 patients. The collimator size was determined for each planning target volume (PTV); the smaller was one-half, and the larger three-quarters, of the PTV diameter. The planned dose was 45 Gy in 4 fractions prescribed at the 95% volume border of the PTV. Thereafter, the tracking error in each case was substituted into calculation software developed in house and randomly added in the setting of each beam. Each IMRT plan incorporating tracking errors was simulated 1000 times, and various dose data on the clinical target volume (CTV) were compared with the original data. The same simulation was carried out by changing the fraction number from 1 to 6 in each IMRT plan. Finally, a total of 240 000 plans were analyzed. With 4 fractions, the change in the CTV maximum and minimum doses was within 3.0% (median) for each collimator. The change in D99 and D95 was within 2.0%. As the fraction number decreased, the CTV coverage rate and the minimum dose decreased and varied greatly. The accuracy of real-time tracking IMRT delivered in 4 fractions using CyberKnife SRTS was considered to be clinically acceptable. © The Author(s) 2014.
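The error-substitution simulation can be sketched as a Monte Carlo loop: a random tracking error is drawn per fraction, and the spread of the course-averaged error serves as a proxy for the variation in CTV dose metrics. This is a schematic of the in-house software's principle only, with made-up error statistics:

```python
import numpy as np

def simulate(n_fractions, n_trials=1000, sigma=1.0, seed=0):
    """Spread (std) of the course-averaged tracking error for a given
    fractionation; more fractions average out per-fraction errors."""
    rng = np.random.default_rng(seed)
    # One Gaussian tracking error (mm) per fraction per simulated course
    errors = rng.normal(0.0, sigma, size=(n_trials, n_fractions))
    return errors.mean(axis=1).std()

spread_1 = simulate(1)   # single fraction: full sigma
spread_4 = simulate(4)   # four fractions: roughly sigma / 2
```

This reproduces the paper's qualitative finding: with fewer fractions the delivered metrics vary more, since there is less opportunity for random tracking errors to cancel.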

  14. Optical eye tracking system for real-time noninvasive tumor localization in external beam radiotherapy.

    PubMed

    Via, Riccardo; Fassi, Aurora; Fattori, Giovanni; Fontana, Giulia; Pella, Andrea; Tagaste, Barbara; Riboldi, Marco; Ciocca, Mario; Orecchia, Roberto; Baroni, Guido

    2015-05-01

    External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed for capturing the tridimensional (3D) ocular motion and provides an on-line estimation of intraocular lesions position based on a priori knowledge coming from volumetric imaging. Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Measurements on phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring up to 50 Hz. A noninvasive ETS prototype was designed to perform real-time target localization and eye movement monitoring during ocular radiotherapy treatments. The

  15. SU-F-T-41: 3D MTP-TRUS for Prostate Implant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, P

    Purpose: Prostate brachytherapy is an effective treatment for early prostate cancer. Current prostate implants are limited to using 2D transrectal ultrasound (TRUS) or a mechanical motor-driven 2D array mounted either at the end or on the side of the probe. Real-time 3D images can improve the accuracy of prostate implant guidance. The concept of our system is to allow real-time full visualization of the entire prostate with multiple transverse scans. Methods: The prototype 3D Multiple-Transverse-Plane Transrectal Ultrasound (MTP-TRUS) probe was designed by us and manufactured by Blatek Inc. It has 7 convex linear arrays, each with 96 elements. It is connected to the cQuest Firebird research system (Cephasonics Inc.), a flexible and configurable ultrasound-development platform. The cQuest Firebird system is compact and supports real-time wireless image transfer. A relay-based mux board was designed for the cQuest Firebird system to be able to connect 672 elements. Results: The center frequency of the probe is 6 MHz ± 10%. The diameter of the probe is 3 cm and the length is 20 cm. The element pitch is 0.205 mm. The array focus is 30 mm and the spacing 1.6 cm. The beam data for each array were measured and met our expectations. The interface board of the MTP-TRUS is made and able to connect to the cQuest Firebird system. The image display interface is still under development. Our real-time needle tracking algorithm will be implemented too. Conclusion: Our MTP-TRUS system for prostate implants will be able to acquire real-time 3D images of the prostate and perform real-time needle segmentation and tracking. The system is compact and has wireless capability.

  16. Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion.

    PubMed

    Leimkuhler, Thomas; Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter

    2018-06-01

    We propose a system to infer binocular disparity from a monocular video stream in real-time. Different from classic reconstruction of physical depth in computer vision, we compute perceptually plausible disparity that is numerically inaccurate but results in a very similar overall depth impression, with plausible overall layout, sharp edges, fine details, and agreement between luminance and disparity. We use several simple monocular cues to estimate disparity maps and confidence maps of low spatial and temporal resolution in real-time. These are complemented by spatially-varying, appearance-dependent and class-specific disparity prior maps, learned from example stereo images. Scene classification selects this prior at runtime. Fusion of prior and cues is done by means of robust MAP inference on a dense spatio-temporal conditional random field with high spatial and temporal resolution. Using normal distributions allows this in constant-time, parallel per-pixel work. We compare our approach to previous 2D-to-3D conversion systems in terms of different metrics, as well as a user study, and validate our notion of perceptually plausible disparity.

  17. A real-time sub-μrad laser beam tracking system

    NASA Astrophysics Data System (ADS)

    Buske, Ivo; Schragner, Ralph; Riede, Wolfgang

    2007-10-01

    We present a rugged and reliable real-time laser beam tracking system operating with a high-speed, high-resolution piezo-electric tip/tilt mirror. Characteristics of the piezo mirror and position sensor are investigated. An industrial programmable automation controller is used to develop a real-time digital PID controller. The controller provides a one-million-gate field-programmable gate array (FPGA) used to realize a high closed-loop frequency of 50 kHz. Beam tracking with a root-mean-squared accuracy better than 0.15 μrad has been confirmed in the laboratory. The system is intended as an add-on module for established mechanical mrad tracking systems.
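The digital PID loop behind such a controller can be sketched in a few lines. The gains and the first-order mirror model below are illustrative assumptions, not the instrument's actual parameters:

```python
class PID:
    """Discrete PID controller evaluated once per control tick."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: mirror angle follows the command with a first-order lag.
pid = PID(kp=0.8, ki=40.0, kd=0.0, dt=1 / 50_000)   # 50 kHz loop rate
angle, target = 0.0, 1.0                            # microradians
for _ in range(200_000):                            # 4 s of simulated time
    u = pid.update(target, angle)
    angle += (u - angle) * 0.01                     # lagged mirror response
```

On the FPGA the same update runs as fixed-point arithmetic every 20 µs; the integral term is what drives the steady-state pointing error toward zero.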

  18. Real-time ultrasound-tagging to track the 2D motion of the common carotid artery wall in vivo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zahnd, Guillaume, E-mail: g.zahnd@erasmusmc.nl; Salles, Sébastien; Liebgott, Hervé

    2015-02-15

    Purpose: Tracking the motion of biological tissues represents an important issue in the field of medical ultrasound imaging. However, the longitudinal component of the motion (i.e., perpendicular to the beam axis) remains more challenging to extract due to the rather coarse resolution cell of ultrasound scanners along this direction. The aim of this study is to introduce a real-time beamforming strategy dedicated to acquiring tagged images featuring a distinct pattern, with the objective of easing tracking. Methods: Under the conditions of the Fraunhofer approximation, a specific apodization function was applied to the received raw channel data, in real-time during image acquisition, in order to introduce a periodic oscillation pattern along the longitudinal direction of the radio frequency signal. Analytic signals were then extracted from the tagged images, and subpixel motion tracking of the intima-media complex was subsequently performed offline by means of a previously introduced bidimensional analytic phase-based estimator. Results: The authors' framework was applied in vivo on the common carotid artery of 20 young healthy volunteers and 6 elderly patients with high atherosclerosis risk. Cine-loops of tagged images were acquired during three cardiac cycles. Evaluated against reference trajectories manually generated by three experienced analysts, the mean absolute tracking error was 98 ± 84 μm and 55 ± 44 μm in the longitudinal and axial directions, respectively. These errors corresponded to 28% ± 23% and 13% ± 9% of the longitudinal and axial amplitude of the assessed motion, respectively. Conclusions: The proposed framework enables tagged ultrasound images of in vivo tissues to be acquired in real-time. Such an unconventional beamforming strategy contributes to improved tracking accuracy and could potentially benefit the interpretation and diagnosis of biomedical images.

  19. Real-time x-ray fluoroscopy-based catheter detection and tracking for cardiac electrophysiology interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma Yingliang; Housden, R. James; Razavi, Reza

    2013-07-15

    Purpose: X-ray fluoroscopically guided cardiac electrophysiology (EP) procedures are commonly carried out to treat patients with arrhythmias. X-ray images have poor soft tissue contrast and, for this reason, overlay of a three-dimensional (3D) roadmap derived from preprocedural volumetric images can be used to add anatomical information. It is useful to know the position of the catheter electrodes relative to the cardiac anatomy, for example, to record ablation therapy locations during atrial fibrillation therapy. Also, the electrode positions of the coronary sinus (CS) catheter or lasso catheter can be used for road map motion correction. Methods: In this paper, the authors present a novel unified computational framework for image-based catheter detection and tracking without any user interaction. The proposed framework includes fast blob detection, shape-constrained searching and model-based detection. In addition, catheter tracking methods were designed based on the customized catheter models input from the detection method. Three real-time detection and tracking methods are derived from the computational framework to detect or track the three most common types of catheters in EP procedures: the ablation catheter, the CS catheter, and the lasso catheter. Since the proposed methods use the same blob detection method to extract key information from x-ray images, the ablation, CS, and lasso catheters can be detected and tracked simultaneously in real-time. Results: The catheter detection methods were tested on 105 different clinical fluoroscopy sequences taken from 31 clinical procedures. Two-dimensional (2D) detection errors of 0.50 ± 0.29, 0.92 ± 0.61, and 0.63 ± 0.45 mm as well as success rates of 99.4%, 97.2%, and 88.9% were achieved for the CS catheter, ablation catheter, and lasso catheter, respectively. With the tracking method, accuracies were increased to 0.45 ± 0.28, 0.64 ± 0.37, and 0.53 ± 0.38 mm and success rates increased to

  20. Real-Time 3D Reconstruction from Images Taken from a UAV

    NASA Astrophysics Data System (ADS)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

    We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real-time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. For these characteristics, the designed method is suitable for video-surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
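In the simplest rectified two-view geometry, depth recovery from an image pair reduces to triangulation from disparity. A minimal sketch of that core relation (the focal length, baseline, and pixel coordinates below are invented, and real aerial pairs also require rectification and matching):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified stereo triangulation: z = f * B / d, where d is the
    horizontal disparity in pixels, f the focal length in pixels, and
    B the baseline between the two camera positions in metres."""
    return focal_px * baseline_m / disparity_px

# A point seen at x = 520 px in the first image and x = 500 px in the
# second, with a 1000 px focal length and a 2 m baseline between the
# two UAV acquisitions:
z = depth_from_disparity(520 - 500, 1000.0, 2.0)
```

The relation also explains the accuracy figure above: depth resolution degrades with distance squared, so the achievable ~0.5 m accuracy depends directly on baseline and flying height.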

  1. 3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging

    NASA Astrophysics Data System (ADS)

    Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak

    2017-10-01

    Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video as conventionally performed (such as background subtraction or block matching), which means that the movement properties do not significantly affect the detection quality. Object detection is performed by analyzing static 3D image data obtained through computational integral imaging. In contrast to previous works that used integral imaging data in such a scenario, the proposed method performs 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.

  2. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional (4D) image is 3D volume data that varies with time. It is used to represent deforming or moving objects in virtual surgery or 4D ultrasound. It is difficult to render a 4D image with conventional ray-casting or shear-warp factorization methods because of their long rendering times or the pre-processing required whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by using the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. Volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already loaded in memory. If a brick passes the test, it is defined as a 3D texture by OpenGL functions. Later, the texture slices of the brick are mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes were rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
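The brick-level coherence test can be sketched as follows: each incoming volume is split into bricks, and a brick is re-uploaded as a 3D texture only when it differs from the cached copy beyond a tolerance. This is a NumPy stand-in for the OpenGL texture path, with invented brick size and tolerance:

```python
import numpy as np

def changed_bricks(prev, curr, brick=8, tol=1e-3):
    """Return indices of bricks whose mean absolute difference between
    the previous and current volume exceeds tol; only these bricks
    would be re-defined as 3-D textures. Volume dimensions are assumed
    to be multiples of the brick size."""
    dirty = []
    nz, ny, nx = (s // brick for s in curr.shape)
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                sl = (slice(k * brick, (k + 1) * brick),
                      slice(j * brick, (j + 1) * brick),
                      slice(i * brick, (i + 1) * brick))
                if np.abs(curr[sl] - prev[sl]).mean() > tol:
                    dirty.append((k, j, i))
    return dirty
```

When consecutive frames of a deforming volume change only locally, most bricks pass the similarity test and the upload cost per frame drops accordingly.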

  3. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  4. A CNN Regression Approach for Real-Time 2D/3D Registration.

    PubMed

    Shun Miao; Wang, Z Jane; Rui Liao

    2016-05-01

    In this paper, we present a Convolutional Neural Network (CNN) regression approach to address the two major limitations of existing intensity-based 2-D/3-D registration technology: 1) slow computation and 2) small capture range. Different from optimization-based methods, which iteratively optimize the transformation parameters over a scalar-valued metric function representing the quality of the registration, the proposed method exploits the information embedded in the appearances of the digitally reconstructed radiograph and X-ray images, and employs CNN regressors to directly estimate the transformation parameters. An automatic feature extraction step is introduced to calculate 3-D pose-indexed features that are sensitive to the variables to be regressed while robust to other factors. The CNN regressors are then trained for local zones and applied in a hierarchical manner to break down the complex regression task into multiple simpler sub-tasks that can be learned separately. Weight sharing is furthermore employed in the CNN regression model to reduce the memory footprint. The proposed approach has been quantitatively evaluated on 3 potential clinical applications, demonstrating its significant advantage in providing highly accurate real-time 2-D/3-D registration with a significantly enlarged capture range when compared to intensity-based methods.

  5. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the camera pose measured with a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 s for each pose estimate), which can be improved by implementation in C++. Error analysis produced 3-mm distance error and 2.5 degrees of orientation error on average. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.

  6. Real-time acquisition and tracking system with multiple Kalman filters

    NASA Astrophysics Data System (ADS)

    Beard, Gary C.; McCarter, Timothy G.; Spodeck, Walter; Fletcher, James E.

    1994-07-01

    The design of a real-time, ground-based, infrared tracking system with proven field success in tracking boost vehicles through burnout is presented with emphasis on the software design. The system was originally developed to deliver relative angular positions during boost, and thrust termination time to a sensor fusion station in real-time. Autonomous target acquisition and angle-only tracking features were developed to ensure success under stressing conditions. A unique feature of the system is the incorporation of multiple copies of a Kalman filter tracking algorithm running in parallel in order to minimize run-time. The system is capable of updating the state vector for an object at measurement rates approaching 90 Hz. This paper will address the top-level software design, details of the algorithms employed, system performance history in the field, and possible future upgrades.
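    The per-object Kalman tracking described above can be sketched with a minimal constant-velocity filter. This is a generic textbook filter, not the system's actual algorithm; the class name and noise parameters are illustrative, and the copies are stepped in a loop here, whereas the original ran them in parallel to minimize run-time.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one angular coordinate.

    Toy stand-in for one of the system's parallel filter copies; the
    noise parameters q (process) and r (measurement) are illustrative.
    """
    def __init__(self, x0, dt, q=1e-3, r=1e-2):
        self.x, self.v = x0, 0.0            # state: angle, angular rate
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.dt, self.q, self.r = dt, q, r

    def step(self, z):
        dt, q, r = self.dt, self.q, self.r
        # Predict with x' = x + v*dt (constant velocity).
        x = self.x + dt * self.v
        v = self.v
        P = self.P
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with the angle measurement z.
        S = P[0][0] + r
        K0, K1 = P[0][0] / S, P[1][0] / S
        y = z - x
        self.x, self.v = x + K0 * y, v + K1 * y
        self.P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
                  [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
        return self.x

# One filter copy per tracked object; at measurement rates approaching 90 Hz
# the original system ran such copies in parallel, while this sketch would
# simply step each filter in turn.
bank = {"booster": Kalman1D(0.0, dt=1.0 / 90.0)}
```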

  7. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain-modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  8. 3D Tracking via Shoe Sensing.

    PubMed

    Li, Fangmin; Liu, Guo; Liu, Jian; Chen, Xiaochuang; Ma, Xiaolin

    2016-10-28

    Most location-based services are based on the global positioning system (GPS), which only works well in outdoor environments. Compared to outdoor environments, indoor localization has attracted increasing attention in recent years, since people spend most of their time indoors, working in offices, shopping in malls, etc. Existing solutions mainly rely on inertial sensors (i.e., accelerometer and gyroscope) embedded in mobile devices, which are usually not accurate enough to be useful due to the devices' random movements while people are walking. In this paper, we propose the use of shoe sensing (i.e., sensors attached to shoes) to achieve 3D indoor positioning. Specifically, a short-time energy-based approach is used to extract the gait pattern. Moreover, in order to improve the accuracy of vertical distance estimation while the person is climbing stairs, a state classifier is designed to distinguish walking states, including plane motion (i.e., normal walking and jogging horizontally), walking upstairs, and walking downstairs. Furthermore, we also provide a mechanism to reduce the accumulated error in vertical distance. Experimental results show that we can achieve nearly 100% accuracy when extracting gait patterns from walking/jogging with a low-cost shoe sensor, and can also achieve 3D indoor real-time positioning with high accuracy.
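    The short-time energy step of the gait extraction can be sketched as a sliding window of squared accelerometer samples with a rising-edge threshold. The window length and threshold below are illustrative values, not ones taken from the paper.

```python
def short_time_energy(signal, win):
    """Sum of squared samples over a sliding window."""
    return [sum(s * s for s in signal[i:i + win])
            for i in range(len(signal) - win + 1)]

def count_steps(accel, win=5, thresh=1.0):
    """Count rising-edge threshold crossings of the short-time energy.

    Each burst of acceleration (one step) drives the windowed energy
    above the threshold once; quiet stretches reset the detector.
    """
    steps, above = 0, False
    for e in short_time_energy(accel, win):
        if e > thresh and not above:
            steps += 1
        above = e > thresh
    return steps

# Two synthetic "steps": bursts of acceleration separated by quiet spells.
accel = [0.0] * 20 + [2.0] * 3 + [0.0] * 20 + [2.0] * 3 + [0.0] * 20
```

Here `count_steps(accel)` returns 2, one count per burst; a real signal would first be filtered and the threshold adapted to the sensor.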

  9. 3D ocular ultrasound using gaze tracking on the contralateral eye: a feasibility study.

    PubMed

    Afsham, Narges; Najafi, Mohammad; Abolmaesumi, Purang; Rohling, Robert

    2011-01-01

    A gaze-deviated examination of the eye with a 2D ultrasound transducer is a common and informative ophthalmic test; however, the complex task of estimating the pose of the ultrasound images relative to the eye complicates 3D interpretation. To tackle this challenge, a novel system for 3D image reconstruction based on gaze tracking of the contralateral eye has been proposed. The gaze fixates on several target points and, for each fixation, the pose of the examined eye is inferred from the gaze tracking. A single-camera system has been developed for pose estimation, combined with subject-specific parameter identification. The ultrasound images are then transformed into the coordinate system of the examined eye to create a 3D volume. The accuracy of the proposed gaze tracking system and of the eye pose estimation has been validated in a set of experiments. Overall system errors, including pose estimation and calibration, are 3.12 mm and 4.68 degrees.
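    Transforming the ultrasound data into the coordinate system of the examined eye amounts to applying the estimated rigid pose, p' = R p + t. A minimal sketch, with a z-axis rotation standing in for the full estimated orientation (function names are illustrative):

```python
import math

def rot_z(theta):
    """Rotation about the z axis; stand-in for the estimated eye orientation."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def to_eye_frame(point, R, t):
    """Map a point from transducer coordinates into eye coordinates: p' = R p + t."""
    return [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
```

Applying this per-fixation transform to every image pixel is what lets the 2D slices be compounded into a single 3D volume in the eye's frame.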

  10. Dynamic electronic collimation method for 3-D catheter tracking on a scanning-beam digital x-ray system

    PubMed Central

    Dunkerley, David A. P.; Slagowski, Jordan M.; Funk, Tobias; Speidel, Michael A.

    2017-01-01

    Scanning-beam digital x-ray (SBDX) is an inverse-geometry x-ray fluoroscopy system capable of tomosynthesis-based 3-D catheter tracking. This work proposes a method of dose-reduced 3-D catheter tracking using dynamic electronic collimation (DEC) of the SBDX scanning x-ray tube, achieved through the selective deactivation of focal spot positions not needed for the catheter tracking task. The technique was retrospectively evaluated with SBDX detector data recorded during a phantom study. DEC imaging of a catheter tip at isocenter required 340 active focal spots per frame versus 4473 spots in full field-of-view (FOV) mode. The dose-area product (DAP) and peak skin dose (PSD) for DEC versus full-FOV scanning were calculated using an SBDX Monte Carlo simulation code. The average DAP was reduced to 7.8% of the full-FOV value, consistent with the relative number of active focal spots (7.6%). For image sequences with a moving catheter, PSD was 33.6% to 34.8% of the full-FOV value. The root-mean-squared deviation between DEC-based 3-D tracking coordinates and full-FOV 3-D tracking coordinates was less than 0.1 mm. The 3-D distance between the tracked tip and the sheath centerline averaged 0.75 mm. DEC is a feasible method for dose reduction during SBDX 3-D catheter tracking. PMID:28439521
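    The proportionality between dose and active focal spots can be checked directly from the figures in the abstract:

```python
# Figures from the abstract: 340 of 4473 focal spots remain active
# when tracking a catheter tip at isocenter.
active_spots, full_fov_spots = 340, 4473
spot_fraction = active_spots / full_fov_spots
print(f"active focal-spot fraction: {spot_fraction:.1%}")  # 7.6%
# The measured dose-area product fell to 7.8% of the full-FOV value,
# i.e. roughly in proportion to the number of active spots.
```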

  11. A low cost real-time motion tracking approach using webcam technology.

    PubMed

    Krishnan, Chandramouli; Washabaugh, Edward P; Seetharaman, Yogesh

    2015-02-05

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject's limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. A low cost real-time motion tracking approach using webcam technology

    PubMed Central

    Krishnan, Chandramouli; Washabaugh, Edward P.; Seetharaman, Yogesh

    2014-01-01

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject’s limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. PMID:25555306

  13. Ground-based real-time tracking and traverse recovery of China's first lunar rover

    NASA Astrophysics Data System (ADS)

    Zhou, Huan; Li, Haitao; Xu, Dezhen; Dong, Guangliang

    2016-02-01

    The Chang'E-3 unmanned lunar exploration mission forms an important stage in China's Lunar Exploration Program. China's first lunar rover "Yutu" is a sub-probe of the Chang'E-3 mission. Its main science objectives cover the investigations of the lunar soil and crust structure, explorations of mineral resources, and analyses of matter compositions. Some of these tasks require accurate real-time and continuous position tracking of the rover. To achieve these goals with the scale-limited Chinese observation network, this study proposed a ground-based real-time very long baseline interferometry phase referencing tracking method. We choose the Chang'E-3 lander as the phase reference source, and the accurate location of the rover is updated every 10 s using its radio-image sequences with the help of a priori information. The detailed movements of the Yutu rover have been captured with a sensitivity of several centimeters, and its traverse across the lunar surface during the first few days after its separation from the Chang'E-3 lander has been recovered. Comparisons and analysis show that the position tracking accuracy reaches a 1-m level.

  14. Dense-HOG-based drift-reduced 3D face tracking for infant pain monitoring

    NASA Astrophysics Data System (ADS)

    Saeijs, Ronald W. J. J.; Tjon A Ten, Walther E.; de With, Peter H. N.

    2017-03-01

    This paper presents a new algorithm for 3D face tracking intended for clinical infant pain monitoring. The algorithm uses a cylinder head model and 3D head pose recovery by alignment of dynamically extracted templates based on dense-HOG features. The algorithm includes extensions for drift reduction, using re-registration in combination with multi-pose state estimation by means of a square-root unscented Kalman filter. The paper reports experimental results on videos of moving infants in hospital who are relaxed or in pain. Results show good tracking behavior for poses up to 50 degrees from upright-frontal. In terms of eye location error relative to inter-ocular distance, the mean tracking error is below 9%.

  15. Real-time WAMI streaming target tracking in fog

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Blasch, Erik; Chen, Ning; Deng, Anna; Ling, Haibin; Chen, Genshe

    2016-05-01

    Real-time information fusion based on WAMI (Wide-Area Motion Imagery), FMV (Full Motion Video), and text data is highly desired for many mission-critical emergency or security applications. Cloud computing has been considered promising for achieving big-data integration from multi-modal sources. In many mission-critical tasks, however, powerful Cloud technology cannot satisfy the tight latency tolerance, as the servers are allocated far from the sensing platform; indeed, there may be no guaranteed connection in emergency situations. Therefore, data processing, information fusion, and decision making are required to be executed on-site (i.e., near the data collection). Fog Computing, a recently proposed extension and complement to Cloud Computing, enables computing on-site without outsourcing jobs to a remote Cloud. In this work, we investigated the feasibility of processing streaming WAMI in the Fog for real-time, online, uninterrupted target tracking. Using a single-target tracking algorithm, we studied the performance of a Fog Computing prototype. The experimental results are very encouraging and validate the effectiveness of our Fog approach in achieving real-time frame rates.

  16. Early results of MitraClip system implantation by real-time three-dimensional speckle-tracking left ventricle analysis.

    PubMed

    Scandura, Salvatore; Dipasqua, Fabio; Gargiulo, Giuseppe; Capodanno, Davide; Caggegi, Anna; Grasso, Carmelo; Mangiafico, Sarah; Pistritto, Anna Maria; Immè, Sebastiano; Chiarandà, Marta; Ministeri, Margherita; Ronsivalle, Giuseppe; Cannata, Stefano; Arcidiacono, Antonio Andrea; Capranzano, Piera; Tamburino, Corrado

    2016-11-01

    To appraise the early effect of percutaneous mitral valve repair with the MitraClip system on myocardial function using real-time three-dimensional speckle-tracking echocardiography (3D-STE). Consecutive patients with moderate-to-severe or severe mitral regurgitation undergoing mitral valve repair with the MitraClip system were prospectively evaluated during the peri-procedural work-up and follow-up. Left ventricular deformation was evaluated by two-dimensional and 3D speckle-tracking analysis. 3D-STE acquisitions were processed to obtain real-time 3D global longitudinal strain and to appraise both volumetric and hemodynamic parameters (i.e. left ventricular end-diastolic volume, left ventricular end-systolic volume, left ventricular ejection fraction, cardiac output, and stroke volume). In all, 30 patients were included. At 1-month follow-up, 3D-STE analysis revealed no changes in left ventricular end-diastolic volume (162.6 ± 73.7 ml at baseline vs. 159.8 ± 64.5 ml at 1-month follow-up; P = 0.63) and a downward trend in left ventricular end-systolic volume (104.7 ± 52.0 vs. 100.1 ± 50.4 ml, respectively; P = 0.06). Left ventricular ejection fraction did not significantly increase (38.1 ± 11.3% at baseline vs. 39.4 ± 11.0% at 1-month follow-up; P = 0.20). No significant changes were reported in cardiac output (4.3 ± 2.0 l/min at baseline vs. 4.0 ± 1.5 l/min at follow-up; P = 0.377) or in stroke volume (59.5 ± 25.5 ml at baseline vs. 59.9 ± 20.7 ml at follow-up; P = 0.867). On the contrary, left ventricular deformation capability significantly improved, with the real-time 3D global longitudinal strain value changing from -9.8 ± 4.1% at baseline to -11.0 ± 4.4% at follow-up (P = 0.018). Accurately assessing myocardial function with 3D-STE, this study found negligible early changes in left ventricular size but a positive early effect on left ventricular deformation.

  17. Clinical feasibility and validation of 3D principal strain analysis from cine MRI: comparison to 2D strain by MRI and 3D speckle tracking echocardiography.

    PubMed

    Satriano, Alessandro; Heydari, Bobak; Narous, Mariam; Exner, Derek V; Mikami, Yoko; Attwood, Monica M; Tyberg, John V; Lydell, Carmen P; Howarth, Andrew G; Fine, Nowell M; White, James A

    2017-12-01

    Two-dimensional (2D) strain analysis is constrained by geometry-dependent reference directions of deformation (i.e. radial, circumferential, and longitudinal) following the assumption of cylindrical chamber architecture. Three-dimensional (3D) principal strain analysis may overcome such limitations by referencing intrinsic (i.e. principal) directions of deformation. This study aimed to demonstrate clinical feasibility of 3D principal strain analysis from routine 2D cine MRI with validation to strain from 2D tagged cine analysis and 3D speckle tracking echocardiography. Thirty-one patients undergoing cardiac MRI were studied. 3D strain was measured from routine, multi-planar 2D cine SSFP images using custom software designed to apply 4D deformation fields to 3D cardiac models to derive principal strain. Comparisons of strain estimates versus those by 2D tagged cine, 2D non-tagged cine (feature tracking), and 3D speckle tracking echocardiography (STE) were performed. Mean age was 51 ± 14 years (36% female). Mean LV ejection fraction was 66 ± 10% (range 37-80%). 3D principal strain analysis was feasible in all subjects and showed high inter- and intra-observer reproducibility (ICC range 0.83-0.97 and 0.83-0.98, respectively-p < 0.001 for all directions). Strong correlations of minimum and maximum principal strain were respectively observed versus the following: 3D STE estimates of longitudinal (r = 0.81 and r = -0.64), circumferential (r = 0.76 and r = -0.58) and radial (r = -0.80 and r = 0.63) strain (p < 0.001 for all); 2D tagged cine estimates of longitudinal (r = 0.81 and r = -0.81), circumferential (r = 0.87 and r = -0.85), and radial (r = -0.76 and r = 0.81) strain (p < 0.0001 for all); and 2D cine (feature tracking) estimates of longitudinal (r = 0.85 and -0.83), circumferential (r = 0.88 and r = -0.87), and radial strain (r = -0.79 and r = 0.84, p < 0.0001 for all). 3D principal strain analysis from routine cine MRI is therefore clinically feasible, showing strong agreement with established 2D and 3D strain techniques.

  18. In Situ Electrochemical Sensing and Real-Time Monitoring Live Cells Based on Freestanding Nanohybrid Paper Electrode Assembled from 3D Functionalized Graphene Framework.

    PubMed

    Zhang, Yan; Xiao, Jian; Lv, Qiying; Wang, Lu; Dong, Xulin; Asif, Muhammad; Ren, Jinghua; He, Wenshan; Sun, Yimin; Xiao, Fei; Wang, Shuai

    2017-11-08

    In this work, we develop a new type of freestanding nanohybrid paper electrode assembled from a 3D ionic liquid (IL)-functionalized graphene framework (GF) decorated with gold nanoflowers (AuNFs), and explore its practical application in in situ electrochemical sensing of live breast cell samples by real-time tracking of the biomarker H2O2 released from cells. The AuNF-modified IL-functionalized GF (AuNFs/IL-GF) was synthesized via a facile and efficient dopamine-assisted one-pot self-assembly strategy. The as-obtained nanohybrid assembly exhibits a typical 3D hierarchical porous structure, in which the highly active electrocatalyst AuNFs are well dispersed on the IL-GF scaffold. The grafting of hydrophilic IL molecules (i.e., 1-butyl-3-methylimidazolium tetrafluoroborate, BMIMBF4) onto the graphene nanosheets not only prevents their agglomeration and disordered stacking during self-assembly but also endows the integrated IL-GF monolithic material with unique hydrophilic properties, enabling it to be readily dispersed in aqueous solution and processed into a freestanding paper-like material. Because of the unique structural properties and the combined advantages of the different components in the AuNFs/IL-GF composite, the resulting nanohybrid paper electrode exhibits good nonenzymatic electrochemical sensing performance toward H2O2. When used for real-time tracking of H2O2 secreted from different breast cells attached to the paper electrode, with or without radiotherapy treatment, the proposed electrochemical sensor based on the freestanding AuNFs/IL-GF paper electrode can distinguish the normal breast cell line HBL-100 from the breast cancer cell lines MDA-MB-231 and MCF-7, and can assess the effects of radiotherapy on different breast cancer cells, opening a new horizon in real-time monitoring of cancer cells by electrochemical sensing platforms.

  19. Compact 3D Camera for Shake-the-Box Particle Tracking

    NASA Astrophysics Data System (ADS)

    Hesseling, Christina; Michaelis, Dirk; Schneiders, Jan

    2017-11-01

    Time-resolved 3D particle tracking usually requires the time-consuming optical setup and calibration of three to four cameras. Here, a compact four-camera housing has been developed, and the performance of the system using Shake-the-Box processing (Schanz et al. 2016) is characterized. It is shown that the stereo base is large enough for sensible 3D velocity measurements. Results from successful experiments in water flows using LED illumination are presented. For large-scale wind tunnel measurements, an even more compact version of the system is mounted on a robotic arm. Once calibrated for a specific measurement volume, the system needs no recalibration even as it moves around. Co-axial illumination is provided through an optical fiber in the middle of the housing, illuminating the full measurement volume from one viewing direction. Helium-filled soap bubbles are used to ensure sufficient particle image intensity. This way, the measurement probe can be moved around complex 3D objects. By automatically scanning and stitching recorded particle tracks, the detailed time-averaged flow field of a volume measuring cubic meters in size is recorded and processed. Results from an experiment at TU Delft on the flow field around a cyclist are shown.

  20. 3D Tracking of Diatom Motion in Turbulent Flow

    NASA Astrophysics Data System (ADS)

    Variano, E. A.; Brandt, L.; Sardina, G.; Ardekani, M.; Pujara, N.; Ayers, S.; Du Clos, K.; Karp-Boss, L.; Jumars, P. A.

    2016-02-01

    We present laboratory measurements of single-celled and chain-forming diatom motion in a stirred turbulence tank. The overarching goal is to explore whether diatoms track flow with fidelity (passive tracers) or whether interactions with cell density and shape result in biased trajectories that alter settling velocities. Diatom trajectories are recorded in 3D using a stereoscopic, calibrated tracking tool. Turbulence is created in a novel stirred tank, designed to create motions that match those found in the ocean surface mixed layer at scales less than 10 cm. The data are analyzed for evidence of enhanced particle clustering, an indicator of turbulently altered settling rates.

  1. Free-breathing cardiac MR stress perfusion with real-time slice tracking.

    PubMed

    Basha, Tamer A; Roujol, Sébastien; Kissinger, Kraig V; Goddu, Beth; Berg, Sophie; Manning, Warren J; Nezafat, Reza

    2014-09-01

    To develop a free-breathing cardiac MR perfusion sequence with slice tracking for use after physical exercise. We propose to use a leading navigator, placed immediately before each 2D slice acquisition, for tracking the respiratory motion and updating the slice location in real-time. The proposed sequence was used to acquire CMR perfusion datasets in 12 healthy adult subjects and 8 patients. Images were compared with the conventional perfusion (i.e., without slice tracking) results from the same subjects. The location and geometry of the myocardium were quantitatively analyzed, and the perfusion signal curves were calculated from both sequences to show the efficacy of the proposed sequence. The proposed sequence was significantly better compared with the conventional perfusion sequence in terms of qualitative image scores. Changes in the myocardial location and geometry decreased by 50% in the slice tracking sequence. Furthermore, the proposed sequence had signal curves that are smoother and less noisy. The proposed sequence significantly reduces the effect of the respiratory motion on the image acquisition in both rest and stress perfusion scans. Copyright © 2013 Wiley Periodicals, Inc.
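    The leading-navigator update reduces to shifting the prescribed slice by a fraction of the measured respiratory displacement. A minimal sketch; the 0.6 default is a commonly quoted heart-to-diaphragm motion ratio in navigator-gated cardiac MR, not a value taken from this abstract:

```python
def tracked_slice_position(baseline_z, navigator_shift, tracking_factor=0.6):
    """Slice position after the leading-navigator update (all values in mm).

    baseline_z      -- slice position prescribed at the reference respiratory phase
    navigator_shift -- superior-inferior displacement measured by the navigator
    tracking_factor -- fraction of diaphragm motion applied to the slice; 0.6 is
                       a commonly quoted heart-to-diaphragm ratio, not a value
                       from this abstract
    """
    return baseline_z + tracking_factor * navigator_shift
```

Because the navigator is played out immediately before each 2D slice acquisition, the correction reflects the respiratory position at that instant rather than at the start of the heartbeat.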

  2. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  3. A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots

    PubMed Central

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-01-01

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331
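    The second-order motion model used to predict the target state, and hence where to draw candidate patches for the CT tracker, can be sketched as a constant-acceleration prediction (function names are illustrative, and a real implementation would do this per image axis inside the Kalman filter):

```python
def predict_state(x, v, a, dt):
    """Constant-acceleration (second-order) motion model prediction."""
    return x + v * dt + 0.5 * a * dt * dt, v + a * dt, a

def search_window(center, radius):
    """1D candidate-patch interval around the predicted target position."""
    return center - radius, center + radius
```

Centering the sampling window on the predicted position, rather than the last detection, is what lets the tracker cope with high-speed target motion.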

  4. A Kinect-based real-time compressive tracking prototype system for amphibious spherical robots.

    PubMed

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-04-08

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.

  5. Optical eye tracking system for real-time noninvasive tumor localization in external beam radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Via, Riccardo, E-mail: riccardo.via@polimi.it; Fassi, Aurora; Fattori, Giovanni

    Purpose: External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed for capturing the tridimensional (3D) ocular motion and provides an on-line estimation of intraocular lesions position based on a priori knowledge coming from volumetric imaging. Methods: Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Results: Measurements on phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring up to 50 Hz. Conclusions: A noninvasive ETS prototype was designed to perform real-time target localization and eye movement
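
    The geometric core of such an ETS can be illustrated with a small sketch: once the cornea and pupil centres are triangulated in 3D, the eye's optical axis is the unit vector from the cornea centre through the pupil centre, and accuracy can be reported as an angular error between gaze directions. This is a hedged simplification with synthetic coordinates, not the system's actual code:

```python
import math

def optical_axis(cornea, pupil):
    """Unit vector from the 3D cornea centre through the pupil centre."""
    d = [p - c for p, c in zip(pupil, cornea)]
    norm = math.sqrt(sum(x * x for x in d))
    return tuple(x / norm for x in d)

def angular_error_deg(axis_a, axis_b):
    """Angle between two gaze directions, in degrees."""
    dot = sum(a * b for a, b in zip(axis_a, axis_b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

axis = optical_axis((0.0, 0.0, 0.0), (0.0, 0.0, 4.2))   # mm; eye looking along +z
print(axis)                                             # (0.0, 0.0, 1.0)
print(angular_error_deg(axis, (0.0, 0.0, 1.0)))         # 0.0
```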

  6. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    PubMed

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level and that precede body movements, thus reflecting action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis, including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy, as well as in amputees. Despite this benefit, eye tracking is not widely used as a control interface for robotic systems in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye-tracking-based system enables the end-user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points, versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.
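
    The calibration idea (thousands of gaze/target pairs collected while the robot sweeps its path, then a regression from gaze features to end points) can be sketched with a per-axis least-squares fit. The perfectly linear data and the affine model are illustrative stand-ins for the full 3D calibration:

```python
def fit_affine(xs, ys):
    """Closed-form least squares for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

gaze = [0.0, 1.0, 2.0, 3.0]           # synthetic gaze feature values
target_x_cm = [5.0, 7.0, 9.0, 11.0]   # synthetic target positions, one axis
a, b = fit_affine(gaze, target_x_cm)
print(a, b)   # 2.0 5.0
```

With thousands of samples along the curve, the fit averages out per-sample gaze noise, which is the point of the dense automated procedure.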

  7. An efficient sequential approach to tracking multiple objects through crowds for real-time intelligent CCTV systems.

    PubMed

    Li, Liyuan; Huang, Weimin; Gu, Irene Yu-Hua; Luo, Ruijiang; Tian, Qi

    2008-10-01

    Efficiency and robustness are the two most important issues for multiobject tracking algorithms in real-time intelligent video surveillance systems. We propose a novel 2.5-D approach to real-time multiobject tracking in crowds, which is formulated as a maximum a posteriori estimation problem and is approximated through an assignment step and a location step. Observing that the occluding object is usually less affected by the occluded objects, sequential solutions for the assignment and the location are derived. A novel dominant color histogram (DCH) is proposed as an efficient object model. The DCH can be regarded as a generalized color histogram, where dominant colors are selected based on a given distance measure. Compared with conventional color histograms, the DCH requires only a few color components (31 on average). Furthermore, our theoretical analysis and evaluation on real data have shown that DCHs are robust to illumination changes. Using the DCH, efficient implementations of sequential solutions for the assignment and location steps are proposed. The assignment step includes the estimation of the depth order for the objects in a dispersing group, one-by-one assignment, and feature exclusion from the group representation. The location step includes the depth-order estimation for the objects in a new group, the two-phase mean-shift location, and the exclusion of tracked objects from the new position in the group. Multiobject tracking results and evaluation from public data sets are presented. Experiments on image sequences captured from crowded public environments have shown good tracking results, where about 90% of the objects have been successfully tracked with the correct identification numbers by the proposed method. Our results and evaluation have indicated that the method is efficient and robust for tracking multiple objects (≥ 3) in complex occlusion for real-world surveillance scenarios.
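
    The dominant-colour selection can be illustrated as a greedy merge of pixel colours under a distance measure, so the final model keeps only a few representative components instead of a full colour histogram. The threshold and pixel values below are illustrative choices, not the paper's parameters:

```python
def dominant_color_histogram(pixels, threshold=60.0):
    """Greedily merge colours within `threshold` (Euclidean RGB distance)
    into dominant bins; return bins sorted by pixel count."""
    bins = []   # each entry: [representative colour, count]
    for p in pixels:
        for b in bins:
            if sum((x - y) ** 2 for x, y in zip(p, b[0])) ** 0.5 <= threshold:
                b[1] += 1
                break
        else:
            bins.append([p, 1])
    bins.sort(key=lambda b: -b[1])
    return bins

pixels = [(250, 10, 10), (245, 12, 8), (10, 10, 240), (12, 9, 243), (248, 11, 9)]
hist = dominant_color_histogram(pixels)
print(hist)   # reddish bin (count 3) first, bluish bin (count 2) second
```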

  8. 3D Tracking of individual growth factor receptors on polarized cells

    NASA Astrophysics Data System (ADS)

    Werner, James; Stich, Dominik; Cleyrat, Cedric; Phipps, Mary; Wadinger-Ness, Angela; Wilson, Bridget

    We have been developing methods for following the 3D motion of selected biomolecular species throughout mammalian cells. Our approach exploits a custom-designed confocal microscope that uses a unique spatial filter geometry and active feedback 200 times per second to follow fast 3D motion. By exploiting new non-blinking quantum dots as fluorescence labels, individual molecular trajectories can be observed for several minutes. We will also discuss recent instrument upgrades, including the ability to perform whole-cell spinning disk fluorescence microscopy simultaneously with 3D molecular tracking experiments. These instrument upgrades were used to quantify the 3D heterogeneous transport of individual growth factor receptors (EGFR) on live human renal cortical epithelial cells.

  9. Real-time tracking of respiratory-induced tumor motion by dose-rate regulation

    NASA Astrophysics Data System (ADS)

    Han-Oh, Yeonju Sarah

    We have developed a novel real-time tumor-tracking technology, called Dose-Rate-Regulated Tracking (DRRT), to compensate for tumor motion caused by breathing. Unlike previously proposed tumor-tracking methods, this method uses a preprogrammed dynamic multileaf collimator (MLC) sequence in combination with real-time dose-rate control. This scheme circumvents the technical challenge in MLC-based tumor tracking, namely controlling the MLC motion in real time based on real-time detected tumor motion. The preprogrammed MLC sequence describes the movement of the tumor as a function of breathing phase, amplitude, or tidal volume. The irregularity of tumor motion during treatment is handled by real-time regulation of the dose rate, which effectively speeds up or slows down the delivery of radiation as needed. This method is based on the fact that all of the parameters in dynamic radiation delivery, including MLC motion, are enslaved to the cumulative dose, which, in turn, can be accelerated or decelerated by varying the dose rate. Because commercially available MLC systems do not allow the MLC delivery sequence to be modified in real time based on the patient's breathing signal, previously proposed tumor-tracking techniques using an MLC cannot be readily implemented in the clinic today. By using a preprogrammed MLC sequence to handle the required motion, the task for real-time control is greatly simplified. We have developed and tested the preprogrammed MLC sequence and the dose-rate regulation algorithm using lung-cancer patients' breathing signals. It has been shown that DRRT can track the tumor with an accuracy of better than 2 mm, given a DRRT system latency of less than 0.35 s. We have also evaluated the usefulness of guided breathing for DRRT. Since DRRT by its very nature can compensate for breathing-period changes, guided breathing was shown to be unnecessary for real-time tracking when using DRRT. Finally, DRRT uses the existing dose-rate control
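
    The dose-rate-regulation principle (speed delivery up when it lags the breathing signal, slow it down when it leads) can be sketched as a simple proportional controller. The gain, rate limits, and phase values below are illustrative assumptions, not clinical parameters:

```python
def regulate_dose_rate(delivered_phase, measured_phase, nominal_rate,
                       gain=2.0, min_rate=0.0, max_rate=2.0):
    """Scale the dose rate so the preprogrammed MLC sequence, which is
    enslaved to cumulative dose, keeps pace with the breathing signal.
    A positive lag means delivery is behind the patient."""
    lag = measured_phase - delivered_phase
    rate = nominal_rate * (1.0 + gain * lag)
    return max(min_rate, min(max_rate, rate))

print(regulate_dose_rate(0.25, 0.50, 1.0))   # delivery behind -> 1.5 (speed up)
print(regulate_dose_rate(0.50, 0.25, 1.0))   # delivery ahead  -> 0.5 (slow down)
```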

  10. Online dose reconstruction for tracked volumetric arc therapy: Real-time implementation and offline quality assurance for prostate SBRT.

    PubMed

    Kamerling, Cornelis Ph; Fast, Martin F; Ziegenhein, Peter; Menten, Martin J; Nill, Simeon; Oelfke, Uwe

    2017-11-01

    Firstly, this study provides a real-time implementation of online dose reconstruction for tracked volumetric arc therapy (VMAT). Secondly, it describes a novel offline quality assurance tool based on commercial dose calculation algorithms. Online dose reconstruction for VMAT is a computationally challenging task in terms of computer memory usage and calculation speed. To potentially reduce the amount of memory used, we analyzed the impact of beam angle sampling for dose calculation on the accuracy of the dose distribution. To establish the performance of the method, we planned two single-arc VMAT prostate stereotactic body radiation therapy cases for delivery with dynamic MLC tracking. For quality assurance of our online dose reconstruction method we have also developed a stand-alone offline dose reconstruction tool, which utilizes the RayStation treatment planning system to calculate dose. For the online reconstructed dose distributions of the tracked deliveries, we established strong resemblance for 72 and 36 equidistant co-planar beam samples, with less than 1.2% deviation for the assessed dose-volume indicators (clinical target volume D98 and D2, and rectum D2). We achieved average runtimes of 28-31 ms per reported MLC aperture for both dose computation and accumulation, meeting our real-time requirement. To cross-validate the offline tool, we compared the planned dose to the offline reconstructed dose for static deliveries and found excellent agreement (3%/3 mm global gamma passing rates of 99.8%-100%). Being able to reconstruct dose during delivery enables online quality assurance and online replanning strategies for VMAT. The offline quality assurance tool provides the means to validate novel online dose reconstruction applications using a commercial dose calculation engine. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  11. 3-D model-based tracking for UAV indoor localization.

    PubMed

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of standard model-based approaches lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypotheses tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem, where the GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.
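
    The coupling between the hypothesis tracker and the particle filter can be sketched as injecting a fraction of particles at the candidate poses, so the particle set concentrates near the likely modes of the pose distribution. The 1D "poses", injection fraction, and noise level are illustrative choices, not the paper's method in detail:

```python
import random

def inject_candidates(particles, candidates, fraction=0.2, noise=0.05, rng=random):
    """Replace a fraction of the particle set with samples drawn near the
    candidate poses supplied by the multiple-hypotheses edge tracker."""
    n_inject = int(len(particles) * fraction)
    kept = particles[:len(particles) - n_inject]
    injected = [rng.choice(candidates) + rng.gauss(0.0, noise)
                for _ in range(n_inject)]
    return kept + injected

random.seed(0)
particles = [random.uniform(-1.0, 1.0) for _ in range(100)]   # 1D "poses"
new_particles = inject_candidates(particles, candidates=[0.8, -0.3])
print(len(new_particles))   # 100: 80 propagated + 20 injected
```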

  12. SU-G-BRA-17: Tracking Multiple Targets with Independent Motion in Real-Time Using a Multi-Leaf Collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ge, Y; Keall, P; Poulsen, P

    Purpose: Multiple targets with large intrafraction independent motion are often involved in advanced prostate, lung, abdominal, and head and neck cancer radiotherapy. The current standard of care treats these with the originally planned fields, jeopardizing treatment outcomes. A real-time multi-leaf collimator (MLC) tracking method has been developed to address this problem for the first time. This study evaluates the geometric uncertainty of the multi-target tracking method. Methods: Four treatment scenarios are simulated based on a prostate IMAT plan to treat a moving prostate target and a static pelvic node target: 1) real-time multi-target MLC tracking; 2) real-time prostate-only MLC tracking; 3) correcting for prostate interfraction motion at setup only; and 4) no motion correction. The geometric uncertainty of the treatment is assessed by the sum of the erroneously underexposed target area and the overexposed healthy tissue area for each individual target. Two patient-measured prostate trajectories with average motion magnitudes of 2 and 5 mm are used for the simulations. Results: Real-time multi-target tracking accumulates the least uncertainty overall. As expected, it covers the static nodes as well as the no-motion-correction treatment does, and covers the moving prostate as well as real-time prostate-only tracking does. Multi-target tracking reduces uncertainty for the static nodal target by >90% compared to real-time prostate-only tracking or interfraction motion correction. For the prostate target, depending on the motion trajectory, which affects the uncertainty due to leaf fitting, multi-target tracking may or may not perform better than correcting for interfraction prostate motion by shifting the patient at setup, but it reduces uncertainty by ∼50% compared to no motion correction. Conclusion: The developed real-time multi-target MLC tracking can adapt to independently moving targets better than other available treatment adaptations. This will enable
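
    The uncertainty metric described above (underexposed target area plus overexposed healthy-tissue area) can be sketched with regions modelled as sets of pixel coordinates; the shapes and the 2-pixel aperture offset are illustrative:

```python
def geometric_uncertainty(target, aperture):
    """Underexposed target area plus overexposed healthy-tissue area."""
    underexposed = len(target - aperture)   # target pixels the beam misses
    overexposed = len(aperture - target)    # beam pixels outside the target
    return underexposed + overexposed

target = {(x, y) for x in range(10) for y in range(10)}        # 10x10 target
aperture = {(x + 2, y) for x in range(10) for y in range(10)}  # aperture lags by 2 px
print(geometric_uncertainty(target, aperture))   # 20 underexposed + 20 overexposed = 40
```

A perfectly tracking aperture scores zero, and the score grows with any residual mismatch, which is what lets the four scenarios be ranked.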

  13. Close to real-time robust pedestrian detection and tracking

    NASA Astrophysics Data System (ADS)

    Lipetski, Y.; Loibner, G.; Sidla, O.

    2015-03-01

    Fully automated video-based pedestrian detection and tracking is a challenging task with many practical and important applications. We present our work aimed at robust and simultaneously close to real-time tracking of pedestrians. The presented approach is robust to occlusions and lighting conditions and is general enough to be applied to arbitrary video data. The core tracking approach is built upon the tracking-by-detection principle. We describe our cascaded HOG detector with successive CNN verification in detail. For the tracking and re-identification task, we performed an extensive analysis of appearance-based features as well as their combinations. The tracker was tested on many hours of video data for different scenarios; the results are presented and discussed.

  14. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion

    PubMed Central

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-01-01

    In mobile augmented/virtual reality (AR/VR), real-time 6-degree-of-freedom (DoF) motion tracking is essential for registration between virtual scenes and the real world. However, due to the limited computational capacity of today's mobile terminals, the latency between consecutive arriving poses degrades the user experience in mobile AR/VR. Thus, a visual-inertial real-time motion tracking method for mobile AR/VR is proposed in this paper. By means of the high-frequency, passive outputs of the inertial sensor, real-time delivery of poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling real-time 6-DoF motion tracking that balances jitter and latency. Moreover, the robustness of traditional visual-only motion tracking is enhanced, giving rise to better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that it is capable of providing smooth and robust 6-DoF motion tracking for mobile AR/VR in real time. PMID:28475145

  15. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    PubMed

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-degree-of-freedom (DoF) motion tracking is essential for registration between virtual scenes and the real world. However, due to the limited computational capacity of today's mobile terminals, the latency between consecutive arriving poses degrades the user experience in mobile AR/VR. Thus, a visual-inertial real-time motion tracking method for mobile AR/VR is proposed in this paper. By means of the high-frequency, passive outputs of the inertial sensor, real-time delivery of poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling real-time 6-DoF motion tracking that balances jitter and latency. Moreover, the robustness of traditional visual-only motion tracking is enhanced, giving rise to better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that it is capable of providing smooth and robust 6-DoF motion tracking for mobile AR/VR in real time.
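
    The jitter/latency trade-off described in both records can be sketched as an adaptive complementary filter: inertial deltas give low-latency predictions, occasional visual poses correct drift, and the blending gain adapts to the motion magnitude. The 1D pose, gains, and speed threshold are illustrative assumptions, not the paper's filter:

```python
def fuse(pose, inertial_delta, visual_pose, speed, k_slow=0.9, k_fast=0.3):
    """One step of an adaptive visual-inertial complementary filter (1D)."""
    predicted = pose + inertial_delta           # low-latency dead reckoning
    if visual_pose is None:                     # no camera update this frame
        return predicted
    k = k_fast if speed > 1.0 else k_slow       # fast motion: trust inertia more
    return k * visual_pose + (1.0 - k) * predicted

pose = 10.0
pose = fuse(pose, 0.5, None, speed=0.2)   # inertial-only frame -> 10.5
pose = fuse(pose, 0.5, 11.2, speed=0.2)   # slow motion: pulled toward vision
print(pose)                               # close to the visual pose (about 11.18)
```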

  16. 2D array transducers for real-time 3D ultrasound guidance of interventional devices

    NASA Astrophysics Data System (ADS)

    Light, Edward D.; Smith, Stephen W.

    2009-02-01

    We describe catheter ring arrays for real-time 3D ultrasound guidance of devices such as vascular grafts, heart valves and vena cava filters. We have constructed several prototypes operating at 5 MHz and consisting of 54 elements using the W.L. Gore & Associates, Inc. micro-miniature ribbon cables. We have recently constructed a new transducer using a braided wiring technology from Precision Interconnect. This transducer consists of 54 elements at 4.8 MHz with pitch of 0.20 mm and typical -6 dB bandwidth of 22%. In all cases, the transducer and wiring assembly were integrated with an 11 French catheter of a Cook Medical deployment device for vena cava filters. Preliminary in vivo and in vitro testing is ongoing including simultaneous 3D ultrasound and x-ray fluoroscopy.

  17. Extracting, Tracking, and Visualizing Magnetic Flux Vortices in 3D Complex-Valued Superconductor Simulation Data.

    PubMed

    Guo, Hanqi; Phillips, Carolyn L; Peterka, Tom; Karpeyev, Dmitry; Glatz, Andreas

    2016-01-01

    We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible.
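
    The defining property that such vortex extraction relies on is standard: around any closed loop threaded by a vortex line, the phase of the complex order parameter winds by a multiple of 2π. A per-plaquette winding test can be sketched in a few lines; the loop values below are synthetic:

```python
import cmath
import math

def winding_number(corners):
    """Net phase winding (in units of 2*pi) around a closed loop of complex
    order-parameter samples; +/-1 indicates a vortex line threading the loop."""
    total = 0.0
    for a, b in zip(corners, corners[1:] + corners[:1]):
        d = cmath.phase(b) - cmath.phase(a)
        d = (d + math.pi) % (2.0 * math.pi) - math.pi   # wrap into [-pi, pi)
        total += d
    return round(total / (2.0 * math.pi))

vortex = [1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j]   # phase rotates by 2*pi around the loop
print(winding_number(vortex))                 # 1
print(winding_number([1 + 0j] * 4))           # 0
```

Connecting the flagged plaquettes across the mesh, and across time steps, is what yields the 1D vortex-line graphs the record describes.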

  18. On the holographic 3D tracking of in vitro cells characterized by a highly-morphological change.

    PubMed

    Memmolo, Pasquale; Iannone, Maria; Ventre, Maurizio; Netti, Paolo Antonio; Finizio, Andrea; Paturzo, Melania; Ferraro, Pietro

    2012-12-17

    Digital Holography (DH) in microscopic configuration is a powerful tool for imaging micro-objects contained in a three-dimensional (3D) volume with a single-shot image acquisition. Many studies report on the ability of DH to track particles, microorganisms and cells in 3D. However, very few investigations have been performed with objects that severely change their morphology during the observation period. Here we study DH as a tool for 3D tracking of an osteosarcoma cell line for which extensive changes in cell morphology are associated with cell motion. Because of the large, unpredictable morphological changes, retrieving the cell's position in 3D can become a complicated issue. We investigate and discuss in this paper how the three-dimensional position can be affected by the continuous change of the cells. Moreover, we propose and test several strategies to address these problems and compare them with other approaches. Finally, results on the 3D tracking, together with comments, are reported and illustrated.

  19. A real-time 3D end-to-end augmented reality system (and its representation transformations)

    NASA Astrophysics Data System (ADS)

    Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois

    2016-09-01

    The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.

  20. 3D cardiac μ tissues within a microfluidic device with real-time contractile stress readout

    PubMed Central

    Aung, Aereas; Bhullar, Ivneet Singh; Theprungsirikul, Jomkuan; Davey, Shruti Krishna; Lim, Han Liang; Chiu, Yu-Jui; Ma, Xuanyi; Dewan, Sukriti; Lo, Yu-Hwa; McCulloch, Andrew; Varghese, Shyni

    2015-01-01

    We present the development of three-dimensional (3D) cardiac microtissues within a microfluidic device with the ability to quantify real-time contractile stress measurements in situ. Using a 3D patterning technology that allows for the precise spatial distribution of cells within the device, we created an array of 3D cardiac microtissues from neonatal mouse cardiomyocytes. We integrated the 3D micropatterning technology with microfluidics to achieve perfused cell-laden structures. The cells were encapsulated within a degradable gelatin methacrylate hydrogel, which was sandwiched between two polyacrylamide hydrogels. The polyacrylamide hydrogels were used as “stress sensors” to acquire the contractile stresses generated by the beating cardiac cells. The cardiac-specific response of the engineered 3D system was examined by exposing it to epinephrine, an adrenergic neurotransmitter known to increase the magnitude and frequency of cardiac contractions. In response to exogenous epinephrine the engineered cardiac tissues exhibited an increased beating frequency and stress magnitude. Such cost-effective and easy-to-adapt 3D cardiac systems with real-time functional readout could be an attractive technological platform for drug discovery and development. PMID:26588203

  1. High-speed real-time 3-D coordinates measurement based on fringe projection profilometry considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, Shi Ling

    2014-10-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology have furthered its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. However, the camera lens is never perfect, and lens distortion does influence the accuracy of the measurement result, which is often overlooked in existing real-time 3-D shape measurement systems. To this end, we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes the camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. The out-of-plane height is obtained first, and the two corresponding in-plane coordinates are then acquired on the basis of the solved height. In addition, a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude and the accuracy of the out-of-plane coordinate is tripled once the distortions are eliminated. Moreover, owing to the generated LUTs, a 3-D reconstruction speed of 92.34 frames per second can be achieved.
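
    The pre-computed pixel-mapping idea is easy to sketch: evaluate the distortion model once per pixel offline, then correct frames at run time with table lookups only. A 1D radial model with an exaggerated, made-up coefficient stands in for the calibrated camera model:

```python
def build_lut(width, k1=-0.01, centre=None):
    """Offline step: for each corrected pixel, the distorted source pixel.
    k1 is an exaggerated, illustrative radial distortion coefficient."""
    c = centre if centre is not None else width / 2.0
    lut = []
    for x in range(width):
        r = x - c
        x_dist = c + r * (1.0 + k1 * r * r)   # simple 1D radial model
        lut.append(min(width - 1, max(0, int(round(x_dist)))))
    return lut

def correct(row, lut):
    """Run-time step: assemble the corrected row by lookup only."""
    return [row[i] for i in lut]

lut = build_lut(8)
print(lut)                                               # [1, 1, 2, 3, 4, 5, 6, 7]
print(correct([10, 20, 30, 40, 50, 60, 70, 80], lut))    # [20, 20, 30, 40, 50, 60, 70, 80]
```

Because the model is evaluated only during LUT construction, per-frame cost is independent of the distortion model's complexity, which is what makes the real-time rates feasible.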

  2. MO-FG-CAMPUS-JeP3-04: Feasibility Study of Real-Time Ultrasound Monitoring for Abdominal Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Lin; Kien Ng, Sook; Zhang, Ying

    Purpose: Ultrasound is ideal for real-time monitoring in radiotherapy owing to its high soft-tissue contrast, lack of ionizing radiation, portability, and cost effectiveness. Few studies have investigated the clinical application of real-time ultrasound monitoring for abdominal stereotactic body radiation therapy (SBRT). This study aims to demonstrate the feasibility of real-time monitoring of 3D target motion using 4D ultrasound. Methods: An ultrasound probe holding system was designed to allow the clinician to freely move and lock the ultrasound probe. For the phantom study, an abdominal ultrasound phantom was secured on a 2D programmable respiratory motion stage. One side of the stage was elevated relative to the other to generate 3D motion. The motion stage made periodic breath-hold movements. Phantom movement tracked by an infrared camera was considered the ground truth. For the volunteer study, three healthy subjects underwent the same setup used for abdominal SBRT with active breathing control (ABC). 4D ultrasound B-mode images were acquired for both phantom and volunteers for real-time monitoring. Ten breath-hold cycles were monitored for each experiment. For the phantom, the target motion tracked by ultrasound was compared with the motion tracked by the infrared camera. For the healthy volunteers, the reproducibility of ABC breath-holds was evaluated. Results: The volunteer study showed that the ultrasound system fitted well into the clinical SBRT setup. The reproducibility over 10 breath-holds was less than 2 mm in all three directions for all three volunteers. For the phantom study, the motion between inspiration and expiration captured by the camera (ground truth) was 2.35±0.02 mm, 1.28±0.04 mm, and 8.85±0.03 mm in the LR, AP, and SI directions, respectively. The motion monitored by ultrasound was 2.21±0.07 mm, 1.32±0.12 mm, and 9.10±0.08 mm, respectively. The motion monitoring error in any direction was less than 0.5 mm. Conclusion: The volunteer study proved the clinical feasibility of real-time ultrasound monitoring for abdominal SBRT. The phantom and volunteer

  3. A biplanar X-ray approach for studying the 3D dynamics of human track formation.

    PubMed

    Hatala, Kevin G; Perry, David A; Gatesy, Stephen M

    2018-05-09

    Recent discoveries have made hominin tracks an increasingly prevalent component of the human fossil record, and these data have the capacity to inform long-standing debates regarding the biomechanics of hominin locomotion. However, there is currently no consensus on how to decipher biomechanical variables from hominin tracks. These debates can be linked to our generally limited understanding of the complex interactions between anatomy, motion, and substrate that give rise to track morphology. These interactions are difficult to study because direct visualization of the track formation process is impeded by foot and substrate opacity. To address these obstacles, we developed biplanar X-ray and computer animation methods, derived from X-ray Reconstruction of Moving Morphology (XROMM), to analyze the 3D dynamics of three human subjects' feet as they walked across four substrates (three deformable muds and rigid composite panel). By imaging and reconstructing 3D positions of external markers, we quantified the 3D dynamics at the foot-substrate interface. Foot shape, specifically heel and medial longitudinal arch deformation, was significantly affected by substrate rigidity. In deformable muds, we found that depths measured across tracks did not directly reflect the motions of the corresponding regions of the foot, and that track outlines were not perfectly representative of foot size. These results highlight the complex, dynamic nature of track formation, and the experimental methods presented here offer a promising avenue for developing and refining methods for accurately inferring foot anatomy and gait biomechanics from fossil hominin tracks. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Stat-tracks and mediotypes: powerful tools for modern ichnology based on 3D models

    PubMed Central

    Bennett, Matthew R.; Marty, Daniel; Budka, Marcin; Reynolds, Sally C.; Bakirov, Rashid

    2018-01-01

    Vertebrate tracks exhibit a wide distribution of morphological types. A single trackmaker may be associated with a range of tracks reflecting individual pedal anatomy and behavioural kinematics, mediated through substrate properties which may vary both in space and time. Accordingly, the same trackmaker can leave substantially different morphotypes, something which must be considered when creating ichnotaxa. In modern practice this is often captured by the collection of a series of 3D track models. We introduce two concepts to help integrate these 3D models into ichnological analysis procedures. The mediotype is based on the idea of using statistically generated three-dimensional track models (median or mean) of the type specimens to create a composite track to support the formal recognition of an ichnotype. A representative track (mean and/or median) is created from a set of individual reference tracks or from multiple examples from one or more trackways. In contrast, stat-tracks refer to other digitally generated tracks which may explore variance. For example, they are useful in understanding the preservation variability of a given track sample, identifying characteristic or unusual track features, or simply as a quantitative comparison tool. Both concepts assist in making ichnotaxonomical interpretations, and we argue that they should become part of the standard procedure when instituting new ichnotaxa. As three-dimensional models become a standard in publications on vertebrate ichnology, the mediotype and stat-track concepts have the potential to help guide a revolution in the study of vertebrate ichnology and ichnotaxonomy. PMID:29340246
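
    The mediotype computation itself is straightforward once the track models are co-registered on a common grid: take the cell-wise median (or mean) across the set of depth maps. The tiny synthetic grids below are illustrative:

```python
from statistics import median

def mediotype(grids):
    """Cell-wise median of equally sized, co-registered depth grids."""
    rows, cols = len(grids[0]), len(grids[0][0])
    return [[median(g[r][c] for g in grids) for c in range(cols)]
            for r in range(rows)]

tracks = [
    [[0.0, 1.0], [2.0, 3.0]],
    [[0.2, 1.1], [2.1, 9.0]],   # one outlier depth value (9.0)
    [[0.1, 0.9], [1.9, 3.1]],
]
print(mediotype(tracks))   # [[0.1, 1.0], [2.0, 3.1]]
```

Using the median rather than the mean is what makes the composite robust to single aberrant tracks, which is the argument for the mediotype as a type-supporting model.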

  5. Needle placement for piriformis injection using 3-D imaging.

    PubMed

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and accounts for an estimated 6%-8% of patients referred for the treatment of back and leg pain. Treatment of piriformis syndrome under fluoroscopy, computed tomography (CT), electromyography (EMG), or ultrasound (US) guidance has become standard practice, and has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. After the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By contrast, a recent study reported that fluoroscopically guided injections achieved only 30% accuracy, roughly one-third that of ultrasound-guided injections. This novel technique exhibited an accurate needle guidance injection precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure.
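The three-point registration step lends itself to a standard least-squares rigid alignment. A sketch of one common approach (the Kabsch/SVD method; the navigation system's actual algorithm is not specified in the abstract, and the point coordinates below are invented):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    (Kabsch method, no scaling). src, dst: (n, 3) arrays, n >= 3."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c

# Hypothetical ultrasound-space landmarks and their CT-space counterparts,
# related by a 90-degree rotation about z plus a translation.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
us = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
ct = us @ Rz.T + np.array([5.0, 5.0, 0.0])
R, t = rigid_register(us, ct)
print(np.allclose(us @ R.T + t, ct))  # True
```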

  6. A 3D front tracking method on a CPU/GPU system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bo, Wurigen; Grove, John

    2011-01-21

    We describe a method for porting a sequential 3D interface tracking code to a GPU with CUDA. The interface is represented as a triangular mesh. Interface geometry properties and point propagation are computed on the GPU, while interface mesh adaptation is performed on the CPU. The convergence of the method is assessed using test problems with given velocity fields. Performance results show overall speedups of 11 to 14 for the test problems under mesh refinement. We also briefly describe our ongoing work to couple the interface tracking method with a hydro solver.
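The point-propagation step offloaded to the GPU is embarrassingly parallel: each interface vertex is advanced independently through the prescribed velocity field. A forward-Euler CPU sketch (the paper's actual time integrator is not specified):

```python
import numpy as np

def propagate(points, velocity, dt, steps):
    """Advect interface points through a prescribed velocity field with
    forward-Euler steps; the per-point work is what maps onto GPU threads."""
    for _ in range(steps):
        points = points + dt * velocity(points)
    return points

# Rigid-rotation test field: an ideal integrator would keep points on
# the unit circle exactly.
def rotation(p):
    return np.stack([-p[:, 1], p[:, 0]], axis=1)

pts = np.array([[1.0, 0.0]])
out = propagate(pts, rotation, 1e-3, 1000)
print(np.hypot(*out[0]))  # ~1.0 (forward Euler slightly inflates the radius)
```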

  7. Application of 3-D imaging sensor for tracking minipigs in the open field test.

    PubMed

    Kulikov, Victor A; Khotskin, Nikita V; Nikitin, Sergey V; Lankin, Vasily S; Kulikov, Alexander V; Trapezov, Oleg V

    2014-09-30

    The minipig is a promising model in neurobiology and psychopharmacology. However, automated tracking of minipig behavior has remained an unresolved problem. The study was carried out on white, agouti and black (or spotted) minipiglets (n=108) bred in the Institute of Cytology and Genetics. The new method of automated tracking of minipig behavior is based on the Microsoft Kinect 3-D image sensor and 3-D image reconstruction with EthoStudio software. The algorithms for evaluating distance run and time in the center were adapted for 3-D image data, and a new algorithm for quantifying vertical activity was developed. The 3-D imaging system successfully detects white, black, spotted and agouti pigs in the open field test (OFT). No effect of sex or color on horizontal activity (distance run), vertical activity or time in the center was shown. Agouti pigs explored the arena more intensively than white or black animals. The OFT behavioral traits were compared with the fear reaction to the experimenter. Time in the center of the OFT was positively correlated with fear reaction rank (ρ=0.21, p<0.05). Black pigs were significantly more fearful than white or agouti animals. The 3-D imaging system has three advantages over existing automated tracking systems: it avoids perspective distortion, distinguishes animals of any color from any background, and automatically evaluates vertical activity. The 3-D imaging system can be successfully applied for automated measurement of minipig behavior in neurobiological and psychopharmacological experiments. Copyright © 2014 Elsevier B.V. All rights reserved.
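The three measures (distance run, time in center, vertical activity) reduce to simple operations on the reconstructed 3-D centroid trajectory. A sketch assuming an arena centered at the origin and a fixed rearing-height threshold (both hypothetical parameters, not from the paper):

```python
import numpy as np

def open_field_metrics(xyz, fps, center_radius, floor_z):
    """Distance run, time in center, and vertical-activity time from a
    3-D centroid trajectory of shape (n, 3)."""
    xy = xyz[:, :2]
    distance = np.linalg.norm(np.diff(xy, axis=0), axis=1).sum()
    in_center = np.linalg.norm(xy, axis=1) < center_radius  # arena at origin
    rearing = xyz[:, 2] > floor_z                           # raised body height
    return distance, in_center.sum() / fps, rearing.sum() / fps

# Toy trajectory: 4 samples at 2 fps walking along x, briefly rearing
traj = np.array([[0, 0, 0.3], [0.5, 0, 0.3], [1.0, 0, 0.6], [1.5, 0, 0.3]])
d, t_center, t_rear = open_field_metrics(traj, fps=2, center_radius=0.8, floor_z=0.5)
print(d, t_center, t_rear)  # 1.5 1.0 0.5
```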

  8. A Real-time 3D Visualization of Global MHD Simulation for Space Weather Forecasting

    NASA Astrophysics Data System (ADS)

    Murata, K.; Matsuoka, D.; Kubo, T.; Shimazu, H.; Tanaka, T.; Fujita, S.; Watari, S.; Miyachi, H.; Yamamoto, K.; Kimura, E.; Ishikura, S.

    2006-12-01

    Recently, many satellites for communication networks and scientific observation have been launched into the vicinity of the Earth (geospace). The electromagnetic (EM) environment around these spacecraft is constantly influenced by the solar wind blowing from the Sun and by induced electromagnetic fields, which occasionally cause trouble or damage to the spacecraft, such as charging and interference. Forecasting the geospace EM environment is therefore as important as forecasting the weather on the ground. Owing to recent remarkable progress in supercomputer technology, numerical simulations have become powerful research methods in solar-terrestrial physics. For space weather forecasting, NICT (National Institute of Information and Communications Technology) has developed a real-time global MHD simulation system of solar wind-magnetosphere-ionosphere coupling, which runs on an SX-6 supercomputer. Real-time solar wind parameters from the ACE spacecraft, sampled every minute, are adopted as boundary conditions for the simulation. Simulation results (2-D plots) are updated every minute on a NICT website. However, 3D visualization of the simulation results is indispensable for forecasting space weather more accurately. In the present study, we develop a real-time 3D website for the global MHD simulations. The 3-D visualizations of the simulation results are updated every 20 minutes in three formats: (1) streamlines of magnetic field lines, (2) isosurfaces of temperature in the magnetosphere, and (3) isolines of conductivity with an orthogonal plane of potential in the ionosphere. For this study, we developed a 3-D viewer application (an ActiveX control built on AVS/Express) that runs in the Internet Explorer browser. Numerical data are saved in HDF5 format every minute. 
Users can easily search, retrieve and plot past simulation results (3D visualization data and numerical data) by using

  9. An Improved Method for Real-Time 3D Construction of DTM

    NASA Astrophysics Data System (ADS)

    Wei, Yi

    This paper discusses real-time optimized construction of a DTM using two measures. The first is to improve the coordinate transformation of discrete points acquired from lidar: after processing a total of 10,000 data points, the direct formula calculation took 0.810 s while the table look-up transformation took 0.188 s, indicating that the latter is superior to the former. The second is to adjust the density of the point cloud acquired from lidar: only a certain proportion of the data points is used for 3D construction, in order to meet different needs for 3D imaging, ultimately increasing the efficiency of DTM construction while saving system resources.
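The table look-up transformation trades trigonometric evaluations for array indexing. A minimal sketch of a polar-to-Cartesian lidar conversion (the table resolution and the exact transformation used in the paper are assumptions here):

```python
import math
import numpy as np

# Precomputed sine/cosine table: trades memory for speed when converting
# many lidar returns, in the spirit of the paper's table look-up method.
N = 36000                                  # 0.01-degree resolution (assumed)
STEP = 2 * math.pi / N
SIN, COS = np.sin(np.arange(N) * STEP), np.cos(np.arange(N) * STEP)

def polar_to_xy_lut(r, theta):
    """Polar lidar return (range, bearing) -> Cartesian via table look-up."""
    i = int(round(theta / STEP)) % N
    return r * COS[i], r * SIN[i]

x, y = polar_to_xy_lut(10.0, math.pi / 2)
print(round(x, 3), round(y, 3))  # 0.0 10.0
```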

  10. MRI - 3D Ultrasound - X-ray Image Fusion with Electromagnetic Tracking for Transendocardial Therapeutic Injections: In-vitro Validation and In-vivo Feasibility

    PubMed Central

    Hatt, Charles R.; Jain, Ameet K.; Parthasarathy, Vijay; Lang, Andrew; Raval, Amish N.

    2014-01-01

    Myocardial infarction (MI) is one of the leading causes of death in the world. Small animal studies have shown that stem-cell therapy offers dramatic functional improvement post-MI. An endomyocardial catheter injection approach to therapeutic agent delivery has been proposed to improve efficacy through increased cell retention. Accurate targeting is critical for reaching areas of greatest therapeutic potential while avoiding a life-threatening myocardial perforation. Multimodal image fusion has been proposed as a way to improve these procedures by augmenting traditional intra-operative imaging modalities with high resolution pre-procedural images. Previous approaches have suffered from a lack of real-time tissue imaging and dependence on X-ray imaging to track devices, leading to increased ionizing radiation dose. In this paper, we present a new image fusion system for catheter-based targeted delivery of therapeutic agents. The system registers real-time 3D echocardiography, magnetic resonance, X-ray, and electromagnetic sensor tracking within a single flexible framework. All system calibrations and registrations were validated and found to have target registration errors less than 5 mm in the worst case. Injection accuracy was validated in a motion enabled cardiac injection phantom, where targeting accuracy ranged from 0.57 to 3.81 mm. Clinical feasibility was demonstrated with in-vivo swine experiments, where injections were successfully made into targeted regions of the heart. PMID:23561056

  11. Direct comparison of cardiac magnetic resonance feature tracking and 2D/3D echocardiography speckle tracking for evaluation of global left ventricular strain.

    PubMed

    Obokata, Masaru; Nagata, Yasufumi; Wu, Victor Chien-Chia; Kado, Yuichiro; Kurabayashi, Masahiko; Otsuji, Yutaka; Takeuchi, Masaaki

    2016-05-01

    Cardiac magnetic resonance (CMR) feature tracking (FT) with steady-state free precession (SSFP) has advantages over traditional myocardial tagging for analysing left ventricular (LV) strain. However, direct comparisons of CMR-FT and 2D/3D echocardiography speckle tracking (2D/3DEST) for measurement of LV strain are limited. The aim of this study was to investigate the feasibility and reliability of CMR-FT and 2D/3DEST for measurement of global LV strain. We enrolled 106 patients who agreed to undergo both CMR and 2D/3D echocardiography on the same day. SSFP images at multiple short-axis and three apical views were acquired. 2DE images from three short-axis levels and three apical views, as well as 3D full-volume datasets, were also acquired. Strain data were expressed as absolute values. Feasibility was highest for CMR-FT, followed by 2DEST and 3DEST. Analysis time was shortest for 3DEST, followed by CMR-FT and 2DEST. There was good global longitudinal strain (GLS) correlation between CMR-FT and 2D/3DEST (r = 0.83 and 0.87, respectively), with limits of agreement (LOA) ranging from ±3.6% to ±4.9%. Excellent global circumferential strain (GCS) correlation between CMR-FT and 2D/3DEST was observed (r = 0.90 and 0.88), with LOA of ±6.8% to ±8.5%. Global radial strain showed fair correlations (r = 0.69 and 0.82, respectively), with LOA ranging from ±12.4% to ±16.3%. CMR-FT GCS showed the least observer variability, with the highest intra-class correlation. Although the methods are not interchangeable, the high GLS and GCS correlation between CMR-FT and 2D/3DEST makes CMR-FT a useful modality for quantification of global LV strain in patients, especially those with suboptimal echo image quality. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
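The correlation and limit-of-agreement figures reported above are standard Pearson and Bland-Altman quantities. A sketch with hypothetical paired GLS values (invented for illustration, not the study's data):

```python
import numpy as np

def agreement(a, b):
    """Pearson r and Bland-Altman 95% limits of agreement (bias ± 1.96 SD)
    between two paired strain measurements."""
    r = np.corrcoef(a, b)[0, 1]
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    return r, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired GLS magnitudes (%) from two modalities
gls_cmr = np.array([18.0, 20.5, 16.2, 22.1, 19.4])
gls_echo = np.array([17.5, 21.0, 15.8, 23.0, 19.0])
r, lo, hi = agreement(gls_cmr, gls_echo)
print(r > 0.9, lo < 0 < hi)  # True True
```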

  12. Defense Additive Manufacturing: DOD Needs to Systematically Track Department-wide 3D Printing Efforts

    DTIC Science & Technology

    2015-10-01

    Clip Additively Manufactured • The Navy installed a 3D printer aboard the USS Essex to demonstrate the ability to additively develop and produce...desired result and vision to have the capability on the fleet. These officials stated that the Navy plans to install 3D printers on two additional...DEFENSE ADDITIVE MANUFACTURING DOD Needs to Systematically Track Department-wide 3D Printing Efforts Report to

  13. Nondestructive 3D confocal laser imaging with deconvolution of seven whole stardust tracks with complementary XRF and quantitative analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, M.; Ebel, D.S.

    2009-03-19

    We present a nondestructive 3D system for analysis of whole Stardust tracks, using a combination of Laser Confocal Scanning Microscopy and synchrotron XRF; 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from submicron to millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ~15 µm. It has been our goal to perform a total nondestructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of Laser Confocal Scanning Microscopy (LCSM) and X-Ray Fluorescence (XRF) spectrometry. Used properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 µm/pixel, without the use of oil-based lenses. A full textural analysis of track No. 82 is presented here, as well as analysis of 6 additional tracks contained within 3 keystones (No. 128, No. 129 and No. 140). We present a method of removing the axial distortion inherent in LCSM images by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analysis.
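The abstract does not name its deconvolution algorithm; Richardson-Lucy iteration is one classic choice for PSF-based deblurring, sketched here in 1-D for brevity (the real problem is 3-D, with a measured or computed point spread function):

```python
import numpy as np

def richardson_lucy_1d(signal, psf, iters=50):
    """Richardson-Lucy deconvolution in 1-D: iterative deblurring of the
    kind used to correct confocal axial distortion (illustrative only)."""
    est = np.full_like(signal, signal.mean())    # flat initial estimate
    psf_mirror = psf[::-1]
    for _ in range(iters):
        blurred = np.convolve(est, psf, mode="same")
        ratio = signal / np.maximum(blurred, 1e-12)
        est = est * np.convolve(ratio, psf_mirror, mode="same")
    return est

# Blur a single spike with a normalized box PSF, then deconvolve it
truth = np.zeros(32)
truth[16] = 1.0
psf = np.ones(5) / 5.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy_1d(blurred, psf)
print(restored.argmax())  # 16
```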

  14. Real-Time Interactive Facilities Associated With A 3-D Medical Workstation

    NASA Astrophysics Data System (ADS)

    Goldwasser, S. M.; Reynolds, R. A.; Talton, D.; Walsh, E.

    1986-06-01

    Biomedical workstations of the future will incorporate three-dimensional interactive capabilities which provide real-time response to most common operator requests. Such systems will find application in many areas of medicine including clinical diagnosis, surgical and radiation therapy planning, biomedical research based on functional imaging, and medical education. This paper considers the requirements of these future systems in terms of image quality, performance, and the interactive environment, and examines the relationship of workstation capabilities to specific medical applications. We describe a prototype physician's workstation that we have designed and built to meet many of these requirements (using conventional graphics technology in conjunction with a custom real-time 3-D processor), and give an account of the remaining issues and challenges that future designers of such systems will have to address.

  15. Real-time particle tracking for studying intracellular trafficking of pharmaceutical nanocarriers.

    PubMed

    Huang, Feiran; Watson, Erin; Dempsey, Christopher; Suh, Junghae

    2013-01-01

    Real-time particle tracking is a technique that combines fluorescence microscopy with object tracking and computing and can be used to extract quantitative transport parameters for small particles inside cells. Since the success of a nanocarrier can often be determined by how effectively it delivers cargo to the target organelle, understanding the complex intracellular transport of pharmaceutical nanocarriers is critical. Real-time particle tracking provides insight into the dynamics of the intracellular behavior of nanoparticles, which may lead to significant improvements in the design and development of novel delivery systems. Unfortunately, this technique is not often fully understood, limiting its implementation by researchers in the field of nanomedicine. In this chapter, one of the most complicated aspects of particle tracking, the mean square displacement (MSD) calculation, is explained in a simple manner designed for the novice particle tracker. Pseudo code for performing the MSD calculation in MATLAB is also provided. This chapter contains clear and comprehensive instructions for a series of basic procedures in the technique of particle tracking. Instructions for performing confocal microscopy of nanoparticle samples are provided, and two methods of determining particle trajectories that do not require commercial particle-tracking software are provided. Trajectory analysis and determination of the tracking resolution are also explained. By providing comprehensive instructions needed to perform particle-tracking experiments, this chapter will enable researchers to gain new insight into the intracellular dynamics of nanocarriers, potentially leading to the development of more effective and intelligent therapeutic delivery vectors.
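The chapter supplies MATLAB pseudo code for the MSD calculation; an equivalent time-averaged MSD in Python looks like this (the ballistic toy trajectory is invented, not from the chapter):

```python
import numpy as np

def msd(traj, max_lag=None):
    """Time-averaged mean square displacement of one trajectory.
    traj: (n, d) positions; returns MSD for lags 1..max_lag."""
    n = len(traj)
    max_lag = max_lag or n - 1
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]              # all pairs at this lag
        out[lag - 1] = (disp ** 2).sum(axis=1).mean()
    return out

# Ballistic motion along x at 2 units/frame: MSD(lag) = (2 * lag) ** 2
traj = np.column_stack([2.0 * np.arange(10), np.zeros(10)])
print(msd(traj, 3))  # [ 4. 16. 36.]
```

Fitting `out` against lag time then distinguishes diffusive (linear) from directed (quadratic) intracellular transport.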

  16. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision has become a widely familiar imaging technique. 3D displays have been put to practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. Among the various ways of displaying 3D images, we focus on the ray-reproduction approach. This method requires many viewpoint images to achieve full parallax, because it must display a different image depending on the viewpoint. We proposed to reduce wasted rays by limiting the projector's rays to the region around the viewer using a spinning mirror, thereby increasing the effectiveness of the display device and achieving a full-parallax 3D display. Our method uses eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array with different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulations of the proposed method, we confirmed the scanning range and the locus of horizontal ray movement, as well as viewpoint switching and the convergence performance of rays in the vertical direction. We therefore confirmed that full parallax is achievable.

  17. TH-B-204-01: Real-Time Tracking with Implanted Markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Q.

    Implanted markers used as target surrogates have been widely adopted for treatment verification, as they provide safe and reliable monitoring of inter- and intra-fractional target motion. The rapid advancement of the technology requires a critical review of, and recommendations for, the use of implanted surrogates in the current field. The symposium, which also reports an update of AAPM TG 199 (Implanted Target Surrogates for Radiation Treatment Verification), will focus on all clinical aspects of using implanted target surrogates for treatment verification and related issues. The wide variety of markers available on the market will first be reviewed, including radiopaque markers, MRI-compatible markers, non-migrating coils, surgical clips, electromagnetic transponders, etc. The pros and cons of each kind will be discussed. The clinical applications of implanted surrogates will be presented by anatomical site. For the lung, we will discuss gated treatments and 2D or 3D real-time fiducial tracking techniques. For the prostate, we will focus on 2D-3D matching, 3D-3D matching, and electromagnetic transponder based localization techniques. For the liver, we will review techniques for patients under gating, shallow breathing, or free breathing conditions. We will also review techniques for treating challenging breast cancers where deformation may occur. Finally, we will summarize potential issues related to the use of implanted target surrogates, with TG 199 recommendations. A review of fiducial migration and fiducial-derived target rotation in different disease sites will be provided. The issue of target deformation, especially near the diaphragm, and related suggestions will also be presented and discussed. Learning Objectives: Knowledge of a wide variety of markers; knowledge of their application to different disease sites; understanding of issues related to these applications. Z. Wang: Research funding support from Brainlab AG. Q. Xu: Consultant for Accuray; Q. Xu, I am a

  18. Geometric Integration of Hybrid Correspondences for RGB-D Unidirectional Tracking

    PubMed Central

    Tang, Shengjun; Chen, Wu; Wang, Weixi; Li, Xiaoming; Li, Wenbin; Huang, Zhengdong; Hu, Han; Guo, Renzhong

    2018-01-01

    Traditionally, visual-based RGB-D SLAM systems only use correspondences with valid depth values for camera tracking, thus ignoring the regions without 3D information. Due to the strict limitation on measurement distance and view angle, such systems adopt only short-range constraints which may introduce larger drift errors during long-distance unidirectional tracking. In this paper, we propose a novel geometric integration method that makes use of both 2D and 3D correspondences for RGB-D tracking. Our method handles the problem by exploring visual features both when depth information is available and when it is unknown. The system comprises two parts: coarse pose tracking with 3D correspondences, and geometric integration with hybrid correspondences. First, the coarse pose tracking generates the initial camera pose using 3D correspondences with frame-by-frame registration. The initial camera poses are then used as inputs for the geometric integration model, along with 3D correspondences, 2D-3D correspondences and 2D correspondences identified from frame pairs. The initial 3D location of the correspondence is determined in two ways, from depth image and by using the initial poses to triangulate. The model improves the camera poses and decreases drift error during long-distance RGB-D tracking iteratively. Experiments were conducted using data sequences collected by commercial Structure Sensors. The results verify that the geometric integration of hybrid correspondences effectively decreases the drift error and improves mapping accuracy. Furthermore, the model enables a comparative and synergistic use of datasets, including both 2D and 3D features. PMID:29723974

  19. Geometric Integration of Hybrid Correspondences for RGB-D Unidirectional Tracking.

    PubMed

    Tang, Shengjun; Chen, Wu; Wang, Weixi; Li, Xiaoming; Darwish, Walid; Li, Wenbin; Huang, Zhengdong; Hu, Han; Guo, Renzhong

    2018-05-01

    Traditionally, visual-based RGB-D SLAM systems only use correspondences with valid depth values for camera tracking, thus ignoring the regions without 3D information. Due to the strict limitation on measurement distance and view angle, such systems adopt only short-range constraints which may introduce larger drift errors during long-distance unidirectional tracking. In this paper, we propose a novel geometric integration method that makes use of both 2D and 3D correspondences for RGB-D tracking. Our method handles the problem by exploring visual features both when depth information is available and when it is unknown. The system comprises two parts: coarse pose tracking with 3D correspondences, and geometric integration with hybrid correspondences. First, the coarse pose tracking generates the initial camera pose using 3D correspondences with frame-by-frame registration. The initial camera poses are then used as inputs for the geometric integration model, along with 3D correspondences, 2D-3D correspondences and 2D correspondences identified from frame pairs. The initial 3D location of the correspondence is determined in two ways, from depth image and by using the initial poses to triangulate. The model improves the camera poses and decreases drift error during long-distance RGB-D tracking iteratively. Experiments were conducted using data sequences collected by commercial Structure Sensors. The results verify that the geometric integration of hybrid correspondences effectively decreases the drift error and improves mapping accuracy. Furthermore, the model enables a comparative and synergistic use of datasets, including both 2D and 3D features.
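The route by which a depth-less 2D correspondence still yields a 3D point, triangulation from the initial poses, can be sketched with standard linear (DLT) triangulation given two 3×4 projection matrices (the paper's exact formulation may differ; the cameras and point below are invented):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 2-D correspondence observed at
    normalized image points x1, x2 under projection matrices P1, P2."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null space of A = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

# Identity camera and a second camera translated 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]
x2 = (X_true[:2] + [-1.0, 0.0]) / X_true[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```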

  20. UmUTracker: A versatile MATLAB program for automated particle tracking of 2D light microscopy or 3D digital holography data

    NASA Astrophysics Data System (ADS)

    Zhang, Hanqing; Stangner, Tim; Wiklund, Krister; Rodriguez, Alvaro; Andersson, Magnus

    2017-10-01

    We present a versatile and fast MATLAB program (UmUTracker) that automatically detects and tracks particles by analyzing video sequences acquired by either light microscopy or digital in-line holographic microscopy. Our program detects the 2D lateral positions of particles with an algorithm based on the isosceles triangle transform, and reconstructs their 3D axial positions by a fast implementation of the Rayleigh-Sommerfeld model using a radial intensity profile. To validate the accuracy and performance of our program, we first track the 2D position of polystyrene particles using bright field and digital holographic microscopy. Second, we determine the 3D particle position by analyzing synthetic and experimentally acquired holograms. Finally, to highlight the full program features, we profile the microfluidic flow in a 100 μm high flow chamber. This result agrees with computational fluid dynamic simulations. On a regular desktop computer UmUTracker can detect, analyze, and track multiple particles at 5 frames per second for a template size of 201 × 201 in a 1024 × 1024 image. To enhance usability and to make it easy to implement new functions we used object-oriented programming. UmUTracker is suitable for studies related to: particle dynamics, cell localization, colloids and microfluidic flow measurement. Program Files doi: http://dx.doi.org/10.17632/fkprs4s6xp.1 Licensing provisions: Creative Commons by 4.0 (CC by 4.0) Programming language: MATLAB Nature of problem: 3D multi-particle tracking is a common technique in physics, chemistry and biology. However, in terms of accuracy, reliable particle tracking is a challenging task since results depend on sample illumination, particle overlap, motion blur and noise from recording sensors. Additionally, the computational performance is also an issue if, for example, a computationally expensive process is executed, such as axial particle position reconstruction from digital holographic microscopy data. Versatile

  1. Using dual-energy x-ray imaging to enhance automated lung tumor tracking during real-time adaptive radiotherapy.

    PubMed

    Menten, Martin J; Fast, Martin F; Nill, Simeon; Oelfke, Uwe

    2015-12-01

    Real-time, markerless localization of lung tumors with kV imaging is often inhibited by ribs obscuring the tumor and poor soft-tissue contrast. This study investigates the use of dual-energy imaging, which can generate radiographs with reduced bone visibility, to enhance automated lung tumor tracking for real-time adaptive radiotherapy. kV images of an anthropomorphic breathing chest phantom were experimentally acquired and radiographs of actual lung cancer patients were Monte-Carlo-simulated at three imaging settings: low-energy (70 kVp, 1.5 mAs), high-energy (140 kVp, 2.5 mAs, 1 mm additional tin filtration), and clinical (120 kVp, 0.25 mAs). Regular dual-energy images were calculated by weighted logarithmic subtraction of high- and low-energy images and filter-free dual-energy images were generated from clinical and low-energy radiographs. The weighting factor to calculate the dual-energy images was determined by means of a novel objective score. The usefulness of dual-energy imaging for real-time tracking with an automated template matching algorithm was investigated. Regular dual-energy imaging was able to increase tracking accuracy in left-right images of the anthropomorphic phantom as well as in 7 out of 24 investigated patient cases. Tracking accuracy remained comparable in three cases and decreased in five cases. Filter-free dual-energy imaging was only able to increase accuracy in 2 out of 24 cases. In four cases no change in accuracy was observed and tracking accuracy worsened in nine cases. In 9 out of 24 cases, it was not possible to define a tracking template due to poor soft-tissue contrast regardless of input images. The mean localization errors using clinical, regular dual-energy, and filter-free dual-energy radiographs were 3.85, 3.32, and 5.24 mm, respectively. Tracking success was dependent on tumor position, tumor size, imaging beam angle, and patient size. 
This study has highlighted the influence of patient anatomy on the success rate of real-time
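The regular dual-energy image above is formed by weighted logarithmic subtraction of the high- and low-energy radiographs. A toy sketch showing how the weight cancels bone contrast while soft tissue survives (the attenuation values are invented, and the study's weight was chosen by an objective score rather than analytically):

```python
import numpy as np

def dual_energy(low, high, w):
    """Weighted logarithmic subtraction: bone cancels when w matches the
    ratio of bone log-attenuation at the two energies."""
    return np.exp(np.log(high) - w * np.log(low))

# Toy transmissions I = exp(-mu * t) for soft tissue vs. bone + soft tissue
bone_low, bone_high = 0.4, 0.7        # bone attenuates far more at 70 kVp
soft_low, soft_high = 0.8, 0.9
w = np.log(bone_high) / np.log(bone_low)   # cancels bone exactly
img_low = np.array([soft_low, bone_low * soft_low])
img_high = np.array([soft_high, bone_high * soft_high])
de = dual_energy(img_low, img_high, w)
print(np.round(de, 3))  # [0.982 0.982] -- the rib pixel matches soft tissue
```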

  2. Using dual-energy x-ray imaging to enhance automated lung tumor tracking during real-time adaptive radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menten, Martin J., E-mail: martin.menten@icr.ac.uk; Fast, Martin F.; Nill, Simeon

    2015-12-15

    Purpose: Real-time, markerless localization of lung tumors with kV imaging is often inhibited by ribs obscuring the tumor and poor soft-tissue contrast. This study investigates the use of dual-energy imaging, which can generate radiographs with reduced bone visibility, to enhance automated lung tumor tracking for real-time adaptive radiotherapy. Methods: kV images of an anthropomorphic breathing chest phantom were experimentally acquired and radiographs of actual lung cancer patients were Monte-Carlo-simulated at three imaging settings: low-energy (70 kVp, 1.5 mAs), high-energy (140 kVp, 2.5 mAs, 1 mm additional tin filtration), and clinical (120 kVp, 0.25 mAs). Regular dual-energy images were calculated by weighted logarithmic subtraction of high- and low-energy images and filter-free dual-energy images were generated from clinical and low-energy radiographs. The weighting factor to calculate the dual-energy images was determined by means of a novel objective score. The usefulness of dual-energy imaging for real-time tracking with an automated template matching algorithm was investigated. Results: Regular dual-energy imaging was able to increase tracking accuracy in left–right images of the anthropomorphic phantom as well as in 7 out of 24 investigated patient cases. Tracking accuracy remained comparable in three cases and decreased in five cases. Filter-free dual-energy imaging was only able to increase accuracy in 2 out of 24 cases. In four cases no change in accuracy was observed and tracking accuracy worsened in nine cases. In 9 out of 24 cases, it was not possible to define a tracking template due to poor soft-tissue contrast regardless of input images. The mean localization errors using clinical, regular dual-energy, and filter-free dual-energy radiographs were 3.85, 3.32, and 5.24 mm, respectively. Tracking success was dependent on tumor position, tumor size, imaging beam angle, and patient size. Conclusions: This study has highlighted the

  3. Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach

    PubMed Central

    Tian, Yuan; Guan, Tao; Wang, Cheng

    2010-01-01

    To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278
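    The occlusion-handling step, redrawing the tracked object's pixels over the unprocessed augmented frame, amounts to a masked copy; the array names below are hypothetical:

    ```python
    import numpy as np

    def handle_occlusion(real_frame, augmented_frame, object_mask):
        """Redraw the tracked real object's pixels on top of the augmented image.

        object_mask is a boolean array marking pixels inside the tracked contour;
        where it is True the real pixel replaces the rendered virtual content,
        so the real object correctly occludes the virtual one.
        """
        out = augmented_frame.copy()
        out[object_mask] = real_frame[object_mask]
        return out

    # 4x4 single-channel toy frames: virtual content (value 9) everywhere,
    # the tracked real object (value 5) occupies the top-left 2x2 block.
    real = np.full((4, 4), 5)
    aug = np.full((4, 4), 9)
    mask = np.zeros((4, 4), bool)
    mask[:2, :2] = True
    composited = handle_occlusion(real, aug, mask)
    ```
    
    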

  4. Tracking accuracy of a real-time fiducial tracking system for patient positioning and monitoring in radiation therapy.

    PubMed

    Shchory, Tal; Schifter, Dan; Lichtman, Rinat; Neustadter, David; Corn, Benjamin W

    2010-11-15

    In radiation therapy there is a need to accurately know the location of the target in real time. A novel radioactive tracking technology has been developed to answer this need. The technology consists of a radioactive implanted fiducial marker designed to minimize migration and a linac mounted tracking device. This study measured the static and dynamic accuracy of the new tracking technology in a clinical radiation therapy environment. The tracking device was installed on the linac gantry. The radioactive marker was located in a tissue equivalent phantom. Marker location was measured simultaneously by the radioactive tracking system and by a Microscribe G2 coordinate measuring machine (certified spatial accuracy of 0.38 mm). Localization consistency throughout a volume and absolute accuracy in the Fixed coordinate system were measured at multiple gantry angles over volumes of at least 10 cm in diameter centered at isocenter. Dynamic accuracy was measured with the marker located inside a breathing phantom. The mean consistency for the static source was 0.58 mm throughout the tested region at all measured gantry angles. The mean absolute position error in the Fixed coordinate system for all gantry angles was 0.97 mm. The mean real-time tracking error for the dynamic source within the breathing phantom was less than 1 mm. This novel radioactive tracking technology has the potential to be useful in accurate target localization and real-time monitoring for radiation therapy. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. Method for dose-reduced 3D catheter tracking on a scanning-beam digital x-ray system using dynamic electronic collimation

    NASA Astrophysics Data System (ADS)

    Dunkerley, David A. P.; Funk, Tobias; Speidel, Michael A.

    2016-03-01

    Scanning-beam digital x-ray (SBDX) is an inverse geometry x-ray fluoroscopy system capable of tomosynthesis-based 3D catheter tracking. This work proposes a method of dose-reduced 3D tracking using dynamic electronic collimation (DEC) of the SBDX scanning x-ray tube. Positions in the 2D focal spot array are selectively activated to create a region-of-interest (ROI) x-ray field around the tracked catheter. The ROI position is updated for each frame based on a motion vector calculated from the two most recent 3D tracking results. The technique was evaluated with SBDX data acquired as a catheter tip inside a chest phantom was pulled along a 3D trajectory. DEC scans were retrospectively generated from the detector images stored for each focal spot position. DEC imaging of a catheter tip in a volume measuring 11.4 cm across at isocenter required 340 active focal spots per frame, versus 4473 spots in full-FOV mode. The dose-area-product (DAP) and peak skin dose (PSD) for DEC versus full field-of-view (FOV) scanning were calculated using an SBDX Monte Carlo simulation code. DAP was reduced to 7.4% to 8.4% of the full-FOV value, consistent with the relative number of active focal spots (7.6%). For image sequences with a moving catheter, PSD was 33.6% to 34.8% of the full-FOV value. The root-mean-squared-deviation between DEC-based 3D tracking coordinates and full-FOV 3D tracking coordinates was less than 0.1 mm. The 3D distance between the tracked tip and the sheath centerline averaged 0.75 mm. Dynamic electronic collimation can reduce dose with minimal change in tracking performance.
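    The per-frame ROI update from the two most recent tracking results amounts to a constant-velocity extrapolation; a minimal sketch (function and variable names are assumptions):

    ```python
    import numpy as np

    def predict_roi_center(p_prev, p_curr):
        """Extrapolate the next ROI center from the two most recent 3D
        catheter-tip positions, assuming constant velocity between frames."""
        p_prev = np.asarray(p_prev, float)
        p_curr = np.asarray(p_curr, float)
        motion = p_curr - p_prev          # per-frame motion vector
        return p_curr + motion            # predicted center for the next frame

    # Tip moved +2 mm in x between the last two frames.
    nxt = predict_roi_center([0.0, 0.0, 0.0], [2.0, 0.0, 0.0])
    ```

    Only focal spots whose beams intersect the predicted ROI would then be activated, which is what yields the dose reduction.
    
    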

  6. An MR-compatible stereoscopic in-room 3D display for MR-guided interventions.

    PubMed

    Brunner, Alexander; Groebner, Jens; Umathum, Reiner; Maier, Florian; Semmler, Wolfhard; Bock, Michael

    2014-08-01

    A commercial three-dimensional (3D) monitor was modified for use inside the scanner room to provide stereoscopic real-time visualization during magnetic resonance (MR)-guided interventions, and tested in a catheter-tracking phantom experiment at 1.5 T. Brightness, uniformity, radio frequency (RF) emissions and MR image interferences were measured. Due to modifications, the center luminance of the 3D monitor was reduced by 14%, and the addition of a Faraday shield further reduced the remaining luminance by 31%. RF emissions could be effectively shielded; only a minor signal-to-noise ratio (SNR) decrease of 4.6% was observed during imaging. During the tracking experiment, the 3D orientation of the catheter and vessel structures in the phantom could be visualized stereoscopically.

  7. Usability of a real-time tracked augmented reality display system in musculoskeletal injections

    NASA Astrophysics Data System (ADS)

    Baum, Zachary; Ungi, Tamas; Lasso, Andras; Fichtinger, Gabor

    2017-03-01

    PURPOSE: Image-guided needle interventions are seldom performed with augmented reality guidance in clinical practice due to many workspace and usability restrictions. We propose a real-time optically tracked image overlay system to make image-guided musculoskeletal injections more efficient and assess its usability in a bed-side clinical environment. METHODS: An image overlay system consisting of an optically tracked viewbox, tablet computer, and semitransparent mirror allows users to navigate scanned patient volumetric images in real-time using software built on the open-source 3D Slicer application platform. A series of experiments were conducted to evaluate the latency and screen refresh rate of the system using different image resolutions. To assess the usability of the system and software, five medical professionals were asked to navigate patient images while using the overlay and completed a questionnaire to assess the system. RESULTS: In assessing the latency of the system with scanned images of varying size, screen refresh rates were approximately 5 FPS. The study showed that participants found using the image overlay system easy, and found the table-mounted system was significantly more usable and effective than the handheld system. CONCLUSION: It was determined that the system performs comparably with scanned images of varying size when assessing the latency of the system. During our usability study, participants preferred the table-mounted system over the handheld. The participants also felt that the system itself was simple to use and understand. With these results, the image overlay system shows promise for use in a clinical environment.

  8. Real-Time Gaze Tracking for Public Displays

    NASA Astrophysics Data System (ADS)

    Sippl, Andreas; Holzmann, Clemens; Zachhuber, Doris; Ferscha, Alois

    In this paper, we explore the real-time tracking of human gazes in front of large public displays. The aim of our work is to estimate which area of a display one or more people are looking at, at a time, independently of the distance and angle to the display as well as the height of the tracked people. Gaze tracking is relevant for a variety of purposes, including the automatic recognition of the user's focus of attention, or the control of interactive applications with gaze gestures. The scope of the present paper is on the former, and we show how gaze tracking can be used for implicit interaction in the pervasive advertising domain. We have developed a prototype for this purpose, which (i) uses an overhead-mounted camera to distinguish four gaze areas on a large display, (ii) works for a wide range of positions in front of the display, and (iii) provides an estimation of the currently gazed quarters in real time. A detailed description of the prototype as well as the results of a user study with 12 participants, which show the recognition accuracy for different positions in front of the display, are presented.
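    Once a gaze point on the display is estimated, distinguishing the four gaze areas reduces to a quadrant test; a minimal sketch with assumed display coordinates (origin at the top-left corner):

    ```python
    def gaze_quarter(x, y, width, height):
        """Map an estimated gaze point in display coordinates to one of four
        quarters: 'top-left', 'top-right', 'bottom-left' or 'bottom-right'."""
        horiz = "left" if x < width / 2 else "right"
        vert = "top" if y < height / 2 else "bottom"
        return f"{vert}-{horiz}"

    # A gaze point in the upper-right region of a 1920x1080 display.
    q = gaze_quarter(1500, 200, 1920, 1080)
    ```
    
    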

  9. An embedded real-time red peach detection system based on an OV7670 camera, ARM cortex-M4 processor and 3D look-up tables.

    PubMed

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-10-22

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
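    The point of a 3D LUT, precomputing a classification rule once per quantized RGB cell so per-pixel work becomes a single lookup, can be sketched as follows; the red-pixel rule here is an illustrative assumption, not the paper's linear color model or histogram method:

    ```python
    import numpy as np

    BITS = 5                      # 32 levels per channel -> 32^3-entry LUT
    LEVELS = 1 << BITS

    def build_lut(is_target):
        """Precompute a 3D boolean look-up table over quantized RGB space.

        is_target(r, g, b) is any per-color classification rule; the expensive
        rule runs once per LUT cell instead of once per pixel at runtime.
        """
        lut = np.zeros((LEVELS, LEVELS, LEVELS), bool)
        step = 256 // LEVELS
        for r in range(LEVELS):
            for g in range(LEVELS):
                for b in range(LEVELS):
                    lut[r, g, b] = is_target(r * step, g * step, b * step)
        return lut

    # Illustrative rule (an assumption, not the paper's model): "red" pixels
    # have a red channel well above both green and blue.
    lut = build_lut(lambda r, g, b: r > g + 40 and r > b + 40)

    def classify(image, lut):
        """Classify an HxWx3 uint8 image with three shifts and one LUT lookup."""
        idx = image >> (8 - BITS)
        return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

    img = np.array([[[200, 30, 30], [90, 90, 90]]], np.uint8)
    mask = classify(img, lut)
    ```

    This trade of memory for per-pixel arithmetic is what makes the detection feasible on a Cortex-M4 class processor.
    
    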

  10. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    PubMed Central

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second. PMID:23202040

  11. Tracking people and cars using 3D modeling and CCTV.

    PubMed

    Edelman, Gerda; Bijhold, Jurrien

    2010-10-10

    The aim of this study was to find a method for the reconstruction of movements of people and cars using CCTV footage and a 3D model of the environment. A procedure is proposed, in which video streams are synchronized and displayed in a 3D model, by using virtual cameras. People and cars are represented by cylinders and boxes, which are moved in the 3D model, according to their movements as shown in the video streams. The procedure was developed and tested in an experimental setup with test persons who logged their GPS coordinates as a recording of the ground truth. Results showed that it is possible to implement this procedure and to reconstruct movements of people and cars from video recordings. The procedure was also applied to a forensic case. In this work we experienced that more situational awareness was created by the 3D model, which made it easier to track people on multiple video streams. Based on all experiences from the experimental setup and the case, recommendations are formulated for use in practice. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  12. SU-G-JeP1-11: Feasibility Study of Markerless Tracking Using Dual Energy Fluoroscopic Images for Real-Time Tumor-Tracking Radiotherapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiinoki, T; Shibuya, K; Sawada, A

    Purpose: The new real-time tumor-tracking radiotherapy (RTRT) system was installed in our institution. This system consists of two x-ray tubes and color image intensifiers (I.I.s). The fiducial marker implanted near the tumor was tracked using color fluoroscopic images. However, implantation of the fiducial marker is very invasive. Color fluoroscopic images improve recognition of the tumor, but are not suitable for tracking the tumor without a fiducial marker. The purpose of this study was to investigate the feasibility of markerless tracking using dual-energy color fluoroscopic images for a real-time tumor-tracking radiotherapy system. Methods: The color fluoroscopic images of static and moving phantoms that had a simulated tumor (30 mm diameter sphere) were experimentally acquired using the RTRT system. The programmable respiratory motion phantom was driven using a sinusoidal pattern in the cranio-caudal direction (Amplitude: 20 mm, Time: 4 s). The x-ray conditions were set to 55 kV, 50 mA and 105 kV, 50 mA for low energy and high energy, respectively. Dual-energy images were calculated based on the weighted logarithmic subtraction of the high- and low-energy RGB images. The usefulness of dual-energy imaging for real-time tracking with an automated template image matching algorithm was investigated. Results: Our proposed dual-energy subtraction improves the contrast between tumor and background by suppressing bone structure. For the static phantom, our results showed high tracking accuracy using dual-energy subtraction images. For the moving phantom, our results showed good tracking accuracy using dual-energy subtraction images. However, tracking accuracy was dependent on tumor position, tumor size, and x-ray conditions. Conclusion: We demonstrated the feasibility of markerless tracking using dual-energy fluoroscopic images for a real-time tumor-tracking radiotherapy system. Furthermore, it is needed to

  13. Rapid, topology-based particle tracking for high-resolution measurements of large complex 3D motion fields.

    PubMed

    Patel, Mohak; Leggett, Susan E; Landauer, Alexander K; Wong, Ian Y; Franck, Christian

    2018-04-03

    Spatiotemporal tracking of tracer particles or objects of interest can reveal localized behaviors in biological and physical systems. However, existing tracking algorithms are most effective for relatively low numbers of particles that undergo displacements smaller than their typical interparticle separation distance. Here, we demonstrate a single particle tracking algorithm to reconstruct large complex motion fields with large particle numbers, orders of magnitude larger than previously tractably resolvable, thus opening the door for attaining very high Nyquist spatial frequency motion recovery in the images. Our key innovations are feature vectors that encode nearest neighbor positions, a rigorous outlier removal scheme, and an iterative deformation warping scheme. We test this technique for its accuracy and computational efficacy using synthetically and experimentally generated 3D particle images, including non-affine deformation fields in soft materials, complex fluid flows, and cell-generated deformations. We augment this algorithm with additional particle information (e.g., color, size, or shape) to further enhance tracking accuracy for high gradient and large displacement fields. These applications demonstrate that this versatile technique can rapidly track unprecedented numbers of particles to resolve large and complex motion fields in 2D and 3D images, particularly when spatial correlations exist.
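    A toy version of neighbor-based feature vectors, here just the sorted distances to the k nearest neighbors, matched greedily between frames, illustrates why such features tolerate large global displacements (the full algorithm adds rigorous outlier removal and iterative deformation warping):

    ```python
    import numpy as np

    def neighbor_features(points, k=3):
        """Describe each particle by the sorted distances to its k nearest
        neighbors -- a simple topology-inspired feature that is invariant
        to global translation of the particle field."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)          # exclude self-distance
        return np.sort(d, axis=1)[:, :k]

    def match(points_a, points_b, k=3):
        """Match particles between frames by nearest feature vector."""
        fa, fb = neighbor_features(points_a, k), neighbor_features(points_b, k)
        cost = np.linalg.norm(fa[:, None, :] - fb[None, :, :], axis=-1)
        return cost.argmin(axis=1)

    # A rigidly translated copy of the particle field matches identically,
    # even though the displacement far exceeds the interparticle distance.
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 100, (20, 3))
    shifted = pts + np.array([50.0, -30.0, 20.0])
    pairing = match(pts, shifted)
    ```
    
    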

  14. A low-cost test-bed for real-time landmark tracking

    NASA Astrophysics Data System (ADS)

    Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher

    2007-04-01

    A low-cost vehicle test-bed system was developed to iteratively test, refine and demonstrate navigation algorithms before attempting to transfer the algorithms to more advanced rover prototypes. The platform used here was a modified radio controlled (RC) car. A microcontroller board and onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars yard. The algorithms tracked rocks as waypoints, generating coordinates for calculating relative motion and visually servoing to science targets. A limitation of the current system is serial computing: each additional landmark is tracked in order. However, since each landmark is tracked independently, adding targets would not significantly diminish system speed if the system were transferred to appropriate parallel hardware.
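    Dead-reckoning from wheel encoders and a single-axis gyroscope can be sketched as a planar pose update; the midpoint integration below is one common choice, not necessarily the test-bed's exact scheme:

    ```python
    import math

    def dead_reckon(pose, distance, yaw_rate, dt):
        """Update (x, y, heading) from wheel-encoder distance and gyro yaw rate.

        Heading is integrated from the gyro; the encoder distance is applied
        along the mid-step heading (a simple midpoint integration).
        """
        x, y, heading = pose
        new_heading = heading + yaw_rate * dt
        mid = heading + 0.5 * yaw_rate * dt
        x += distance * math.cos(mid)
        y += distance * math.sin(mid)
        return (x, y, new_heading)

    # Drive 1 m straight, then 1 m while turning 90 degrees.
    pose = (0.0, 0.0, 0.0)
    pose = dead_reckon(pose, 1.0, 0.0, 1.0)           # straight ahead
    pose = dead_reckon(pose, 1.0, math.pi / 2, 1.0)   # quarter turn
    ```
    
    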

  15. Calibration of 3D ultrasound to an electromagnetic tracking system

    NASA Astrophysics Data System (ADS)

    Lang, Andrew; Parthasarathy, Vijay; Jain, Ameet

    2011-03-01

    The use of electromagnetic (EM) tracking is an important guidance tool that can be used to aid procedures requiring accurate localization such as needle injections or catheter guidance. Using EM tracking, the information from different modalities can be easily combined using pre-procedural calibration information. These calibrations are performed individually, per modality, allowing different imaging systems to be mixed and matched according to the procedure at hand. In this work, a framework for the calibration of a 3D transesophageal echocardiography probe to EM tracking is developed. The complete calibration framework includes three required steps: data acquisition, needle segmentation, and calibration. Ultrasound (US) images of an EM-tracked needle must be acquired, with the position of the needle in each volume subsequently extracted by segmentation. The calibration transformation is determined through a registration between the segmented points and the recorded EM needle positions. Additionally, the speed of sound is compensated for, since calibration is performed in water, which has a different speed of sound than is assumed by the US machine. A statistical validation framework has also been developed to provide further information related to the accuracy and consistency of the calibration. Further validation of the calibration showed an accuracy of 1.39 mm.
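    The registration between segmented needle points and recorded EM positions can be computed as a least-squares rigid fit; the Kabsch/SVD solution below is a standard choice and an assumption about the paper's exact method:

    ```python
    import numpy as np

    def rigid_register(src, dst):
        """Least-squares rigid transform (Kabsch) mapping src points onto dst.

        Returns (R, t) such that dst ~= src @ R.T + t.
        """
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cd - R @ cs
        return R, t

    # Recover a known 90-degree rotation about z plus a translation.
    rng = np.random.default_rng(1)
    pts = rng.uniform(-10, 10, (8, 3))                 # "segmented" US points
    R_true = np.array([[0.0, -1.0, 0.0],
                       [1.0,  0.0, 0.0],
                       [0.0,  0.0, 1.0]])
    t_true = np.array([1.0, 2.0, 3.0])
    moved = pts @ R_true.T + t_true                    # "recorded" EM points
    R, t = rigid_register(pts, moved)
    ```
    
    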

  16. Achieving Real-Time Tracking Mobile Wireless Sensors Using SE-KFA

    NASA Astrophysics Data System (ADS)

    Kadhim Hoomod, Haider, Dr.; Al-Chalabi, Sadeem Marouf M.

    2018-05-01

    Real-time performance is important in many fields, such as automated transport control, some medical applications, celestial body tracking, agent movement control, and detection and monitoring. It can be achieved with different kinds of detection devices ("sensors"), such as infrared sensors, ultrasonic sensors, radars in general, and laser light sensors. The ultrasonic sensor is the most fundamental of these, and it poses greater challenges than the others, especially when used for navigation by an agent. In this paper, ultrasonic sensors detect and delimit their surroundings by themselves and then navigate inside a limited area, estimating position in real time using a speed equation combined with the Kalman filter algorithm as an intelligent estimator. The estimation error is then calculated against the actual tracking rate. This paper used the HC-SR04 ultrasonic sensor with an Arduino UNO as the microcontroller.
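    Combining a speed (constant-velocity) equation with a Kalman filter for real-time range tracking can be sketched as a 1D filter over ultrasonic readings; the noise variances below are illustrative tuning values, not the paper's:

    ```python
    import numpy as np

    def kalman_track(measurements, dt=0.1, q=1e-3, r=0.25):
        """1D constant-velocity Kalman filter over ultrasonic range readings.

        State is [position, velocity]; q and r are process and measurement
        noise variances. Only position is measured.
        """
        F = np.array([[1.0, dt], [0.0, 1.0]])   # speed-equation motion model
        H = np.array([[1.0, 0.0]])              # measurement picks out position
        Q = q * np.eye(2)
        x = np.array([measurements[0], 0.0])
        P = np.eye(2)
        estimates = []
        for z in measurements:
            x = F @ x                            # predict
            P = F @ P @ F.T + Q
            S = (H @ P @ H.T)[0, 0] + r          # innovation variance
            K = (P @ H.T) / S                    # Kalman gain, shape (2, 1)
            x = x + K[:, 0] * (z - x[0])         # update with innovation
            P = (np.eye(2) - K @ H) @ P
            estimates.append(float(x[0]))
        return estimates

    # Noisy readings of a target receding at constant speed.
    truth = [1.0 + 0.1 * i for i in range(50)]
    noisy = [p + 0.2 * ((-1) ** i) for i, p in enumerate(truth)]
    est = kalman_track(noisy)
    ```

    The filter smooths the alternating measurement noise while following the underlying ramp, which is the behavior the speed equation contributes.
    
    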

  17. 3D Parallel Multigrid Methods for Real-Time Fluid Simulation

    NASA Astrophysics Data System (ADS)

    Wan, Feifei; Yin, Yong; Zhang, Suiyu

    2018-03-01

    The multigrid method is widely used in fluid simulation because of its strong convergence. Besides accuracy, computational efficiency is an important factor in enabling real-time fluid simulation in computer graphics. To address this problem, we compare the performance of the algebraic multigrid and the geometric multigrid in the V-cycle and full-cycle schemes, respectively, and analyze the convergence and speed of each method. All calculations in this paper are performed with GPU parallel computing. Finally, we run experiments on 3D grids at each scale and report the exact experimental results.
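    The V-cycle scheme being benchmarked can be illustrated with a compact geometric-multigrid solver for the 1D Poisson equation (serial NumPy rather than GPU-parallel, and a weighted-Jacobi smoother chosen for clarity; these are simplifications relative to the paper):

    ```python
    import numpy as np

    def smooth(u, f, h, sweeps):
        """Weighted-Jacobi smoothing (omega = 2/3) for -u'' = f."""
        for _ in range(sweeps):
            u[1:-1] += (2.0 / 3.0) * 0.5 * (u[:-2] + u[2:]
                                            + h * h * f[1:-1] - 2.0 * u[1:-1])

    def vcycle(u, f, h, nu=10):
        """One V-cycle on a 1D grid (n = 2^k + 1) with zero Dirichlet boundaries."""
        n = len(u)
        smooth(u, f, h, nu)                                    # pre-smoothing
        if n > 3:
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
            rc = r[::2].copy()                                 # restrict residual
            rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
            ec = vcycle(np.zeros_like(rc), rc, 2.0 * h, nu)    # coarse correction
            u += np.interp(np.arange(n), np.arange(0, n, 2), ec)  # prolong
        smooth(u, f, h, nu)                                    # post-smoothing
        return u

    # Solve -u'' = pi^2 sin(pi x) on [0, 1]; the exact solution is sin(pi x).
    n = 65
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    f = np.pi**2 * np.sin(np.pi * x)
    u = np.zeros(n)
    for _ in range(8):
        u = vcycle(u, f, h)
    ```

    A full-cycle (FMG) scheme would instead start on the coarsest grid and prolong upward, using V-cycles at each level.
    
    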

  18. Tracking Accuracy of a Real-Time Fiducial Tracking System for Patient Positioning and Monitoring in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shchory, Tal; Schifter, Dan; Lichtman, Rinat

    Purpose: In radiation therapy there is a need to accurately know the location of the target in real time. A novel radioactive tracking technology has been developed to answer this need. The technology consists of a radioactive implanted fiducial marker designed to minimize migration and a linac mounted tracking device. This study measured the static and dynamic accuracy of the new tracking technology in a clinical radiation therapy environment. Methods and Materials: The tracking device was installed on the linac gantry. The radioactive marker was located in a tissue equivalent phantom. Marker location was measured simultaneously by the radioactive tracking system and by a Microscribe G2 coordinate measuring machine (certified spatial accuracy of 0.38 mm). Localization consistency throughout a volume and absolute accuracy in the Fixed coordinate system were measured at multiple gantry angles over volumes of at least 10 cm in diameter centered at isocenter. Dynamic accuracy was measured with the marker located inside a breathing phantom. Results: The mean consistency for the static source was 0.58 mm throughout the tested region at all measured gantry angles. The mean absolute position error in the Fixed coordinate system for all gantry angles was 0.97 mm. The mean real-time tracking error for the dynamic source within the breathing phantom was less than 1 mm. Conclusions: This novel radioactive tracking technology has the potential to be useful in accurate target localization and real-time monitoring for radiation therapy.

  19. Dual lumen transducer probes for real-time 3-D interventional cardiac ultrasound.

    PubMed

    Lee, Warren; Idriss, Salim F; Wolf, Patrick D; Smith, Stephen W

    2003-09-01

    We have developed dual lumen probes incorporating a forward-viewing matrix array transducer with an integrated working lumen for delivery of tools in real-time 3-D (RT3-D) interventional echocardiography. The probes are of 14 Fr and 22 Fr sizes, with 112 channel 2-D arrays operating at 5 MHz. We obtained images of cardiac anatomy and simultaneous interventional device delivery with an in vivo sheep model, including: manipulation of a 0.36-mm diameter guidewire into the coronary sinus, guidance of a transseptal puncture using a 1.2-mm diameter Brockenbrough needle, and guidance of a right ventricular biopsy using 3 Fr biopsy forceps. We have also incorporated the 22 Fr probe within a 6-mm surgical trocar to obtain apical four-chamber ultrasound (US) scans from a subcostal position. Combining the imaging catheter with a working lumen in a single device may simplify cardiac interventional procedures by allowing clinicians to easily visualize cardiac structures and simultaneously direct interventional tools in a RT3-D image.

  20. 3D Tracking of Mating Events in Wild Swarms of the Malaria Mosquito Anopheles gambiae

    PubMed Central

    Butail, Sachit; Manoukis, Nicholas; Diallo, Moussa; Yaro, Alpha S.; Dao, Adama; Traoré, Sekou F.; Ribeiro, José M.; Lehmann, Tovi; Paley, Derek A.

    2013-01-01

    We describe an automated tracking system that allows us to reconstruct the 3D kinematics of individual mosquitoes in swarms of Anopheles gambiae. The inputs to the tracking system are video streams recorded from a stereo camera system. The tracker uses a two-pass procedure to automatically localize and track mosquitoes within the swarm. A human-in-the-loop step verifies the estimates and connects broken tracks. The tracker performance is illustrated using footage of mating events filmed in Mali in August 2010. PMID:22254411
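    Reconstructing 3D kinematics from a stereo camera pair rests on triangulation; below is a minimal linear (DLT) triangulation sketch with toy projection matrices (the tracker's actual calibration and two-pass association steps are not shown):

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point seen by two cameras.

        P1, P2 are 3x4 projection matrices; x1, x2 are the corresponding
        image coordinates. Solves A X = 0 via SVD for the homogeneous point.
        """
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

    # Two toy normalized cameras: identity pose and a 1 m baseline along x.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.2, 0.1, 4.0])
    x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
    x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
    X = triangulate(P1, P2, x1, x2)
    ```
    
    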

  1. [A review of progress of real-time tumor tracking radiotherapy technology based on dynamic multi-leaf collimator].

    PubMed

    Liu, Fubo; Li, Guangjun; Shen, Jiuling; Li, Ligin; Bai, Sen

    2017-02-01

    During radiation treatment of patients with tumors in the thorax and abdomen, further improvement of targeting accuracy is restricted by intra-fractional tumor motion due to respiration. Real-time tumor-tracking radiotherapy is an optimal solution to intra-fractional tumor motion. The present review surveys the progress of real-time dynamic multi-leaf collimator (DMLC) tracking, including the DMLC tracking method, the time lag of the DMLC tracking system, and dosimetric verification.

  2. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The change of TV systems from 2D to 3D mode is the next expected step in the telecommunications world. Some work has already been done to achieve this progress technically, but the interaction of the third dimension with humans is not yet clear. Previous work found that any increased load on the visual system can create visual fatigue, as in prolonged TV watching, computer work, or video gaming. Watching S3D, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of the third dimension based on the characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content, showing the difference in the accumulation of visual fatigue and its assessment for the two types of content. To perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and gave their answers through subjective evaluation. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  3. Real-time auto-adaptive margin generation for MLC-tracked radiotherapy

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; Fast, M. F.; de Senneville, B. Denis; Nill, S.; Oelfke, U.; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2017-01-01

    In radiotherapy, abdominal and thoracic sites are candidates for performing motion tracking. With real-time control it is possible to adjust the multileaf collimator (MLC) position to the target position. However, positions are not perfectly matched, and position errors arise from system delays and the complicated response of the electromechanical MLC system. Although it is possible to compensate for part of these errors by using predictors, residual errors remain and need to be compensated to retain target coverage. This work presents a method to statistically describe tracking errors and to automatically derive a patient-specific, per-segment margin to compensate for the arising underdosage on-line, i.e. during plan delivery. The statistics of the geometric error between intended and actual machine position are derived using kernel density estimators. Subsequently a margin is calculated on-line according to a selected coverage parameter, which determines the amount of accepted underdosage. The margin is then applied to the actual segment to accommodate the positioning errors in the enlarged segment. The proof-of-concept was tested in an on-line tracking experiment and showed the ability to recover underdosages for two test cases, increasing V90% in the underdosed area by about 47% and 41%, respectively. The dose model used was able to predict the loss of dose due to tracking errors and could be used to infer the necessary margins. The implementation had a running time of 23 ms, which is compatible with the real-time requirements of MLC tracking systems. The auto-adaptivity to machine and patient characteristics makes the technique a generic yet intuitive candidate to avoid underdosages due to MLC tracking errors.
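    Deriving a margin from a kernel density estimate of the tracking error can be sketched as follows; the Gaussian kernel, fixed bandwidth, and 1D treatment are simplifying assumptions relative to the paper's per-segment method:

    ```python
    import numpy as np

    def kde_margin(errors, coverage=0.9, bandwidth=0.3):
        """Margin that absorbs a given fraction of observed tracking errors.

        A Gaussian kernel density estimate of the error distribution is
        integrated numerically; the margin is the smallest value whose CDF
        reaches the requested coverage parameter.
        """
        errors = np.asarray(errors, float)
        grid = np.linspace(errors.min() - 3 * bandwidth,
                           errors.max() + 3 * bandwidth, 2000)
        # Sum of Gaussian kernels centered on each observed error.
        pdf = np.exp(-0.5 * ((grid[:, None] - errors[None, :])
                             / bandwidth) ** 2).sum(1)
        cdf = np.cumsum(pdf)
        cdf /= cdf[-1]
        return grid[np.searchsorted(cdf, coverage)]

    # Tracking errors (mm) concentrated around 1 mm: the 90% margin sits
    # a little above the bulk of the distribution.
    rng = np.random.default_rng(2)
    errs = rng.normal(1.0, 0.5, 500)
    m = kde_margin(errs, coverage=0.9)
    ```

    Raising the coverage parameter trades a larger irradiated area for less accepted underdosage, which mirrors the trade-off described above.
    
    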

  4. Real-time seam tracking control system based on line laser visions

    NASA Astrophysics Data System (ADS)

    Zou, Yanbiao; Wang, Yanbo; Zhou, Weilin; Chen, Xiangzhi

    2018-07-01

    A six-degree-of-freedom robotic welding automatic tracking platform was designed in this study to realize real-time tracking of weld seams, and the feature-point tracking method and the adaptive fuzzy control algorithm used in the welding process were studied and analyzed. A laser vision sensor and its measuring principle were designed and studied. Before welding, the initial coordinate values of the feature points were obtained using morphological methods. During welding, a target tracking method based on a Gaussian kernel was used to extract the real-time feature points of the weld. An adaptive fuzzy controller was designed that takes the deviation of the feature points and the rate of change of the deviation as inputs. The quantization factors, scale factor, and weight function were adjusted in real time, and the input and output domains, fuzzy rules, and membership functions were constantly updated to generate a series of smooth bias voltages for the robot. Three groups of experiments were conducted on different types of curved welds in a strong-arc and spatter-noise environment using a 120 A short-circuit Metal Active Gas (MAG) arc welding current. The tracking error was less than 0.32 mm and the sensor's measurement frequency can be up to 20 Hz. The torch end moved smoothly during welding. The weld trajectory can be tracked accurately, thereby satisfying the requirements of welding applications.

  5. Holographic microscopy for 3D tracking of bacteria

    NASA Astrophysics Data System (ADS)

    Nadeau, Jay; Cho, Yong Bin; El-Kholy, Marwan; Bedrossian, Manuel; Rider, Stephanie; Lindensmith, Christian; Wallace, J. Kent

    2016-03-01

Understanding when, how, and if bacteria swim is key to understanding critical ecological and biological processes, from carbon cycling to infection. Imaging motility by traditional light microscopy is limited by focus depth, requiring cells to be constrained in z. Holographic microscopy offers an instantaneous 3D snapshot of a large sample volume, and is therefore ideal in principle for quantifying unconstrained bacterial motility. However, resolving and tracking individual cells is difficult due to the low amplitude and phase contrast of the cells; the index of refraction of typical bacteria differs from that of water only at the second decimal place. In this work we present a combination of optical and sample-handling approaches to facilitate bacterial tracking by holographic phase imaging. The first is the design of the microscope, an off-axis design with the optics along a common path, which minimizes alignment issues while providing all of the advantages of off-axis holography. Second, we use anti-reflective coated etalon glass in the design of the sample chambers to reduce internal reflections; the improvement from the antireflective coating is seen primarily in phase imaging, and its quantification is presented here. Finally, dyes may be used to increase phase contrast according to the Kramers-Kronig relations. Results using three test strains are presented, illustrating the different types of bacterial motility characterized by an enteric organism (Escherichia coli), an environmental organism (Bacillus subtilis), and a marine organism (Vibrio alginolyticus). Data processing steps to increase the quality of the phase images and facilitate tracking are also discussed.
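Holography's "instantaneous 3D snapshot" comes from numerically refocusing a single recorded complex field to any depth after the fact. A standard way to do this (a generic textbook step, not necessarily the authors' processing pipeline) is the angular-spectrum method:

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    """Refocus a complex field by a distance dz with the angular-spectrum
    method: FFT, multiply by the free-space transfer function, inverse FFT."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2   # squared axial spatial freq.
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * dz) * (arg > 0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```

Because propagation is unitary over the propagating band, refocusing by `dz` and then `-dz` recovers the original field, which is a convenient self-check.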

  6. Esophagogastric Junction pressure morphology: comparison between a station pull-through and real-time 3D-HRM representation.

    PubMed

    Nicodème, F; Lin, Z; Pandolfino, J E; Kahrilas, P J

    2013-09-01

Esophagogastric junction (EGJ) competence is the fundamental defense against reflux, making it of great clinical significance. However, characterizing EGJ competence with conventional manometric methodologies has been confounded by its anatomic and physiological complexity. Recent technological advances in miniaturization and electronics have led to the development of a novel device that may overcome these challenges. Nine volunteer subjects were studied with a novel 3D-HRM device providing 7.5 mm axial and 45° radial pressure resolution within the EGJ. Real-time measurements were made at rest and compared to simulations of a conventional pull-through made with the same device. Moreover, 3D-HRM recordings were analyzed to differentiate contributing pressure signals within the EGJ attributable to the lower esophageal sphincter (LES), diaphragm, and vasculature. 3D-HRM recordings suggested that sphincter length assessed by a pull-through method greatly exaggerated the estimate of LES length by failing to discriminate between circumferential contractile pressure and asymmetric extrinsic pressure signals attributable to diaphragmatic and vascular structures. Real-time 3D EGJ recordings found that the dominant constituents of EGJ pressure at rest were attributable to the diaphragm. 3D-HRM permits real-time recording of EGJ pressure morphology, facilitating analysis of the EGJ constituents responsible for its function as a reflux barrier and making it a promising tool in the study of GERD pathophysiology. The enhanced axial and radial recording resolution of the device should facilitate further studies to explore perturbations in the physiological constituents of EGJ pressure in health and disease. © 2013 John Wiley & Sons Ltd.

  7. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    PubMed

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

We present an image-quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the relative ratio between two parameters, i.e., the pixel size and the aperture of a parallax barrier slit, to improve uniformity of image brightness at a viewing zone. The eye tracking that monitors the positions of a viewer's eyes enables the pixel data control software to turn on only pixels for view images near the viewer's eyes (the other pixels turned off), thus reducing point crosstalk. The eye-tracking-combined software provides the right images for the respective eyes, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (no eye tracking). Our 3D display system also provides multiviews for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of point crosstalk of images at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly resolve the point crosstalk problem, one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D displays to replace their eyewear-assisted counterparts.
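The pixel-size/slit-aperture relationship mentioned above sits on top of textbook parallax-barrier geometry. The helper below is our own illustration from similar triangles (not the paper's engineered design): the slit pitch is slightly below `n_views` pixel pitches, and adjacent view images land a fixed lateral spacing apart at the eye plane:

```python
def barrier_geometry(n_views, pixel_pitch, gap, view_dist):
    """Textbook parallax-barrier relations from similar triangles:
    slit pitch (slightly below n_views * pixel_pitch, so rays from all
    pixel groups converge at the eye plane) and the lateral spacing
    between adjacent views at that plane. Units are the caller's."""
    slit_pitch = n_views * pixel_pitch * view_dist / (view_dist + gap)
    view_spacing = pixel_pitch * view_dist / gap
    return slit_pitch, view_spacing
```

For example, four views, a 0.1 mm pixel pitch, a 2 mm barrier gap, and a 600 mm viewing distance give a slit pitch just under 0.4 mm and a 30 mm view spacing at the eye plane.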

  8. TH-AB-202-02: Real-Time Verification and Error Detection for MLC Tracking Deliveries Using An Electronic Portal Imaging Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J Zwan, B; Central Coast Cancer Centre, Gosford, NSW; Colvill, E

    2016-06-15

Purpose: The added complexity of real-time adaptive multi-leaf collimator (MLC) tracking increases the likelihood of undetected MLC delivery errors. In this work we develop and test a system for real-time delivery verification and error detection for MLC tracking radiotherapy using an electronic portal imaging device (EPID). Methods: The delivery verification system relies on acquisition and real-time analysis of transit EPID image frames acquired at 8.41 fps. In-house software was developed to extract the MLC positions from each image frame. Three comparison metrics were used to verify the MLC positions in real time: (1) field size, (2) field location, and (3) field shape. The delivery verification system was tested for 8 VMAT MLC tracking deliveries (4 prostate and 4 lung) where real patient target motion was reproduced using a Hexamotion motion stage and a Calypso system. Sensitivity and detection delay were quantified for various types of MLC and system errors. Results: For both the prostate and lung test deliveries the MLC-defined field size was measured with an accuracy of 1.25 cm² (1 SD). The field location was measured with an accuracy of 0.6 mm and 0.8 mm (1 SD) for lung and prostate, respectively. Field location errors (i.e., tracking in the wrong direction) with a magnitude of 3 mm were detected within 0.4 s of occurrence in the X direction and 0.8 s in the Y direction. Systematic MLC gap errors as small as 3 mm were detected. The method was not found to be sensitive to random MLC errors and individual MLC calibration errors up to 5 mm. Conclusion: EPID imaging may be used for independent real-time verification of MLC trajectories during MLC tracking deliveries. Thresholds have been determined for error detection and the system has been shown to be sensitive to a range of delivery errors.
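The three comparison metrics can be sketched on binary aperture masks. `aperture_metrics` is a hypothetical name, and the Jaccard overlap used for "field shape" is our stand-in for whatever shape measure the authors implemented:

```python
import numpy as np

def aperture_metrics(measured, planned, pixel_area=0.0625):
    """Compare a measured MLC aperture mask (e.g. from an EPID frame)
    with the planned mask via the three metrics from the abstract:
    field size, field location (centroid shift), and field shape."""
    m, p = measured.astype(bool), planned.astype(bool)
    size_err = (m.sum() - p.sum()) * pixel_area        # signed area, cm^2
    cm = np.array(np.nonzero(m)).mean(axis=1)          # measured centroid (px)
    cp = np.array(np.nonzero(p)).mean(axis=1)          # planned centroid (px)
    loc_err = float(np.linalg.norm(cm - cp))           # centroid shift (px)
    shape = (m & p).sum() / (m | p).sum()              # Jaccard overlap [0, 1]
    return size_err, loc_err, shape
```

In a real-time loop each metric would be compared against a per-plan threshold and an interlock raised when one is exceeded.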

  9. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

Radiotherapy treatments have changed at a tremendously rapid pace. The dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of tumor motion during dose delivery has become greater due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion, such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image-intensity-based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom-designed software platforms which take much time and effort to develop. To address this challenge we have developed an open software platform especially focusing on tumor motion management. FIRE is a freely available open-source software platform. The core method for tumor tracking is purely intensity-based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance-critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90 ms (11 Hz). This is suitable to track tumors moving due to respiration (~0.3 Hz) or heartbeat (~1 Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases include tracking feasibility studies, patient positioning, and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.
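A similarity measure typically driving intensity-based 2D/3D registration is normalized cross-correlation between a digitally reconstructed radiograph (DRR) and the live image; the optimizer adjusts the 3D pose until this score peaks. The minimal CPU sketch below is illustrative only (the platform's actual metric and CUDA implementation may differ):

```python
import numpy as np

def ncc(drr, xray):
    """Normalized cross-correlation between a DRR and a live 2D image.
    Returns a value in [-1, 1]; invariant to affine intensity changes."""
    a = drr - drr.mean()
    b = xray - xray.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

Invariance to gain and offset is the reason NCC is preferred over plain sum-of-squared-differences when DRR and X-ray intensities are not calibrated to each other.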

  10. Real-time catheter tracking for high-dose-rate prostate brachytherapy using an electromagnetic 3D-guidance device: A preliminary performance study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Jun; Sebastian, Evelyn; Mangona, Victor

    2013-02-15

Purpose: In order to increase the accuracy and speed of catheter reconstruction in a high-dose-rate (HDR) prostate implant procedure, an automatic tracking system has been developed using an electromagnetic (EM) device (trakSTAR, Ascension Technology, VT). The performance of the system, including the accuracy and noise level with various tracking parameters and conditions, was investigated. Methods: A direct current (dc) EM transmitter (midrange model) and a sensor with a diameter of 1.3 mm (Model 130) were used in the trakSTAR system for tracking catheter position during HDR prostate brachytherapy. Localization accuracy was assessed under both static and dynamic conditions. For the static analysis, a calibration phantom was used to investigate error dependency on operating room (OR) table height (bottom vs midposition vs top), sensor position (distal tip of catheter vs connector end of catheter), direction [left-right (LR) vs anterior-posterior (AP) vs superior-inferior (SI)], sampling frequency (40 vs 80 vs 120 Hz), and interference from OR equipment (present vs absent). The mean and standard deviation of the localization offset in each direction and the corresponding error vectors were calculated. For the dynamic analysis, the paths of five straight catheters were tracked to study the effects of direction, sampling frequency, and interference of the EM field. Statistical analysis was conducted to compare the results in different configurations. Results: When interference was present in the static analysis, the error vectors were significantly higher at the top table position (3.3 ± 1.3 vs 1.8 ± 0.9 mm at bottom and 1.7 ± 1.0 mm at middle, p < 0.001), at the catheter end position (3.1 ± 1.1 vs 1.4 ± 0.7 mm at the tip position, p < 0.001), and at 40 Hz sampling frequency (2.6 ± 1.1 vs 2.4 ± 1.5 mm at 80 Hz and 1.8 ± 1.1 at 160 Hz, p < 0.001). So did the mean offset errors in the LR direction (-1.7 ± 1.4 vs 0.4 ± 0.5 mm in AP and

  11. Real-time automatic fiducial marker tracking in low contrast cine-MV images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang

    2013-01-15

/pixel). The standard deviations of the results from the 6 researchers are 2.3 and 2.6 pixels. The proposed framework takes about 128 ms to detect four markers in the first MV image and about 23 ms to track these markers in each of the subsequent images. Conclusions: The unified framework for tracking of multiple markers presented here can achieve marker detection accuracy similar to manual detection even in low-contrast cine-MV images. It can cope with shape deformations of fiducial markers at different gantry angles. The fast processing speed reduces the image processing portion of the system latency and can therefore improve the performance of real-time motion compensation.

  12. Drogue tracking using 3D flash lidar for autonomous aerial refueling

    NASA Astrophysics Data System (ADS)

    Chen, Chao-I.; Stettner, Roger

    2011-06-01

Autonomous aerial refueling (AAR) is an important capability for an unmanned aerial vehicle (UAV) to increase its flying range and endurance without increasing its size. This paper presents a novel tracking method that utilizes both 2D intensity and 3D point-cloud data acquired with a 3D Flash LIDAR sensor to establish relative position and orientation between the receiver vehicle and drogue during an aerial refueling process. Unlike classic vision-based sensors, a 3D Flash LIDAR sensor can provide 3D point-cloud data in real time without motion blur, day or night, and is capable of imaging through fog and clouds. The proposed method segments out the drogue through 2D analysis and estimates the center of the drogue from 3D point-cloud data for flight trajectory determination. A level-set front propagation routine is first employed to identify the target of interest and establish its silhouette information. Sufficient domain knowledge, such as the size of the drogue and the expected operable distance, is integrated into our approach to quickly eliminate unlikely target candidates. A statistical analysis along with a random sample consensus (RANSAC) is performed on the target to reduce noise and estimate the center of the drogue after all 3D points on the drogue are identified. The estimated center and drogue silhouette serve as the seed points to efficiently locate the target in the next frame.
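The RANSAC step can be illustrated with a robust sphere fit to noisy 3D points: sample minimal subsets, keep the largest consensus set, then refit on its inliers. The algebraic sphere parameterization below is a generic textbook choice, not necessarily the authors' drogue model:

```python
import numpy as np

def fit_sphere(pts):
    """Algebraic least-squares sphere: |p|^2 = 2 c.p + (r^2 - |c|^2)."""
    A = np.c_[2.0 * pts, np.ones(len(pts))]
    sol, *_ = np.linalg.lstsq(A, (pts**2).sum(axis=1), rcond=None)
    center, k = sol[:3], sol[3]
    r2 = k + center @ center
    return center, np.sqrt(r2) if r2 > 0 else np.nan

def ransac_sphere(points, tol=0.05, n_iter=300, seed=0):
    """RANSAC: fit spheres to random minimal subsets, keep the consensus
    set with the most inliers, and refit on those inliers."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iter):
        c, r = fit_sphere(points[rng.choice(len(points), 4, replace=False)])
        if not np.isfinite(r):
            continue  # degenerate (e.g. near-coplanar) sample
        inliers = np.abs(np.linalg.norm(points - c, axis=1) - r) < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return fit_sphere(points[best])
```

The consensus step is what makes the center estimate insensitive to the scattered off-target returns mentioned in the abstract.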

  13. 4D Optimization of Scanned Ion Beam Tracking Therapy for Moving Tumors

    PubMed Central

    Eley, John Gordon; Newhauser, Wayne David; Lüchtenborg, Robert; Graeff, Christian; Bert, Christoph

    2014-01-01

    Motion mitigation strategies are needed to fully realize the theoretical advantages of scanned ion beam therapy for patients with moving tumors. The purpose of this study was to determine whether a new four-dimensional (4D) optimization approach for scanned-ion-beam tracking could reduce dose to avoidance volumes near a moving target while maintaining target dose coverage, compared to an existing 3D-optimized beam tracking approach. We tested these approaches computationally using a simple 4D geometrical phantom and a complex anatomic phantom, that is, a 4D computed tomogram of the thorax of a lung cancer patient. We also validated our findings using measurements of carbon-ion beams with a motorized film phantom. Relative to 3D-optimized beam tracking, 4D-optimized beam tracking reduced the maximum predicted dose to avoidance volumes by 53% in the simple phantom and by 13% in the thorax phantom. 4D-optimized beam tracking provided similar target dose homogeneity in the simple phantom (standard deviation of target dose was 0.4% versus 0.3%) and dramatically superior homogeneity in the thorax phantom (D5-D95 was 1.9% versus 38.7%). Measurements demonstrated that delivery of 4D-optimized beam tracking was technically feasible and confirmed a 42% decrease in maximum film exposure in the avoidance region compared with 3D-optimized beam tracking. In conclusion, we found that 4D-optimized beam tracking can reduce the maximum dose to avoidance volumes near a moving target while maintaining target dose coverage, compared with 3D-optimized beam tracking. PMID:24889215

  14. 4D optimization of scanned ion beam tracking therapy for moving tumors

    NASA Astrophysics Data System (ADS)

    Eley, John Gordon; Newhauser, Wayne David; Lüchtenborg, Robert; Graeff, Christian; Bert, Christoph

    2014-07-01

    Motion mitigation strategies are needed to fully realize the theoretical advantages of scanned ion beam therapy for patients with moving tumors. The purpose of this study was to determine whether a new four-dimensional (4D) optimization approach for scanned-ion-beam tracking could reduce dose to avoidance volumes near a moving target while maintaining target dose coverage, compared to an existing 3D-optimized beam tracking approach. We tested these approaches computationally using a simple 4D geometrical phantom and a complex anatomic phantom, that is, a 4D computed tomogram of the thorax of a lung cancer patient. We also validated our findings using measurements of carbon-ion beams with a motorized film phantom. Relative to 3D-optimized beam tracking, 4D-optimized beam tracking reduced the maximum predicted dose to avoidance volumes by 53% in the simple phantom and by 13% in the thorax phantom. 4D-optimized beam tracking provided similar target dose homogeneity in the simple phantom (standard deviation of target dose was 0.4% versus 0.3%) and dramatically superior homogeneity in the thorax phantom (D5-D95 was 1.9% versus 38.7%). Measurements demonstrated that delivery of 4D-optimized beam tracking was technically feasible and confirmed a 42% decrease in maximum film exposure in the avoidance region compared with 3D-optimized beam tracking. In conclusion, we found that 4D-optimized beam tracking can reduce the maximum dose to avoidance volumes near a moving target while maintaining target dose coverage, compared with 3D-optimized beam tracking.

  15. WE-G-213CD-06: Implementation of Real-Time Tumor Tracking Using Robotic Couch.

    PubMed

    Buzurovic, I; Yu, Y; Podder, T

    2012-06-01

The purpose of this study was to present a novel method for real-time tumor tracking using a commercially available robotic treatment couch, and to evaluate tumor tracking accuracy. Commercially available robotic couches are capable of positioning patients with a high level of accuracy; however, there is currently no provision for compensating tumor motion using these systems. Elekta's existing commercial couch (Precise™ Table) was used without changing its design. To establish real-time couch motion for tracking, a novel control system was developed and implemented. The tabletop could be moved in the horizontal plane (laterally and longitudinally) using two Maxon 24 V motors with gearbox combination. Vertical motion was obtained using a robust 70 V Rockwell Automation motor. For vertical motor position sensing, a Model 755A Accu-Coder encoder was used. Two Baumer ITD_01_4mm shaft encoders were used for the lateral and longitudinal motions of the couch. The motors were connected to Advanced Motion Controls (AMC) amplifiers: an AMC-20A20-INV amplifier for the vertical motion, and two AMC-Z6A8 amplifiers for the lateral and longitudinal couch motions. The Galil DMC-4133 controller was connected to a standard PC via USB. The system had two independent power supplies: a Galil PSR-12-24-12A 24 VDC supply with diodes for the controller, 24 VDC motors, and amplifiers, and a Galil PS300W72 72 VDC supply for vertical motion. Control algorithms were developed for position and velocity adjustment. The system was tested for real-time tracking in the range of 50 mm in all three directions (superior-inferior, lateral, anterior-posterior). Accuracies were 0.15, 0.20, and 0.18 mm, respectively. Repeatability of the desired motion was within ±0.2 mm. Experimental results of couch tracking show the feasibility of real-time tumor tracking with a high level of accuracy (within sub-millimeter range). This tracking technique potentially offers a simple and
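A single axis of such couch tracking reduces to a position servo loop. The toy sketch below assumes an idealized actuator (nothing like the Galil/AMC hardware above) and simply commands a velocity proportional to the remaining position error each control cycle:

```python
def track(targets, kp=0.8, dt=0.02):
    """One couch axis: each control cycle, command a velocity proportional
    to the remaining position error, then integrate over the cycle."""
    pos, trace = 0.0, []
    for tgt in targets:
        vel = kp * (tgt - pos) / dt      # velocity command
        pos += vel * dt                  # idealized actuator response
        trace.append(pos)
    return trace
```

With `kp = 0.8` the residual error shrinks by a factor of five per cycle, so a 50 mm step is closed to well under a millimeter within a few cycles; real controllers must additionally handle actuator dynamics, delays, and prediction.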

  16. Direct cortical control of 3D neuroprosthetic devices.

    PubMed

    Taylor, Dawn M; Tillery, Stephen I Helms; Schwartz, Andrew B

    2002-06-07

    Three-dimensional (3D) movement of neuroprosthetic devices can be controlled by the activity of cortical neurons when appropriate algorithms are used to decode intended movement in real time. Previous studies assumed that neurons maintain fixed tuning properties, and the studies used subjects who were unaware of the movements predicted by their recorded units. In this study, subjects had real-time visual feedback of their brain-controlled trajectories. Cell tuning properties changed when used for brain-controlled movements. By using control algorithms that track these changes, subjects made long sequences of 3D movements using far fewer cortical units than expected. Daily practice improved movement accuracy and the directional tuning of these units.
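The classic decode underlying this line of work is the population vector, in which each unit votes for its preferred direction weighted by its rate above baseline. The study itself used adaptive algorithms that track the changing tuning; that adaptation is omitted in this static sketch:

```python
import numpy as np

def population_vector(rates, pref_dirs, baselines):
    """Population-vector decode: sum each unit's preferred direction
    weighted by its firing rate above baseline; return the unit vector."""
    v = ((rates - baselines)[:, None] * pref_dirs).sum(axis=0)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

With many units whose preferred directions cover the sphere, the decoded vector aligns closely with the intended movement direction even though each unit is broadly (cosine) tuned.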

  17. Quantification of Left Ventricular Linear, Areal and Volumetric Dimensions: A Phantom and in Vivo Comparison of 2-D and Real-Time 3-D Echocardiography with Cardiovascular Magnetic Resonance.

    PubMed

    Polte, Christian L; Lagerstrand, Kerstin M; Gao, Sinsia A; Lamm, Carl R; Bech-Hanssen, Odd

    2015-07-01

Two-dimensional echocardiography and real-time 3-D echocardiography have been reported to underestimate human left ventricular volumes significantly compared with cardiovascular magnetic resonance. We investigated the ability of 2-D echocardiography, real-time 3-D echocardiography and cardiovascular magnetic resonance to delineate dimensions of increasing complexity (diameter-area-volume) in a multimodality phantom model and in vivo, with the aim of elucidating the main cause of underestimation. All modalities were able to delineate phantom dimensions with high precision. In vivo, 2-D and real-time 3-D echocardiography underestimated short-axis end-diastolic linear and areal and all left ventricular volumetric dimensions significantly compared with cardiovascular magnetic resonance, but not short-axis end-systolic linear and areal dimensions. Underestimation increased successively from linear to volumetric left ventricular dimensions. When analyzed according to the same principles, 2-D and real-time 3-D echocardiography provided similar left ventricular volumes. In conclusion, echocardiographic underestimation of left ventricular dimensions is due mainly to inherent technical differences in the ability to differentiate trabeculated from compact myocardium. Identical endocardial border definition criteria are needed to minimize differences between the modalities and to ensure better comparability in clinical practice. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  18. Real-time self-calibration of a tracked augmented reality display

    NASA Astrophysics Data System (ADS)

    Baum, Zachary; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor

    2016-03-01

    PURPOSE: Augmented reality systems have been proposed for image-guided needle interventions but they have not become widely used in clinical practice due to restrictions such as limited portability, low display refresh rates, and tedious calibration procedures. We propose a handheld tablet-based self-calibrating image overlay system. METHODS: A modular handheld augmented reality viewbox was constructed from a tablet computer and a semi-transparent mirror. A consistent and precise self-calibration method, without the use of any temporary markers, was designed to achieve an accurate calibration of the system. Markers attached to the viewbox and patient are simultaneously tracked using an optical pose tracker to report the position of the patient with respect to a displayed image plane that is visualized in real-time. The software was built using the open-source 3D Slicer application platform's SlicerIGT extension and the PLUS toolkit. RESULTS: The accuracy of the image overlay with image-guided needle interventions yielded a mean absolute position error of 0.99 mm (95th percentile 1.93 mm) in-plane of the overlay and a mean absolute position error of 0.61 mm (95th percentile 1.19 mm) out-of-plane. This accuracy is clinically acceptable for tool guidance during various procedures, such as musculoskeletal injections. CONCLUSION: A self-calibration method was developed and evaluated for a tracked augmented reality display. The results show potential for the use of handheld image overlays in clinical studies with image-guided needle interventions.

  19. Comparison of a GPS needle-tracking system, multiplanar imaging and 2D imaging for real-time ultrasound-guided epidural anaesthesia: A randomized, comparative, observer-blinded study on phantoms.

    PubMed

    Menacé, Cécilia; Choquet, Olivier; Abbal, Bertrand; Bringuier, Sophie; Capdevila, Xavier

    2017-04-01

    The real-time ultrasound-guided paramedian sagittal oblique approach for neuraxial blockade is technically demanding. Innovative technologies have been developed to improve nerve identification and the accuracy of needle placement. The aim of this study was to evaluate three types of ultrasound scans during ultrasound-guided epidural lumbar punctures in a spine phantom. Eleven sets of 20 ultrasound-guided epidural punctures were performed with 2D, GPS, and multiplanar ultrasound machines (660 punctures) on a spine phantom using an in-plane approach. For all punctures, execution time, number of attempts, bone contacts, and needle redirections were noted by an independent physician. Operator comfort and visibility of the needle (tip and shaft) were measured using a numerical scale. The use of GPS significantly decreased the number of punctures, needle repositionings, and bone contacts. Comfort of the physician was also significantly improved with the GPS system compared with the 2D and multiplanar systems. With the multiplanar system, the procedure was not facilitated and execution time was longer compared with 2D imaging after Bonferroni correction but interaction between the type of ultrasound system and mean execution time was not significant in a linear mixed model. There were no significant differences regarding needle tip and shaft visibility between the systems. Multiplanar and GPS needle-tracking systems do not reduce execution time compared with 2D imaging using a real-time ultrasound-guided paramedian sagittal oblique approach in spine phantoms. The GPS needle-tracking system can improve performance in terms of operator comfort, the number of attempts, needle redirections and bone contacts. Copyright © 2016 Société française d'anesthésie et de réanimation (Sfar). Published by Elsevier Masson SAS. All rights reserved.

  20. Multiple particle tracking in 3-D+t microscopy: method and application to the tracking of endocytosed quantum dots.

    PubMed

    Genovesio, Auguste; Liedl, Tim; Emiliani, Valentina; Parak, Wolfgang J; Coppey-Moisan, Maité; Olivo-Marin, Jean-Christophe

    2006-05-01

    We propose a method to detect and track multiple moving biological spot-like particles showing different kinds of dynamics in image sequences acquired through multidimensional fluorescence microscopy. It enables the extraction and analysis of information such as number, position, speed, movement, and diffusion phases of, e.g., endosomal particles. The method consists of several stages. After a detection stage performed by a three-dimensional (3-D) undecimated wavelet transform, we compute, for each detected spot, several predictions of its future state in the next frame. This is accomplished thanks to an interacting multiple model (IMM) algorithm which includes several models corresponding to different biologically realistic movement types. Tracks are constructed, thereafter, by a data association algorithm based on the maximization of the likelihood of each IMM. The last stage consists of updating the IMM filters in order to compute final estimations for the present image and to improve predictions for the next image. The performances of the method are validated on synthetic image data and used to characterize the 3-D movement of endocytic vesicles containing quantum dots.
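A minimal interacting multiple model (IMM) cycle (mix model states, run a Kalman predict/update per model, reweight model probabilities by measurement likelihood, combine) can be sketched in 1D with two motion models. The models, noise levels, and transition matrix below are illustrative choices, not the paper's 3-D multi-model configuration:

```python
import numpy as np

DT = 1.0
MODELS = [  # (F, Q): random-walk vs constant-velocity dynamics on [pos, vel]
    (np.array([[1.0, 0.0], [0.0, 0.0]]), np.diag([0.1, 1e-4])),
    (np.array([[1.0, DT], [0.0, 1.0]]), np.diag([0.01, 0.01])),
]
H = np.array([[1.0, 0.0]])                    # position-only measurement
R = np.array([[0.05]])
TPM = np.array([[0.95, 0.05], [0.05, 0.95]])  # model transition matrix

def imm_step(xs, Ps, mu, z):
    """One IMM cycle: mix, per-model Kalman update, likelihood-based
    model-probability update, probability-weighted combination."""
    c = TPM.T @ mu                             # predicted model probs
    w = TPM * mu[:, None] / c[None, :]         # mixing weights w[i, j]
    xs0 = [sum(w[i, j] * xs[i] for i in range(2)) for j in range(2)]
    Ps0 = [sum(w[i, j] * (Ps[i] + np.outer(xs[i] - xs0[j], xs[i] - xs0[j]))
               for i in range(2)) for j in range(2)]
    like = np.empty(2)
    for j, (F, Q) in enumerate(MODELS):
        x, P = F @ xs0[j], F @ Ps0[j] @ F.T + Q   # predict
        S = H @ P @ H.T + R
        y = z - H @ x                             # innovation
        K = P @ H.T @ np.linalg.inv(S)
        xs[j], Ps[j] = x + K @ y, (np.eye(2) - K @ H) @ P
        like[j] = np.exp(-0.5 * y @ np.linalg.inv(S) @ y) / np.sqrt(
            2.0 * np.pi * np.linalg.det(S))
    mu = like * c
    mu = mu / mu.sum()
    return xs, Ps, mu, sum(mu[j] * xs[j] for j in range(2))
```

Feeding a steadily moving measurement drives the model probability toward the constant-velocity filter, which is exactly how the IMM discriminates between diffusion phases in the tracking application above.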

  1. Real-time 3D visualization of the thoraco-abdominal surface during breathing with body movement and deformation extraction.

    PubMed

    Povšič, K; Jezeršek, M; Možina, J

    2015-07-01

Real-time 3D visualization of the breathing displacements can be a useful diagnostic tool in order to immediately observe the most active regions on the thoraco-abdominal surface. The developed method is capable of separating non-relevant torso movement and deformations from the deformations that are solely related to breathing. This makes it possible to visualize only the breathing displacements. The system is based on the structured laser triangulation principle, with simultaneous spatial and color data acquisition of the thoraco-abdominal region. Based on the tracking of the attached passive markers, the torso movement and deformation is compensated using rigid and non-rigid transformation models on the three-dimensional (3D) data. The total time of 3D data processing together with visualization equals 20 ms per cycle. In vitro verification of the rigid movement extraction was performed using the iterative closest point algorithm as a reference. Furthermore, a volumetric evaluation on a live subject was performed to establish the accuracy of the rigid and non-rigid model. The root mean square deviation between the measured and the reference volumes shows an error of ±0.08 dm³ for rigid movement extraction. Similarly, the error was calculated to be ±0.02 dm³ for torsional deformation extraction and ±0.11 dm³ for lateral bending deformation extraction. The results confirm that during torso movement and deformation, the proposed method is sufficiently accurate to visualize only the displacements related to breathing. The method can be used, for example, during a breathing exercise on an indoor bicycle or a treadmill.
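The rigid-movement compensation step can be sketched with the Kabsch algorithm: estimate the least-squares rotation and translation from the tracked markers, then map the measured surface back into the reference pose so the residual displacement is attributable to breathing. The paper's non-rigid (torsion, bending) models are omitted here:

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch: least-squares rotation R and translation t with
    R @ src_i + t ~= dst_i for corresponding 3D point sets."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    Hm = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(Hm)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def breathing_only(surface, markers_ref, markers_now):
    """Map the current surface back into the reference pose, so the
    remaining displacement reflects breathing rather than body movement."""
    R, t = rigid_transform(markers_now, markers_ref)
    return surface @ R.T + t
```

With exact marker correspondences the recovery is exact; in practice marker noise makes this a least-squares compensation.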

  2. Real-time 3D human capture system for mixed-reality art and entertainment.

    PubMed

    Nguyen, Ta Huynh Duy; Qui, Tran Cong Thien; Xu, Ke; Cheok, Adrian David; Teo, Sze Lee; Zhou, ZhiYing; Mallawaarachchi, Asitha; Lee, Shang Ping; Liu, Wei; Teo, Hui Siang; Thang, Le Nam; Li, Yu; Kato, Hirokazu

    2005-01-01

A real-time system for capturing humans in 3D and placing them into a mixed reality environment is presented in this paper. The subject is captured by nine surrounding cameras. Looking through a head-mounted display with a camera in front pointing at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject viewed from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques to improve image quality and speed up the whole system. The frame rate of our system is around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaboration system, we also describe an application of the system in art and entertainment, named Magic Land, a mixed reality environment where captured human avatars and 3D computer-generated virtual animations can form an interactive story and play with each other. This system demonstrates many technologies in human-computer interaction: mixed reality, tangible interaction, and 3D communication. The results of the user study not only emphasize the benefits but also address some issues of these technologies.
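The core of shape-from-silhouette can be sketched as voxel carving: a grid point survives only if every camera projects it inside that camera's silhouette mask, leaving the visual hull. The sketch below uses generic 3x4 projection matrices and is far simpler than the optimized real-time implementation described above:

```python
import numpy as np

def carve(grid, silhouettes, projections):
    """Keep only grid points that project inside every camera's silhouette
    mask (the visual hull, up to grid resolution). projections are 3x4
    matrices mapping homogeneous 3D points to homogeneous pixel coords."""
    keep = np.ones(len(grid), bool)
    homo = np.c_[grid, np.ones(len(grid))]
    for sil, P in zip(silhouettes, projections):
        uvw = homo @ P.T
        uv = np.rint(uvw[:, :2] / uvw[:, 2:]).astype(int)   # perspective divide
        inside = ((uv[:, 0] >= 0) & (uv[:, 0] < sil.shape[1]) &
                  (uv[:, 1] >= 0) & (uv[:, 1] < sil.shape[0]))
        ok = np.zeros(len(grid), bool)
        ok[inside] = sil[uv[inside, 1], uv[inside, 0]]
        keep &= ok
    return grid[keep]
```

With two orthogonal cameras a cubical silhouette pair carves the grid down to the intersection box; nine real cameras tighten the hull around the subject.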

  3. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of the deformation to each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexities of the proposed fast DVC algorithm and of existing typical DVC algorithms are first analyzed quantitatively according to the necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.
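The reliability-guided transfer of initial guesses can be sketched as a best-first traversal with a priority queue keyed on the correlation coefficient: the most reliable solved point seeds its unsolved neighbors. Here `correlate` is a hypothetical stand-in for a sub-voxel registration (e.g., 3D IC-GN) at a single point:

```python
import heapq

def reliability_guided_track(neighbors, correlate, seed):
    """Solve `seed` first, then repeatedly pop the most reliable solved point
    and pass its displacement to unsolved neighbors as the initial guess,
    avoiding an integer-voxel search at every point.
    `correlate(point, guess)` -> (reliability, displacement)."""
    rel, d = correlate(seed, guess=(0.0, 0.0, 0.0))
    disp = {seed: d}
    heap = [(-rel, seed)]           # negate: heapq pops the max reliability
    done = set()
    while heap:
        _, p = heapq.heappop(heap)
        if p in done:
            continue
        done.add(p)
        for q in neighbors[p]:
            if q not in disp:       # first (most reliable) guess wins
                rel_q, d_q = correlate(q, guess=disp[p])
                disp[q] = d_q
                heapq.heappush(heap, (-rel_q, q))
    return disp
```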

  4. Real-time eye tracking for the assessment of driver fatigue.

    PubMed

    Xu, Junli; Min, Jianliang; Hu, Jianfeng

    2018-04-01

    Eye-tracking is an important approach to collecting evidence regarding participants' driving fatigue. In this contribution, the authors present a non-intrusive system for evaluating driver fatigue by tracking eye movement behaviours. A real-time eye-tracker was used to monitor participants' eye state for collecting eye-movement data. These data provide insight into the assessment of participants' fatigue state during monotonous driving. In this study, ten healthy subjects performed continuous simulated driving for 1-2 h on a driving simulator with eye-state monitoring, and the fixation time and pupil area were recorded with an eye-movement tracking device. For a good cost-performance ratio and fast computation time, the fuzzy K-nearest neighbour was employed to evaluate and analyse the influence of different participants on the variations in drivers' fixation duration and pupil area. The findings of this study indicated that there are significant differences in the value distribution of the pupil area between normal and fatigued driving states. Results also suggest that the recognition accuracy by jackknife validation reaches about 89% on average, indicating that the proposed approach has significant potential for real-time application and is capable of detecting driver fatigue.
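A simplified sketch of fuzzy K-nearest-neighbour classification, after Keller et al.: class memberships are weighted by inverse distance to the K nearest training samples. This variant assumes crisp training labels, and the feature tuples and class names below are illustrative, not the study's data:

```python
import math
from collections import defaultdict

def fuzzy_knn(train, labels, x, k=3, m=2):
    """Return fuzzy class memberships for query x: each of the k nearest
    neighbours contributes weight 1/d^(2/(m-1)); memberships sum to 1."""
    nearest = sorted((math.dist(t, x), c) for t, c in zip(train, labels))[:k]
    w = defaultdict(float)
    for d, c in nearest:
        w[c] += 1.0 / (d ** (2.0 / (m - 1)) + 1e-12)   # epsilon avoids d == 0
    total = sum(w.values())
    return {c: wi / total for c, wi in w.items()}
```

A crisp decision, if needed, is simply the class with the largest membership.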

  5. Real-time eye tracking for the assessment of driver fatigue

    PubMed Central

    Xu, Junli; Min, Jianliang

    2018-01-01

    Eye-tracking is an important approach to collecting evidence regarding participants' driving fatigue. In this contribution, the authors present a non-intrusive system for evaluating driver fatigue by tracking eye movement behaviours. A real-time eye-tracker was used to monitor participants' eye state for collecting eye-movement data. These data provide insight into the assessment of participants' fatigue state during monotonous driving. In this study, ten healthy subjects performed continuous simulated driving for 1–2 h on a driving simulator with eye-state monitoring, and the fixation time and pupil area were recorded with an eye-movement tracking device. For a good cost-performance ratio and fast computation time, the fuzzy K-nearest neighbour was employed to evaluate and analyse the influence of different participants on the variations in drivers' fixation duration and pupil area. The findings of this study indicated that there are significant differences in the value distribution of the pupil area between normal and fatigued driving states. Results also suggest that the recognition accuracy by jackknife validation reaches about 89% on average, indicating that the proposed approach has significant potential for real-time application and is capable of detecting driver fatigue. PMID:29750113

  6. 3D-SURFER 2.0: web platform for real-time search and characterization of protein surfaces.

    PubMed

    Xiong, Yi; Esquivel-Rodriguez, Juan; Sael, Lee; Kihara, Daisuke

    2014-01-01

    The increasing number of uncharacterized protein structures necessitates the development of computational approaches for function annotation using protein tertiary structures. Protein structure database search is the basis of any structure-based functional elucidation of proteins. 3D-SURFER is a web platform for real-time protein surface comparison of a given protein structure against the entire PDB using 3D Zernike descriptors. It can smoothly navigate the protein structure space in real-time from one query structure to another. A major new feature of Release 2.0 is the ability to compare the protein surface of a single chain, a single domain, or a single complex against databases of protein chains, domains, complexes, or a combination of all three in the latest PDB. Additionally, two types of protein structures can now be compared: all-atom-surface and backbone-atom-surface. The server can also accept a batch job for a large number of database searches. Pockets in protein surfaces can be identified by VisGrid and LIGSITEcsc. The server is available at http://kiharalab.org/3d-surfer/.
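Once 3D Zernike descriptors are precomputed for every database entry, real-time search reduces to ranking entries by descriptor distance; because the descriptors are rotation invariant, no structural alignment is needed. A toy sketch assuming plain Euclidean distance and hypothetical entry IDs:

```python
import math

def rank_by_descriptor(query_desc, database):
    """Rank (entry_id, descriptor) pairs by Euclidean distance between the
    query's 3D Zernike descriptor vector and each database descriptor."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sorted(database, key=lambda entry: dist(query_desc, entry[1]))
```

Because each comparison is just a vector distance, a whole-PDB scan is fast enough for interactive navigation of structure space.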

  7. Real-time active MR-tracking of metallic stylets in MR-guided radiation therapy

    PubMed Central

    Wang, Wei; Dumoulin, Charles L.; Viswanathan, Akila N.; Tse, Zion T. H.; Mehrtash, Alireza; Loew, Wolfgang; Norton, Isaiah; Tokuda, Junichi; Seethamraju, Ravi T.; Kapur, Tina; Damato, Antonio L.; Cormack, Robert A.; Schmidt, Ehud J.

    2014-01-01

    Purpose To develop an active MR-tracking system to guide placement of metallic devices for radiation therapy. Methods An actively tracked metallic stylet for brachytherapy was constructed by adding printed-circuit micro-coils to a commercial stylet. The coil design was optimized by electromagnetic simulation, and has a radio-frequency lobe pattern extending ~5 mm beyond the strong B0 inhomogeneity region near the metal surface. An MR-tracking sequence with phase-field dithering was used to overcome residual effects of B0 and B1 inhomogeneities caused by the metal, as well as inductive coupling to surrounding metallic stylets. The tracking system was integrated with a graphical workstation for real-time visualization. 3T MRI catheter-insertion procedures were tested in phantoms and ex-vivo animal tissue, and then performed in three patients during interstitial brachytherapy. Results The tracking system provided high-resolution (0.6 × 0.6 × 0.6 mm³) and rapid (16 to 40 frames per second, with three to one phase-field dithering directions) catheter localization in phantoms, animals, and three gynecologic cancer patients. Conclusion This is the first demonstration of active tracking of the shaft of a metallic stylet in MR-guided brachytherapy. It holds the promise of helping physicians achieve better targeting and improved outcomes in interstitial brachytherapy. PMID:24903165

  8. Mesoscopic in vivo 3-D tracking of sparse cell populations using angular multiplexed optical projection tomography

    PubMed Central

    Chen, Lingling; Alexandrov, Yuriy; Kumar, Sunil; Andrews, Natalie; Dallman, Margaret J.; French, Paul M. W.; McGinty, James

    2015-01-01

    We describe an angular multiplexed imaging technique for 3-D in vivo cell tracking of sparse cell distributions and optical projection tomography (OPT) with superior time-lapse resolution and a significantly reduced light dose compared to volumetric time-lapse techniques. We demonstrate that using dual axis OPT, where two images are acquired simultaneously at different projection angles, can enable localization and tracking of features in 3-D with a time resolution equal to the camera frame rate. This is achieved with a 200x reduction in light dose compared to an equivalent volumetric time-lapse single camera OPT acquisition with 200 projection angles. We demonstrate the application of this technique to mapping the 3-D neutrophil migration pattern observed over ~25.5 minutes in a live 2 day post-fertilisation transgenic LysC:GFP zebrafish embryo following a tail wound. PMID:25909009
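The dual-axis localization idea: each projection view constrains the two coordinates perpendicular to its viewing axis, so two simultaneous views at different angles together recover all three. A minimal sketch for the special case of orthogonal orthographic views (an assumption for illustration; the paper's imaging geometry may differ):

```python
def fuse_dual_axis(xy_cam0, yz_cam1):
    """Camera 0 looks along z and measures (x, y); camera 1 looks along x
    and measures (y, z). The shared y coordinate is averaged, giving a full
    3D position from one simultaneous frame pair."""
    x, y0 = xy_cam0
    y1, z = yz_cam1
    return (x, 0.5 * (y0 + y1), z)
```

Because both views are captured in the same camera frame, the 3D position is available at the full frame rate, with no need to sweep through 200 projection angles.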

  9. Mesoscopic in vivo 3-D tracking of sparse cell populations using angular multiplexed optical projection tomography.

    PubMed

    Chen, Lingling; Alexandrov, Yuriy; Kumar, Sunil; Andrews, Natalie; Dallman, Margaret J; French, Paul M W; McGinty, James

    2015-04-01

    We describe an angular multiplexed imaging technique for 3-D in vivo cell tracking of sparse cell distributions and optical projection tomography (OPT) with superior time-lapse resolution and a significantly reduced light dose compared to volumetric time-lapse techniques. We demonstrate that using dual axis OPT, where two images are acquired simultaneously at different projection angles, can enable localization and tracking of features in 3-D with a time resolution equal to the camera frame rate. This is achieved with a 200x reduction in light dose compared to an equivalent volumetric time-lapse single camera OPT acquisition with 200 projection angles. We demonstrate the application of this technique to mapping the 3-D neutrophil migration pattern observed over ~25.5 minutes in a live 2 day post-fertilisation transgenic LysC:GFP zebrafish embryo following a tail wound.

  10. Real-time simulation of thermal shadows with EMIT

    NASA Astrophysics Data System (ADS)

    Klein, Andreas; Oberhofer, Stefan; Schätz, Peter; Nischwitz, Alfred; Obermeier, Paul

    2016-05-01

    Modern missile systems use infrared imaging for tracking or target detection algorithms. The development and validation processes of these missile systems need high fidelity simulations capable of stimulating the sensors in real-time with infrared image sequences from a synthetic 3D environment. The Extensible Multispectral Image Generation Toolset (EMIT) is a modular software library developed at MBDA Germany for the generation of physics-based infrared images in real-time. EMIT is able to render radiance images in full 32-bit floating point precision using state-of-the-art computer graphics cards and advanced shader programs. An important functionality of an infrared image generation toolset is the simulation of thermal shadows, as these may cause matching errors in tracking algorithms. However, for real-time simulations, such as hardware-in-the-loop (HWIL) simulations of infrared seekers, thermal shadows are often neglected or precomputed, as they require a thermal balance calculation in four dimensions (3D geometry plus one-dimensional time, up to several hours into the past). In this paper we will show the novel real-time thermal simulation of EMIT. Our thermal simulation is capable of simulating thermal effects in real-time environments, such as thermal shadows resulting from the occlusion of direct and indirect irradiance. We conclude our paper with the practical use of EMIT in a missile HWIL simulation.

  11. Techniques for 3D tracking of single molecules with nanometer accuracy in living cells

    NASA Astrophysics Data System (ADS)

    Gardini, Lucia; Capitanio, Marco; Pavone, Francesco S.

    2013-06-01

    We describe a microscopy technique that, by combining wide-field single molecule microscopy, bifocal imaging, and Highly Inclined and Laminated Optical sheet (HILO) microscopy, allows 3D tracking with nanometer accuracy of single fluorescent molecules in vitro and in living cells.

  12. Person and gesture tracking with smart stereo cameras

    NASA Astrophysics Data System (ADS)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to the low power, small footprint, and low cost required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control including: video gaming, location based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform, and describe the person tracking and gesture tracking systems
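For a rectified stereo pair, the per-pixel depth that such systems compute follows from triangulation: Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. A minimal sketch (illustrative, not TYZX's implementation):

```python
def stereo_depth(f_px, baseline_m, x_left_px, x_right_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / disparity.
    Larger disparity means a closer point; zero disparity means infinity."""
    d = x_left_px - x_right_px
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return f_px * baseline_m / d
```

Running this per matched pixel yields the dense range image that makes tracking robust to lighting changes, since depth depends on geometry rather than appearance.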

  13. Four-dimensional dose distributions of step-and-shoot IMRT delivered with real-time tumor tracking for patients with irregular breathing: Constant dose rate vs dose rate regulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang Xiaocheng; Han-Oh, Sarah; Gui Minzhi

    2012-09-15

    Purpose: Dose-rate-regulated tracking (DRRT) is a tumor tracking strategy that programs the MLC to track the tumor under regular breathing and adapts to breathing irregularities during delivery using dose rate regulation. Constant-dose-rate tracking (CDRT) is a strategy that dynamically repositions the beam to account for intrafractional 3D target motion according to real-time information of target location obtained from an independent position monitoring system. The purpose of this study is to illustrate the differences in the effectiveness and delivery accuracy between these two tracking methods in the presence of breathing irregularities. Methods: Step-and-shoot IMRT plans optimized at a reference phase were extended to the remaining phases to generate 10-phased 4D-IMRT plans using a segment aperture morphing (SAM) algorithm, where both tumor displacement and deformation were considered. A SAM-based 4D plan has been demonstrated to provide better plan quality than plans not considering target deformation. However, delivering such a plan requires preprogramming of the MLC aperture sequence. Deliveries of the 4D plans using the DRRT and CDRT tracking approaches were simulated assuming the breathing period is either shorter or longer than on the planning day, for four IMRT cases: two lung and two pancreatic cases with maximum GTV centroid motion greater than 1 cm. In DRRT, the dose rate was regulated to speed up or slow down delivery as needed such that each planned segment is delivered at the planned breathing phase. In CDRT, the MLC is separately controlled to follow the tumor motion, but the dose rate was kept constant. In addition to breathing period change, the effect of breathing amplitude variation on target and critical tissue dose distribution is also evaluated. Results: Delivery of preprogrammed 4D plans by the CDRT method resulted in an average of 5% increase in target dose and a noticeable increase in organs at risk (OAR) dose when patient breathing is either 10

  14. Eulerian and Lagrangian methods for vortex tracking in 2D and 3D flows

    NASA Astrophysics Data System (ADS)

    Huang, Yangzi; Green, Melissa

    2014-11-01

    Coherent structures are a key component of unsteady flows in shear layers. Improvement of experimental techniques has led to larger amounts of data and requires automated procedures for vortex tracking. Many vortex criteria are Eulerian, and identify the structures by an instantaneous local swirling motion in the field, indicated by closed or spiral streamlines or pathlines in a reference frame. Alternatively, a Lagrangian Coherent Structures (LCS) analysis is a Lagrangian method based on quantities calculated along fluid particle trajectories. In the current work, vortex detection is demonstrated on data from the simulation of two cases: a 2D flow with a flat plate undergoing a 45° pitch-up maneuver and a 3D wall-bounded turbulent channel flow. Vortices are visualized and tracked by their centers and boundaries using Γ1, the Q criterion, and LCS saddle points. In the 2D flow case, the saddle-point traces showed a rapid acceleration of the structure, which indicates shedding from the plate. For the channel flow, the saddle-point traces show that the average structure convection speed exhibits a similar trend as a function of wall-normal distance as the mean velocity profile, and this leads to statistical quantities of vortex dynamics. Dr. Jeff Eldredge and his research group at UCLA are gratefully acknowledged for sharing the simulation database for the current research. This work was supported by the Air Force Office of Scientific Research under AFOSR Award No. FA9550-14-1-0210.
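The Q criterion mentioned above can be computed from the local velocity-gradient tensor ∇u as Q = ½(‖Ω‖² − ‖S‖²), where S and Ω are its symmetric and antisymmetric parts; Q > 0 marks rotation-dominated (vortical) regions. A minimal sketch:

```python
import numpy as np

def q_criterion(grad_u):
    """Q = 0.5 * (||Omega||_F^2 - ||S||_F^2) from the velocity-gradient
    tensor grad_u (d x d); positive Q indicates a vortex-dominated point."""
    S = 0.5 * (grad_u + grad_u.T)       # strain-rate tensor (symmetric part)
    Om = 0.5 * (grad_u - grad_u.T)      # rotation tensor (antisymmetric part)
    return 0.5 * (np.sum(Om**2) - np.sum(S**2))
```

Evaluating this at every grid point and thresholding at Q > 0 gives the instantaneous Eulerian vortex field whose centers and boundaries can then be tracked in time.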

  15. The Value of 3D Printing Models of Left Atrial Appendage Using Real-Time 3D Transesophageal Echocardiographic Data in Left Atrial Appendage Occlusion: Applications toward an Era of Truly Personalized Medicine.

    PubMed

    Liu, Peng; Liu, Rijing; Zhang, Yan; Liu, Yingfeng; Tang, Xiaoming; Cheng, Yanzhen

    The objective of this study was to assess the clinical feasibility of generating 3D printing models of the left atrial appendage (LAA) using real-time 3D transesophageal echocardiogram (TEE) data for preoperative reference in LAA occlusion. Percutaneous LAA occlusion can effectively prevent stroke in patients with atrial fibrillation. However, the anatomical structure of the LAA is so complicated that adequate information about its structure is essential for successful LAA occlusion. Emerging 3D printing technology has demonstrated the potential to depict structure more accurately than conventional imaging modalities by creating tangible patient-specific models. Typically, 3D printing data sets are acquired from CT and MRI, which may involve intravenous contrast, sedation, and ionizing radiation. It has been reported that 3D models of the LAA were successfully created from data acquired by CT. However, 3D printing of the LAA using real-time 3D TEE data has not yet been explored. Acquisition of 3D transesophageal echocardiographic data from 8 patients with atrial fibrillation was performed using the Philips EPIQ7 ultrasound system. Raw echocardiographic image data were opened in Philips QLAB, converted to 'Cartesian DICOM' format, and imported into Mimics® software to create 3D models of the LAA, which were printed using a rubber-like material. The printed 3D models were then used for preoperative reference and procedural simulation in LAA occlusion. We successfully printed LAAs of 8 patients. Each LAA costs approximately CNY 800-1,000 and the total process takes 16-17 h. Seven of the 8 Watchman devices predicted by preprocedural 2D TEE images were of the same sizes as those placed in the real operation. Interestingly, the 3D printing models were highly reflective of the shape and size of the LAAs, and all device sizes predicted by the 3D printing model were fully consistent with those placed in the real operation. Also, the 3D printed model could predict operating difficulty and the

  16. Rapid, High-Throughput Tracking of Bacterial Motility in 3D via Phase-Contrast Holographic Video Microscopy

    PubMed Central

    Cheong, Fook Chiong; Wong, Chui Ching; Gao, YunFeng; Nai, Mui Hoon; Cui, Yidan; Park, Sungsu; Kenney, Linda J.; Lim, Chwee Teck

    2015-01-01

    Tracking fast-swimming bacteria in three dimensions can be extremely challenging with current optical techniques and a microscopic approach that can rapidly acquire volumetric information is required. Here, we introduce phase-contrast holographic video microscopy as a solution for the simultaneous tracking of multiple fast moving cells in three dimensions. This technique uses interference patterns formed between the scattered and the incident field to infer the three-dimensional (3D) position and size of bacteria. Using this optical approach, motility dynamics of multiple bacteria in three dimensions, such as speed and turn angles, can be obtained within minutes. We demonstrated the feasibility of this method by effectively tracking multiple bacteria species, including Escherichia coli, Agrobacterium tumefaciens, and Pseudomonas aeruginosa. In addition, we combined our fast 3D imaging technique with a microfluidic device to present an example of a drug/chemical assay to study effects on bacterial motility. PMID:25762336

  17. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  18. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    NASA Astrophysics Data System (ADS)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin, Vibin

    2008-09-01

    The application of Digital Radiography in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years back. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data with object rotation for CT image reconstruction. In this paper the indigenous development of CT imaging based on the real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.
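The CT step (reconstructing a slice from X-ray projections taken while rotating the object) can be illustrated with unfiltered backprojection: each projection is smeared back across the image along its viewing direction and the contributions are averaged. A toy parallel-beam sketch, not the plant's reconstruction software (practical filtered backprojection would first apply a ramp filter to each projection):

```python
import numpy as np

def backproject(sinogram, angles, size):
    """Unfiltered backprojection of 1D parallel-beam projections onto a
    size x size grid centered at the image middle."""
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs, ys = xs - c, ys - c
    for proj, th in zip(sinogram, angles):
        t = xs * np.cos(th) + ys * np.sin(th)          # detector coordinate
        idx = np.clip(np.round(t + c).astype(int), 0, size - 1)
        recon += np.asarray(proj)[idx]                 # smear along the view
    return recon / len(angles)
```

Stacking the reconstructed slices for adjacent planes then gives the 3D-CT volume described above.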

  19. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-09-26

    The application of Digital Radiography in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years back. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data with object rotation for CT image reconstruction. In this paper the indigenous development of CT imaging based on the real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  20. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery.

    PubMed

    Li, Ruijiang; Fahimian, Benjamin P; Xing, Lei

    2011-07-01

    Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of the imaging x-ray. By effectively taking into consideration projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time and full 3D tumor localization with a single x-ray imager during treatment delivery. First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during the treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion when all the projection images during the treatment delivery are used for 3D localization purposes. The algorithm does not involve complex optimization of any model parameter and therefore can be used in a "plug-and-play" fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at the 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error between scenarios is small and is not statistically significant. The proposed
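The core Bayesian step (resolving the coordinate along the imaging ray by maximizing the posterior) has a closed form if the prior is approximated as Gaussian: the 2D measurement fixes the ray, and the prior picks the most probable depth along it. An illustrative sketch under that Gaussian assumption, not the authors' exact formulation:

```python
import numpy as np

def map_depth(mu, cov, ray_origin, ray_dir):
    """MAP 3D position constrained to the imaging ray x = o + s*d under a
    Gaussian prior N(mu, cov) built from setup projections. Minimizing
    (r + s*d)^T P (r + s*d) over s gives s = -(d^T P r) / (d^T P d)."""
    mu = np.asarray(mu, float)
    o = np.asarray(ray_origin, float)
    d = np.asarray(ray_dir, float)
    d = d / np.linalg.norm(d)
    P = np.linalg.inv(np.asarray(cov, float))   # prior precision matrix
    r = o - mu
    s = -(d @ P @ r) / (d @ P @ d)
    return o + s * d
```

Each new kV frame updates the prior, so the estimate tracks the tumor without any per-frame model fitting.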

  1. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.
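A local MSD-vs-time curve restricts each displacement to those starting inside a spatial region of interest, so heterogeneous geometries get one curve per region. A minimal sketch (for free diffusion in n dimensions, MSD(t) = 2nDt, so the local slope yields the local diffusion coefficient):

```python
def local_msd(track, lags, in_region):
    """Mean squared displacement vs lag, using only displacements whose
    starting point satisfies in_region. `track` is a list of position
    tuples sampled at a fixed frame interval; `lags` are frame counts."""
    out = {}
    for lag in lags:
        sq = [sum((b - a) ** 2 for a, b in zip(track[i], track[i + lag]))
              for i in range(len(track) - lag) if in_region(track[i])]
        out[lag] = sum(sq) / len(sq) if sq else float('nan')
    return out
```

Partitioning the chip into bins and calling this once per bin gives the spatial maps of diffusivity, accuracy and drift described above.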

  2. Four-dimensional (4D) tracking of high-temperature microparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhehui, E-mail: zwang@lanl.gov; Liu, Q.; Waganaar, W.

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.
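The local constant-velocity approximation amounts to a per-axis least-squares slope of the reconstructed 3D position versus time over a short window; a minimal illustrative sketch, not the authors' reconstruction pipeline:

```python
def fit_velocity(times, positions):
    """Local constant-velocity estimate: ordinary least-squares slope of
    position vs time, computed independently for each axis."""
    n = len(times)
    tm = sum(times) / n
    denom = sum((t - tm) ** 2 for t in times)
    vel = []
    for k in range(len(positions[0])):
        pm = sum(p[k] for p in positions) / n
        vel.append(sum((t - tm) * (p[k] - pm)
                       for t, p in zip(times, positions)) / denom)
    return vel
```

Sliding this fit along each reconstructed track gives the time-resolved velocity, and the 0.1 m/s resolution quoted above reflects the residual scatter of the fit.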

  3. Four-dimensional (4D) tracking of high-temperature microparticles

    NASA Astrophysics Data System (ADS)

    Wang, Zhehui; Liu, Q.; Waganaar, W.; Fontanese, J.; James, D.; Munsat, T.

    2016-11-01

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.

  4. Four-dimensional (4D) tracking of high-temperature microparticles

    DOE PAGES

    Wang, Zhehui; Liu, Qiuguang; Waganaar, Bill; ...

    2016-07-08

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. As a result, velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.

  5. Four-dimensional (4D) tracking of high-temperature microparticles.

    PubMed

    Wang, Zhehui; Liu, Q; Waganaar, W; Fontanese, J; James, D; Munsat, T

    2016-11-01

    High-speed tracking of hot and molten microparticles in motion provides rich information about burning plasmas in magnetic fusion. An exploding-wire apparatus is used to produce moving high-temperature metallic microparticles and to develop four-dimensional (4D) or time-resolved 3D particle tracking techniques. The pinhole camera model and algorithms developed for computer vision are used for scene calibration and 4D reconstructions. 3D positions and velocities are then derived for different microparticles. Velocity resolution approaches 0.1 m/s by using the local constant velocity approximation.

  6. Real-time eye motion correction in phase-resolved OCT angiography with tracking SLO

    PubMed Central

    Braaf, Boy; Vienola, Kari V.; Sheehy, Christy K.; Yang, Qiang; Vermeer, Koenraad A.; Tiruveedhula, Pavan; Arathorn, David W.; Roorda, Austin; de Boer, Johannes F.

    2012-01-01

In phase-resolved OCT angiography, blood flow is detected from phase changes between A-scans obtained from the same location. In ophthalmology, this technique is vulnerable to eye motion. We address this problem by combining inter-B-scan phase-resolved OCT angiography with real-time eye tracking. A tracking scanning laser ophthalmoscope (TSLO) at 840 nm provided eye tracking functionality and was combined with a phase-stabilized optical frequency domain imaging (OFDI) system at 1040 nm. Real-time eye tracking corrected eye drift and prevented discontinuity artifacts from (micro)saccadic eye motion in OCT angiograms. This improved the OCT spot stability on the retina and consequently reduced the phase noise, thereby enabling the detection of slower blood flows by extending the inter-B-scan time interval. In addition, eye tracking enabled the easy compounding of multiple data sets from the fovea of a healthy volunteer to create high-quality, eye-motion-artifact-free angiograms. High-quality images are presented of two distinct layers of vasculature in the retina and the dense vasculature of the choroid. Additionally, we present, for the first time, a phase-resolved OCT angiogram of the mesh-like network of the choriocapillaris containing typical pore openings. PMID:23304647
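    The benefit of extending the inter-B-scan interval follows directly from the standard phase-to-velocity relation v = λ0·Δφ / (4π·n·Δt). A minimal sketch, with parameter values assumed for illustration rather than taken from the paper's OFDI system:

```python
import numpy as np

lam0 = 1040e-9      # centre wavelength of the source [m] (assumed)
n_tissue = 1.38     # refractive index of tissue (assumed)

def axial_velocity(dphi, dt):
    """Axial flow speed from the phase change between two scans of the
    same location: v = lam0 * dphi / (4 * pi * n * dt)."""
    return lam0 * dphi / (4 * np.pi * n_tissue * dt)

# Round trip: a 0.5 mm/s flow and the phase shift it induces over 0.1 ms.
dt = 1e-4
v_true = 0.5e-3
dphi = 4 * np.pi * n_tissue * dt * v_true / lam0
v_est = axial_velocity(dphi, dt)

# With eye tracking stabilising the spot, the inter-B-scan interval can
# be extended; for a fixed phase-noise floor (0.1 rad assumed) this
# lowers the smallest detectable flow proportionally.
v_min_fast = axial_velocity(0.1, dt)
v_min_slow = axial_velocity(0.1, 2 * dt)
```

    Doubling Δt halves the minimum detectable flow, which is why reducing motion-induced phase noise lets the interval be extended toward slower flows.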

  7. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the data obtained, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four technical vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for highly accurate 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.

  8. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm can reduce the processing time of disparity estimation by selecting an adaptive disparity search range, and it can also increase the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair using the proposed algorithm, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesizing time of a reconstructed image by about 7.02 s compared with conventional algorithms.
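    The paper's exact search-range selection is not given here, but the core idea, restricting disparity search to an adaptively chosen window around a previous estimate instead of the full range, can be sketched with simple 1D block matching. All signals and parameters below are invented for illustration:

```python
import numpy as np

def block_match(left, right, x, d_center, d_half, win=3):
    """Sum-of-absolute-differences matching for pixel x of `left`,
    searching only disparities within d_center +/- d_half."""
    best_d, best_cost = d_center, np.inf
    for d in range(max(0, d_center - d_half), d_center + d_half + 1):
        if x - d - win < 0 or x + win >= len(left):
            continue
        cost = np.abs(left[x - win:x + win + 1]
                      - right[x - d - win:x - d + win + 1]).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

rng = np.random.default_rng(1)
right = rng.random(200)
d_true = 7
left = np.roll(right, d_true)   # left view shifted by a constant disparity

# One wide search to seed, then a narrow adaptive window around the
# neighbouring pixel's disparity: far fewer candidates to evaluate.
d_prev = block_match(left, right, 50, d_center=10, d_half=10)
d_next = block_match(left, right, 51, d_center=d_prev, d_half=2)
```

    The adaptive window (5 candidates instead of 21 here) is where the processing-time reduction comes from.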

  9. Magneto-optical tracking of flexible laparoscopic ultrasound: model-based online detection and correction of magnetic tracking errors.

    PubMed

    Feuerstein, Marco; Reichl, Tobias; Vogel, Jakob; Traub, Joerg; Navab, Nassir

    2009-06-01

    Electromagnetic tracking is currently one of the most promising means of localizing flexible endoscopic instruments such as flexible laparoscopic ultrasound transducers. However, electromagnetic tracking is also susceptible to interference from ferromagnetic material, which distorts the magnetic field and leads to tracking errors. This paper presents new methods for real-time online detection and reduction of dynamic electromagnetic tracking errors when localizing a flexible laparoscopic ultrasound transducer. We use a hybrid tracking setup to combine optical tracking of the transducer shaft and electromagnetic tracking of the flexible transducer tip. A novel approach of modeling the poses of the transducer tip in relation to the transducer shaft allows us to reliably detect and significantly reduce electromagnetic tracking errors. For detecting errors of more than 5 mm, we achieved a sensitivity and specificity of 91% and 93%, respectively. Initial 3-D rms error of 6.91 mm were reduced to 3.15 mm.
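    The model-based detection step can be caricatured as a distance test between the EM-tracked tip and the tip position predicted from the optically tracked shaft. The 5 mm threshold matches the paper; the positions and the trivial prediction model are illustrative:

```python
import numpy as np

def detect_em_error(em_tip, model_tip, threshold_mm=5.0):
    """Flag an electromagnetic tip measurement as distorted when it
    deviates from the model-based prediction by more than a threshold."""
    return bool(np.linalg.norm(em_tip - model_tip) > threshold_mm)

# Tip position predicted from the optical shaft pose + articulation model
# (here just a fixed point standing in for that prediction).
model_tip = np.array([10.0, 20.0, 30.0])
clean = model_tip + np.array([1.0, 1.0, 1.0])        # ~1.7 mm deviation
distorted = model_tip + np.array([5.0, 4.0, 3.0])    # ~7.1 mm deviation
flag_clean = detect_em_error(clean, model_tip)
flag_distorted = detect_em_error(distorted, model_tip)
```

    In the hybrid setup, a flagged EM reading can be replaced or blended with the model prediction, which is how detection turns into error reduction.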

  10. Development and Evaluation of Real-Time Volumetric Compton Gamma-Ray Imaging

    NASA Astrophysics Data System (ADS)

    Barnowski, Ross Wegner

An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments, thus improving the utility of gamma-ray imaging for source-search and radiation-mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real-time tracking and scene reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. The real-time tracking allows the imager to be moved throughout the environment or around a particular object of interest, obtaining the multiple perspectives necessary for standoff 3D imaging. A 3D model of the scene, provided in real-time by a simultaneous localization and mapping (SLAM) algorithm, can be incorporated into the image reconstruction, reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and two different mobile gamma-ray imaging platforms. The first is a cart-based imaging platform known as the Volumetric Compton Imager (VCI), comprising two 3D position-sensitive high-purity germanium (HPGe) detectors, exhibiting excellent gamma-ray imaging characteristics but with limited mobility due to the size and weight of the cart. The second system is the High Efficiency Multimodal Imager (HEMI), a hand-portable gamma-ray imager comprising 96 individual 1-cm3 CdZnTe crystals arranged in a two-plane, active-mask configuration. The HEMI instrument has poorer energy and angular resolution than the VCI, but is truly hand-portable, allowing the SDF concept to be tested in multiple environments and for more challenging imaging scenarios. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. Each of the two mobile imaging systems is used to demonstrate SDF for a variety of scenarios, including

  11. Design and Testing of a Smartphone Application for Real-Time Self-Tracking Diabetes Self-Management Behaviors.

    PubMed

    Groat, Danielle; Soni, Hiral; Grando, Maria Adela; Thompson, Bithika; Kaufman, David; Cook, Curtiss B

    2018-04-01

    Type 1 diabetes (T1D) care requires multiple daily self-management behaviors (SMBs). Preliminary studies on SMBs rely mainly on self-reported survey and interview data, and there is little information on adult T1D SMBs, along with corresponding compensation techniques (CTs), gathered in real-time. This article aims to use a patient-centered approach to design iDECIDE, a smartphone application that gathers daily diabetes SMBs and CTs related to meal and alcohol intake and exercise in real-time, and to contrast patients' actual behaviors against those self-reported with the app. Two usability studies were used to improve iDECIDE's functionality; these were followed by a 30-day pilot test of the redesigned app. A survey designed to capture diabetes SMBs and CTs was administered prior to the 30-day pilot test, and survey results were compared against iDECIDE logs. The usability studies revealed that participants desired advanced features for self-tracking meal and alcohol intake. Thirteen participants recorded over 1,200 CTs for carbohydrates during the 30-day study; participants also recorded 76 alcohol and 166 exercise CTs. Comparisons of survey responses and iDECIDE logs showed a mean (standard deviation) concordance of 77% (25) for SMBs related to meals, where a concordance of 100% indicates a perfect match. There was low concordance of 35% (35) and 46% (41) for alcohol and exercise events, respectively. The high variability found in SMBs and CTs highlights the need for real-time diabetes self-tracking mechanisms to better understand SMBs and CTs. Future work will use the developed app to collect SMBs and CTs and identify patient-specific diabetes adherence barriers that could be addressed with individualized education interventions. Schattauer GmbH Stuttgart.
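    The abstract does not give its concordance formula; one plausible stand-in, the ratio of the smaller to the larger event count scaled to percent, is sketched below with hypothetical counts:

```python
def concordance(survey_count, logged_count):
    """Percent agreement between self-reported and app-logged event
    counts; 100 means a perfect match. This min/max ratio is an assumed
    stand-in, not the paper's definition."""
    if max(survey_count, logged_count) == 0:
        return 100.0   # both report no events: trivially concordant
    return 100.0 * min(survey_count, logged_count) / max(survey_count, logged_count)

# Hypothetical monthly counts: 30 meals reported, 27 logged in the app.
meal_c = concordance(30, 27)
# Alcohol events are under-logged far more, giving low concordance.
alcohol_c = concordance(10, 4)
```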

  12. Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies.

    PubMed

    Peikon, Ian D; Fitzsimmons, Nathan A; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2009-06-15

    Collection and analysis of limb kinematic data are essential components of the study of biological motion, including research into biomechanics, kinesiology, neurophysiology and brain-machine interfaces (BMIs). In particular, BMI research requires advanced, real-time systems capable of sampling limb kinematics with minimal contact to the subject's body. To answer this demand, we have developed an automated video tracking system for real-time tracking of multiple body parts in freely behaving primates. The system employs high-contrast markers painted on the animal's joints to continuously track the three-dimensional positions of their limbs during activity. Two-dimensional coordinates captured by each video camera are combined and converted to three-dimensional coordinates using a quadratic fitting algorithm. Real-time operation of the system is accomplished using direct memory access (DMA). The system tracks the markers at a rate of 52 frames per second (fps) in real-time and up to 100fps if video recordings are captured to be later analyzed off-line. The system has been tested in several BMI primate experiments, in which limb position was sampled simultaneously with chronic recordings of the extracellular activity of hundreds of cortical cells. During these recordings, multiple computational models were employed to extract a series of kinematic parameters from neuronal ensemble activity in real-time. The system operated reliably under these experimental conditions and was able to compensate for marker occlusions that occurred during natural movements. We propose that this system could also be extended to applications that include other classes of biological motion.
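    The quadratic fitting step, mapping the two cameras' 2D marker coordinates to a 3D position, can be sketched as polynomial least squares on calibration data. The calibration mapping below is synthetic, not the system's real camera geometry:

```python
import numpy as np

def quad_features(uv):
    """Degree-2 polynomial features of the stacked 2D coordinates
    (u1, v1, u2, v2) observed by the two cameras."""
    u = np.asarray(uv, float)
    feats = [np.ones(len(u))]
    for i in range(4):
        feats.append(u[:, i])
    for i in range(4):
        for j in range(i, 4):
            feats.append(u[:, i] * u[:, j])
    return np.column_stack(feats)           # (n, 15)

# Hypothetical calibration set: known 3D targets and their 2D images.
rng = np.random.default_rng(2)
uv_cal = rng.uniform(-1, 1, size=(200, 4))

def true_map(uv):
    """Stand-in for the real camera geometry (quadratic, so the fit
    below can recover it exactly)."""
    u1, v1, u2, v2 = uv.T
    return np.column_stack([u1 + 0.1 * u1 * u2, v1 - 0.2 * v2 ** 2, u1 - u2])

xyz_cal = true_map(uv_cal)

# Least-squares fit of the 2D->3D quadratic mapping, then prediction.
W, *_ = np.linalg.lstsq(quad_features(uv_cal), xyz_cal, rcond=None)
xyz_pred = quad_features(uv_cal[:5]) @ W
```

    At run time, only a matrix multiply per frame is needed, which is compatible with the real-time (52 fps) operation described.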

  13. Real-time motion compensation for EM bronchoscope tracking with smooth output - ex-vivo validation

    NASA Astrophysics Data System (ADS)

    Reichl, Tobias; Gergel, Ingmar; Menzel, Manuela; Hautmann, Hubert; Wegner, Ingmar; Meinzer, Hans-Peter; Navab, Nassir

    2012-02-01

    Navigated bronchoscopy provides benefits for endoscopists and patients, but accurate tracking information is needed. We present a novel real-time approach for bronchoscope tracking combining electromagnetic (EM) tracking, airway segmentation, and a continuous model of output. We augment a previously published approach by including segmentation information in the tracking optimization instead of image similarity. Thus, the new approach is feasible in real-time. Since the true bronchoscope trajectory is continuous, the output is modeled using splines and the control points are optimized with respect to displacement from EM tracking measurements and spatial relation to segmented airways. Accuracy of the proposed method and its components is evaluated on a ventilated porcine ex-vivo lung with respect to ground truth data acquired from a human expert. We demonstrate the robustness of the output of the proposed method against added artificial noise in the input data. Smoothness in terms of inter-frame distance is shown to remain below 2 mm, even when up to 5 mm of Gaussian noise are added to the input. The approach is shown to be easily extensible to include other measures like image similarity.
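    The continuous-output idea, trading fidelity to noisy EM measurements against trajectory smoothness, can be illustrated with a discrete penalized least-squares smoother standing in for the paper's spline control points. Noise level and regularization weight are assumptions:

```python
import numpy as np

def smooth_trajectory(meas, lam=50.0):
    """Penalised least squares: minimise ||x - meas||^2 + lam*||D2 x||^2,
    where D2 is the second-difference operator. A discrete stand-in for
    optimising spline control points against EM measurements."""
    n = len(meas)
    D2 = np.diff(np.eye(n), 2, axis=0)        # rows of [1, -2, 1]
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, meas)           # solves all 3 axes at once

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 200)
truth = np.column_stack([np.sin(2 * np.pi * t), t, 0 * t])
noisy = truth + rng.normal(0, 2.0, truth.shape)   # mm-scale EM noise
smooth = smooth_trajectory(noisy)

def max_step(traj):
    """Largest inter-frame distance, the smoothness measure used above."""
    return np.linalg.norm(np.diff(traj, axis=0), axis=1).max()
```

    The paper additionally penalizes distance to the segmented airways; that would appear here as one more quadratic term in the objective.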

  14. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    PubMed

    Qi, Jin; Yang, Zhiyong

    2014-01-01

Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area has been performed on traditional two-dimensional (2D) videos, and both global and local methods have been used. Since 2D videos are sensitive to changes in lighting condition, view angle, and scale, researchers have in recent years begun to explore applications of 3D information in human activity understanding. Unfortunately, features that work well on 2D videos usually do not perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected onto the dictionaries and a set of sparse histograms of the projection coefficients is constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
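    The three steps can be caricatured as follows. The random "dictionaries" and the nearest-response classifier below stand in for the paper's ICA-learned dictionaries and SVM; everything is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Step 1 (stand-in): one small dictionary of atoms per activity.
# Real dictionaries would come from ICA on densely sampled joint volumes.
dictionaries = {act: rng.normal(size=(8, 30)) for act in ("walk", "wave")}

def sparse_histogram(volumes, D):
    """Step 2: project space-time volumes onto dictionary atoms and
    histogram which atom responds most strongly per volume."""
    coeffs = volumes @ D.T                      # (n_volumes, n_atoms)
    hist = np.bincount(np.abs(coeffs).argmax(axis=1), minlength=len(D))
    return hist / hist.sum()

def classify(volumes):
    """Step 3 (simplified): pick the activity whose own dictionary gives
    the largest total response, instead of the paper's SVM."""
    scores = {a: np.abs(volumes @ D.T).max(axis=1).sum()
              for a, D in dictionaries.items()}
    return max(scores, key=scores.get)

volumes = rng.normal(size=(20, 30))   # stand-in space-time joint volumes
hist = sparse_histogram(volumes, dictionaries["walk"])
label = classify(volumes)
```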

  15. Real-time image processing for particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Kreizer, Mark; Ratner, David; Liberzon, Alex

    2010-01-01

We present a novel high-speed particle tracking velocimetry (PTV) experimental system. Its novelty is due to the FPGA-based, real-time image processing "on camera". Instead of an image, the camera transfers to the computer, over a network card, only the relevant information about the identified flow tracers. The system is therefore ideal for remote particle tracking in research and industrial applications, as the camera can be controlled and data can be transferred over any high-bandwidth network. We present the hardware and the open-source software aspects of the PTV experiments. The tracking results of the new experimental system have been compared to flow visualization and particle image velocimetry measurements. The canonical flow in the central cross section of a cubic cavity (1:1:1 aspect ratio) in our lid-driven cavity apparatus is used for validation purposes. The downstream secondary eddy (DSE) is the sensitive portion of this flow, and its size was measured with increasing Reynolds number (via increasing belt velocity). The size of the DSE estimated from flow visualization, PIV, and compressed PTV is shown to agree within the experimental uncertainty of the methods applied.
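    The on-camera reduction, sending tracer positions instead of frames, can be sketched as threshold-plus-local-maximum detection with centroid refinement. This is a generic PTV scheme for illustration, not the paper's FPGA implementation:

```python
import numpy as np

def find_tracers(img, thresh=0.5, win=2):
    """Locate particle images as local maxima above a threshold, then
    refine each with an intensity-weighted centroid, so only (x, y)
    pairs need to leave the camera."""
    peaks = []
    H, W = img.shape
    for y in range(win, H - win):
        for x in range(win, W - win):
            patch = img[y - win:y + win + 1, x - win:x + win + 1]
            if img[y, x] >= thresh and img[y, x] == patch.max():
                ys, xs = np.mgrid[y - win:y + win + 1, x - win:x + win + 1]
                m = patch.sum()
                peaks.append((float((xs * patch).sum() / m),
                              float((ys * patch).sum() / m)))
    return peaks

# Synthetic frame with two Gaussian particle images.
yy, xx = np.mgrid[0:64, 0:64]
img = (np.exp(-((xx - 20.0) ** 2 + (yy - 15.0) ** 2) / 4.0)
       + np.exp(-((xx - 45.0) ** 2 + (yy - 40.0) ** 2) / 4.0))
tracers = find_tracers(img)
```

    The bandwidth saving is what makes remote operation practical: a few floats per tracer per frame instead of a full image.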

  16. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit

    2016-05-15

introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.

  17. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.

  18. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications. PMID:27147347

  19. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures

    PubMed Central

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.

    2012-01-01

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient’s skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures. PMID:24027616

  20. Use of a graphics processing unit (GPU) to facilitate real-time 3D graphic presentation of the patient skin-dose distribution during fluoroscopic interventional procedures.

    PubMed

    Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R

    2012-02-23

    We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient's skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures.
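    The per-element work that the GPU accelerates, deciding which surface elements of the patient graphic intersect the beam and accumulating dose there, is data-parallel. A vectorised NumPy caricature (beam geometry and dose values invented), where the single array pass mirrors the one-thread-per-element GPU pattern:

```python
import numpy as np

def accumulate_dose(verts, dose_map, beam_origin, beam_dir, half_angle, dose):
    """Add `dose` to every surface vertex inside a cone-shaped beam.
    One vectorised pass over all vertices, the same data-parallel
    pattern that maps naturally onto a GPU."""
    v = verts - beam_origin
    along = v @ beam_dir                          # distance along the axis
    cos_angle = along / np.linalg.norm(v, axis=1)
    inside = (along > 0) & (cos_angle >= np.cos(half_angle))
    dose_map[inside] += dose
    return inside

verts = np.array([[0.0, 0.0, 1.0],    # on the beam axis
                  [0.05, 0.0, 1.0],   # just inside a 10-degree cone
                  [1.0, 0.0, 1.0]])   # far outside
dose_map = np.zeros(len(verts))
inside = accumulate_dose(verts, dose_map, np.zeros(3),
                         np.array([0.0, 0.0, 1.0]),
                         np.deg2rad(10.0), dose=0.2)
```

    Refining the patient graphic multiplies the number of vertices, but since each vertex is tested independently, the GPU's throughput is barely affected, matching the scaling behaviour reported above.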

  1. Target tracking and 3D trajectory acquisition of cabbage butterfly (P. rapae) based on the KCF-BS algorithm.

    PubMed

    Guo, Yang-Yang; He, Dong-Jian; Liu, Cong

    2018-06-25

Insect behaviour is an important research topic in plant protection. To study insect behaviour accurately, it is necessary to observe and record flight trajectories quantitatively and precisely in three dimensions (3D). The goal of this research was to analyse frames extracted from videos using a combination of Kernelized Correlation Filters (KCF) and Background Subtraction (BS), the KCF-BS algorithm, to plot the 3D trajectory of the cabbage butterfly (P. rapae). Considering the experimental environment with a wind tunnel, a quadrature binocular vision insect video capture system was designed and applied in this study. The KCF-BS algorithm was used to track the butterfly in video frames and obtain the coordinates of the target centroid in the two videos. Finally, the 3D trajectory was calculated according to the matching relationship between corresponding frames of the two camera views. To verify the validity of the KCF-BS algorithm, Compressive Tracking (CT) and Spatio-Temporal Context Learning (STC) algorithms were run for comparison. The results revealed that the KCF-BS tracking algorithm performed more favourably than CT and STC in terms of accuracy and robustness.

  2. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
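    The abstraction described above, named objects served to clients independently of which hardware system tracked them, can be sketched as a tiny hub class. All names and methods here are illustrative, not the MetaTracker API:

```python
class TrackerHub:
    """Minimal sketch of a MetaTracker-style server: several hardware
    trackers report named objects, and clients see a single unified
    tracking source."""

    def __init__(self):
        self._poses = {}          # object name -> (system, position)

    def update(self, system, name, position):
        # Last writer wins, so an object handed from one tracking
        # system's volume to another's transitions seamlessly.
        self._poses[name] = (system, position)

    def get(self, name):
        """Clients query by name; the driver/API behind it is hidden."""
        return self._poses[name]

hub = TrackerHub()
hub.update("optical", "rifle", (0.0, 1.2, 0.5))
hub.update("inertial", "rifle", (0.1, 1.2, 0.5))  # hand-off between systems
system, pos = hub.get("rifle")
```

    A real implementation would add per-system coordinate transforms and timestamps before merging, but the name-keyed indirection is the core of the design.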

  3. Quantification of functional mitral regurgitation by real-time 3D echocardiography: comparison with 3D velocity-encoded cardiac magnetic resonance.

    PubMed

    Marsan, Nina Ajmone; Westenberg, Jos J M; Ypenburg, Claudia; Delgado, Victoria; van Bommel, Rutger J; Roes, Stijntje D; Nucifora, Gaetano; van der Geest, Rob J; de Roos, Albert; Reiber, Johan C; Schalij, Martin J; Bax, Jeroen J

    2009-11-01

    The aim of this study was to evaluate feasibility and accuracy of real-time 3-dimensional (3D) echocardiography for quantification of mitral regurgitation (MR), in a head-to-head comparison with velocity-encoded cardiac magnetic resonance (VE-CMR). Accurate grading of MR severity is crucial for appropriate patient management but remains challenging. VE-CMR with 3D three-directional acquisition has been recently proposed as the reference method. A total of 64 patients with functional MR were included. A VE-CMR acquisition was applied to quantify mitral regurgitant volume (Rvol). Color Doppler 3D echocardiography was applied for direct measurement, in "en face" view, of mitral effective regurgitant orifice area (EROA); Rvol was subsequently calculated as EROA multiplied by the velocity-time integral of the regurgitant jet on the continuous-wave Doppler. To assess the relative potential error of the conventional approach, color Doppler 2-dimensional (2D) echocardiography was performed: vena contracta width was measured in the 4-chamber view and EROA calculated as circular (EROA-4CH); EROA was also calculated as elliptical (EROA-elliptical), measuring vena contracta also in the 2-chamber view. From these 2D measurements of EROA, the Rvols were also calculated. The EROA measured by 3D echocardiography was significantly higher than EROA-4CH (p < 0.001) and EROA-elliptical (p < 0.001), with a significant bias between these measurements (0.10 cm(2) and 0.06 cm(2), respectively). Rvol measured by 3D echocardiography showed excellent correlation with Rvol measured by CMR (r = 0.94), without a significant difference between these techniques (mean difference = -0.08 ml/beat). Conversely, 2D echocardiographic approach from the 4-chamber view significantly underestimated Rvol (p = 0.006) as compared with CMR (mean difference = 2.9 ml/beat). The 2D elliptical approach demonstrated a better agreement with CMR (mean difference = -1.6 ml/beat, p = 0.04). 
Quantification of EROA and
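    The quantities compared above follow simple formulas: EROA from one vena contracta width (circular assumption) or two orthogonal widths (elliptical assumption), and Rvol = EROA × VTI of the regurgitant jet. The numbers below are hypothetical, chosen to show why the elliptical estimate exceeds the circular one for the elongated orifices of functional MR:

```python
import math

def eroa_circular(vc_4ch):
    """EROA (cm^2) assuming a circular orifice from one vena contracta
    width (cm) measured in the 4-chamber view."""
    return math.pi * (vc_4ch / 2) ** 2

def eroa_elliptical(vc_4ch, vc_2ch):
    """EROA (cm^2) assuming an elliptical orifice from two orthogonal
    vena contracta widths (cm)."""
    return math.pi * (vc_4ch / 2) * (vc_2ch / 2)

def regurgitant_volume(eroa_cm2, vti_cm):
    """Rvol (ml/beat) = EROA (cm^2) x VTI of the regurgitant jet (cm)."""
    return eroa_cm2 * vti_cm

# Hypothetical measurements: 0.4 cm in the 4-chamber view, 0.7 cm in the
# orthogonal 2-chamber view, VTI of 120 cm.
e_circ = eroa_circular(0.4)
e_ell = eroa_elliptical(0.4, 0.7)
rvol = regurgitant_volume(e_ell, 120.0)
```

    The direct 3D "en face" measurement sidesteps both shape assumptions, which is why it reads higher than either 2D-derived EROA in the study.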

  4. Real-time registration of 3D to 2D ultrasound images for image-guided prostate biopsy.

    PubMed

    Gillies, Derek J; Gardi, Lori; De Silva, Tharindu; Zhao, Shuang-Ren; Fenster, Aaron

    2017-09-01

During image-guided prostate biopsy, needles are targeted at tissues suspicious of cancer to obtain specimens for histological examination. Unfortunately, patient motion causes targeting errors when using an MR-transrectal ultrasound (TRUS) fusion approach to augment the conventional biopsy procedure. This study aims to develop an automatic motion correction algorithm approaching the frame rate of an ultrasound system, to be used in fusion-based prostate biopsy systems. Two modes of operation have been investigated for the clinical implementation of the algorithm: motion compensation using a single user-initiated correction performed prior to biopsy, and real-time continuous motion compensation performed automatically as a background process. Retrospective 2D and 3D TRUS patient images acquired prior to biopsy gun firing were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization. 2D and 3D images were downsampled and cropped to estimate the optimal amount of image information that would perform registrations quickly and accurately. The optimal search order during optimization was also analyzed to avoid local optima in the search space. Error in the algorithm was computed using target registration errors (TREs) from manually identified homologous fiducials in a clinical patient dataset. The algorithm was evaluated for real-time performance in the two modes of clinical implementation, user-initiated and continuous motion compensation, on a tissue-mimicking prostate phantom. After implementation in a TRUS-guided system with an image downsampling factor of 4, the proposed approach resulted in a mean ± std TRE and computation time of 1.6 ± 0.6 mm and 57 ± 20 ms, respectively. The user-initiated mode performed registrations for in-plane, out-of-plane, and roll motions with computation times of 108 ± 38 ms, 60 ± 23 ms, and 89 ± 27 ms, respectively, and corresponding
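    The intensity-based core, normalized cross-correlation maximized over a motion search space, can be sketched with an exhaustive integer-shift search standing in for Powell's method. Image sizes and shifts are invented for the example:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def register_translation(fixed, moving, search=5):
    """Exhaustive search for the integer (dy, dx) shift maximising NCC;
    a brute-force stand-in for the paper's Powell optimisation, which
    additionally handles out-of-plane and roll motion."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(fixed, shifted)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

rng = np.random.default_rng(3)
fixed = rng.random((32, 32))
moving = np.roll(np.roll(fixed, -3, axis=0), 2, axis=1)  # motion-corrupted copy
shift = register_translation(fixed, moving)
```

    Downsampling the images, as in the study, shrinks each NCC evaluation and is what pushes the registration toward frame-rate speeds.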

  5. Real-time 3D human pose recognition from reconstructed volume via voxel classifiers

    NASA Astrophysics Data System (ADS)

    Yoo, ByungIn; Choi, Changkyu; Han, Jae-Joon; Lee, Changkyo; Kim, Wonjun; Suh, Sungjoo; Park, Dusik; Kim, Junmo

    2014-03-01

    This paper presents a human pose recognition method which simultaneously reconstructs a human volume based on an ensemble of voxel classifiers from a single depth image in real time. Human pose recognition is a difficult task, since a single depth camera can capture only the visible surfaces of a human body. In order to recognize invisible (self-occluded) surfaces of a human body, the proposed algorithm employs voxel classifiers trained with multi-layered synthetic voxels. Specifically, ray-casting onto a volumetric human model generates synthetic voxels, where each voxel consists of a 3D position and an ID corresponding to a body part. The synthesized volumetric data, which contain both visible and invisible body voxels, are utilized to train the voxel classifiers. As a result, the voxel classifiers not only identify the visible voxels but also reconstruct the 3D positions and the IDs of the invisible voxels. The experimental results show improved performance on estimating human poses due to the capability of inferring the invisible human body voxels. It is expected that the proposed algorithm can be applied to many fields such as telepresence, gaming, virtual fitting, the wellness business, and real 3D content control on real 3D displays.
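    The distinction between visible and self-occluded voxels arises because a depth camera sees only the first surface along each ray. A toy ray-casting sketch (the volume layout and body-part IDs are arbitrary labels invented for illustration) that produces a synthetic depth map and the visible-part labels:

```python
import numpy as np

def raycast_visible(volume):
    # volume[x, y, z] holds a body-part ID (0 = empty space); rays travel
    # along +z, as for a depth camera that sees only the first occupied voxel
    occupied = volume > 0
    first_hit = np.argmax(occupied, axis=2)          # index of first hit per ray
    has_hit = occupied.any(axis=2)
    ids = np.take_along_axis(volume, first_hit[..., None], axis=2)[..., 0]
    ids[~has_hit] = 0                                # rays that hit nothing
    return first_hit, ids

vol = np.zeros((4, 4, 8), dtype=int)
vol[1, 1, 3] = 2   # visible voxel (first along the ray), part ID 2
vol[1, 1, 5] = 7   # self-occluded voxel behind it, part ID 7
depth, ids = raycast_visible(vol)   # depth[1, 1] == 3, ids[1, 1] == 2
```

    The training data in the paper keep both layers: the occluded voxel with ID 7 above is exactly the kind of sample the classifiers learn to infer from the visible surface alone.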

  6. 3D Visualization of Monte-Carlo Simulations of HZE Track Structure and Initial Chemical Species

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2009-01-01

    Heavy-ion biophysics is important for space radiation risk assessment [1] and hadron therapy [2]. The characteristics of heavy-ion tracks include a very high energy deposition region close to the track (<20 nm), denoted the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to biomolecules and the tissue matrix, considered the most important lesions for acute and long-term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte-Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy-ion biophysics.

  7. GPU acceleration towards real-time image reconstruction in 3D tomographic diffractive microscopy

    NASA Astrophysics Data System (ADS)

    Bailleul, J.; Simon, B.; Debailleul, M.; Liu, H.; Haeberlé, O.

    2012-06-01

    Phase microscopy techniques have regained interest because they allow the observation of unprepared specimens with excellent temporal resolution. Tomographic diffractive microscopy is an extension of holographic microscopy which permits 3D observations with a finer resolution than incoherent light microscopes. Specimens are imaged by a series of 2D holograms: their accumulation progressively fills the range of frequencies of the specimen in Fourier space. A 3D inverse FFT eventually provides a spatial image of the specimen. Consequently, acquisition and then reconstruction are both required before an image is produced, which hinders real-time monitoring of the observed specimen. The MIPS Laboratory has built a tomographic diffractive microscope with an unsurpassed 130 nm resolution but a low imaging speed of no less than one minute. Afterwards, a high-end PC reconstructs the 3D image in 20 seconds. We now want an interactive system providing preview images during the acquisition for monitoring purposes. We first present a prototype implementing this solution on CPU: acquisition and reconstruction are tied in a producer-consumer scheme, sharing common data in CPU memory. Then we present a prototype dispatching some reconstruction tasks to GPU in order to take advantage of SIMD parallelization for FFT and higher bandwidth for filtering operations. The CPU scheme takes 6 seconds for a 3D image update, while the GPU scheme can go down to 2 or even 1 seconds depending on the GPU class. This opens opportunities for 4D imaging of living organisms or crystallization processes. We also consider the relevance of GPU for 3D image interaction in our specific conditions.
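    The CPU producer-consumer scheme can be sketched with a bounded queue: one thread "acquires" holograms while another accumulates their spectra in Fourier space. The FFT accumulation below is a crude stand-in for the actual tomographic reconstruction; the sizes and function names are invented:

```python
import queue
import threading
import numpy as np

def acquire(holograms, buf):
    # producer: push each 2D hologram as soon as it is "acquired"
    for h in holograms:
        buf.put(h)
    buf.put(None)   # sentinel marks the end of acquisition

def reconstruct(buf, out):
    # consumer: accumulate spectra in Fourier space, invert at the end
    spectrum = None
    while True:
        h = buf.get()
        if h is None:
            break
        f = np.fft.fft2(h)
        spectrum = f if spectrum is None else spectrum + f
    out.append(np.fft.ifft2(spectrum).real)

rng = np.random.default_rng(1)
holos = [rng.random((32, 32)) for _ in range(5)]
buf, out = queue.Queue(maxsize=4), []
prod = threading.Thread(target=acquire, args=(holos, buf))
cons = threading.Thread(target=reconstruct, args=(buf, out))
prod.start(); cons.start(); prod.join(); cons.join()
# by linearity of the FFT, the result here equals the sum of the input frames
```

    The bounded queue is the key design point: the consumer can start filling Fourier space while acquisition is still running, which is what makes preview images during acquisition possible.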

  8. Automated tracking and quantification of angiogenic vessel formation in 3D microfluidic devices.

    PubMed

    Wang, Mengmeng; Ong, Lee-Ling Sharon; Dauwels, Justin; Asada, H Harry

    2017-01-01

    Angiogenesis, the growth of new blood vessels from pre-existing vessels, is a critical step in cancer invasion. Better understanding of the angiogenic mechanisms is required to develop effective antiangiogenic therapies for cancer treatment. We culture angiogenic vessels in 3D microfluidic devices under different Sphingosine-1-phosphate (S1P) conditions and develop an automated vessel formation tracking system (AVFTS) to track the angiogenic vessel formation and extract quantitative vessel information from the experimental time-lapse phase contrast images. The proposed AVFTS first preprocesses the experimental images, then applies a distance transform and an augmented fast marching method in skeletonization, and finally implements the Hungarian method in branch tracking. When applying the AVFTS to our experimental data, we achieve 97.3% precision and 93.9% recall compared with the ground truth obtained from manual tracking by visual inspection. This system enables biologists to quantitatively compare the influence of different growth factors. Specifically, we conclude that a positive S1P gradient increases cell migration and vessel elongation, leading to a higher probability for branching to occur. The AVFTS is also able to distinguish tip and stalk cells by considering the relative cell locations in a branch. Moreover, we generate a novel type of cell lineage plot, which not only provides cell migration and proliferation histories but also demonstrates cell phenotypic changes and branch information.
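    Branch tracking with the Hungarian method reduces to a linear assignment between branch endpoints in consecutive frames. A minimal sketch using SciPy's implementation (the tip coordinates are invented; the real cost function may also weigh branch length or orientation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_branches(prev_tips, curr_tips):
    # cost matrix: pairwise Euclidean distances between branch tips
    cost = np.linalg.norm(prev_tips[:, None, :] - curr_tips[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # Hungarian method
    return list(zip(rows.tolist(), cols.tolist()))

prev_tips = np.array([[0.0, 0.0], [10.0, 10.0]])
curr_tips = np.array([[10.5, 9.5], [0.5, 0.2]])   # same tips, reordered + drift
pairs = match_branches(prev_tips, curr_tips)      # [(0, 1), (1, 0)]
```

    The assignment is globally optimal for the given cost matrix, which is what lets the tracker stay consistent when nearby branches elongate between frames.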

  9. Eye Tracking to Explore the Impacts of Photorealistic 3D Representations on Pedestrian Navigation Performance

    NASA Astrophysics Data System (ADS)

    Dong, Weihua; Liao, Hua

    2016-06-01

    Despite the now-ubiquitous two-dimensional (2D) maps, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload than traditional symbolic 2D maps remains unknown. This study aims to explore whether photorealistic 3D representation can facilitate map reading and navigation in digital environments using a lab-based eye tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users' eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective and less efficient, and required a higher cognitive workload, than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. These empirical results can help improve the usability of pedestrian navigation maps in future designs.

  10. Real-time interactive 3D manipulation of particles viewed in two orthogonal observation planes.

    PubMed

    Perch-Nielsen, Ivan; Rodrigo, Peter; Glückstad, Jesper

    2005-04-18

    The generalized phase contrast (GPC) method has been applied to transform a single TEM00 beam into a manifold of counterpropagating-beam traps capable of real-time interactive manipulation of multiple microparticles in three dimensions (3D). This paper reports on the use of low numerical aperture (NA), non-immersion, objective lenses in an implementation of the GPC-based 3D trapping system. Contrary to high-NA based optical tweezers, the GPC trapping system demonstrated here operates with long working distance (>10 mm), and offers a wider manipulation region and a larger field of view for imaging through each of the two opposing objective lenses. As a consequence of the large working distance, simultaneous monitoring of the trapped particles in a second orthogonal observation plane is demonstrated.

  11. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  12. A novel 3D micron-scale DPTV (Defocused Particle Tracking Velocimetry) and its applications in microfluidic devices

    NASA Astrophysics Data System (ADS)

    Roberts, John

    2005-11-01

    The rapid advancements in micro/nano biotechnology demand quantitative tools for characterizing microfluidic flows in lab-on-a-chip applications, validating computational results for fully 3D flows in complex micro-devices, and efficiently observing cellular dynamics in 3D. We present a novel 3D micron-scale DPTV (defocused particle tracking velocimetry) that is capable of mapping out 3D Lagrangian, as well as 3D Eulerian, velocity flow fields at sub-micron resolution with a single camera. The main part of the imaging system is an epi-fluorescent microscope (Olympus IX 51), and the seeding particles are fluorescent particles with diameters ranging from 300 nm to 10 µm. A software package has been developed for identifying the (x,y,z,t) coordinates of the particles using the defocused images. Using the imaging system, we successfully mapped the pressure-driven flow fields in microfluidic channels. In particular, we measured the Lagrangian flow fields in a microfluidic channel with a herringbone pattern at the bottom, the latter used to enhance fluid mixing in the lateral directions. The 3D particle tracks revealed flow structure that had previously been seen only in numerical computation. This work is supported by the National Science Foundation (CTS - 0514443), the Nanobiotechnology Center at Cornell, and The New York State Center for Life Science Enterprise.

  13. 3D Visualization of near real-time remote-sensing observation for hurricanes field campaign using Google Earth API

    NASA Astrophysics Data System (ADS)

    Li, P.; Turk, J.; Vu, Q.; Knosp, B.; Hristova-Veleva, S. M.; Lambrigtsen, B.; Poulsen, W. L.; Licata, S.

    2009-12-01

    NASA is planning a new field experiment, the Genesis and Rapid Intensification Processes (GRIP), in the summer of 2010 to better understand how tropical storms form and develop into major hurricanes. The DC-8 aircraft and the Global Hawk Unmanned Airborne System (UAS) will be deployed loaded with instruments for measurements including lightning, temperature, 3D wind, precipitation, liquid and ice water contents, aerosol and cloud profiles. During the field campaign, both the spaceborne and the airborne observations will be collected in real-time and integrated with the hurricane forecast models. This observation-model integration will help the campaign achieve its science goals by allowing team members to effectively plan the mission with current forecasts. To support the GRIP experiment, JPL developed a website for interactive visualization of all related remote-sensing observations in the GRIP’s geographical domain using the new Google Earth API. All the observations are collected in near real-time (NRT) with 2 to 5 hour latency. The observations include a 1KM blended Sea Surface Temperature (SST) map from GHRSST L2P products; 6-hour composite images of GOES IR; stability indices, temperature and vapor profiles from AIRS and AMSU-B; microwave brightness temperature and rain index maps from AMSR-E, SSMI and TRMM-TMI; ocean surface wind vectors, vorticity and divergence of the wind from QuikSCAT; the 3D precipitation structure from TRMM-PR and vertical profiles of cloud and precipitation from CloudSAT. All the NRT observations are collected from the data centers and science facilities at NASA and NOAA, subsetted, re-projected, and composited into hourly or daily data products depending on the frequency of the observation. The data products are then displayed on the 3D Google Earth plug-in at the JPL Tropical Cyclone Information System (TCIS) website. The data products offered by the TCIS in the Google Earth display include image overlays, wind vectors, clickable

  14. 3D real-time visualization of blood flow in cerebral aneurysms by light field particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Carlsohn, Matthias F.; Kemmling, André; Petersen, Arne; Wietzke, Lennart

    2016-04-01

    Cerebral aneurysms require endovascular treatment to eliminate potentially lethal hemorrhagic rupture by hemostasis of blood flow within the aneurysm. Devices (e.g. coils and flow diverters) promote hemostasis; however, measurement of blood flow within an aneurysm or cerebral vessel before and after device placement at a microscopic level has not been possible so far. Such measurement would allow better individualized treatment planning and improve the design of devices. For experimental analysis, direct measurement of real-time microscopic cerebrovascular flow in micro-structures may be an alternative to computed flow simulations. Applying microscopic aneurysm flow measurement on a regular basis, to empirically assess a large number of different anatomic shapes and the corresponding effects of different devices, would require a fast and reliable method at low cost with high-throughput assessment. Transparent three-dimensional (3D) models of brain vessels and aneurysms may be used for microscopic flow measurements by particle image velocimetry (PIV); however, up to now the size of the structures has set the limits for conventional 3D-imaging camera set-ups. Online flow assessment requires additional computational power to cope with processing the large amounts of data generated by sequences of multi-view stereo images, e.g. generated by a light field camera capturing the 3D information by plenoptic imaging of complex flow processes. Recently, a fast and low-cost workflow for producing patient-specific three-dimensional models of cerebral arteries has been established by stereo-lithographic (SLA) 3D printing. These 3D arterial models are transparent and exhibit a replication precision within the submillimeter range required for accurate flow measurements under physiological conditions. We therefore test the feasibility of microscopic flow measurements by PIV analysis using a plenoptic camera system capturing light field image sequences. Averaging across a sequence of

  15. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose

    NASA Astrophysics Data System (ADS)

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-01

    A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except at times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data with the motion correlations between directional components of marker motion established from the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that, using our hybrid approach, a root-mean-square tracking error smaller than 1.2 mm or 1.5 mm can be achieved at a system latency of 310 ms or 460 ms, respectively. Because kV imaging is only used for a short period of time in our method, extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained.
Furthermore, no additional hardware is required with the
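    The auto-regressive prediction used to bridge system latency can be sketched as a least-squares AR fit over recent samples followed by iterated one-step prediction. This is a generic sketch, not the authors' adaptive windowed variant; the model order, window, and the sine trace standing in for a breathing trajectory are all assumptions:

```python
import numpy as np

def ar_predict(history, order=4, horizon=3):
    # fit AR coefficients by least squares over the recent window,
    # then iterate one-step predictions 'horizon' samples ahead
    x = np.asarray(history, dtype=float)
    rows = np.array([x[i:i + order] for i in range(len(x) - order)])
    coef, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    buf = list(x[-order:])
    for _ in range(horizon):
        buf.append(float(np.dot(coef, buf[-order:])))
    return buf[-1]

t = np.arange(100) * 0.1
trace = np.sin(t)                              # stand-in for a marker's breathing trace
pred = ar_predict(trace, order=4, horizon=3)   # forecast 3 samples ahead, i.e. sin(10.2)
```

    The horizon corresponds to the 310 ms or 460 ms latency in the paper: the controller acts on the predicted position rather than the last measured one.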

  16. Flexible robotics with electromagnetic tracking improves safety and efficiency during in vitro endovascular navigation.

    PubMed

    Schwein, Adeline; Kramer, Ben; Chinnadurai, Ponraj; Walker, Sean; O'Malley, Marcia; Lumsden, Alan; Bismuth, Jean

    2017-02-01

    One limitation of the use of robotic catheters is the lack of real-time three-dimensional (3D) localization and position updating: they are still navigated based on two-dimensional (2D) X-ray fluoroscopic projection images. Our goal was to evaluate whether incorporating an electromagnetic (EM) sensor on a robotic catheter tip could improve endovascular navigation. Six users were tasked to navigate using a robotic catheter with incorporated EM sensors in an aortic aneurysm phantom. All users cannulated two anatomic targets (left renal artery and posterior "gate") using four visualization modes: (1) standard fluoroscopy mode (control), (2) 2D fluoroscopy mode showing real-time virtual catheter orientation from EM tracking, (3) 3D model of the phantom with anteroposterior and endoluminal view, and (4) 3D model with anteroposterior and lateral view. Standard X-ray fluoroscopy was always available. Cannulation and fluoroscopy times were noted for every mode. 3D positions of the EM tip sensor were recorded at 4 Hz to establish kinematic metrics. The EM sensor-incorporated catheter navigated as expected according to all users. The success rate for cannulation was 100%. For the posterior gate target, mean cannulation times in minutes:seconds were 8:12, 4:19, 4:29, and 3:09, respectively, for modes 1, 2, 3 and 4 (P = .013), and mean fluoroscopy times were 274, 20, 29, and 2 seconds, respectively (P = .001). 3D path lengths, spectral arc length, root mean dimensionless jerk, and number of submovements were significantly improved when EM tracking was used (P < .05), showing higher quality of catheter movement with EM navigation. The EM tracked robotic catheter allowed better real-time 3D orientation, facilitating navigation, with a reduction in cannulation and fluoroscopy times and improvement of motion consistency and efficiency. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  17. A new 3D tracking method for cell mechanics investigation exploiting the capabilities of digital holography in microscopy

    NASA Astrophysics Data System (ADS)

    Miccio, L.; Memmolo, P.; Merola, F.; Fusco, S.; Netti, P. A.; Ferraro, P.

    2014-03-01

    A method for 3D tracking has been developed exploiting the features of Digital Holography in Microscopy (DHM). In the framework of a self-consistent platform for manipulation and measurement of biological specimens, we use DHM for quantitative and completely label-free analysis of samples with low amplitude contrast. Tracking capability extends the potential of DHM, allowing it to monitor the motion of appropriate probes and correlate it with sample properties. Complete 3D tracking has been obtained for the probes, avoiding the amplitude refocusing required in traditional tracking processes. Moreover, in the biology and biomedical research fields, one of the main topics is the understanding of the morphology and mechanics of cells and microorganisms. Biological samples present low amplitude contrast, which limits the information that can be retrieved through optical bright-field microscope measurements. The main effect on light propagating through such objects is in phase; this is known as phase retardation or phase shift. DHM is an innovative and alternative approach in microscopy and a good candidate for non-invasive and complete specimen analysis, because its main characteristic is the possibility to discern between intensity and phase information, performing quantitative mapping of the optical path length. In this paper, the flexibility of DH is employed to analyze the cell mechanics of unstained cells subjected to appropriate stimuli. DHM is used to measure all the parameters needed to understand the deformations induced by external, controlled stresses on in-vitro cells. Our configuration allows 3D tracking of micro-particles and simultaneously furnishes quantitative phase-contrast maps. Experimental results are presented and discussed for in-vitro cells.

  18. Eyes on the Earth 3D

    NASA Technical Reports Server (NTRS)

    Kulikov, Anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

    Eyes on the Earth 3D software gives scientists and the general public a real-time, 3D, interactive means of accurately viewing the locations, speed, and values of recently collected data from several of NASA's Earth Observing Satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where NASA's fleet of these satellites is now, or where it will be up to a year in the future. The software also displays several Earth science datasets that have been collected on a daily basis. The application uses a third-party, real-time, interactive 3D game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  19. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    PubMed

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

    The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of the human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable in spite of the considerably large deformations that occurred. There was good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be used effectively in real-time applications.

  20. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    NASA Astrophysics Data System (ADS)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.

  1. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
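    The dynamic time-warping idea behind the gesture recognizer is that two traces of the same gesture performed at different speeds still score a small distance. A minimal 1D DTW sketch (generic textbook DTW, not the paper's real-time segmentation-aware variant; the traces are invented):

```python
import numpy as np

def dtw_distance(a, b):
    # classic O(len(a) * len(b)) dynamic-time-warping distance
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(a[i - 1] - b[j - 1])          # local cost
            D[i, j] = c + min(D[i - 1, j],        # insertion
                              D[i, j - 1],        # deletion
                              D[i - 1, j - 1])    # match
    return float(D[n, m])

template = np.sin(np.linspace(0, np.pi, 20))     # reference gesture trace
same_slow = np.sin(np.linspace(0, np.pi, 35))    # same gesture, performed slower
other = np.linspace(0.0, 1.0, 20)                # a different gesture
# DTW tolerates the speed difference: template matches same_slow better than other
```

    In practice the traces would be multi-dimensional orientation or acceleration streams from the inertial sensor, with the local cost replaced by a vector norm.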

  2. Simultaneous 3D tracking of passive tracers and microtubule bundles in an active gel

    NASA Astrophysics Data System (ADS)

    Fan, Yi; Breuer, Kenneth S.; Fluids Team

    Kinesin-driven microtubule bundles generate a spontaneous flow in unconfined geometries. They exhibit properties of active matter, including the emergence of collective motion, reduction of apparent viscosity, and consumption of local energy. Here we present results from 3D tracking of passive tracers (using Airy rings and 3D scanning) synchronized with 3D measurement of the microtubule bundles' motion. This technique is applied to measure viscosity variation and collective flow in a confined geometry, with particular attention paid to the self-pumping system recently reported by Wu et al. (2016). Results show that the viscosity in an equilibrium microtubule network is around half that of the isotropic unbundled microtubule solution. Cross-correlations of the active microtubule network and passive tracers define a neighborhood around microtubule bundles in which passive tracers are effectively transported. Supported by the NSF MRSEC.

  3. 3D segmentation of kidney tumors from freehand 2D ultrasound

    NASA Astrophysics Data System (ADS)

    Ahmad, Anis; Cool, Derek; Chew, Ben H.; Pautler, Stephen E.; Peters, Terry M.

    2006-03-01

    To completely remove a tumor from a diseased kidney, while minimizing the resection of healthy tissue, the surgeon must be able to accurately determine its location, size and shape. Currently, the surgeon mentally estimates these parameters by examining pre-operative Computed Tomography (CT) images of the patient's anatomy. However, these images do not reflect the state of the abdomen or organ during surgery. Furthermore, these images can be difficult to place in proper clinical context. We propose using Ultrasound (US) to acquire images of the tumor and the surrounding tissues in real-time, then segmenting these US images to present the tumor as a three dimensional (3D) surface. Given the common use of laparoscopic procedures that inhibit the range of motion of the operator, we propose segmenting arbitrarily placed and oriented US slices individually using a tracked US probe. Given the known location and orientation of the US probe, we can assign 3D coordinates to the segmented slices and use them as input to a 3D surface reconstruction algorithm. We have implemented two approaches for 3D segmentation from freehand 2D ultrasound. Each approach was evaluated on a tissue-mimicking phantom of a kidney tumor. The performance of our approach was determined by measuring RMS surface error between the segmentation and the known gold standard and was found to be below 0.8 mm.
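    Assigning 3D coordinates to a segmented 2D slice is a rigid transform by the tracked probe pose: contour points lie in the image plane (z = 0) and are mapped through the probe's image-to-world matrix. A small sketch (the 4x4 pose and contour values are invented for illustration):

```python
import numpy as np

def slice_to_world(points_2d, pose):
    # pose: 4x4 rigid transform (image coords -> world coords) from the tracker;
    # segmented 2D points lie in the US image plane, so z = 0
    n = len(points_2d)
    pts = np.column_stack([points_2d, np.zeros(n), np.ones(n)])
    return (pose @ pts.T).T[:, :3]

# toy pose: 90-degree rotation about the image x-axis plus a translation
pose = np.array([[1.0, 0.0,  0.0, 5.0],
                 [0.0, 0.0, -1.0, 0.0],
                 [0.0, 1.0,  0.0, 2.0],
                 [0.0, 0.0,  0.0, 1.0]])
contour = np.array([[1.0, 2.0], [3.0, 4.0]])   # segmented points in one slice
world = slice_to_world(contour, pose)          # 3D input for surface reconstruction
```

    Pooling the transformed contours from many arbitrarily oriented slices yields the 3D point cloud fed to the surface reconstruction algorithm.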

  4. 3D Model Visualization Enhancements in Real-Time Game Engines

    NASA Astrophysics Data System (ADS)

    Merlo, A.; Sánchez Belenguer, C.; Vendrell Vidal, E.; Fantini, F.; Aliperta, A.

    2013-02-01

    This paper describes two procedures used to disseminate tangible cultural heritage through real-time 3D simulations providing accurate, scientific representations. The main idea is to create simple geometries (with a low polygon count) and apply two different texture maps to them: a normal map and a displacement map. There are two ways to achieve models that fit with normal or displacement maps: with the former (normal maps), the number of polygons in the reality-based model may be dramatically reduced by decimation algorithms and the normals may then be calculated by rendering them to texture (baking). With the latter, a LOD model is needed; its topology has to be quad-dominant for it to be converted to a good-quality subdivision surface (with consistent tangency and curvature all over). The subdivision surface is constructed using methodologies for the construction of assets borrowed from character animation: these techniques, recently adopted in many entertainment applications, are known as "retopology". The normal map is used as usual, in order to shade the surface of the model in a realistic way. The displacement map is used to finish, in real time, the flat faces of the object, by adding the geometric detail missing in the low-poly models. The accuracy of the resulting geometry is progressively refined based on the distance from the viewing point, so the result is like a continuous level of detail, the only difference being that there is no need to create different 3D models for one and the same object. All geometric detail is calculated in real time according to the displacement map. This approach can be used in Unity, a real-time 3D engine originally designed for developing computer games. It provides a powerful rendering engine, fully integrated with a complete set of intuitive tools and rapid workflows that allow users to easily create interactive 3D content.
With the release of Unity 4.0, new rendering features have been added, including Direct

  5. Quantitative 3-d diagnostic ultrasound imaging using a modified transducer array and an automated image tracking technique.

    PubMed

    Hossack, John A; Sumanaweera, Thilaka S; Napel, Sandy; Ha, Jun S

    2002-08-01

    An approach for acquiring dimensionally accurate three-dimensional (3-D) ultrasound data from multiple 2-D image planes is presented. It is based on a modified linear phased array comprising a central imaging array that acquires multiple, essentially parallel, 2-D slices as the transducer is translated over the tissue of interest. Small, perpendicularly oriented tracking arrays are integrally mounted on each end of the imaging transducer. As the transducer is translated in an elevational direction with respect to the central imaging array, the images obtained by the tracking arrays remain largely coplanar. The motion between successive tracking images is determined using a minimum sum of absolute differences (MSAD) image-matching technique with subpixel matching resolution. An initial phantom-scanning test of a prototype 8 MHz array indicates that a linear dimensional accuracy of 4.6% (2 sigma) is achievable. This result compares favorably with those obtained using an assumed average velocity [31.5% (2 sigma) accuracy] and with an approach based on measuring image-to-image decorrelation [8.4% (2 sigma) accuracy]. The prototype array and imaging system were also tested in a clinical environment, and early results suggest that the approach has the potential to enable a low-cost, rapid screening method for detecting carotid artery stenosis. The average time for performing a carotid stenosis screening test was reduced from 45 minutes using 2-D duplex Doppler to 12 minutes using the new 3-D scanning approach.
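    The MSAD matching step with subpixel refinement can be sketched in one dimension: slide one signal against the other, take the shift with the minimum mean absolute difference, then refine with a parabolic fit around the minimum. The parabolic interpolation is a common subpixel trick; the paper's exact refinement may differ.

    ```python
    import numpy as np

    def msad_shift(ref, cur, max_shift=3):
        """Estimate the shift between two tracking-image lines by minimum
        sum of absolute differences (MSAD), refined to subpixel precision
        with a parabolic fit around the discrete minimum."""
        n = len(ref)
        sads = []
        for s in range(-max_shift, max_shift + 1):
            a = ref[max(0, -s): n - max(0, s)]
            b = cur[max(0, s): n - max(0, -s)]
            sads.append(np.abs(a - b).mean())
        sads = np.array(sads)
        i = int(np.argmin(sads))
        best = float(i - max_shift)
        if 0 < i < len(sads) - 1:
            denom = sads[i - 1] - 2 * sads[i] + sads[i + 1]
            if denom > 0:  # guard against a flat neighborhood
                best += float((sads[i - 1] - sads[i + 1]) / (2 * denom))
        return best

    # example: the second window lags the first by two samples
    sig = np.sin(np.arange(200) * 0.1)
    shift = msad_shift(sig[50:150], sig[48:148])
    ```

    In the full method this per-pair displacement is accumulated over the image sequence to recover the elevational translation of the probe.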

  6. Improvements to the ShipIR/NTCS adaptive track gate algorithm and 3D flare particle model

    NASA Astrophysics Data System (ADS)

    Ramaswamy, Srinivasan; Vaitekunas, David A.; Gunter, Willem H.; February, Faith J.

    2017-05-01

    A key component in any image-based tracking system is the adaptive tracking algorithm used to segment the image into potential targets, rank-and-select the best candidate target, and gate the selected target to further improve tracker performance. Similarly, a key component in any soft-kill response to an incoming guided missile is the flare/chaff decoy used to distract or seduce the seeker homing system away from the naval platform. This paper describes the recent improvements to the naval threat countermeasure simulator (NTCS) of the NATO-standard ship signature model (ShipIR). Efforts to analyse and match the 3D flare particle model against actual IR measurements of the Chemring TALOS IR round resulted in further refinement of the 3D flare particle distribution. The changes in the flare model characteristics were significant enough to require an overhaul to the adaptive track gate (ATG) algorithm in the way it detects the presence of flare decoys and reacquires the target after flare separation. A series of test scenarios are used to demonstrate the impact of the new flare and ATG on IR tactics simulation.

  7. Real-time model-based vision system for object acquisition and tracking

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd

    1987-01-01

    A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
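    The predict-then-update loop described above (predict where edges should appear from the motion model, then refine the velocity from what was actually observed) can be sketched with a constant-velocity model. This is an illustrative reduction: the JPL system also propagates 3-D orientation and angular velocity, which are omitted here.

    ```python
    import numpy as np

    def predict_features(positions, velocities, dt):
        """Predict where model features should appear in the next frame
        under a constant-velocity motion model."""
        return positions + velocities * dt

    def update_velocity(prev_pos, new_pos, old_vel, dt, alpha=0.5):
        """Blend the measured displacement into the velocity estimate so the
        tracker continually refines its motion model (simple exponential
        smoothing, an illustrative stand-in for the paper's update)."""
        measured = (new_pos - prev_pos) / dt
        return alpha * measured + (1 - alpha) * old_vel

    # a feature at the origin moving at 10 units/s, frames 0.1 s apart
    pred = predict_features(np.array([0.0, 0.0]), np.array([10.0, 0.0]), 0.1)
    vel = update_velocity(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                          np.array([10.0, 0.0]), 0.1)
    ```

    At roughly 10 frames per second, as in the paper, dt = 0.1 s is the natural prediction horizon.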

  8. SU-D-201-05: On the Automatic Recognition of Patient Safety Hazards in a Radiotherapy Setup Using a Novel 3D Camera System and a Deep Learning Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhanam, A; Min, Y; Beron, P

    Purpose: Patient safety hazards such as a wrong patient/site getting treated can lead to catastrophic results. The purpose of this project is to automatically detect potential patient safety hazards during the radiotherapy setup and alert the therapist before the treatment is initiated. Methods: We employed a set of co-located and co-registered 3D cameras placed inside the treatment room. Each camera provided a point-cloud of fraxels (fragment pixels with 3D depth information). Each of the cameras was calibrated using a custom-built calibration target to provide 3D information with less than 2 mm error in the 500 mm neighborhood around the isocenter. To identify potential patient safety hazards, the treatment room components and the patient’s body needed to be identified and tracked in real-time. For feature recognition purposes, we used graph-cut based feature recognition with principal component analysis (PCA) based feature-to-object correlation to segment the objects in real-time. Changes in an object’s position were tracked using the CamShift algorithm. The 3D object information was then stored for each classified object (e.g. gantry, couch). A deep learning framework was then used to analyze all the classified objects in both 2D and 3D and to fine-tune a convolutional network for object recognition. The number of network layers was optimized to identify the tracked objects with >95% accuracy. Results: Our systematic analyses showed that the system was effectively able to recognize wrong patient setups and wrong patient accessories. The combined usage of 2D camera information (color + depth) enabled a topology-preserving approach to verify patient safety hazards in an automatic manner, even in scenarios where the depth information is only partially available. 
Conclusion: By utilizing the 3D cameras inside the treatment room and a deep learning based image classification, potential patient safety hazards can be effectively avoided.

  9. Object Tracking and Target Reacquisition Based on 3-D Range Data for Moving Vehicles

    PubMed Central

    Lee, Jehoon; Lankton, Shawn; Tannenbaum, Allen

    2013-01-01

    In this paper, we propose an approach for tracking an object of interest based on 3-D range data. We employ particle filtering and active contours to simultaneously estimate the global motion of the object and its local deformations. The proposed algorithm takes advantage of range information to deal with the challenging (but common) situation in which the tracked object disappears from the image domain entirely and reappears later. To cope with this problem, a method based on principal component analysis (PCA) of shape information is proposed. In the proposed method, if the target disappears out of frame, a shape similarity energy is used to detect target candidates that match a template shape learned online from previously observed frames. Thus, we require no a priori knowledge of the target’s shape. Experimental results show the practical applicability and robustness of the proposed algorithm in realistic tracking scenarios. PMID:21486717
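    The online-learned shape space can be sketched as follows: PCA over previously observed contours gives a low-dimensional shape basis, and a candidate's reconstruction error in that basis serves as the shape-similarity energy (low energy = likely the lost target). The exact energy in the paper may be formulated differently; this is the standard PCA-subspace version.

    ```python
    import numpy as np

    def learn_shape_space(shapes, k=2):
        """Learn a PCA shape space from previously observed contours
        (each shape a flattened landmark vector); no prior model needed."""
        mean = shapes.mean(axis=0)
        _, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
        return mean, vt[:k]  # mean shape and top-k principal directions

    def shape_similarity_energy(candidate, mean, basis):
        """Reconstruction error of a candidate in the learned shape space."""
        coeffs = basis @ (candidate - mean)
        recon = mean + basis.T @ coeffs
        return float(np.linalg.norm(candidate - recon))

    # toy shapes living in a 2-D subspace of a 6-D landmark space
    shapes = np.array([[1., 0, 0, 0, 0, 0],
                       [0., 1, 0, 0, 0, 0],
                       [-1., -1, 0, 0, 0, 0]])
    mean, basis = learn_shape_space(shapes, k=2)
    e_in = shape_similarity_energy(np.array([2., -1, 0, 0, 0, 0]), mean, basis)
    e_out = shape_similarity_energy(np.array([0., 0, 1, 0, 0, 0]), mean, basis)
    ```

    A candidate consistent with the learned shapes (e_in) scores near zero, while one outside the learned subspace (e_out) is penalized, which is what lets the tracker reacquire the target after it reappears.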

  10. Three-dimensional face pose detection and tracking using monocular videos: tool and application.

    PubMed

    Dornaika, Fadi; Raducanu, Bogdan

    2009-08-01

    Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low-quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods (initialization and tracking) to enhance the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation such as telepresence, virtual reality, and video games can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.

  11. 3D Cell Printed Tissue Analogues: A New Platform for Theranostics

    PubMed Central

    Choi, Yeong-Jin; Yi, Hee-Gyeong; Kim, Seok-Won; Cho, Dong-Woo

    2017-01-01

    Stem cell theranostics has received much attention for noninvasively monitoring and tracing transplanted therapeutic stem cells through imaging agents and imaging modalities. Despite the excellent regenerative capability of stem cells, their efficacy has been limited due to low cellular retention, low survival rate, and low engraftment after implantation. Three-dimensional (3D) cell printing provides stem cells with the similar architecture and microenvironment of the native tissue and facilitates the generation of a 3D tissue-like construct that exhibits remarkable regenerative capacity and functionality as well as enhanced cell viability. Thus, 3D cell printing can overcome the current concerns of stem cell therapy by delivering the 3D construct to the damaged site. Despite the advantages of 3D cell printing, the in vivo and in vitro tracking and monitoring of the performance of 3D cell printed tissue in a noninvasive and real-time manner have not been thoroughly studied. In this review, we explore the recent progress in 3D cell technology and its applications. Finally, we investigate their potential limitations and suggest future perspectives on 3D cell printing and stem cell theranostics. PMID:28839468

  12. Precise 3D Track Reconstruction Algorithm for the ICARUS T600 Liquid Argon Time Projection Chamber Detector

    DOE PAGES

    Antonello, M.; Baibussinov, B.; Benetti, P.; ...

    2013-01-15

    Liquid Argon Time Projection Chamber (LAr TPC) detectors offer charged particle imaging capability with remarkable spatial resolution. Precise event reconstruction procedures are critical in order to fully exploit the potential of this technology. In this paper we present a new, general approach to 3D reconstruction for the LAr TPC with a practical application to track reconstruction. The efficiency of the method is evaluated on a sample of simulated tracks. We also present the application of the method to the analysis of stopping-particle tracks collected during ICARUS T600 detector operation with the CNGS neutrino beam.

  13. 3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.

    PubMed

    Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S

    2015-10-20

    Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  14. Real-time non-rigid target tracking for ultrasound-guided clinical interventions

    NASA Astrophysics Data System (ADS)

    Zachiu, C.; Ries, M.; Ramaekers, P.; Guey, J.-L.; Moonen, C. T. W.; de Senneville, B. Denis

    2017-10-01

    Biological motion is a problem for non- or mini-invasive interventions when conducted in mobile/deformable organs due to the targeted pathology moving/deforming with the organ. This may lead to high miss rates and/or incomplete treatment of the pathology. Therefore, real-time tracking of the target anatomy during the intervention would be beneficial for such applications. Since the aforementioned interventions are often conducted under B-mode ultrasound (US) guidance, target tracking can be achieved via image registration, by comparing the acquired US images to a separate image established as positional reference. However, such US images are intrinsically altered by speckle noise, introducing incoherent gray-level intensity variations. This may prove problematic for existing intensity-based registration methods. In the current study we address US-based target tracking by employing the recently proposed EVolution registration algorithm. The method is, by construction, robust to transient gray-level intensities. Instead of directly matching image intensities, EVolution aligns similar contrast patterns in the images. Moreover, the displacement is computed by evaluating a matching criterion for image sub-regions rather than on a point-by-point basis, which typically provides more robust motion estimates. However, unlike similar previously published approaches, which assume rigid displacements in the image sub-regions, the EVolution algorithm integrates the matching criterion in a global functional, allowing the estimation of an elastic dense deformation. The approach was validated for soft tissue tracking under free-breathing conditions on the abdomen of seven healthy volunteers. Contact echography was performed on all volunteers, while three of the volunteers also underwent standoff echography. Each of the two modalities is predominantly specific to a particular type of non- or mini-invasive clinical intervention. The method demonstrated on average an accuracy of

  15. Real-time non-rigid target tracking for ultrasound-guided clinical interventions.

    PubMed

    Zachiu, C; Ries, M; Ramaekers, P; Guey, J-L; Moonen, C T W; de Senneville, B Denis

    2017-10-04

    Biological motion is a problem for non- or mini-invasive interventions when conducted in mobile/deformable organs due to the targeted pathology moving/deforming with the organ. This may lead to high miss rates and/or incomplete treatment of the pathology. Therefore, real-time tracking of the target anatomy during the intervention would be beneficial for such applications. Since the aforementioned interventions are often conducted under B-mode ultrasound (US) guidance, target tracking can be achieved via image registration, by comparing the acquired US images to a separate image established as positional reference. However, such US images are intrinsically altered by speckle noise, introducing incoherent gray-level intensity variations. This may prove problematic for existing intensity-based registration methods. In the current study we address US-based target tracking by employing the recently proposed EVolution registration algorithm. The method is, by construction, robust to transient gray-level intensities. Instead of directly matching image intensities, EVolution aligns similar contrast patterns in the images. Moreover, the displacement is computed by evaluating a matching criterion for image sub-regions rather than on a point-by-point basis, which typically provides more robust motion estimates. However, unlike similar previously published approaches, which assume rigid displacements in the image sub-regions, the EVolution algorithm integrates the matching criterion in a global functional, allowing the estimation of an elastic dense deformation. The approach was validated for soft tissue tracking under free-breathing conditions on the abdomen of seven healthy volunteers. Contact echography was performed on all volunteers, while three of the volunteers also underwent standoff echography. Each of the two modalities is predominantly specific to a particular type of non- or mini-invasive clinical intervention. The method demonstrated on average an accuracy of

  16. Real-time moving objects detection and tracking from airborne infrared camera

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2017-10-01

    Detecting and tracking moving objects in real-time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene and pursuing objects of interest. To fully exploit this potential, versatile solutions are needed, but the majority of those in the literature work only under specific assumptions about the scenario, the characteristics of the moving objects or the aircraft movements. To overcome these limitations, we propose a novel approach based on a cheap inertial navigation system (INS) mounted on the aircraft. To jointly exploit the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame: a detection stage, in which a coarse detection map is computed using a local statistic that is both fast to calculate and robust to noise and self-deletion of the targeted objects; a registration stage, in which the positions of the detected objects are coherently reported in a common reference frame by exploiting the INS data; and a tracking stage, in which steady objects are rejected, moving objects are tracked, and an estimate of their future position is computed for use in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false alarm rate, and in terms of accuracy in the estimation of position and velocity of the objects. In addition, for each frame, the detection and tracking map has been generated by the algorithm before the acquisition of the subsequent frame, proving its
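    The registration and steady-object-rejection stages can be reduced to a small sketch: detections are shifted into a common world frame using the INS-derived camera offset, and anything whose world position barely changed since the previous frame is discarded as background. The data layout (2-D points, a pure translation, a Manhattan-distance gate) is an illustrative simplification of the paper's pipeline.

    ```python
    def register_and_filter(detections, camera_offset, prev_world, min_motion=1.0):
        """Report detections in a common (world) frame using the INS-derived
        camera offset, then reject steady objects: a detection that matches
        a previous world-frame position is background, not a mover."""
        world = [(x + camera_offset[0], y + camera_offset[1]) for x, y in detections]
        movers = []
        for wx, wy in world:
            if prev_world:
                # distance to the nearest detection from the previous frame
                d = min(abs(wx - px) + abs(wy - py) for px, py in prev_world)
                if d >= min_motion:
                    movers.append((wx, wy))
            else:
                movers.append((wx, wy))
        return world, movers

    # camera drifted by (5, 5); the object at world (10, 10) is static
    world, movers = register_and_filter([(5, 5), (20, 20)], (5, 5), [(10, 10)])
    ```

    Without the INS registration, the camera's own motion of (5, 5) would make the static object appear to move, which is exactly the failure mode the paper's approach avoids.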

  17. Automated dynamic feature tracking of RSLs on the Martian surface through HiRISE super-resolution restoration and 3D reconstruction techniques

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.

    2017-09-01

    In this paper, we demonstrate novel super-resolution restoration (SRR) and 3D reconstruction tools developed within EU FP7 projects and their application to advanced dynamic feature tracking through HiRISE repeat stereo. We show an example for one of the RSL sites in Palikir Crater, where 8 repeat-pass 25 cm HiRISE images were used to generate a 5 cm RSL-free SRR image with GPT-SRR. Together with repeat 3D modelling of the same area, this allows us to overlay tracked dynamic features onto the reconstructed "original" surface, providing a much more comprehensive interpretation of the surface formation processes in 3D.

  18. 4D ultrasound speckle tracking of intra-fraction prostate motion: a phantom-based comparison with x-ray fiducial tracking using CyberKnife

    NASA Astrophysics Data System (ADS)

    O'Shea, Tuathan P.; Garcia, Leo J.; Rosser, Karen E.; Harris, Emma J.; Evans, Philip M.; Bamber, Jeffrey C.

    2014-04-01

    This study investigates the use of a mechanically-swept 3D ultrasound (3D-US) probe for soft-tissue displacement monitoring during prostate irradiation, with emphasis on quantifying the accuracy relative to CyberKnife® x-ray fiducial tracking. A US phantom implanted with x-ray fiducial markers was placed on a motion platform and translated in 3D using five real prostate motion traces acquired with the Calypso system. The motion traces were representative of all types of motion, as classified by studying Calypso data for 22 patients. The phantom was imaged using a 3D swept linear-array probe (to mimic trans-perineal imaging) and, subsequently, the kV x-ray imaging system on CyberKnife. A 3D cross-correlation block-matching algorithm was used to track speckle in the ultrasound data. Fiducial and US data were each compared with the known phantom displacement. Trans-perineal 3D-US imaging could track superior-inferior (SI) and anterior-posterior (AP) motion to ≤0.81 mm root-mean-square error (RMSE) at a 1.7 Hz volume rate. The maximum kV x-ray tracking RMSE was 0.74 mm; however, the prostate motion was sampled at a significantly lower imaging rate (mean: 0.04 Hz). Initial elevational (right-left, RL) US displacement estimates showed reduced accuracy but could be improved (RMSE <2.0 mm) using a correlation threshold in the ultrasound tracking code to remove erroneous inter-volume displacement estimates. Mechanically-swept 3D-US can track the major components of intra-fraction prostate motion accurately but exhibits some limitations. The largest US RMSE was for elevational (RL) motion; for the AP and SI axes, accuracy was sub-millimetre. It may be feasible to track prostate motion in 2D only. 3D-US also has the potential to achieve high tracking accuracy for all motion types. It would be advisable to use US in conjunction with a small (~2.0 mm) centre-of-mass displacement threshold, in which case it would be possible to take full advantage of the accuracy and high imaging
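    The cross-correlation block matching with a correlation threshold can be sketched in 2-D (the study works on 3-D volumes, and its exact window sizes and threshold are not reproduced here): a speckle block from one volume is searched for in the next by maximizing normalized cross-correlation, and the displacement is discarded when the peak correlation falls below the threshold.

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equal-sized blocks."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom else 0.0

    def track_block(prev_img, cur_img, corner, size, search, min_corr=0.7):
        """Find the displacement of a speckle block between frames; return
        None when the peak correlation is below min_corr, mirroring the
        threshold used to reject erroneous inter-volume estimates."""
        z0, x0 = corner
        block = prev_img[z0:z0 + size, x0:x0 + size]
        best, best_disp = -1.0, None
        for dz in range(-search, search + 1):
            for dx in range(-search, search + 1):
                z, x = z0 + dz, x0 + dx
                if z < 0 or x < 0 or z + size > cur_img.shape[0] or x + size > cur_img.shape[1]:
                    continue
                c = ncc(block, cur_img[z:z + size, x:x + size])
                if c > best:
                    best, best_disp = c, (dz, dx)
        return best_disp if best >= min_corr else None

    # synthetic speckle shifted by (2, 1) between frames
    rng = np.random.default_rng(1)
    big = rng.random((50, 50))
    prev_img, cur_img = big[5:45, 5:45], big[3:43, 4:44]
    disp = track_block(prev_img, cur_img, (10, 10), 8, 4)
    ```

    Applying this per block over the volume yields the dense displacement estimates that were compared against the fiducial ground truth.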

  19. Detection, 3-D positioning, and sizing of small pore defects using digital radiography and tracking

    NASA Astrophysics Data System (ADS)

    Lindgren, Erik

    2014-12-01

    This article presents an algorithm that handles the detection, positioning, and sizing of submillimeter-sized pores in welds using radiographic inspection and tracking. The ability to detect, position, and size pores that have a low contrast-to-noise ratio increases the value of the nondestructive evaluation of welds by facilitating fatigue life predictions with lower uncertainty. In this article, a multiple hypothesis tracker with an extended Kalman filter is used to track an unknown number of pore indications in a sequence of radiographs as an object is rotated. Each pore is not required to be detected in every radiograph. In addition, in the tracking step, three-dimensional (3-D) positions of pore defects are calculated. To optimize, set up, and pre-evaluate the algorithm, the article explores a design-of-experiments approach in combination with synthetic radiographs of titanium laser welds containing pore defects. The pre-evaluation on synthetic radiographs at industrially reasonable contrast-to-noise ratios indicates less than 1% false detection rates at high detection rates and less than 0.1 mm positioning error for more than 90% of the pores. A comparison between experimental results of the presented algorithm and a computerized tomography reference measurement shows qualitatively good agreement in the 3-D positions of approximately 0.1-mm diameter pores in 5-mm-thick Ti-6242.
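    The Kalman recursion at the heart of the tracker can be sketched in its simplest form. The paper uses an extended Kalman filter inside a multiple-hypothesis tracker; this linear, 1-D, static-motion-model version only shows the predict/update cycle that smooths a pore indication's position across the radiograph sequence (the noise parameters q and r are illustrative).

    ```python
    def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
        """Minimal 1-D Kalman filter: q = process noise, r = measurement
        noise, (x0, p0) = initial state estimate and variance."""
        x, p = x0, p0
        estimates = []
        for z in measurements:
            p = p + q                 # predict (static motion model)
            k = p / (p + r)           # Kalman gain
            x = x + k * (z - x)       # update with measurement z
            p = (1 - k) * p           # shrink the posterior variance
            estimates.append(x)
        return estimates

    # noisy observations of a pore indication sitting near position 1.0
    est = kalman_1d([1.1, 0.9, 1.05, 0.95, 1.0])
    ```

    The estimate converges toward the true position even though individual measurements scatter, which is what allows a pore to go undetected in some radiographs without losing its track.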

  20. Review of Real-Time 3-Dimensional Image Guided Radiation Therapy on Standard-Equipped Cancer Radiation Therapy Systems: Are We at the Tipping Point for the Era of Real-Time Radiation Therapy?

    PubMed

    Keall, Paul J; Nguyen, Doan Trang; O'Brien, Ricky; Zhang, Pengpeng; Happersett, Laura; Bertholet, Jenny; Poulsen, Per R

    2018-04-14

    To review real-time 3-dimensional (3D) image guided radiation therapy (IGRT) on standard-equipped cancer radiation therapy systems, focusing on clinically implemented solutions. Three groups in 3 continents have clinically implemented novel real-time 3D IGRT solutions on standard-equipped linear accelerators. These technologies encompass kilovoltage, combined megavoltage-kilovoltage, and combined kilovoltage-optical imaging. The cancer sites treated span pelvic and abdominal tumors for which respiratory motion is present. For each method the 3D-measured motion during treatment is reported. After treatment, dose reconstruction was used to assess the treatment quality in the presence of motion with and without real-time 3D IGRT. The geometric accuracy was quantified through phantom experiments. A literature search was conducted to identify additional real-time 3D IGRT methods that could be clinically implemented in the near future. The real-time 3D IGRT methods were successfully clinically implemented and have been used to treat more than 200 patients. Systematic target position shifts were observed using all 3 methods. Dose reconstruction demonstrated that the delivered dose is closer to the planned dose with real-time 3D IGRT than without real-time 3D IGRT. In addition, compromised target dose coverage and variable normal tissue doses were found without real-time 3D IGRT. The geometric accuracy results with real-time 3D IGRT had a mean error of <0.5 mm and a standard deviation of <1.1 mm. Numerous additional articles exist that describe real-time 3D IGRT methods using standard-equipped radiation therapy systems that could also be clinically implemented. Multiple clinical implementations of real-time 3D IGRT on standard-equipped cancer radiation therapy systems have been demonstrated. Many more approaches that could be implemented were identified. These solutions provide a pathway for the broader adoption of methods to make radiation therapy more accurate

  1. 3D Rainbow Particle Tracking Velocimetry

    NASA Astrophysics Data System (ADS)

    Aguirre-Pablo, Andres A.; Xiong, Jinhui; Idoughi, Ramzi; Aljedaani, Abdulrahman B.; Dun, Xiong; Fu, Qiang; Thoroddsen, Sigurdur T.; Heidrich, Wolfgang

    2017-11-01

    A single color camera is used to reconstruct a 3D-3C velocity flow field. The camera records the 2D (X, Y) position and the colored scattered-light intensity (encoding Z) of white polyethylene tracer particles in a flow. The main advantage of using a color camera is the capability of combining different intensity levels in each color channel to obtain more depth levels. The illumination system consists of an LCD projector placed perpendicular to the camera. Colored intensity-level gradients are projected onto the particles to encode the depth position (Z) of each particle, benefiting from the possibility of varying the color profiles and projection frequencies up to 60 Hz. Chromatic aberrations and distortions are estimated and corrected using a 3D laser-engraved calibration target. The characterization of the camera-projector system is presented, considering the size and depth position of the particles. The use of these components dramatically reduces the cost and complexity of traditional 3D-PTV systems.
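    The depth-from-color principle can be sketched with the simplest two-channel code: red ramps up with depth while green ramps down, and taking the ratio of the two scattered intensities recovers Z independently of overall particle brightness. The actual projector profiles in the experiment can be more elaborate; these linear ramps are an illustrative choice.

    ```python
    def encode_color(z, z_max=1.0):
        """Depth-to-color code projected onto the particles: red ramps up
        with depth, green ramps down (intensities in [0, 1])."""
        t = z / z_max
        return (t, 1.0 - t)  # (red, green)

    def decode_depth(red, green, z_max=1.0):
        """Recover Z from the scattered channel intensities; the ratio makes
        the estimate insensitive to overall particle brightness."""
        return z_max * red / (red + green)

    # a particle at depth 0.3, imaged at only half brightness
    r, g = encode_color(0.3)
    z = decode_depth(0.5 * r, 0.5 * g)
    ```

    Because brightness cancels in the ratio, a dim particle far from the projector decodes to the same depth as a bright one, which is what makes the single-camera scheme viable.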

  2. A tracking system to calculate patient skin dose in real-time during neurointerventional procedures using a biplane x-ray imaging system.

    PubMed

    Rana, V K; Rudin, S; Bednarek, D R

    2016-09-01

    Neurovascular interventional procedures using biplane fluoroscopic imaging systems can lead to increased risk of radiation-induced skin injuries. The authors developed a biplane dose tracking system (Biplane-DTS) to calculate the cumulative skin dose distribution from the frontal and lateral x-ray tubes and display it in real-time as a color-coded map on a 3D graphic of the patient for immediate feedback to the physician. The agreement of the calculated values with the dose measured on phantoms was evaluated. The Biplane-DTS consists of multiple components including 3D graphic models of the imaging system and patient, an interactive graphical user interface, a data acquisition module to collect geometry and exposure parameters, the computer graphics processing unit, and functions for determining which parts of the patient graphic skin surface are within the beam and for calculating dose. The dose is calculated to individual points on the patient graphic using premeasured calibration files of entrance skin dose per mAs including backscatter; corrections are applied for field area, distance from the focal spot and patient table and pad attenuation when appropriate. The agreement of the calculated patient skin dose and its spatial distribution with measured values was evaluated in 2D and 3D for simulated procedure conditions using a PMMA block phantom and an SK-150 head phantom, respectively. Dose values calculated by the Biplane-DTS were compared to the measurements made on the phantom surface with radiochromic film and a calibrated ionization chamber, which was also used to calibrate the DTS. The agreement with measurements was specifically evaluated with variation in kVp, gantry angle, and field size. The dose tracking system that was developed is able to acquire data from the two x-ray gantries on a biplane imaging system and calculate the skin dose for each exposure pulse to those vertices of a patient graphic that are determined to be in the beam. The
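    The per-vertex dose calculation described above can be sketched as a product of correction factors applied to a premeasured calibration: dose per mAs (backscatter included), scaled by inverse-square distance from the focal spot, field area, and table/pad attenuation. The function and parameter names here are illustrative, not the authors' API, and real calibration values would come from measurement.

    ```python
    def skin_dose(mAs, cal_dose_per_mAs, cal_distance, actual_distance,
                  field_factor=1.0, table_transmission=1.0):
        """Per-pulse entrance skin dose for one in-beam vertex of the
        patient graphic: calibration (dose/mAs with backscatter) corrected
        for inverse-square distance, field area, and table attenuation."""
        inverse_square = (cal_distance / actual_distance) ** 2
        return mAs * cal_dose_per_mAs * inverse_square * field_factor * table_transmission

    # doubling the focal-spot-to-skin distance quarters the dose
    d1 = skin_dose(2.0, 0.1, 100.0, 100.0)   # at the calibration distance
    d2 = skin_dose(2.0, 0.1, 100.0, 200.0)   # at twice the distance
    ```

    Summing such per-pulse values over both gantries for every in-beam vertex yields the cumulative color-coded skin-dose map displayed on the 3D patient graphic.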

  3. A tracking system to calculate patient skin dose in real-time during neurointerventional procedures using a biplane x-ray imaging system

    PubMed Central

    Rana, V. K.; Rudin, S.; Bednarek, D. R.

    2016-01-01

    Purpose: Neurovascular interventional procedures using biplane fluoroscopic imaging systems can lead to increased risk of radiation-induced skin injuries. The authors developed a biplane dose tracking system (Biplane-DTS) to calculate the cumulative skin dose distribution from the frontal and lateral x-ray tubes and display it in real-time as a color-coded map on a 3D graphic of the patient for immediate feedback to the physician. The agreement of the calculated values with the dose measured on phantoms was evaluated. Methods: The Biplane-DTS consists of multiple components including 3D graphic models of the imaging system and patient, an interactive graphical user interface, a data acquisition module to collect geometry and exposure parameters, the computer graphics processing unit, and functions for determining which parts of the patient graphic skin surface are within the beam and for calculating dose. The dose is calculated to individual points on the patient graphic using premeasured calibration files of entrance skin dose per mAs including backscatter; corrections are applied for field area, distance from the focal spot and patient table and pad attenuation when appropriate. The agreement of the calculated patient skin dose and its spatial distribution with measured values was evaluated in 2D and 3D for simulated procedure conditions using a PMMA block phantom and an SK-150 head phantom, respectively. Dose values calculated by the Biplane-DTS were compared to the measurements made on the phantom surface with radiochromic film and a calibrated ionization chamber, which was also used to calibrate the DTS. The agreement with measurements was specifically evaluated with variation in kVp, gantry angle, and field size. 
Results: The dose tracking system that was developed is able to acquire data from the two x-ray gantries on a biplane imaging system and calculate the skin dose for each exposure pulse to those vertices of a patient graphic that are determined to be

  4. A tracking system to calculate patient skin dose in real-time during neurointerventional procedures using a biplane x-ray imaging system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, V. K., E-mail: vkrana@buffalo.edu

    Purpose: Neurovascular interventional procedures using biplane fluoroscopic imaging systems can lead to increased risk of radiation-induced skin injuries. The authors developed a biplane dose tracking system (Biplane-DTS) to calculate the cumulative skin dose distribution from the frontal and lateral x-ray tubes and display it in real-time as a color-coded map on a 3D graphic of the patient for immediate feedback to the physician. The agreement of the calculated values with the dose measured on phantoms was evaluated. Methods: The Biplane-DTS consists of multiple components including 3D graphic models of the imaging system and patient, an interactive graphical user interface, a data acquisition module to collect geometry and exposure parameters, the computer graphics processing unit, and functions for determining which parts of the patient graphic skin surface are within the beam and for calculating dose. The dose is calculated to individual points on the patient graphic using premeasured calibration files of entrance skin dose per mAs including backscatter; corrections are applied for field area, distance from the focal spot, and patient table and pad attenuation when appropriate. The agreement of the calculated patient skin dose and its spatial distribution with measured values was evaluated in 2D and 3D for simulated procedure conditions using a PMMA block phantom and an SK-150 head phantom, respectively. Dose values calculated by the Biplane-DTS were compared to the measurements made on the phantom surface with radiochromic film and a calibrated ionization chamber, which was also used to calibrate the DTS. The agreement with measurements was specifically evaluated with variation in kVp, gantry angle, and field size. Results: The dose tracking system that was developed is able to acquire data from the two x-ray gantries on a biplane imaging system and calculate the skin dose for each exposure pulse to those vertices of a patient graphic that are
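
    The per-vertex dose model described in the Methods can be sketched as follows; all calibration numbers, the table-transmission factor, and the function names are illustrative assumptions, not values from the paper:

```python
# Hypothetical calibration: entrance skin dose per mAs (mGy/mAs), including
# backscatter, measured at a reference distance for each kVp setting.
CAL_DOSE_PER_MAS = {70: 0.10, 80: 0.14, 90: 0.18}   # illustrative values only
CAL_DISTANCE_CM = 60.0                              # assumed calibration distance
TABLE_TRANSMISSION = 0.75                           # assumed table + pad factor

def skin_dose_mGy(kvp, mAs, focal_to_skin_cm, through_table=False):
    """Dose contribution of one exposure pulse to one patient-graphic vertex."""
    dose = CAL_DOSE_PER_MAS[kvp] * mAs
    # Inverse-square correction from the calibration distance to the
    # actual focal-spot-to-skin distance of this vertex.
    dose *= (CAL_DISTANCE_CM / focal_to_skin_cm) ** 2
    if through_table:  # e.g. a beam entering through the table and pad
        dose *= TABLE_TRANSMISSION
    return dose

def accumulate(dose_map, vertices_in_beam, **pulse):
    """Add one pulse's dose to every vertex found to lie inside the beam."""
    for v in vertices_in_beam:
        dose_map[v] = dose_map.get(v, 0.0) + skin_dose_mGy(**pulse)
    return dose_map
```

    In the real system the calibration files, field-area corrections and beam/vertex intersection tests are considerably richer; the sketch only shows how a per-mAs calibration, an inverse-square correction and a table-attenuation factor combine for each pulse.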

  5. Ultra-high-speed 3D astigmatic particle tracking velocimetry: application to particle-laden supersonic impinging jets

    NASA Astrophysics Data System (ADS)

    Buchmann, N. A.; Cierpka, C.; Kähler, C. J.; Soria, J.

    2014-11-01

    The paper demonstrates ultra-high-speed three-component, three-dimensional (3C3D) velocity measurements of micron-sized particles suspended in a supersonic impinging jet flow. Understanding the dynamics of individual particles in such flows is important for the design of particle impactors for drug delivery or cold gas dynamic spray processing. The underexpanded jet flow is produced via a converging nozzle, and micron-sized particles (d_p = 110 μm) are introduced into the gas flow. The supersonic jet impinges onto a flat surface, and the particle impact velocity and particle impact angle are studied for a range of flow conditions and impingement distances. The imaging system consists of an ultra-high-speed digital camera (Shimadzu HPV-1) capable of recording rates of up to 1 Mfps. Astigmatism particle tracking velocimetry (APTV) is used to measure the 3D particle position (Cierpka et al., Meas Sci Technol 21(045401):13, 2010), coding the particle depth location in the 2D images with a cylindrical lens added to the high-speed imaging system. Based on the reconstructed 3D particle positions, the particle trajectories are obtained via a higher-order tracking scheme that takes advantage of the high temporal resolution to increase the robustness and accuracy of the measurement. It is shown that the particle velocity and impingement angle are affected by the gas flow in a manner depending on the nozzle pressure ratio and stand-off distance: higher pressure ratios and stand-off distances lead to higher impact velocities and larger impact angles.
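
    APTV encodes depth in the shape of the particle image: the cylindrical lens makes defocused images elliptical, and the axis ratio varies monotonically with distance from focus. A minimal sketch of the calibration lookup, with invented calibration points, could look like this:

```python
# Hypothetical APTV calibration: at known depths z (µm), the measured ratio
# of the particle image's horizontal to vertical axis. The cylindrical lens
# makes defocused images elliptical, the ratio varying monotonically with z.
CAL_Z_UM = [-100.0, -50.0, 0.0, 50.0, 100.0]
CAL_AXIS_RATIO = [0.50, 0.75, 1.00, 1.35, 1.80]   # invented calibration points

def z_from_axis_ratio(ratio):
    """Depth of a particle by linear interpolation on the calibration curve."""
    if not CAL_AXIS_RATIO[0] <= ratio <= CAL_AXIS_RATIO[-1]:
        raise ValueError("axis ratio outside calibrated range")
    for i in range(len(CAL_AXIS_RATIO) - 1):
        r0, r1 = CAL_AXIS_RATIO[i], CAL_AXIS_RATIO[i + 1]
        if r0 <= ratio <= r1:
            t = (ratio - r0) / (r1 - r0)
            return CAL_Z_UM[i] + t * (CAL_Z_UM[i + 1] - CAL_Z_UM[i])
```

    The published method fits a full calibration function rather than a piecewise-linear table; the sketch only conveys how an image-shape measurement maps to a depth coordinate.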

  6. Real-time intra-fraction-motion tracking using the treatment couch: a feasibility study

    NASA Astrophysics Data System (ADS)

    D'Souza, Warren D.; Naqvi, Shahid A.; Yu, Cedric X.

    2005-09-01

    Significant differences between planned and delivered treatments may occur due to respiration-induced tumour motion, leading to underdosing of parts of the tumour and overdosing of parts of the surrounding critical structures. Existing methods proposed to counter tumour motion include breath-holds, gating and MLC-based tracking. Breath-holds and gating techniques increase treatment time considerably, whereas MLC-based tracking is limited to two dimensions. We present an alternative solution in which a robotic couch moves in real time in response to organ motion. To demonstrate proof-of-principle, we constructed a miniature adaptive couch model consisting of two movable platforms that simulate tumour motion and couch motion, respectively. These platforms were connected via an electronic feedback loop so that the bottom platform responded to the motion of the top platform. We tested our model with a seven-field step-and-shoot delivery case in which we performed three film-based experiments: (1) static geometry, (2) phantom-only motion and (3) phantom motion with simulated couch motion. Our measurements demonstrate that the miniature couch was able to compensate for phantom motion to the extent that the dose distributions were practically indistinguishable from those in static geometry. Motivated by this initial success, we investigated a real-time couch compensation system consisting of a stereoscopic infra-red camera system interfaced to a robotic couch known as the Hexapod™, which responds in real time to any change in position detected by the cameras. Optical reflectors placed on a solid water phantom were used as surrogates for motion. We tested the effectiveness of couch-based motion compensation for fixed-field and dynamic arc delivery cases. Due to hardware limitations, we performed film-based experiments (1), (2) and (3) with the robotic couch at a phantom motion period of 16 s and a dose rate of 100 MU min-1. Analysis of film measurements
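
    The electronic feedback loop between the two platforms can be illustrated with a toy one-dimensional loop in which the couch command is the negated last-measured target displacement, applied with one sample of latency. This is a simplified stand-in for the real controller, not the paper's implementation:

```python
def compensate(target_positions, gain=1.0):
    """Couch moves opposite to the measured target displacement with one
    sample of latency; the returned residuals are what the beam 'sees'
    (target displacement plus couch offset) at each sample."""
    couch = 0.0
    residuals = []
    for x in target_positions:
        residuals.append(x + couch)   # residual motion at this sample
        couch = -gain * x             # couch command for the next sample
    return residuals
```

    For a constant displacement the residual drops to zero after one sample; a real couch adds inertia and servo lag, which is why the hardware evaluation used a slow (16 s) motion period.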

  7. Novel 3-D laparoscopic magnetic ultrasound image guidance for lesion targeting

    PubMed Central

    Sindram, David; McKillop, Iain H; Martinie, John B; Iannitti, David A

    2010-01-01

    Objectives: Accurate laparoscopic liver lesion targeting for biopsy or ablation depends on the ability to merge laparoscopic and ultrasound images with proprioceptive instrument positioning, a skill that can be acquired only through extensive experience. The aim of this study was to determine whether using magnetic positional tracking to provide three-dimensional, real-time guidance improves accuracy during laparoscopic needle placement. Methods: Magnetic sensors were embedded into a needle and laparoscopic ultrasound transducer. These sensors interrupted the magnetic fields produced by an electromagnetic field generator, allowing for real-time, 3-D guidance on a stereoscopic monitor. Targets measuring 5 mm were embedded 3–5 cm deep in agar and placed inside a laparoscopic trainer box. Two novices (a college student and an intern) and two experts (hepatopancreatobiliary surgeons) targeted the lesions out of the ultrasound plane using either traditional or 3-D guidance. Results: Each subject targeted 22 lesions, 11 with traditional and 11 with the novel guidance (n = 88). Hit rates of 32% (14/44) and 100% (44/44) were observed with the traditional approach and the 3-D magnetic guidance approach, respectively. The novices were essentially unable to hit the targets using the traditional approach, but did not miss using the novel system. The hit rate of experts improved from 59% (13/22) to 100% (22/22) (P < 0.0001). Conclusions: The novel magnetic 3-D laparoscopic ultrasound guidance results in perfect targeting of 5-mm lesions, even by surgical novices. PMID:21083797

  8. 3D Tracking Based Augmented Reality for Cultural Heritage Data Management

    NASA Astrophysics Data System (ADS)

    Battini, C.; Landi, G.

    2015-02-01

    The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time but with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations, augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about the status of cultural heritage is crucial for its interpretation and conservation, and during restoration processes. The application of digital-imaging solutions for feature extraction, image data-analysis, and three-dimensional reconstruction of ancient artworks allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware is rapidly evolving, and complex three-dimensional models can be interactively visualised and explored in applications developed for mobile devices. This paper will show how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that allows interaction with a physical object for its study and analysis, using 3D Tracking based Augmented Reality techniques.

  9. Visual tracking of da Vinci instruments for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Kuhn, E.; Bodenstedt, S.; Röhl, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2014-03-01

    Intraoperative tracking of laparoscopic instruments is a prerequisite to realize further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present an image-based markerless 3D tracking of different da Vinci instruments in near real-time without an explicit model. The method is based on different visual cues to segment the instrument tip, calculates a tip point and uses a multiple object particle filter for tracking. The accuracy and robustness is evaluated with in vivo data.
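
    A bootstrap particle filter of the kind referenced here can be sketched in one dimension. The paper's filter tracks multiple objects in image space using segmentation-based visual cues; the Gaussian likelihood, random-walk motion model and parameter values below are simplifying assumptions for illustration:

```python
import math
import random

def particle_filter_step(particles, measurement, noise=2.0, meas_std=3.0,
                         rng=random):
    """One predict-weight-resample cycle of a bootstrap particle filter,
    tracking a single 1D coordinate of an instrument tip."""
    # Predict: random-walk motion model.
    particles = [p + rng.gauss(0.0, noise) for p in particles]
    # Weight: Gaussian likelihood of the measured (segmented) tip position.
    weights = [math.exp(-0.5 * ((p - measurement) / meas_std) ** 2)
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample proportionally to weight, then estimate by the particle mean.
    particles = rng.choices(particles, weights=weights, k=len(particles))
    return particles, sum(particles) / len(particles)
```

    Repeating this step per frame, with likelihoods derived from the segmented tip region rather than a scalar measurement, gives the near real-time tracking behaviour described in the abstract.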

  10. Simultaneous 3D localization of multiple MR-visible markers in fully reconstructed MR images: proof-of-concept for subsecond position tracking.

    PubMed

    Thörmer, Gregor; Garnov, Nikita; Moche, Michael; Haase, Jürgen; Kahn, Thomas; Busse, Harald

    2012-04-01

    To determine whether a greatly reduced spatial resolution of fully reconstructed projection MR images can be used for the simultaneous 3D localization of multiple MR-visible markers and to assess the feasibility of a subsecond position tracking for clinical purposes. Miniature, inductively coupled RF coils were imaged in three orthogonal planes with a balanced steady-state free precession (SSFP) sequence and automatically localized using a two-dimensional template fitting and a subsequent three-dimensional (3D) matching of the coordinates. Precision, accuracy, speed and robustness of 3D localization were assessed for decreasing in-plane resolutions (0.6-4.7 mm). The feasibility of marker tracking was evaluated at the lowest resolution by following a robotically driven needle on a complex 3D trajectory. Average 3D precision and accuracy, sensitivity and specificity of localization ranged between 0.1 and 0.4 mm, 0.5 and 1.0 mm, 100% and 95%, and 100% and 96%, respectively. At the lowest resolution, imaging and localization took ≈350 ms and provided an accuracy of ≈1.0 mm. In the tracking experiment, the needle was clearly depicted on the oblique scan planes defined by the markers. Image-based marker localization at a greatly reduced spatial resolution is considered a feasible approach to monitor reference points or rigid instruments at subsecond update rates. Copyright © 2012 Elsevier Inc. All rights reserved.
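
    With three orthogonal planes, each of the three coordinates of a marker is observed in two images, which is what makes the "three-dimensional matching of the coordinates" possible. A minimal sketch of that fusion step follows; the plane-to-axis assignments and the tolerance are assumptions for illustration, not the authors' algorithm:

```python
def fuse_marker_3d(axial_xy, sagittal_yz, coronal_xz, tol=2.0):
    """Fuse one marker's 2D detections from three orthogonal planes into a
    3D position. Each axis is seen in two planes; the two observations are
    checked for agreement and averaged. Assumed assignment:
    axial -> (x, y), sagittal -> (y, z), coronal -> (x, z)."""
    pairs = ((axial_xy[0], coronal_xz[0]),     # two observations of x
             (axial_xy[1], sagittal_yz[0]),    # two observations of y
             (sagittal_yz[1], coronal_xz[1]))  # two observations of z
    for a, b in pairs:
        if abs(a - b) > tol:
            raise ValueError("inconsistent detections: likely a marker mismatch")
    return tuple((a + b) / 2.0 for a, b in pairs)
```

    The consistency check is what lets multiple markers be disambiguated: only combinations whose paired coordinates agree across planes are accepted as the same physical marker.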

  11. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    PubMed

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

    In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of
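
    The core of moving average tracking, as applied to the perpendicular component of target motion, can be sketched as a running mean over the last few position samples. The window length below is an assumed parameter, not the paper's value:

```python
def moving_average_positions(perp_positions, window=10):
    """Target position used for the perpendicular-to-leaf-travel direction:
    the running mean of the last `window` samples instead of the
    instantaneous position. Smoothing the command reduces beam holds at
    the cost of geometric accuracy, which matches the reported trade-off."""
    out = []
    for i in range(len(perp_positions)):
        recent = perp_positions[max(0, i - window + 1): i + 1]
        out.append(sum(recent) / len(recent))
    return out
```

    Because the running mean still follows slow trends, a baseline drift is tracked while fast perpendicular excursions, the ones that trigger beam holds, are attenuated.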

  12. Accuracy of a Real-Time, Computerized, Binocular, Three-Dimensional Trajectory-Tracking Device for Recording Functional Mandibular Movements

    PubMed Central

    Zhao, Tian; Yang, Huifang; Sui, Huaxin; Salvi, Satyajeet Sudhir; Wang, Yong; Sun, Yuchun

    2016-01-01

    Objective Developments in digital technology have permitted researchers to study mandibular movements. Here, the accuracy of a real-time, computerized, binocular, three-dimensional (3D) trajectory-tracking device for recording functional mandibular movements was evaluated. Methods An occlusal splint without the occlusal region was created based on a plaster cast of the lower dentition. The splint was rigidly connected with a target on its labial side and seated on the cast. The cast was then rigidly attached to the stage of a high-precision triaxial electronic translator, which was used to move the target-cast-stage complex. Half-circular movements (5.00-mm radius) in three planes (XOY, XOZ, YOZ) and linear movements along the x-axis were performed at 5.00 mm/s. All trajectory points were recorded with the binocular 3D trajectory-tracking device and fitted to arcs or lines, respectively, with the Imageware software. To analyze the accuracy of the trajectory-tracking device, the mean distances between the trajectory points and the fitted arcs or lines were measured, and the mean differences between the lengths of the fitted arcs’ radii and a set value (5.00 mm) were then calculated. A one-way analysis of variance was used to evaluate the spatial consistency of the recording accuracy in three different planes. Results The mean distances between the trajectory points and fitted arcs or lines were 0.076 ± 0.033 mm or 0.089 ± 0.014 mm. The mean difference between the lengths of the fitted arcs’ radii and the set value (5.00 mm) was 0.025 ± 0.071 mm. A one-way ANOVA showed that the recording errors in three different planes were not statistically significant. Conclusion These results suggest that the device can record certain movements at 5.00 mm/s, which is similar to the speed of functional mandibular movements. In addition, the recordings had an error of <0.1 mm and good spatial consistency. Thus, the device meets some of the requirements necessary for
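
    The accuracy analysis fits recorded trajectory points to arcs and measures residuals. A standard algebraic (Kasa) least-squares circle fit, used here as a stand-in for the Imageware fitting performed in the study, can be written as:

```python
def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: solves the normal
    equations of x^2 + y^2 + D*x + E*y + F = 0 for centre and radius."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sz = sum(x * x + y * y for x, y in points)
    sxz = sum(x * (x * x + y * y) for x, y in points)
    syz = sum(y * (x * x + y * y) for x, y in points)
    # Normal equations A [D E F]^T = b, solved by Gaussian elimination.
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(n)]]
    b = [-sxz, -syz, -sz]
    for i in range(3):                       # forward elimination with pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    sol = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                      # back substitution
        sol[i] = (b[i] - sum(A[i][c] * sol[c] for c in range(i + 1, 3))) / A[i][i]
    D, E, F = sol
    cx, cy = -D / 2.0, -E / 2.0
    r = (cx * cx + cy * cy - F) ** 0.5
    return (cx, cy), r
```

    The reported accuracy figures then follow as the mean of |distance(point, centre) - r| over the trajectory points, and the radius error as the difference between the fitted radius and the 5.00-mm set value.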

  13. Real-time classification of vehicles by type within infrared imagery

    NASA Astrophysics Data System (ADS)

    Kundegorski, Mikolaj E.; Akçay, Samet; Payen de La Garanderie, Grégoire; Breckon, Toby P.

    2016-10-01

    Real-time classification of vehicles into sub-category types poses a significant challenge within infra-red imagery due to the high levels of intra-class variation in thermal vehicle signatures caused by aspects of design, current operating duration and ambient thermal conditions. Despite these challenges, infra-red sensing offers significant generalized target object detection advantages in terms of all-weather operation and invariance to visual camouflage techniques. This work investigates the accuracy of a number of real-time object classification approaches for this task within the wider context of an existing initial object detection and tracking framework. Specifically we evaluate the use of traditional feature-driven bag of visual words and histogram of oriented gradient classification approaches against modern convolutional neural network architectures. Furthermore, we use classical photogrammetry, within the context of current target detection and classification techniques, as a means of approximating 3D target position within the scene based on this vehicle type classification. Based on photogrammetric estimation of target position, we then illustrate the use of regular Kalman filter based tracking operating on actual 3D vehicle trajectories. Results are presented using a conventional thermal-band infra-red (IR) sensor arrangement where targets are tracked over a range of evaluation scenarios.
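
    The "regular Kalman filter based tracking operating on actual 3D vehicle trajectories" can be sketched per axis with a constant-velocity model; the process and measurement noise values below are illustrative assumptions, not the paper's tuning:

```python
def kalman_cv(measurements, dt=1.0, q=0.1, r=4.0):
    """1D constant-velocity Kalman filter: state (position, velocity),
    position-only measurements. Run once per axis for a 3D trajectory
    obtained from photogrammetric position estimates."""
    x, v = measurements[0], 0.0
    P = [[r, 0.0], [0.0, 1.0]]                 # assumed initial covariance
    estimates = []
    for z in measurements[1:]:
        # Predict with the constant-velocity model x' = x + v*dt.
        x, v = x + dt * v, v
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + q
        # Update with the position measurement z (H = [1, 0]).
        s = p00 + r
        k0, k1 = p00 / s, p10 / s
        innov = z - x
        x, v = x + k0 * innov, v + k1 * innov
        P = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
        estimates.append(x)
    return estimates
```

    Because the motion model matches a steadily moving vehicle, the filtered position converges to the measurements while smoothing the photogrammetric noise in each axis.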

  14. Gesture Interaction Browser-Based 3D Molecular Viewer.

    PubMed

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2016-01-01

    The paper presents an open source system that allows the user to interact with a 3D molecular viewer using associated hand gestures for rotating, scaling and panning the rendered model. The novelty of this approach is that the entire application is browser-based and doesn't require installation of third party plug-ins or additional software components in order to visualize the supported chemical file formats. This kind of solution is suitable for instruction of users in less IT oriented environments, like medicine or chemistry. For rendering various molecular geometries our team used GLmol (a molecular viewer written in JavaScript). The interaction with the 3D models is made with Leap Motion controller that allows real-time tracking of the user's hand gestures. The first results confirmed that the resulting application leads to a better way of understanding various types of translational bioinformatics related problems in both biomedical research and education.

  15. [Real-time three-dimensional (4D) ultrasound-guided prostatic biopsies on a phantom. Comparative study versus 2D guidance].

    PubMed

    Long, Jean-Alexandre; Daanen, Vincent; Moreau-Gaudry, Alexandre; Troccaz, Jocelyne; Rambeaud, Jean-Jacques; Descotes, Jean-Luc

    2007-11-01

    The objective of this study was to determine the added value of real-time three-dimensional (4D) ultrasound guidance of prostatic biopsies on a prostate phantom, in terms of precision of guidance and distribution. A prostate phantom was constructed. A real-time 3D ultrasonograph connected to a transrectal 5.9 MHz volumetric transducer was used. Fourteen operators performed 336 biopsies with 2D guidance and then with 4D guidance, according to a 12-biopsy protocol. Biopsy tracts were modelled by segmentation in a 3D ultrasound volume. Specific software allowed visualization of biopsy tracts in the reference prostate and evaluated the zone biopsied. A comparative study was performed to determine the added value of 4D guidance compared to 2D guidance by evaluating the precision of entry points and target points. The distribution was evaluated by measuring the volume investigated and by a redundancy ratio of the biopsy points. The precision of the biopsy protocol was significantly improved by 4D guidance (p = 0.037). No increase of the biopsy volume and no improvement of the distribution of biopsies were observed with 4D compared to 2D guidance. The real-time 3D ultrasound-guided prostate biopsy technique on a phantom model appears to improve the precision and reproducibility of a biopsy protocol, but the distribution of biopsies does not appear to be improved.

  16. Real Time Target Tracking in a Phantom Using Ultrasonic Imaging

    NASA Astrophysics Data System (ADS)

    Xiao, X.; Corner, G.; Huang, Z.

    In this paper we present a real-time ultrasound image guidance method suitable for tracking the motion of tumors. A 2D ultrasound based motion tracking system was evaluated. A robot was used to control the focused ultrasound and position it at the target that has been segmented from a real-time ultrasound video. Tracking accuracy and precision were investigated using a lesion mimicking phantom. Experiments have been conducted and results show sufficient efficiency of the image guidance algorithm. This work could be developed as the foundation for combining the real time ultrasound imaging tracking and MRI thermometry monitoring non-invasive surgery.

  17. Investigation of visual fatigue/discomfort generated by S3D video using eye-tracking data

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2013-03-01

    Stereoscopic 3D is undoubtedly one of the most attractive forms of content. It has been deployed intensively during the last decade through movies and games. Among the advantages of 3D are the strong involvement of viewers and the increased feeling of presence. However, the health effects that can be generated by 3D are still not precisely known. For example, visual fatigue and visual discomfort are among the symptoms that an observer may feel. In this paper, we propose an investigation of visual fatigue generated by 3D video watching, with the help of eye-tracking. On one side, a questionnaire covering the most frequent symptoms linked with 3D is used to measure their variation over time. On the other side, visual characteristics such as pupil diameter, eye movements (fixations and saccades) and eye blinking are explored using data provided by the eye-tracker. The statistical analysis showed an important link of blinking duration and number of saccades with visual fatigue, while pupil diameter and fixations are not precise enough and are highly dependent on content. Finally, time and content play an important role in the growth of visual fatigue due to 3D watching.

  18. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    NASA Astrophysics Data System (ADS)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.
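
    Auto-focusing for a phase contrast z-stack typically scores each slice with a sharpness measure and picks the maximum. The actual Cell-IQ procedure is not detailed here; a common Laplacian-energy criterion serves as an illustration:

```python
def focus_score(img):
    """Sum of squared discrete Laplacian responses over the image interior:
    a standard sharpness measure (not necessarily the one used by Cell-IQ).
    `img` is a 2D list of grey values."""
    h, w = len(img), len(img[0])
    s = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            s += lap * lap
    return s

def best_focus(z_stack):
    """Index of the sharpest slice in a z-stack of 2D images."""
    return max(range(len(z_stack)), key=lambda i: focus_score(z_stack[i]))
```

    Running such a measure per cell region, rather than per whole frame, is what allows 3D voxels to be assigned a depth at high spatiotemporal resolution.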

  19. 3D gaze tracking method using Purkinje images on eye optical model and pupil

    NASA Astrophysics Data System (ADS)

    Lee, Ji Woo; Cho, Chul Woo; Shin, Kwang Yong; Lee, Eui Chul; Park, Kang Ryoung

    2012-05-01

    Gaze tracking is to detect the position a user is looking at. Most research on gaze estimation has focused on calculating the X, Y gaze position on a 2D plane. However, as the importance of stereoscopic displays and 3D applications has increased greatly, research into 3D gaze estimation of not only the X, Y gaze position, but also the Z gaze position has gained attention for the development of next-generation interfaces. In this paper, we propose a new method for estimating the 3D gaze position based on the illuminative reflections (Purkinje images) on the surface of the cornea and lens by considering the 3D optical structure of the human eye model. This research is novel in the following four ways compared with previous work. First, we theoretically analyze the generated models of Purkinje images based on the 3D human eye model for 3D gaze estimation. Second, the relative positions of the first and fourth Purkinje images to the pupil center, inter-distance between these two Purkinje images, and pupil size are used as the features for calculating the Z gaze position. The pupil size is used on the basis of the fact that pupil accommodation happens according to the gaze positions in the Z direction. Third, with these features as inputs, the final Z gaze position is calculated using a multi-layered perceptron (MLP). Fourth, the X, Y gaze position on the 2D plane is calculated by the position of the pupil center based on a geometric transform considering the calculated Z gaze position. Experimental results showed that the average errors of the 3D gaze estimation were about 0.96° (0.48 cm) on the X-axis, 1.60° (0.77 cm) on the Y-axis, and 4.59 cm along the Z-axis in 3D space.

  20. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system.

    PubMed

    Poulin, Eric; Racine, Emmanuel; Binnekamp, Dirk; Beaulieu, Luc

    2015-03-01

    In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and the robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second generation Aurora(®) Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system, with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Reconstruction time for one catheter was 10 s, leading to a total reconstruction time under 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheter and applicator.
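
    The reported tip and trajectory errors are mean Euclidean distances between paired reconstructed and reference points; as a small sketch:

```python
def mean_3d_error(recon, reference):
    """Mean Euclidean distance between paired reconstructed and reference
    catheter points, i.e. how a per-catheter 3D distance error can be
    summarised (a generic metric, matching the abstract's description)."""
    assert len(recon) == len(reference)
    total = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(recon, reference):
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2) ** 0.5
    return total / len(recon)
```

    Applying the same metric between two successive EM reconstructions of the same catheter gives the repeatability figure quoted above.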

  2. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    By real-time visual feedback of a 3D scatter diagram of pulsatile tissue-motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes fitted with a compact tilt-sensor to measure the probe orientation. The real-time 3D visualization was realized by developing an extended version of the PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissues has been continuously generated and visualized as a projection image, alongside the ultrasonic movie of the current section, at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the circle of Willis and the cerebellar arteries, in whose blood flow pediatricians have great interest because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since the pulsatile tissue-motion is due to local blood flow, it can be concluded that the system developed in this work is very useful for assisting freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  3. 3D Imaging and Automated Ice Bottom Tracking of Canadian Arctic Archipelago Ice Sounding Data

    NASA Astrophysics Data System (ADS)

    Paden, J. D.; Xu, M.; Sprick, J.; Athinarapu, S.; Crandall, D.; Burgess, D. O.; Sharp, M. J.; Fox, G. C.; Leuschen, C.; Stumpf, T. M.

    2016-12-01

    The basal topography of the Canadian Arctic Archipelago ice caps is unknown for a number of the glaciers which drain the ice caps. The basal topography is needed for calculating present sea level contribution using the surface mass balance and discharge method and to understand future sea level contributions using ice flow model studies. During the NASA Operation IceBridge 2014 arctic campaign, the Multichannel Coherent Radar Depth Sounder (MCoRDS) used a three transmit beam setting (left beam, nadir beam, right beam) to illuminate a wide swath across the glacier in a single pass during three flights over the archipelago. In post processing we have used a combination of 3D imaging methods to produce images for each of the three beams which are then merged to produce a single digitally formed wide swath beam. Because of the high volume of data produced by 3D imaging, manual tracking of the ice bottom is impractical on a large scale. To solve this problem, we propose an automated technique for extracting ice bottom surfaces by viewing the task as an inference problem on a probabilistic graphical model. We first estimate layer boundaries to generate a seed surface, and then incorporate additional sources of evidence, such as ice masks, surface digital elevation models, and feedback from human users, to refine the surface in a discrete energy minimization formulation. We investigate the performance of the imaging and tracking algorithms using flight crossovers, since crossing lines should produce consistent maps of the terrain beneath the ice surface, and compare manually tracked "ground truth" to the automated tracking algorithms. We found the swath width at the nominal flight altitude of 1000 m to be approximately 3 km. Since many of the glaciers in the archipelago are narrower than this, the radar imaging, in these instances, was able to measure the full glacier cavity in a single pass.
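    Casting surface extraction as discrete energy minimization can be illustrated with a one-surface dynamic-programming tracker over a cost image (a much-simplified stand-in for the paper's probabilistic graphical model, which additionally fuses ice masks, elevation models, and user feedback):

```python
import numpy as np

def track_surface(cost, smooth=1.0):
    """Pick one row (depth bin) per column of a cost image so that the
    summed cost plus a smoothness penalty between adjacent columns is
    minimized -- classic dynamic programming (Viterbi) layer tracking."""
    n_rows, n_cols = cost.shape
    rows = np.arange(n_rows)
    acc = cost[:, 0].astype(float).copy()           # best cost ending at each row
    back = np.zeros((n_rows, n_cols), dtype=int)    # backpointers
    for c in range(1, n_cols):
        # trans[r_prev, r_cur]: cost so far plus jump penalty
        trans = acc[:, None] + smooth * np.abs(rows[:, None] - rows[None, :])
        back[:, c] = np.argmin(trans, axis=0)
        acc = trans[back[:, c], rows] + cost[:, c]
    path = np.zeros(n_cols, dtype=int)
    path[-1] = int(np.argmin(acc))
    for c in range(n_cols - 1, 0, -1):              # backtrack
        path[c - 1] = back[path[c], c]
    return path

cost = np.full((3, 4), 10.0)
cost[1, :] = 0.0          # a cheap "surface" along row 1
path = track_surface(cost)
```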

  4. Quantitative 3D evolution of colloidal nanoparticle oxidation in solution

    DOE PAGES

    Sun, Yugang; Zuo, Xiaobing; Sankaranarayanan, Subramanian K. R. S.; ...

    2017-04-21

    Real-time tracking of the three-dimensional (3D) evolution of colloidal nanoparticles in solution is essential for understanding the complex mechanisms involved in nanoparticle growth and transformation. We simultaneously use time-resolved small-angle and wide-angle x-ray scattering to monitor oxidation of highly uniform colloidal iron nanoparticles, enabling the reconstruction of intermediate 3D morphologies of the nanoparticles with a spatial resolution of ~5 Å. The in-situ probing combined with large-scale reactive molecular dynamics simulations reveals the transformational details from the solid metal nanoparticles to hollow metal oxide nanoshells via the nanoscale Kirkendall process, for example, coalescence of voids upon their growth, reversing of mass diffusion direction depending on crystallinity, and so forth. In conclusion, our results highlight the complex interplay between defect chemistry and defect dynamics in determining nanoparticle transformation and formation.

  6. Automatic C-arm pose estimation via 2D/3D hybrid registration of a radiographic fiducial

    NASA Astrophysics Data System (ADS)

    Moult, E.; Burdette, E. C.; Song, D. Y.; Abolmaesumi, P.; Fichtinger, G.; Fallavollita, P.

    2011-03-01

    Motivation: In prostate brachytherapy, real-time dosimetry would be ideal to allow for rapid evaluation of the implant quality intra-operatively. However, such a mechanism requires an imaging system that is both real-time and which provides, via multiple C-arm fluoroscopy images, clear information describing the three-dimensional position of the seeds deposited within the prostate. Thus, accurate tracking of the C-arm poses proves to be of critical importance to the process. Methodology: We compute the pose of the C-arm relative to a stationary radiographic fiducial of known geometry by employing a hybrid registration framework. Firstly, by means of an ellipse segmentation algorithm and a 2D/3D feature based registration, we exploit known FTRAC geometry to recover an initial estimate of the C-arm pose. Using this estimate, we then initialize the intensity-based registration which serves to recover a refined and accurate estimation of the C-arm pose. Results: Ground-truth pose was established for each C-arm image through a published and clinically tested segmentation-based method. Using 169 clinical C-arm images and a +/-10° and +/-10 mm random perturbation of the ground-truth pose, the average rotation and translation errors were 0.68° (std = 0.06°) and 0.64 mm (std = 0.24 mm). Conclusion: Fully automated C-arm pose estimation using a 2D/3D hybrid registration scheme was found to be clinically robust based on human patient data.
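    Evaluating a recovered pose against ground truth, as done for the 0.68° / 0.64 mm figures above, typically reduces to a relative-rotation angle and a translation distance; a generic sketch (conventions and numbers are assumptions, not the paper's code):

```python
import numpy as np

def pose_errors(R_gt, t_gt, R_est, t_est):
    """Rotation error (degrees) as the angle of the relative rotation
    R_gt^T @ R_est, and translation error as a Euclidean distance."""
    R_rel = R_gt.T @ R_est
    cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_a)), float(np.linalg.norm(t_est - t_gt))

# Hypothetical example: estimate off by 1 degree about z and 0.6 mm in x.
a = np.radians(1.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
rot_err, trans_err = pose_errors(np.eye(3), np.zeros(3), Rz, np.array([0.6, 0.0, 0.0]))
```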

  7. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking

    PubMed Central

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-01-01

    This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
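    Once the virtual left and right views are captured with a known effective baseline, depth follows from the standard rectified-stereo relation Z = fB/d; a minimal sketch with hypothetical numbers:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified-stereo depth: Z = f * B / d. The catadioptric system in
    the abstract effectively supplies the left/right views of such a pair
    by switching mirror directions frame by frame (values here are
    illustrative, not the paper's calibration)."""
    return f_px * baseline_m / disparity_px

# 800 px focal length, 10 cm baseline, 16 px disparity -> 5 m depth.
z = depth_from_disparity(f_px=800.0, baseline_m=0.1, disparity_px=16.0)
```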

  8. Real-Time 3D Fluoroscopy-Guided Large Core Needle Biopsy of Renal Masses: A Critical Early Evaluation According to the IDEAL Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroeze, Stephanie G. C.; Huisman, Merel; Verkooijen, Helena M.

    2012-06-15

    Introduction: Three-dimensional (3D) real-time fluoroscopy cone beam CT is a promising new technique for image-guided biopsy of solid tumors. We evaluated the technical feasibility, diagnostic accuracy, and complications of this technique for guidance of large-core needle biopsy in patients with suspicious renal masses. Methods: Thirteen patients with 13 suspicious renal masses underwent large-core needle biopsy under 3D real-time fluoroscopy cone beam CT guidance. Imaging acquisition and subsequent 3D reconstruction was done by a mobile flat-panel detector (FD) C-arm system to plan the needle path. Large-core needle biopsies were taken by the interventional radiologist. Technical success, accuracy, and safety were evaluated according to the Innovation, Development, Exploration, Assessment, Long-term study (IDEAL) recommendations. Results: Median tumor size was 2.6 (range, 1.0-14.0) cm. In ten (77%) patients, the histological diagnosis corresponded to the imaging findings: five were malignancies, five benign lesions. Technical feasibility was 77% (10/13); in three patients biopsy results were inconclusive. The lesion size of these three patients was <2.5 cm. One patient developed a minor complication. Median follow-up was 16.0 (range, 6.4-19.8) months. Conclusions: 3D real-time fluoroscopy cone beam CT-guided biopsy of renal masses is feasible and safe. However, these first results suggest that diagnostic accuracy may be limited in patients with renal masses <2.5 cm.

  9. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Abbott, W. W.; Faisal, A. A.

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as a control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s⁻¹, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark: the control of the video arcade game 'Pong'.
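    A rough way to reason about figures like 43 bit/s is bits-per-selection divided by selection time; the sketch below is illustrative only and is not the authors' actual throughput analysis:

```python
import math

def throughput_bits_per_s(n_targets, selection_time_s, accuracy=1.0):
    """Crude information-throughput estimate for a pointing interface:
    log2(N) bits per selection among N distinguishable targets, scaled
    by selection accuracy, divided by mean selection time. All numbers
    fed to this function here are hypothetical."""
    return accuracy * math.log2(n_targets) / selection_time_s

# 256 distinguishable targets, 200 ms per selection -> 40 bit/s.
rate = throughput_bits_per_s(n_targets=256, selection_time_s=0.2)
```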

  10. Real-time visual target tracking: two implementations of velocity-based smooth pursuit

    NASA Astrophysics Data System (ADS)

    Etienne-Cummings, Ralph; Longo, Paul; Van der Spiegel, Jan; Mueller, Paul

    1995-06-01

    Two systems for velocity-based visual target tracking are presented. The first two computational layers of both implementations are composed of VLSI photoreceptors (logarithmic compression) and edge detection (difference-of-Gaussians) arrays that mimic the outer-plexiform layer of mammalian retinas. The subsequent processing layers, which measure the target velocity and realize smooth pursuit tracking, are implemented in software and at the focal plane in the two versions, respectively. One implementation uses a hybrid of a PC and a silicon retina (39 X 38 pixels) operating at 333 frames/second. The software implementation of a real-time optical flow measurement algorithm is used to determine the target velocity, and a closed-loop control system zeroes the relative velocity of the target and retina. The second implementation is a single VLSI chip, which contains a linear array of photoreceptors, edge detectors and motion detectors at the focal plane. The closed-loop control system is also included on chip. This chip realizes all the computational properties of the hybrid system. The effects of background motion, target occlusion, and disappearance are studied as a function of retinal size and spatial distribution of the measured motion vectors (i.e. foveal/peripheral and diverging/converging measurement schemes). The hybrid system, which tested successfully, tracks targets moving as fast as 3 m/s at 1.3 meters from the camera and can compensate for arbitrary external movements of its mounting platform. The single-chip version, whose circuits tested successfully, can handle targets moving at 10 m/s.
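    The closed-loop idea of zeroing the relative velocity of target and retina can be sketched as a simple proportional velocity controller (the gain is a hypothetical tuning constant, not a value from the paper):

```python
def pursuit_step(camera_v, target_v, gain=0.5):
    """One step of velocity-based smooth pursuit: adjust the camera
    velocity in proportion to the measured relative velocity, so that
    (target_v - camera_v) is driven toward zero."""
    return camera_v + gain * (target_v - camera_v)

v = 0.0
for _ in range(20):                 # iterate the closed loop
    v = pursuit_step(v, target_v=3.0)  # target moving at 3 m/s
# v converges toward the target velocity
```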

  11. From Wheatstone to Cameron and beyond: overview in 3-D and 4-D imaging technology

    NASA Astrophysics Data System (ADS)

    Gilbreath, G. Charmaine

    2012-02-01

    This paper reviews three-dimensional (3-D) and four-dimensional (4-D) imaging technology, from Wheatstone through today, with some prognostications for near future applications. This field is rich in variety, subject specialty, and applications. A major trend, multi-view stereoscopy, is moving the field forward to real-time wide-angle 3-D reconstruction as breakthroughs in parallel processing and multi-processor computers enable very fast processing. Real-time holography meets 4-D imaging reconstruction at the goal of achieving real-time, interactive, 3-D imaging. Applications to telesurgery and telemedicine as well as to the needs of the defense and intelligence communities are also discussed.

  12. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  13. Technological advances in real-time tracking of cell death

    PubMed Central

    Skommer, Joanna; Darzynkiewicz, Zbigniew; Wlodkowic, Donald

    2010-01-01

    A cell population can be viewed as a quantum system which, like Schrödinger’s cat, exists as a combination of survival- and death-allowing states. Tracking and understanding cell-to-cell variability in processes of high spatio-temporal complexity such as cell death is at the core of current systems biology approaches. As probabilistic modeling tools attempt to impute information inaccessible by current experimental approaches, advances in technologies for single-cell imaging and omics (proteomics, genomics, metabolomics) should go hand in hand with the computational efforts. Over the last few years we have made exciting technological advances that allow studies of cell death dynamically, in real time and with unprecedented accuracy. These approaches are based on innovative fluorescent assays and recombinant proteins, bioelectrical properties of cells, and more recently also on state-of-the-art optical spectroscopy. Here, we review the current status of the most innovative analytical technologies for dynamic tracking of cell death, and address the interdisciplinary promises and future challenges of these methods. PMID:20519963

  14. SU-C-201-04: Noise and Temporal Resolution in a Near Real-Time 3D Dosimeter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rilling, M; Centre de recherche sur le cancer, Universite Laval, Quebec City, QC; Radiation oncology department, CHU de Quebec, Quebec City, QC

    Purpose: To characterize the performance of a real-time three-dimensional scintillation dosimeter in terms of signal-to-noise ratio (SNR) and temporal resolution of 3D dose measurements. This study quantifies its efficiency in measuring low dose levels characteristic of EBRT dynamic treatments, and in reproducing field profiles for varying multileaf collimator (MLC) speeds. Methods: The dosimeter prototype uses a plenoptic camera to acquire continuous images of the light field emitted by a 10×10×10 cm³ plastic scintillator. Using EPID acquisitions, ray tracing-based iterative tomographic algorithms allow millimeter-sized reconstruction of relative 3D dose distributions. Measurements were taken at 6 MV, 400 MU/min with the scintillator centered at the isocenter, first receiving doses from 1.4 to 30.6 cGy. Dynamic measurements were then performed by closing half of the MLCs at speeds of 0.67 to 2.5 cm/s, at 0° and 90° collimator angles. A reference static half-field was obtained for measured profile comparison. Results: The SNR steadily increases as a function of dose and reaches a clinically adequate plateau of 80 at 10 cGy. Below this, the decrease in light collected and increase in pixel noise diminishes the SNR; nonetheless, the EPID acquisitions and the voxel correlation employed in the reconstruction algorithms result in suitable SNR values (>75) even at low doses. For dynamic measurements at varying MLC speeds, central relative dose profiles are characterized by gradients at %D₅₀ of 8.48 to 22.7 %/mm. These values converge towards the 32.8 %/mm gradient measured for the static reference field profile, but are limited by the dosimeter’s current acquisition rate of 1 Hz. Conclusion: This study emphasizes the efficiency of the 3D dose distribution reconstructions, while identifying limits of the current prototype’s temporal resolution in terms of dynamic EBRT parameters. This work paves the way for providing an optimized, second

  15. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation.

    PubMed

    Arujuna, Aruna V; Housden, R James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'Neill, Mark D; Razavi, Reza; Rhode, Kawal S

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical, to determine function and validate accuracy; and 2) in the clinical setting, to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied, with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures.

  16. Real-time viability and apoptosis kinetic detection method of 3D multicellular tumor spheroids using the Celigo Image Cytometer.

    PubMed

    Kessel, Sarah; Cribbes, Scott; Bonasu, Surekha; Rice, William; Qiu, Jean; Chan, Leo Li-Ying

    2017-09-01

    The development of three-dimensional (3D) multicellular tumor spheroid models for cancer drug discovery research has increased in recent years. The use of 3D tumor spheroid models may be more representative of the complex in vivo tumor microenvironments than two-dimensional (2D) assays. Currently, viability of 3D multicellular tumor spheroids has been commonly measured on standard plate-readers using metabolic reagents such as CellTiter-Glo® for end point analysis. Alternatively, high content image cytometers have been used to measure drug effects on spheroid size and viability. Previously, we demonstrated a novel end point drug screening method for 3D multicellular tumor spheroids using the Celigo Image Cytometer. To better characterize cancer drug effects, it is important to also measure the kinetic cytotoxic and apoptotic effects on 3D multicellular tumor spheroids. In this work, we demonstrate the use of PI and caspase 3/7 stains to measure viability and apoptosis for 3D multicellular tumor spheroids in real time. The method was first validated by staining different types of tumor spheroids with PI and caspase 3/7 and monitoring the fluorescent intensities for 16 and 21 days. Next, PI-stained and nonstained control tumor spheroids were digested into single cell suspension to directly measure viability in a 2D assay to determine the potential toxicity of PI. Finally, extensive data analysis was performed on correlating the time-dependent PI and caspase 3/7 fluorescent intensities to the spheroid size and necrotic core formation to determine an optimal starting time point for cancer drug testing. The ability to measure real-time viability and apoptosis is highly important for developing a proper 3D model for screening tumor spheroids, which can allow researchers to determine time-dependent drug effects that usually are not captured by end point assays. This would improve the current tumor spheroid analysis method to potentially better

  17. Fast human pose estimation using 3D Zernike descriptors

    NASA Astrophysics Data System (ADS)

    Berjón, Daniel; Morán, Francisco

    2012-03-01

    Markerless video-based human pose estimation algorithms face a high-dimensional problem that is frequently broken down into several lower-dimensional ones by estimating the pose of each limb separately. However, in order to do so they need to reliably locate the torso, for which they typically rely on time coherence and tracking algorithms. Loss of tracking usually results in catastrophic failure of the process, requiring human intervention and thus precluding their usage in real-time applications. We propose a very fast rough pose estimation scheme based on global shape descriptors built on 3D Zernike moments. Using an articulated model that we configure in many poses, a large database of descriptor/pose pairs can be computed off-line. Thus, the only steps that must be done on-line are the extraction of the descriptors for each input volume and a search against the database to get the most likely poses. While the result of such a process is not a fine pose estimation, it can be useful to help more sophisticated algorithms regain track or make more educated guesses when creating new particles in particle-filter-based tracking schemes. We have achieved a performance of about ten fps on a single computer using a database of about one million entries.
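    The on-line stage described above is essentially a nearest-neighbour lookup in the precomputed descriptor/pose database; a brute-force sketch (names and data are illustrative, and a real system over ~10⁶ entries would use an indexed search):

```python
import numpy as np

def nearest_poses(query_desc, db_descs, db_poses, k=3):
    """Return the k poses whose precomputed descriptors are closest (in
    Euclidean distance) to the descriptor of the input volume."""
    d = np.linalg.norm(db_descs - query_desc, axis=1)
    idx = np.argsort(d)[:k]
    return [db_poses[i] for i in idx]

# Hypothetical 2-D descriptors standing in for Zernike-moment vectors.
db_descs = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
db_poses = ["pose_a", "pose_b", "pose_c"]
best = nearest_poses(np.array([0.9, 0.9]), db_descs, db_poses, k=2)
```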

  18. Non-iterative double-frame 2D/3D particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Fuchs, Thomas; Hain, Rainer; Kähler, Christian J.

    2017-09-01

    In recent years, the detection of individual particle images and their tracking over time to determine the local flow velocity has become quite popular for planar and volumetric measurements. Particle tracking velocimetry has strong advantages compared to the statistical analysis of an ensemble of particle images by means of cross-correlation approaches, such as particle image velocimetry. Tracking individual particles does not suffer from spatial averaging and therefore bias errors can be avoided. Furthermore, the spatial resolution can be increased up to the sub-pixel level for mean fields. Maximizing the spatial resolution of instantaneous measurements requires high seeding concentrations. However, it is still challenging to track particles at high seeding concentrations if no time series is available. Tracking methods used under these conditions are typically very complex iterative algorithms, which require expert knowledge due to the large number of adjustable parameters. To overcome these drawbacks, a new non-iterative tracking approach is introduced in this letter, which automatically analyzes the motion of the neighboring particles without requiring the user to specify any parameters except for the displacement limits. This makes the algorithm very user friendly and also enables inexperienced users to implement and apply particle tracking. In addition, the algorithm enables measurements of high speed flows using standard double-pulse equipment and estimates the flow velocity reliably even at large particle image densities.
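    A drastically simplified stand-in for such parameter-light tracking is nearest-neighbour matching constrained only by a displacement limit (the letter's actual method additionally exploits the motion of neighbouring particles to disambiguate matches):

```python
import numpy as np

def match_particles(p1, p2, max_disp):
    """Match each particle in frame 1 to its nearest neighbour in frame 2,
    rejecting matches beyond max_disp. Returns (i, j, displacement)
    triples; deliberately naive -- it does not resolve conflicting
    assignments the way a neighbour-motion-based tracker would."""
    matches = []
    for i, p in enumerate(p1):
        d = np.linalg.norm(p2 - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            matches.append((i, j, p2[j] - p))
    return matches

# Two particles, both displaced by (1, 0) between frames (hypothetical).
p1 = np.array([[0.0, 0.0], [10.0, 0.0]])
p2 = np.array([[1.0, 0.0], [11.0, 0.0]])
matches = match_particles(p1, p2, max_disp=2.0)
```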

  19. Pulsed cavitational ultrasound for non-invasive chordal cutting guided by real-time 3D echocardiography.

    PubMed

    Villemain, Olivier; Kwiecinski, Wojciech; Bel, Alain; Robin, Justine; Bruneval, Patrick; Arnal, Bastien; Tanter, Mickael; Pernot, Mathieu; Messas, Emmanuel

    2016-10-01

    Surgical section of basal chordae has been shown to be effective in reducing ischaemic mitral regurgitation (IMR). Achieving this section by non-invasive means can considerably decrease the morbidity of this intervention on already infarcted myocardium. We investigated in vitro and in vivo the feasibility and safety of pulsed cavitational focused ultrasound (histotripsy) for non-invasive chordal cutting guided by real-time 3D echocardiography. Experiments were performed on 12 sheep hearts, 5 in vitro on explanted sheep hearts and 7 in vivo on beating sheep hearts. In vitro, the mitral valve (MV) apparatus including basal and marginal chordae was removed and fixed on a holder in a water tank. High-intensity ultrasound pulses were emitted from the therapeutic device (1-MHz focused transducer, pulses of 8 µs duration, peak negative pressure of 17 MPa, repetition frequency of 100 Hz), placed at a distance of 64 mm, under 3D echocardiography guidance. In vivo, after sternotomy, the same therapeutic device was applied on the beating heart. We analysed MV coaptation and chordae by real-time 3D echocardiography before and after basal chordal cutting. After sacrifice, the MV apparatus was harvested for anatomical and histological post-mortem explorations to confirm the section of the chordae. In vitro, all chordae were completely cut after a mean procedure duration of 5.5 ± 2.5 min. The procedure duration was found to increase linearly with the chordae diameter. In vivo, the central basal chordae of the anterior leaflet were completely cut. The mean procedure duration was 20 ± 9 min (min = 14, max = 26). The sectioned chordae were visible on echocardiography, and MV coaptation remained normal with no significant mitral regurgitation. Anatomical and histological post-mortem explorations of the hearts confirmed the section of the chordae. Histotripsy guided by 3D echocardiography successfully cut MV chordae in vitro and in vivo in the beating heart. We hope that this technique will

  20. Three-Station Three-dimensional Bolus-Chase MR Angiography with Real-time Fluoroscopic Tracking

    PubMed Central

    Johnson, Casey P.; Weavers, Paul T.; Borisch, Eric A.; Grimm, Roger C.; Hulshizer, Thomas C.; LaPlante, Christine C.; Rossman, Phillip J.; Glockner, James F.; Young, Phillip M.

    2014-01-01

    Purpose To determine the feasibility of using real-time fluoroscopic tracking for bolus-chase magnetic resonance (MR) angiography of peripheral vasculature to image three stations from the aortoiliac bifurcation to the pedal arteries. Materials and Methods This prospective study was institutional review board approved and HIPAA compliant. Eight healthy volunteers (three men; mean age, 48 years; age range, 30–81 years) and 13 patients suspected of having peripheral arterial disease (five men; mean age, 67 years; age range, 47–81 years) were enrolled and provided informed consent. All subjects were imaged with the fluoroscopic tracking MR angiographic protocol. Ten patients also underwent a clinical computed tomographic (CT) angiographic runoff examination. Two readers scored the MR angiographic studies for vessel signal intensity and sharpness and presence of confounding artifacts and venous contamination at 35 arterial segments. Mean aggregate scores were assessed. The paired MR angiographic and CT angiographic studies also were scored for visualization of disease, reader confidence, and overall diagnostic quality and were compared by using a Wilcoxon signed rank test. Results Real-time fluoroscopic tracking performed well technically in all studies. Vessel segments were scored good to excellent in all but the following categories: For vessel signal intensity and sharpness, the abdominal aorta, iliac arteries, distal plantar arteries, and plantar arch were scored as fair to good; and for presence of confounding artifacts, the abdominal aorta and iliac arteries were scored as fair. The MR angiograms and CT angiograms did not differ significantly in any scoring category (reader 1: P = .50, .39, and .39; reader 2: P = .41, .61, and .33, respectively). CT scores were substantially better in 20% (four of 20) and 25% (five of 20) of the pooled evaluations for the visualization of disease and overall image quality categories, respectively, versus 5% (one of 20) for MR

  1. Real-Time and High-Resolution 3D Face Measurement via a Smart Active Optical Sensor.

    PubMed

    You, Yong; Shen, Yang; Zhang, Guocai; Xing, Xiuwen

    2017-03-31

    The 3D measuring range and accuracy in traditional active optical sensing, such as Fourier transform profilometry, are influenced by the zero frequency of the captured patterns. The phase-shifting technique is commonly applied to remove the zero component. However, this phase-shifting method must capture several fringe patterns with phase differences, thereby degrading the real-time performance. This study introduces a smart active optical sensor, in which a composite pattern is utilized. The composite pattern efficiently combines several phase-shifting fringes and carrier frequencies, allowing the method to remove the zero frequency using only one pattern. Model face reconstruction and human face measurement were employed to study the validity and feasibility of this method. Results show no distinct decrease in precision for the novel method compared with the traditional phase-shifting method. The texture mapping technique was utilized to reconstruct a natural-appearance 3D digital face.
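    Zero-frequency removal in Fourier-transform-style profilometry amounts to isolating the carrier lobe in the spectrum before taking the phase; a one-row sketch with an assumed carrier of 0.125 cycles/pixel (not the paper's composite-pattern scheme):

```python
import numpy as np

def extract_phase(fringe_row, carrier):
    """Band-pass one image row around the positive carrier-frequency lobe
    (suppressing the zero-frequency term), then take the angle of the
    inverse transform to recover the wrapped fringe phase."""
    n = fringe_row.size
    spec = np.fft.fft(fringe_row)
    freqs = np.fft.fftfreq(n)                      # cycles per pixel
    band = np.abs(freqs - carrier) < carrier / 2   # keep the carrier lobe only
    analytic = np.fft.ifft(np.where(band, spec, 0))
    return np.angle(analytic)

x = np.arange(256)
row = 1.0 + 0.5 * np.cos(2 * np.pi * 0.125 * x)    # DC term + carrier fringe
phase = extract_phase(row, carrier=0.125)          # wrapped carrier phase
```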

  2. Real-Time and High-Resolution 3D Face Measurement via a Smart Active Optical Sensor

    PubMed Central

    You, Yong; Shen, Yang; Zhang, Guocai; Xing, Xiuwen

    2017-01-01

    The 3D measuring range and accuracy of traditional active optical sensing techniques, such as Fourier transform profilometry, are influenced by the zero-frequency component of the captured patterns. The phase-shifting technique is commonly applied to remove this zero component. However, phase shifting requires capturing several fringe patterns with distinct phase differences, which degrades real-time performance. This study introduces a smart active optical sensor that uses a composite pattern, which efficiently combines several phase-shifting fringes with carrier frequencies. The method can remove the zero frequency using only one pattern. Model face reconstruction and human face measurement were used to study the validity and feasibility of the method. Results show no distinct decrease in precision for the novel method compared with the traditional phase-shifting method. The texture mapping technique was utilized to reconstruct a natural-appearance 3D digital face. PMID:28362349

  3. Development of the compact proton beam therapy system dedicated to spot scanning with real-time tumor-tracking technology

    NASA Astrophysics Data System (ADS)

    Umezawa, Masumi; Fujimoto, Rintaro; Umekawa, Tooru; Fujii, Yuusuke; Takayanagi, Taisuke; Ebina, Futaro; Aoki, Takamichi; Nagamine, Yoshihiko; Matsuda, Koji; Hiramoto, Kazuo; Matsuura, Taeko; Miyamoto, Naoki; Nihongi, Hideaki; Umegaki, Kikuo; Shirato, Hiroki

    2013-04-01

    Hokkaido University and Hitachi Ltd. have started joint development of a gated spot-scanning proton therapy system with real-time tumor tracking, integrating real-time tumor-tracking technology (RTRT) with a proton therapy system dedicated to discrete spot-scanning techniques, under the "Funding Program for World-Leading Innovative R&D on Science and Technology (FIRST Program)". In this development, we designed a synchrotron-based accelerator system that exploits the advantages of the spot-scanning technique in order to realize a more compact and lower-cost proton therapy system than conventional systems. For gated irradiation, we focused on maximizing irradiation efficiency and minimizing the dose errors caused by organ motion. To understand the interplay effect between scanning beam delivery and target motion, we conducted a simulation study. The newly designed system consists of the synchrotron, a beam transport system, one compact rotating-gantry treatment room with a robotic couch, and one experimental room for future research. To improve irradiation efficiency, a new control function enabling multiple gated irradiations per synchrotron cycle was applied, and its efficacy was confirmed by irradiation-time estimation. As for the interplay effect, we confirmed that selecting a strict gating width and scan direction enables formation of a uniform dose distribution.

  4. Real-time skeleton tracking for embedded systems

    NASA Astrophysics Data System (ADS)

    Coleca, Foti; Klement, Sascha; Martinetz, Thomas; Barth, Erhardt

    2013-03-01

    Touch-free gesture technology is becoming more popular with consumers and may have a significant future impact on interfaces for digital photography. However, almost every commercial software framework for gesture and pose detection is aimed at either desktop PCs or high-powered GPUs, making mobile implementations of gesture recognition an attractive area for research and development. In this paper we present an algorithm for hand skeleton tracking and gesture recognition that runs on an ARM-based platform (Pandaboard ES, OMAP 4460 architecture). The algorithm uses self-organizing maps to fit a given topology (skeleton) into a 3D point cloud. This is a novel way of approaching the problem of pose recognition, as it does not employ complex optimization techniques or data-based learning. After an initial background segmentation step, the algorithm is run in parallel with heuristics that detect and correct artifacts arising from insufficient or erroneous input data. We then optimize the algorithm for the ARM platform using fixed-point computation and the NEON SIMD architecture the OMAP 4460 provides. We tested the algorithm with two different depth-sensing devices (Microsoft Kinect, PMD Camboard). For both input devices we were able to accurately track the skeleton at the native frame rate of the cameras.
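
    The core idea of fitting a fixed topology into a point cloud with a self-organizing map can be sketched as follows (a minimal, illustrative chain-topology SOM in Python; node counts, learning schedule, and the toy cloud are our own assumptions, not the paper's fixed-point ARM implementation):

    ```python
    import numpy as np

    def fit_chain_som(points, n_nodes=10, epochs=30, seed=0):
        # Fit a 1-D chain of nodes (a minimal "skeleton" topology) to a
        # 3-D point cloud with a self-organizing map: for each sample,
        # pull the winning node and its chain neighbours toward it.
        rng = np.random.default_rng(seed)
        lo, hi = points.min(axis=0), points.max(axis=0)
        t = np.linspace(0.0, 1.0, n_nodes)[:, None]
        nodes = lo + t * (hi - lo)          # initialise along bounding-box diagonal
        idx = np.arange(n_nodes)
        for epoch in range(epochs):
            lr = 0.5 * (1 - epoch / epochs) + 0.01          # decaying learning rate
            sigma = max(n_nodes / 2 * (1 - epoch / epochs), 0.5)  # neighbourhood width
            for p in points[rng.permutation(len(points))]:
                winner = int(np.argmin(np.linalg.norm(nodes - p, axis=1)))
                h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))
                nodes += lr * h[:, None] * (p - nodes)
        return nodes

    # Toy cloud: noisy points along a straight 3-D segment.
    rng = np.random.default_rng(1)
    t = rng.random(300)[:, None]
    cloud = t * np.array([10.0, 0.0, 0.0]) + rng.normal(0, 0.1, (300, 3))
    skel = fit_chain_som(cloud, n_nodes=8)
    ```

    The fitted nodes spread along the segment and stay close to it; a hand skeleton simply replaces the chain with a hand-shaped graph topology.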

  5. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    PubMed Central

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking to create a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds with synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of surgical tools even when occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659

  6. Spatiotemporal Segmentation and Modeling of the Mitral Valve in Real-Time 3D Echocardiographic Images.

    PubMed

    Pouch, Alison M; Aly, Ahmed H; Lai, Eric K; Yushkevich, Natalie; Stoffers, Rutger H; Gorman, Joseph H; Cheung, Albert T; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2017-09-01

    Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.

  7. Real-time probabilistic covariance tracking with efficient model update.

    PubMed

    Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li

    2012-05-01

    The recently proposed covariance region descriptor has been proven robust and versatile at a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
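
    The covariance region descriptor and its Riemannian similarity metric can be sketched directly (illustrative Python; the five-feature choice and the affine-invariant distance are standard in the covariance-descriptor literature, but the patch sizes and regularisation here are our own assumptions):

    ```python
    import numpy as np

    def region_covariance(patch):
        # Covariance descriptor of a grey patch: per-pixel feature vector
        # (x, y, intensity, |dI/dx|, |dI/dy|), covariance taken over the region.
        h, w = patch.shape
        ys, xs = np.mgrid[0:h, 0:w]
        gy, gx = np.gradient(patch.astype(float))
        feats = np.stack([xs, ys, patch, np.abs(gx), np.abs(gy)], axis=-1)
        f = feats.reshape(-1, 5)
        return np.cov(f, rowvar=False) + 1e-6 * np.eye(5)   # regularise to SPD

    def riemannian_distance(c1, c2):
        # Affine-invariant metric: sqrt(sum_i ln^2 lambda_i), with lambda_i
        # the generalised eigenvalues of the pair (c1, c2).
        lam = np.linalg.eigvals(np.linalg.solve(c1, c2)).real
        return float(np.sqrt(np.sum(np.log(lam) ** 2)))

    rng = np.random.default_rng(0)
    a = rng.random((16, 16))
    b = a + rng.normal(0, 0.01, a.shape)   # near-identical patch
    c = rng.random((16, 16)) * 5.0         # patch with very different statistics
    d_ab = riemannian_distance(region_covariance(a), region_covariance(b))
    d_ac = riemannian_distance(region_covariance(a), region_covariance(c))
    ```

    The distance is near zero for near-identical patches and grows for patches with different statistics, which is what lets a tracker rank candidate windows against the target model.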

  8. Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle

    NASA Astrophysics Data System (ADS)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.

  9. PROMO – Real-time Prospective Motion Correction in MRI using Image-based Tracking

    PubMed Central

    White, Nathan; Roddey, Cooper; Shankaranarayanan, Ajit; Han, Eric; Rettmann, Dan; Santos, Juan; Kuperman, Josh; Dale, Anders

    2010-01-01

    Artifacts caused by patient motion during scanning remain a serious problem in most MRI applications. The prospective motion correction technique attempts to address this problem at its source by keeping the measurement coordinate system fixed with respect to the patient throughout the entire scan process. In this study, a new image-based approach for prospective motion correction is described, which utilizes three orthogonal 2D spiral navigator acquisitions (SP-Navs) along with a flexible image-based tracking method based on the Extended Kalman Filter (EKF) algorithm for online motion measurement. The SP-Nav/EKF framework offers the advantages of image-domain tracking within patient-specific regions-of-interest and reduced sensitivity to off-resonance-induced corruption of rigid-body motion estimates. The performance of the method was tested using offline computer simulations and online in vivo head motion experiments. In vivo validation results covering a broad range of staged head motions indicate a steady-state error of the SP-Nav/EKF motion estimates of less than 10 % of the motion magnitude, even for large compound motions that included rotations over 15 degrees. A preliminary in vivo application in 3D inversion recovery spoiled gradient echo (IR-SPGR) and 3D fast spin echo (FSE) sequences demonstrates the effectiveness of the SP-Nav/EKF framework for correcting 3D rigid-body head motion artifacts prospectively in high-resolution 3D MRI scans. PMID:20027635
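
    The Kalman-filter machinery underlying the EKF tracking described above can be illustrated with a toy linear filter (a 1-D constant-velocity sketch in Python, not the paper's multi-parameter rigid-body EKF; all noise values and the simulated drift are our own assumptions):

    ```python
    import numpy as np

    def kalman_track(measurements, dt=1.0, q=1e-3, r=0.05):
        # Minimal linear Kalman filter with a constant-velocity state
        # [position, velocity], observing noisy position only.
        F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        H = np.array([[1.0, 0.0]])              # measurement model
        Q = q * np.eye(2)                        # process noise covariance
        R = np.array([[r]])                      # measurement noise covariance
        x = np.zeros(2)
        P = np.eye(2)
        out = []
        for z in measurements:
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([z]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
            out.append(x[0])
        return np.array(out)

    rng = np.random.default_rng(0)
    t = np.arange(200)
    true_pos = 0.05 * t                         # steady drift (e.g. slow head motion)
    z = true_pos + rng.normal(0, 0.2, t.size)   # noisy per-frame measurements
    est = kalman_track(z, r=0.04)
    ```

    After the initial transient, the filtered estimate tracks the drift with a substantially lower error than the raw measurements, which is the property the SP-Nav/EKF framework exploits for online motion estimates.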

  10. A real-time optical tracking and measurement processing system for flying targets.

    PubMed

    Guo, Pengyu; Ding, Shaowen; Zhang, Hongliang; Zhang, Xiaohu

    2014-01-01

    Optical tracking and measurement of flying targets differs from close-range photography under a controllable observation environment: high maneuverability and long cruising range impose extreme conditions such as diverse target changes. This paper first designs and implements a distributed image interpretation and measurement processing system that achieves centralized resource management, multisite simultaneous interpretation, and adaptive selection of estimation algorithms; it then proposes a real-time interpretation method comprising automatic foreground detection, online target tracking, multiple-feature location, and human guidance. Experiments on semisynthetic video evaluate the performance and efficiency of the method. The system can be used in aerospace tests for target analysis, including dynamic parameters, transient states, and optical physics characteristics, with security control.

  11. Autostereoscopic 3D display system with dynamic fusion of the viewing zone under eye tracking: principles, setup, and evaluation [Invited].

    PubMed

    Yoon, Ki-Hyuk; Kang, Min-Koo; Lee, Hwasun; Kim, Sung-Kyu

    2018-01-01

    We study optical technologies for viewer-tracked autostereoscopic 3D displays (VTA3D), which provide improved 3D image quality and an extended viewing range. In particular, we utilize a technique called dynamic fusion of the viewing zone (DFVZ) for each 3D optical line to realize image quality equivalent to that achievable at the optimal viewing distance, even when a viewer is moving in the depth direction. In addition, we examine quantitative properties of the viewing zones provided by a VTA3D system that adopts DFVZ, revealing that the optimal viewing zone can be formed at the viewer's position. Lastly, we show that the comfort zone is extended by DFVZ, as demonstrated by viewers' subjective evaluation of a 3D display system that employs both multiview autostereoscopic 3D display and DFVZ.

  12. High-precision real-time 3D shape measurement based on a quad-camera system

    NASA Astrophysics Data System (ADS)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

    Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires higher fringe density in the projected patterns, which, in turn, leads to severe phase ambiguities that must be resolved with additional information from phase coding and/or geometric constraints. However, in order to guarantee reliable phase unwrapping, available techniques usually require an increased number of patterns, reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities of conventional three-step PSP patterns with high fringe density, without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be performed efficiently and reliably through flexible phase consistency checks. Besides, the redundant information from multiple phase consistency checks is fully exploited through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that, in a large measurement volume of 200 mm × 200 mm × 400 mm, the resultant dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
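
    The three-step PSP wrapped-phase computation at the heart of this abstract is a standard closed form: with fringe images I_k = A + B·cos(φ + δ_k) and shifts δ_k = −2π/3, 0, +2π/3, the wrapped phase is φ = atan2(√3·(I₁ − I₃), 2I₂ − I₁ − I₃). A minimal sketch (synthetic data is our own; the quad-camera unwrapping stage is not reproduced here):

    ```python
    import numpy as np

    def three_step_phase(i1, i2, i3):
        # Standard three-step phase-shifting formula for shifts of 2*pi/3.
        return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

    # Synthetic fringes: I_k = A + B*cos(phi + delta_k).
    phi_true = np.linspace(-np.pi + 0.01, np.pi - 0.01, 500)
    A, B = 0.5, 0.4
    deltas = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
    i1, i2, i3 = (A + B * np.cos(phi_true + d) for d in deltas)
    phi = three_step_phase(i1, i2, i3)
    ```

    On noise-free fringes the formula recovers the phase exactly (up to floating-point error); the paper's contribution lies in disambiguating the resulting wrapped phase via multi-view consistency rather than extra patterns.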

  13. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments.

    PubMed

    Szőke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-06-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation's lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid-1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors.

  14. 3D Microfluidic model for evaluating immunotherapy efficacy by tracking dendritic cell behaviour toward tumor cells.

    PubMed

    Parlato, Stefania; De Ninno, Adele; Molfetta, Rosa; Toschi, Elena; Salerno, Debora; Mencattini, Arianna; Romagnoli, Giulia; Fragale, Alessandra; Roccazzello, Lorenzo; Buoncervello, Maria; Canini, Irene; Bentivegna, Enrico; Falchi, Mario; Bertani, Francesca Romana; Gerardino, Annamaria; Martinelli, Eugenio; Natale, Corrado; Paolini, Rossella; Businaro, Luca; Gabriele, Lucia

    2017-04-24

    Immunotherapy efficacy relies on the crosstalk within the tumor microenvironment between cancer and dendritic cells (DCs) resulting in the induction of a potent and effective antitumor response. DCs have the specific role of recognizing cancer cells, taking up tumor antigens (Ags) and then migrating to lymph nodes for Ag (cross)-presentation to naïve T cells. Interferon-α-conditioned DCs (IFN-DCs) exhibit marked phagocytic activity and the special ability of inducing Ag-specific T-cell response. Here, we have developed a novel microfluidic platform recreating tightly interconnected cancer and immune systems with specific 3D environmental properties, for tracking human DC behaviour toward tumor cells. By combining our microfluidic platform with advanced microscopy and a revised cell tracking analysis algorithm, it was possible to evaluate the guided, efficient motion of IFN-DCs toward drug-treated cancer cells and the subsequent phagocytosis events. Overall, this platform allowed the dissection of IFN-DC–cancer cell interactions within 3D tumor spaces, revealing major underlying factors such as CXCR4 involvement, and underscored its potential as an innovative tool to assess the efficacy of immunotherapeutic approaches.

  15. Effective real-time vehicle tracking using discriminative sparse coding on local patches

    NASA Astrophysics Data System (ADS)

    Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei

    2016-01-01

    We propose a visual tracking framework, comprising an object detector and tracker, that focuses on effective and efficient visual tracking in surveillance for real-world intelligent transport system applications. The framework casts the tracking task as problems of object detection, feature representation, and classification, in contrast to appearance-model-matching approaches. Through a feature representation called DSCLP (discriminative sparse coding on local patches), which trains a dictionary on local clustered patches sampled from both positive and negative datasets, discriminative power and robustness are improved remarkably, making our method more robust in complex realistic settings with all kinds of degraded image quality. Moreover, by detecting objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables our framework to achieve real-time tracking performance even on high-definition sequences with heavy traffic. Experimental results show that our work outperforms some state-of-the-art methods in speed, accuracy, and robustness, and exhibits increased robustness in complex real-world scenarios with image quality degraded by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.
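
    Sparse coding of a patch over a learned dictionary, as used in descriptors of this kind, is commonly computed with a greedy pursuit. A minimal orthogonal matching pursuit sketch (illustrative Python stand-in; this is generic OMP, not the paper's DSCLP training or classifier, and the dictionary here is random rather than learned):

    ```python
    import numpy as np

    def omp(dictionary, signal, n_nonzero):
        # Orthogonal matching pursuit: greedily pick the atom most correlated
        # with the residual, then re-fit coefficients over the chosen support.
        residual = signal.astype(float)
        support, coef = [], None
        for _ in range(n_nonzero):
            k = int(np.argmax(np.abs(dictionary.T @ residual)))
            if k not in support:
                support.append(k)
            sub = dictionary[:, support]
            coef, *_ = np.linalg.lstsq(sub, signal, rcond=None)
            residual = signal - sub @ coef
        code = np.zeros(dictionary.shape[1])
        code[support] = coef
        return code

    rng = np.random.default_rng(0)
    D = rng.normal(size=(20, 50))
    D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
    true_code = np.zeros(50)
    true_code[[3, 17]] = [1.5, -2.0]        # a 2-sparse "patch" over the dictionary
    y = D @ true_code
    code = omp(D, y, n_nonzero=2)
    ```

    The resulting sparse codes (rather than raw pixels) are what feed the discriminative classifier in frameworks like the one described above.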

  16. Real-time tracking using stereo and motion: Visual perception for space robotics

    NASA Technical Reports Server (NTRS)

    Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann

    1994-01-01

    The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC-135 reduced gravity aircraft.

  17. Tracking of 25-hydroxyvitamin D status during pregnancy: the importance of vitamin D supplementation.

    PubMed

    Moon, Rebecca J; Crozier, Sarah R; Dennison, Elaine M; Davies, Justin H; Robinson, Sian M; Inskip, Hazel M; Godfrey, Keith M; Cooper, Cyrus; Harvey, Nicholas C

    2015-11-01

    The role of maternal 25-hydroxyvitamin D [25(OH)D] in fetal development is uncertain, and findings of observational studies have been inconsistent. Most studies have assessed 25(OH)D only one time during pregnancy, but to our knowledge, the tracking of an individual's 25(OH)D during pregnancy has not been assessed previously. We determined the tracking of serum 25(OH)D from early to late pregnancy and factors that influence this. The Southampton Women's Survey is a prospective mother-offspring birth-cohort study. Lifestyle, diet, and 25(OH)D status were assessed at 11 and 34 wk of gestation. A Fourier transformation was used to model the seasonal variation in 25(OH)D for early and late pregnancy separately, and the difference between the measured and seasonally modeled 25(OH)D was calculated to generate a season-corrected 25(OH)D. Tracking was assessed with the use of the Pearson correlation coefficient, and multivariate linear regression was used to determine factors associated with the change in season-corrected 25(OH)D. A total of 1753 women had 25(OH)D measured in both early and late pregnancy. There was a moderate correlation between season-corrected 25(OH)D measurements at 11 and 34 wk of gestation (r = 0.53, P < 0.0001; n = 1753). Vitamin D supplementation was the strongest predictor of tracking; in comparison with women who never used supplements, the discontinuation of supplementation after 11 wk was associated with a reduction in season-corrected 25(OH)D (β = -7.3 nmol/L; P < 0.001), whereas the commencement (β = 12.6 nmol/L; P < 0.001) or continuation (β = 6.6 nmol/L; P < 0.001) of supplementation was associated with increases in season-corrected 25(OH)D. Higher pregnancy weight gain was associated with a reduction in season-corrected 25(OH)D (β = -0.4 nmol · L(-1) · kg(-1); P = 0.015), whereas greater physical activity (β = 0.4 nmol/L per h/wk; P = 0.011) was associated with increases. There is a moderate tracking of 25(OH)D status through
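
    The season-correction step described above (a Fourier model of annual variation, subtracted from each measurement) can be sketched with a one-harmonic least-squares fit (illustrative Python; the single-harmonic basis and all numbers are our own assumptions, not the study's fitted model):

    ```python
    import numpy as np

    def season_correct(day_of_year, values):
        # Least-squares annual Fourier model v ~ a + b*sin(wt) + c*cos(wt),
        # one harmonic; returns measured minus seasonally expected values.
        w = 2 * np.pi * np.asarray(day_of_year) / 365.25
        X = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])
        coef, *_ = np.linalg.lstsq(X, np.asarray(values, float), rcond=None)
        return values - X @ coef

    # Synthetic cohort: seasonal 25(OH)D cycle plus individual scatter.
    rng = np.random.default_rng(0)
    days = rng.integers(1, 366, 400)
    seasonal = 50 + 15 * np.sin(2 * np.pi * days / 365.25 - 1.3)
    vitd = seasonal + rng.normal(0, 5, days.size)
    corrected = season_correct(days, vitd)
    ```

    After correction the seasonal cycle is removed and only individual variation remains, so early- and late-pregnancy values can be correlated without the season of sampling confounding the tracking estimate.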

  18. Foliage penetration by using 4-D point cloud data

    NASA Astrophysics Data System (ADS)

    Méndez Rodríguez, Javier; Sánchez-Reyes, Pedro J.; Cruz-Rivera, Sol M.

    2012-06-01

    Real-time awareness and rapid target detection are critical for the success of military missions. New technologies capable of detecting targets concealed in forest areas are needed in order to track and identify possible threats. Currently, LAser Detection And Ranging (LADAR) systems are capable of detecting obscured targets; however, tracking capabilities are severely limited. A new LADAR-derived technology is now under development to generate 4-D datasets (3-D video in a point-cloud format), creating a need for algorithms that can process such data in real time. We propose an algorithm capable of removing vegetation and other objects that may obscure concealed targets in a real 3-D environment. The algorithm is based on wavelets and can be used as a pre-processing step in a target recognition algorithm. Applied in a real-time 3-D system, it could help make pilots aware of high-risk hidden targets such as tanks and weapons, among others. We use simulated 4-D point-cloud data to demonstrate the capabilities of our algorithm.

  19. Mechanical vibration compensation method for 3D+t multi-particle tracking in microscopic volumes.

    PubMed

    Pimentel, A; Corkidi, G

    2009-01-01

    The acquisition and analysis of data in microscopic systems with spatiotemporal evolution is a very relevant topic. In this work, we describe a method to optimize an experimental setup for acquiring and processing spatiotemporal (3D+t) data in microscopic systems. The method is applied to a previously developed three-dimensional multi-tracking and analysis system for free-swimming sperm trajectories. The experimental setup uses a piezoelectric device to oscillate a long-focal-distance objective, mounted on an inverted microscope, along its optical axis, acquiring stacks of images at a high frame rate over a depth on the order of 250 microns. A problem arises when the piezoelectric device oscillates: a vibration is transmitted to the whole microscope, inducing undesirable 3D vibrations in the entire setup. For this reason, as a first step, the biological preparation was isolated from the body of the microscope, so that transmission of these vibrations would not modify the free-swimming pattern of the microorganisms. Nevertheless, because the image-capturing device is mechanically attached to the "vibrating" microscope, the acquired data are contaminated with an undesirable 3D movement that biases the original trajectories of these fast-moving cells. The proposed optimization method determines the functional form of these 3D oscillations in order to neutralize them in the acquired data set. Given the spatial scale of the system, this correction increases data accuracy significantly. The optimized system may be very useful in a wide variety of 3D+t applications using moving optical devices.
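
    Determining and subtracting the functional form of a periodic mechanical vibration can be sketched as a least-squares sinusoid fit at the known drive frequency (illustrative Python for one coordinate; the frequency, amplitudes, and drift here are our own assumptions, not the paper's measured values):

    ```python
    import numpy as np

    def remove_oscillation(t, traj, freq):
        # Fit v ~ a + b*sin(wt) + c*cos(wt) at the known drive frequency,
        # then subtract only the sinusoidal part, keeping the true motion.
        w = 2 * np.pi * freq * np.asarray(t)
        X = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])
        coef, *_ = np.linalg.lstsq(X, np.asarray(traj, float), rcond=None)
        oscillation = X[:, 1:] @ coef[1:]
        return traj - oscillation

    t = np.linspace(0.0, 1.0, 500)
    true_path = 0.5 * t                             # slow drift of a swimming cell
    vib = 0.2 * np.sin(2 * np.pi * 40 * t + 0.7)    # 40 Hz mount vibration
    measured = true_path + vib
    clean = remove_oscillation(t, measured, 40.0)
    ```

    Because the drive frequency is known from the piezo controller, the fit is well conditioned, and the residual trajectory is close to the true drift; the real method extends this to all three axes.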

  20. SU-G-JeP3-10: Update On a Real-Time Treatment Guidance System Using An IR Navigation System for Pleural PDT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, M; Penjweini, R; Zhu, T

    Purpose: Photodynamic therapy (PDT) is used in conjunction with surgical debulking of tumorous tissue during treatment for pleural mesothelioma. One of the key components of effective PDT is uniform light distribution. Currently, light is monitored with 8 isotropic light detectors that are placed at specific locations inside the pleural cavity. A tracking system with real-time feedback software can be utilized to improve the uniformity of light in addition to the existing detectors. Methods: An infrared (IR) tracking camera is used to monitor the movement of the light source. The same system determines the pleural geometry of the treatment area. Software upgrades allow visualization of the pleural cavity as a two-dimensional volume. The treatment delivery wand was upgraded for ease of light delivery while incorporating the IR system. Isotropic detector locations are also displayed. Data from the tracking system is used to calculate the light fluence rate delivered. This data is also compared with in vivo data collected via the isotropic detectors. Furthermore, treatment volume information will be used to form light dose volume histograms of the pleural cavity. Results: In a phantom study, the light distribution was improved by using real-time guidance compared to the distribution when using detectors without guidance. With the tracking system, 2D data can be collected regarding light fluence rather than just the 8 discrete locations inside the pleural cavity. Light fluence distribution on the entire cavity can be calculated at every time in the treatment. Conclusion: The IR camera has been used successfully during pleural PDT patient treatment to track the motion of the light source and provide real-time display of 2D light fluence. It is possible to use the feedback system to deliver a more uniform dose of light throughout the pleural cavity.
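
    The basic geometric calculation behind fluence-rate estimation from a tracked source position is the inverse-square law for an isotropic point source (a minimal sketch; the clinical calculation also accounts for scattered light and tissue optics, which are omitted here):

    ```python
    import math

    def fluence_rate(power_w, src, det):
        # Direct (unscattered) fluence rate at a detector position from an
        # isotropic point source: P / (4*pi*r^2), r the source-detector distance.
        r2 = sum((s - d) ** 2 for s, d in zip(src, det))
        return power_w / (4.0 * math.pi * r2)
    ```

    Doubling the source-detector distance quarters the direct fluence rate, which is why continuously tracking the wand position lets the software integrate delivered light dose over the whole cavity surface rather than only at the 8 detector locations.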

  1. Uncovering cancer cell behavioral phenotype in 3-D in vitro metastatic landscapes

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Sun, Bo; Duclos, Guillaume; Kam, Yoonseok; Gatenby, Robert; Stone, Howard; Austin, Robert

    2012-02-01

    It is well known that cancer cell genetics determines metastatic potential. From a physics point of view, however, genetics as a cell property cannot act directly on metastasis: an agent is needed to translate the genetics into the dynamics of metastasis. That agent is the cell's behavioral phenotype, which is rarely studied owing to the difficulty of real-time cell tracking in in vivo tissue. Here we have constructed a micro in vitro environment with collagen-based extracellular matrix (ECM) structures for 3-D cell metastasis. With a stable nutrient (glucose) gradient inside, MDA-MB-231 breast cancer cells invade the collagen from the nutrient-poor side toward the nutrient-rich side. Continuous confocal microscopy captures images of the cells every 12 hours and tracks their positions in 3-D space. Fluorescent microbeads pre-mixed into the ECM show that invasive cells mechanically alter the matrix structure. From observation and analysis of collective cell behavior, we argue that game-theoretic interactions may exist between pioneering cells and their followers in the metastatic cell group; such cell collaboration may explain the high efficiency of metastasis.

  2. A Real-Time Optical Tracking and Measurement Processing System for Flying Targets

    PubMed Central

    Guo, Pengyu; Ding, Shaowen; Zhang, Hongliang; Zhang, Xiaohu

    2014-01-01

    Optical tracking and measurement of flying targets is unlike close-range photography under a controllable observation environment: high maneuverability and long cruising range impose extreme conditions such as diverse target changes. This paper first designs and implements a distributed image interpretation and measurement processing system that achieves centralized resource management, simultaneous multi-site interpretation, and adaptive selection of estimation algorithms; it then proposes a real-time interpretation method comprising automatic foreground detection, online target tracking, multiple-feature localization, and human guidance. The performance and efficiency of the method are evaluated in an experiment on semisynthetic video. The system can be used in aerospace tests for target analysis, including dynamic parameters, transient states, and optical physics characteristics, with security control. PMID:24987748

  3. Real-time circumferential mapping catheter tracking for motion compensation in atrial fibrillation ablation procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Bourier, Felix; Wimmer, Andreas; Koch, Martin; Kiraly, Atilla; Liao, Rui; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert

    2012-02-01

    Atrial fibrillation (AFib) has been identified as a major cause of stroke. Radiofrequency catheter ablation has become an increasingly important treatment option, especially when drug therapy fails. Navigation under X-ray can be enhanced by using augmented fluoroscopy. It renders overlay images from pre-operative 3-D data sets which are then fused with X-ray images to provide more details about the underlying soft-tissue anatomy. Unfortunately, these fluoroscopic overlay images are compromised by respiratory and cardiac motion. Various methods to deal with motion have been proposed. To meet clinical demands, they have to be fast. Methods providing a processing frame rate of 3 frames-per-second (fps) are considered suitable for interventional electrophysiology catheter procedures if an acquisition frame rate of 2 fps is used. Unfortunately, when working at a processing rate of 3 fps, the delay until the actual motion compensated image can be displayed is about 300 ms. More recent algorithms can achieve frame rates of up to 20 fps, which reduces the lag to 50 ms. By using a novel approach involving a 3-D catheter model, catheter segmentation and a distance transform, we can speed up motion compensation to 25 fps which results in a display delay of only 40 ms on a standard workstation for medical applications. Our method uses a constrained 2-D/3-D registration to perform catheter tracking, and it obtained a 2-D tracking error of 0.61 mm.
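
    The lag figures quoted above follow directly from the processing frame rate: the display delay is roughly the duration of one processing interval. A one-line sketch of that arithmetic:

```python
def display_delay_ms(processing_fps):
    """Worst-case display lag: one full processing interval elapses before
    the motion-compensated frame can be shown."""
    return 1000.0 / processing_fps

# 3 fps -> ~333 ms (the paper rounds to ~300 ms), 20 fps -> 50 ms, 25 fps -> 40 ms
d3, d20, d25 = display_delay_ms(3), display_delay_ms(20), display_delay_ms(25)
```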

  4. Standardized 2D ultrasound versus 3D/4D ultrasound and image fusion for measurement of aortic aneurysm diameter in follow-up after EVAR.

    PubMed

    Pfister, Karin; Schierling, Wilma; Jung, Ernst Michael; Apfelbeck, Hanna; Hennersperger, Christoph; Kasprzak, Piotr M

    2016-01-01

    To compare standardised 2D ultrasound (US) with the novel ultrasonographic imaging techniques 3D/4D US and image fusion (combined real-time display of B mode and CT scan) for routine measurement of aortic diameter in follow-up after endovascular aortic aneurysm repair (EVAR). 300 measurements were performed on 20 patients after EVAR by one experienced sonographer (level III of the German Society of Ultrasound, DEGUM) with a high-end ultrasound machine and a convex probe (1-5 MHz). An internally standardized scanning protocol for the aortic aneurysm diameter in B mode used a so-called leading-edge method. In summary, five different US methods (2D, 3D free-hand, magnetic-field-tracked 3D - Curefab™, 4D volume sweep, image fusion), each including contrast-enhanced ultrasound (CEUS), were used for measurement of the maximum aortic aneurysm diameter. Standardized 2D sonography was the defined reference standard for statistical analysis. CEUS was used for endoleak detection. Technical success was 100%. In augmented transverse imaging the mean aortic anteroposterior (AP) diameter was 4.0±1.3 cm for 2D US, 4.0±1.2 cm for 3D Curefab™, 3.9±1.3 cm for 4D US, and 4.0±1.2 cm for image fusion. The mean differences were below 1 mm (0.2-0.9 mm). Concerning estimation of aneurysm growth, agreement was found between 2D, 3D and 4D US in 19 of the 20 patients (95%). A definitive decision could always be made by image fusion. CEUS was combined with all methods and detected a type II endoleak in two of the 20 patients (10%). In one case, the endoleak feeding arteries remained unclear with 2D CEUS but could be clearly localized by 3D CEUS and image fusion. Standardized 2D US allows adequate routine follow-up of maximum aortic aneurysm diameter after EVAR. Image fusion enables a definitive statement about aneurysm growth without the need for new CT imaging by combining the postoperative CT scan with real-time B mode in a dual image display. 3D/4D CEUS and image fusion

  5. 4D tumor centroid tracking using orthogonal 2D dynamic MRI: Implications for radiotherapy planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tryggestad, Erik; Flammang, Aaron; Shea, Steven M.

    2013-09-15

    Purpose: Current pretreatment 4D imaging techniques are suboptimal in that they sample breathing motion over a very limited “snapshot” in time. Heretofore, long-duration 4D motion characterization for radiotherapy planning, margin optimization, and validation has been impractical for safety reasons, requiring invasive markers imaged under x-ray fluoroscopy. To characterize 3D tumor motion and its variability over durations more consistent with treatments, the authors have developed a practical dynamic MRI (dMRI) technique employing two orthogonal planes acquired in a continuous, interleaved fashion. Methods: 2D balanced steady-state free precession MRI was acquired continuously over 9–14 min at approximately 4 Hz in three healthy volunteers using a commercial 1.5 T system; alternating orthogonal imaging planes (sagittal, coronal, sagittal, etc.) were employed. The 2D in-plane pixel resolution was 2 × 2 mm² with a 5 mm slice profile. Simultaneously with image acquisition, the authors monitored a 1D surrogate respiratory signal using a device available with the MRI system. 2D template-matching-based anatomic feature registration, or tracking, was performed independently in each orientation. 4D feature tracking at the raw frame rate was derived using spline interpolation. Results: Tracking vascular features in the lung for two volunteers and pancreatic features in one volunteer, the authors have successfully demonstrated this method. Registration error, defined here as the difference between the sagittal and coronal tracking results in the SI direction, ranged from 0.7 to 1.6 mm (1σ), which was less than the acquired image resolution. Although the healthy volunteers were instructed to relax and breathe normally, significantly variable respiration was observed. To demonstrate potential applications of this technique, the authors subsequently explored the intrafraction stability of hypothetical tumoral internal target volumes and 3D spatial
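
    The interleaved-orthogonal-plane idea can be sketched as follows: sagittal frames supply SI/AP coordinates, coronal frames supply SI/LR coordinates, and each component is interpolated onto a common time base to yield a 3D trajectory over time. The paper uses spline interpolation; this illustrative sketch substitutes linear interpolation (`np.interp`) and made-up sample data.

```python
import numpy as np

# interleaved acquisition: sagittal frames give (SI, AP), coronal frames give (SI, LR)
t_sag = np.array([0.00, 0.50, 1.00])   # s, sagittal frame times
si_sag = np.array([0.0, 4.0, 0.0])     # mm, SI position tracked in sagittal frames
ap_sag = np.array([0.0, 1.0, 0.0])     # mm, AP position

t_cor = np.array([0.25, 0.75, 1.25])   # s, coronal frame times (interleaved)
lr_cor = np.array([0.0, 0.5, 0.0])     # mm, LR position tracked in coronal frames

t = np.arange(0.0, 1.01, 0.25)         # common time base
si = np.interp(t, t_sag, si_sag)       # paper uses splines; linear here for brevity
ap = np.interp(t, t_sag, ap_sag)
lr = np.interp(t, t_cor, lr_cor)
track_4d = np.stack([si, ap, lr], axis=1)  # (T, 3) trajectory over time
```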

  6. Real-time tracking and virtual endoscopy in cone-beam CT-guided surgery of the sinuses and skull base in a cadaver model.

    PubMed

    Prisman, Eitan; Daly, Michael J; Chan, Harley; Siewerdsen, Jeffrey H; Vescan, Allan; Irish, Jonathan C

    2011-01-01

    Custom software was developed to integrate intraoperative cone-beam computed tomography (CBCT) images with endoscopic video for surgical navigation and guidance. A cadaveric head was used to assess the accuracy and potential clinical utility of the following functionality: (1) real-time tracking of the endoscope in intraoperative 3-dimensional (3D) CBCT; (2) projecting an orthogonal reconstructed CBCT image, at or beyond the endoscope, which is parallel to the tip of the endoscope corresponding to the surgical plane; (3) virtual reality fusion of endoscopic video and 3D CBCT surface rendering; and (4) overlay of preoperatively defined contours of anatomical structures of interest. Anatomical landmarks were contoured in CBCT of a cadaveric head. An experienced endoscopic surgeon was oriented to the software and asked to rate the utility of the navigation software in carrying out predefined surgical tasks. Utility was evaluated using a rating scale for: (1) safely completing the task; and (2) potential for surgical training. Surgical tasks included: (1) uncinectomy; (2) ethmoidectomy; (3) sphenoidectomy/pituitary resection; and (4) clival resection. CBCT images were updated following each ablative task. As a teaching tool, the software was evaluated as "very useful" for all surgical tasks. Regarding safety and task completion, the software was evaluated as "no advantage" for task (1), "minimal" for task (2), and "very useful" for tasks (3) and (4). Landmark identification for structures behind bone was "very useful" for both categories. The software increased surgical confidence in safely completing challenging ablative tasks by presenting real-time image guidance for highly complex ablative procedures. In addition, such technology offers a valuable teaching aid to surgeons in training. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.

  7. Three-Dimensional High-Resolution Optical/X-Ray Stereoscopic Tracking Velocimetry

    NASA Technical Reports Server (NTRS)

    Cha, Soyoung S.; Ramachandran, Narayanan

    2004-01-01

    Measurement of three-dimensional (3-D), three-component velocity fields is of great importance in a variety of research and industrial applications for understanding materials processing, fluid physics, and strain/displacement measurements. The 3-D experiments in these fields often preclude the use of conventional techniques, which are based only on planar, optically-transparent-field observation. Here, we briefly review the current status of 3-D diagnostics for motion/velocity detection, for both optical and x-ray systems. As an initial step toward providing 3-D capabilities, we have developed stereoscopic tracking velocimetry (STV) to measure 3-D flow/deformation through optical observation. STV is advantageous in its system simplicity for continually observing 3-D phenomena in near real-time. To enhance the data processing through automation and to avoid confusion in tracking numerous markers or particles, artificial neural networks are employed to incorporate human intelligence. Our initial optical investigations have proven STV to be a very viable candidate for reliably measuring 3-D flow motions. While previous activities focused on improving the processing efficiency, overall accuracy, and automation of the optical system, the current effort is directed at the concurrent expansion to the x-ray system for broader experimental applications.
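
    The core of stereoscopic tracking velocimetry is recovering 3-D particle positions from two views and differencing them in time. The sketch below assumes idealized orthogonal cameras (front view reporting (x, y), side view reporting (z, y)) and a simple finite-difference velocity estimate; the function names and geometry are illustrative, not the authors' calibration model.

```python
import numpy as np

def fuse_views(front_xy, side_zy):
    """Combine idealized orthogonal views into a 3-D position,
    averaging the y coordinate the two views share."""
    x, y1 = front_xy
    z, y2 = side_zy
    return np.array([x, 0.5 * (y1 + y2), z])

def velocities(positions, dt):
    """Finite-difference velocity estimates along a particle track."""
    p = np.asarray(positions)
    return (p[1:] - p[:-1]) / dt

# two consecutive frames of one tracked particle, dt = 0.1 s
p0 = fuse_views((1.0, 2.0), (3.0, 2.2))
p1 = fuse_views((1.5, 2.5), (3.0, 2.3))
v = velocities([p0, p1], dt=0.1)
```

In a real STV system the correspondence between markers across views (the step the paper assigns to neural networks) must be solved before positions can be fused.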

  8. Three-Dimensional High-Resolution Optical/X-Ray Stereoscopic Tracking Velocimetry

    NASA Technical Reports Server (NTRS)

    Cha, Soyoung S.; Ramachandran, Naryanan

    2005-01-01

    Measurement of three-dimensional (3-D), three-component velocity fields is of great importance in a variety of research and industrial applications for understanding materials processing, fluid physics, and strain/displacement measurements. The 3-D experiments in these fields often preclude the use of conventional techniques, which are based only on planar, optically-transparent-field observation. Here, we briefly review the current status of 3-D diagnostics for motion/velocity detection, for both optical and x-ray systems. As an initial step toward providing 3-D capabilities, we have developed stereoscopic tracking velocimetry (STV) to measure 3-D flow/deformation through optical observation. STV is advantageous in its system simplicity for continually observing 3-D phenomena in near real-time. To enhance the data processing through automation and to avoid confusion in tracking numerous markers or particles, artificial neural networks are employed to incorporate human intelligence. Our initial optical investigations have proven STV to be a very viable candidate for reliably measuring 3-D flow motions. While previous activities focused on improving the processing efficiency, overall accuracy, and automation of the optical system, the current effort is directed at the concurrent expansion to the x-ray system for broader experimental applications.

  9. 3D laser tracking of a particle in 3DFM

    NASA Astrophysics Data System (ADS)

    Desai, Kalpit; Welch, Gregory; Bishop, Gary; Taylor, Russell; Superfine, Richard

    2003-11-01

    The principal goal of 3D tracking in our home-built 3D Magnetic Force Microscope is to monitor the movement of the particle with respect to the laser beam waist and keep the particle at the center of the laser beam. The sensing element is a quadrant photodiode (QPD), which captures light scattered by particle motion with a bandwidth of up to 40 kHz. An XYZ translation stage is the actuating element; it moves the particle back to the center of the laser with an accuracy of a couple of nanometers and a bandwidth of up to 300 Hz. Since our particles vary in size, composition, and shape, instead of using an a priori model we use standard system identification techniques to obtain an optimal approximation of the relationship between particle motion and QPD response. We have developed position feedback control software capable of 3-dimensional tracking of beads attached to cilia on living cells beating at up to 15 Hz. We have also modeled the instrument's control system to simulate the performance of 3D particle tracking under different experimental conditions. Given the nanometer scale of operation, noise poses a great challenge for the tracking system. We propose to use stochastic control theory to increase the robustness of tracking.
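
    A minimal sketch of the re-centering feedback idea, assuming an idealized noiseless QPD reading and a purely proportional stage correction (the instrument itself uses identified system models, and the abstract proposes stochastic control; the gain `k_p` and drift signal here are made up):

```python
import numpy as np

def track(particle_positions, k_p=0.5):
    """Simulate a proportional feedback loop: each cycle the QPD reports the
    particle's offset from the beam center and the stage moves a fraction
    k_p of that offset to re-center.  Returns the residual offsets."""
    stage = 0.0
    residuals = []
    for target in particle_positions:   # drifting particle position, nm
        error = target - stage          # idealized QPD reading
        stage += k_p * error            # proportional stage correction
        residuals.append(target - stage)
    return np.array(residuals)

drift = np.linspace(0.0, 100.0, 50)     # nm, slow linear drift of the particle
res = track(drift)                      # residual tracking error per cycle
```

With a proportional controller a linear drift leaves a small constant residual; rejecting it fully would require integral action or the model-based control the abstract alludes to.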

  10. Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I

    NASA Astrophysics Data System (ADS)

    Gonthier, David L.; Veron, Harry

    1998-04-01

    A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application is capable of running in stealth mode or as a player alongside battlefield simulations such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings, and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, thus providing high-quality, fast graphics. A complete PC system can be configured for under $2,500. The software runs under Windows 95 and Windows NT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. RenderWare was paired with the Righteous 3D graphics board from Orchid Technologies, which has an advertised rendering rate of up to 2 million texture-mapped triangles per second. A low-cost PC DIS simulator that can partake in a real-time collaborative simulation with other platforms is thus achieved.

  11. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.

    PubMed

    Rottmann, Joerg; Keall, Paul; Berbeco, Ross

    2013-09-01

    To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.
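
    Markerless soft-tissue localization of this kind is commonly built on template matching. The brute-force normalized cross-correlation (NCC) search below is a generic illustration of that idea, not the in-house algorithm described in the abstract; the frame and template data are made up.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-shaped patches (in [-1, 1])."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_template(frame, template):
    """Exhaustive NCC search; returns the (row, col) of the best placement."""
    th, tw = template.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            score = ncc(frame[r:r + th, c:c + tw], template)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

frame = np.zeros((12, 12))
frame[5:8, 6:9] = [[1, 2, 1], [2, 5, 2], [1, 2, 1]]  # synthetic "tumor" blob
template = np.array([[1, 2, 1], [2, 5, 2], [1, 2, 1]], dtype=float)
pos = match_template(frame, template)
```

Real-time implementations replace the exhaustive scan with FFT-based correlation or a restricted search window around the previous position.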

  12. Tracking initially unresolved thrusting objects in 3D using a single stationary optical sensor

    NASA Astrophysics Data System (ADS)

    Lu, Qin; Bar-Shalom, Yaakov; Willett, Peter; Granström, Karl; Ben-Dov, R.; Milgrom, B.

    2017-05-01

    This paper considers the problem of estimating the 3D states of a salvo of thrusting/ballistic endo-atmospheric objects using 2D Cartesian measurements from the focal plane array (FPA) of a single fixed optical sensor. Since the initial separations in the FPA are smaller than the resolution of the sensor, this results in merged measurements in the FPA, compounding the usual false-alarm and missed-detection uncertainty. We present a two-step methodology. First, we assume a Wiener process acceleration (WPA) model for the motion of the images of the projectiles in the optical sensor's FPA. We model the merged measurements with increased variance, and thence employ a multi-Bernoulli (MB) filter using the 2D measurements in the FPA. Second, using the set of associated measurements for each confirmed MB track, we formulate a parameter estimation problem, whose maximum likelihood estimate can be obtained via numerical search and can be used for impact point prediction. Simulation results illustrate the performance of the proposed method.
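
    The Wiener process acceleration (WPA) model named in the abstract has a standard discretized form per image axis, with state [position, velocity, acceleration]. The sketch below builds its transition matrix F and process-noise covariance Q; the sampling interval and noise intensity values are illustrative.

```python
import numpy as np

def wpa_matrices(T, q):
    """Discretized Wiener-process-acceleration model for one FPA axis.
    T: sampling interval (s); q: process noise intensity."""
    F = np.array([[1.0, T,   0.5 * T * T],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    Q = q * np.array([[T**5 / 20, T**4 / 8, T**3 / 6],
                      [T**4 / 8,  T**3 / 3, T**2 / 2],
                      [T**3 / 6,  T**2 / 2, T]])
    return F, Q

F, Q = wpa_matrices(T=0.1, q=1.0)
x = np.array([0.0, 10.0, 2.0])  # px, px/s, px/s^2
x_pred = F @ x                  # one-step state prediction
```

In the paper this motion model feeds a multi-Bernoulli filter, with measurement variance inflated for merged detections.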

  13. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy.

    PubMed

    Li, Ruijiang; Jia, Xun; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-06-01

    To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied on the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. The average relative image intensity error of the reconstructed volumetric images is 6.9% +/- 2.4%. The average 3D tumor localization error is 0.8 +/- 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17–0.35 s). The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image.
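
    The PCA representation step can be sketched numerically: stack the N-1 flattened DVFs as rows, extract principal components via SVD, and generate new DVFs as the mean plus a coefficient-weighted sum of components. The tiny DVFs below are made up, and the real pipeline additionally optimizes the coefficients so the computed projection of the warped reference image matches the measured x-ray.

```python
import numpy as np

# rows = N-1 flattened deformation vector fields from registration to a reference phase
dvfs = np.array([[0.0, 1.0, 0.0, 2.0],
                 [0.0, 2.0, 0.0, 4.0],
                 [0.0, 3.0, 0.0, 6.0]])

mean = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
k = 1                    # keep only the dominant eigen-DVF
basis = Vt[:k]           # principal components (eigenvectors of the DVF set)

def make_dvf(coeffs):
    """New DVF from PCA coefficients; varying coeffs spans plausible breathing states."""
    return mean + np.asarray(coeffs) @ basis

w = make_dvf([1.0])      # one synthesized deformation field
```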

  14. Using LabVIEW for real-time monitoring and tracking of multiple biological objects

    NASA Astrophysics Data System (ADS)

    Nikolskyy, Aleksandr I.; Krasilenko, Vladimir G.; Bilynsky, Yosyp Y.; Starovier, Anzhelika

    2017-04-01

    Today, real-time study and tracking of the movement dynamics of various biological objects is important and widely researched. Object features, visualization conditions, and model parameters strongly influence the choice of optimal methods and algorithms for a specific task. Therefore, to automate the adaptation of recognition and tracking algorithms, several LabVIEW tracker projects are considered in this article. The projects allow templates for training and retraining the system to be changed quickly, and they adapt to object speed and to the statistical characteristics of image noise. New functions for comparing images or their features, descriptors, and pre-processing methods are discussed. Experiments testing the trackers on real video files are presented and analyzed.

  15. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR

    PubMed Central

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-01-01

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each small element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents a complete 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868
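
    The per-grid filtering step can be illustrated with a textbook constant-velocity Kalman filter for a single grid cell along one axis; the paper's actual state model, grid geometry, and parameters may differ, and all values below are made up.

```python
import numpy as np

def kf_step(x, P, z, T=0.1, q=0.01, r=0.05):
    """One predict/update cycle of a constant-velocity Kalman filter.
    State x = [position, velocity]; z = associated position measurement."""
    F = np.array([[1.0, T], [0.0, 1.0]])  # constant-velocity transition
    H = np.array([[1.0, 0.0]])            # we observe position only
    Q = q * np.eye(2)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the associated measurement
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + r                   # innovation covariance
    K = P @ H.T / S                       # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for z in [0.1, 0.2, 0.3, 0.4]:            # cell drifting at ~1 m/s sampled at 10 Hz
    x, P = kf_step(x, P, np.array([z]))
```

In the full method, the estimated states of neighboring grids are then jointly smoothed to exploit the spatial correlation of motion.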

  16. Motion field estimation for a dynamic scene using a 3D LiDAR.

    PubMed

    Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington

    2014-09-09

    This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each small element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents a complete 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively.

  17. Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; Landsmeer, Sander; Kruszynski, Chris; van Antwerpen, Gert; Dijk, Judith

    2013-05-01

    The capability to track individuals in CCTV cameras is important for surveillance applications at large areas such as train stations, airports, and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. The system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields of view and have different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently: misses are reduced by 37%, a significant improvement.

  18. Simultaneous tumor and surrogate motion tracking with dynamic MRI for radiation therapy planning

    NASA Astrophysics Data System (ADS)

    Park, Seyoun; Farah, Rana; Shea, Steven M.; Tryggestad, Erik; Hales, Russell; Lee, Junghoon

    2018-01-01

    Respiration-induced tumor motion is a major obstacle for achieving high-precision radiotherapy of cancers in the thoracic and abdominal regions. Surrogate-based estimation and tracking methods are commonly used in radiotherapy, but with limited understanding of quantified correlation to tumor motion. In this study, we propose a method to simultaneously track the lung tumor and external surrogates to evaluate their spatial correlation in a quantitative way using dynamic MRI, which allows real-time acquisition without ionizing radiation exposure. To capture the lung and whole tumor, four MRI-compatible fiducials are placed on the patient’s chest and upper abdomen. Two different types of acquisitions are performed in the sagittal orientation including multi-slice 2D cine MRIs to reconstruct 4D-MRI and two-slice 2D cine MRIs to simultaneously track the tumor and fiducials. A phase-binned 4D-MRI is first reconstructed from multi-slice MR images using body area as a respiratory surrogate and groupwise registration. The 4D-MRI provides 3D template volumes for different breathing phases. 3D tumor position is calculated by 3D-2D template matching in which 3D tumor templates in the 4D-MRI reconstruction and the 2D cine MRIs from the two-slice tracking dataset are registered. 3D trajectories of the external surrogates are derived via matching a 3D geometrical model of the fiducials to their segmentations on the 2D cine MRIs. We tested our method on ten lung cancer patients. Using a correlation analysis, the 3D tumor trajectory demonstrates a noticeable phase mismatch and significant cycle-to-cycle motion variation, while the external surrogate was not sensitive enough to capture such variations. Additionally, there was significant phase mismatch between surrogate signals obtained from the fiducials at different locations.
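
    The correlation analysis described above, including the phase mismatch between tumor and surrogate, can be sketched as a lagged Pearson correlation: the lag that maximizes the correlation quantifies the phase offset in frames. The traces below are synthetic sinusoids, not patient data, and `best_lag_corr` is an illustrative helper, not the authors' analysis code.

```python
import numpy as np

def best_lag_corr(tumor, surrogate, max_lag):
    """Pearson correlation between tumor and surrogate traces when the
    surrogate is shifted forward by 0..max_lag frames; returns (lag, r)
    for the best-aligning lag."""
    best = (0, -1.0)
    for lag in range(max_lag + 1):
        a = tumor[:len(tumor) - lag] if lag else tumor
        b = surrogate[lag:]
        r = np.corrcoef(a, b)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

t = np.arange(200)
tumor = np.sin(2 * np.pi * t / 40.0)            # tumor SI motion, 40-frame period
surrogate = np.sin(2 * np.pi * (t - 5) / 40.0)  # surrogate delayed by 5 frames
lag, r = best_lag_corr(tumor, surrogate, max_lag=10)
```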

  19. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medical, sports, education, and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled through interface devices such as mice, joysticks, and MIDI sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end commercial human motion capture systems are expensive and complicated. In this paper, we propose a practical, fast motion capture system based on optical sensors, and link its data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  20. Web GIS in practice V: 3-D interactive and real-time mapping in Second Life

    PubMed Central

    Boulos, Maged N Kamel; Burden, David

    2007-01-01

    This paper describes technologies from Daden Limited for geographically mapping and accessing live news stories/feeds, as well as other real-time, real-world data feeds (e.g., Google Earth KML feeds and GeoRSS feeds) in the 3-D virtual world of Second Life, by plotting and updating the corresponding Earth location points on a globe or some other suitable form (in-world), and further linking those points to relevant information and resources. This approach enables users to visualise, interact with, and even walk or fly through, the plotted data in 3-D. Users can also do the reverse: put pins on a map in the virtual world, and then view the data points on the Web in Google Maps or Google Earth. The technologies presented thus serve as a bridge between mirror worlds like Google Earth and virtual worlds like Second Life. We explore the geo-data display potential of virtual worlds and their likely convergence with mirror worlds in the context of the future 3-D Internet or Metaverse, and reflect on the potential of such technologies and their future possibilities, e.g. their use to develop emergency/public health virtual situation rooms to effectively manage emergencies and disasters in real time. The paper also covers some of the issues associated with these technologies, namely user interface accessibility and individual privacy. PMID:18042275

  1. Laetoli's lost tracks: 3D generated mean shape and missing footprints.

    PubMed

    Bennett, M R; Reynolds, S C; Morse, S A; Budka, M

    2016-02-23

    The Laetoli site (Tanzania) contains the oldest known hominin footprints, and their interpretation remains open to debate, despite over 35 years of research. The two hominin trackways present are parallel to one another, one of which is a composite formed by at least two individuals walking in single file. Most researchers have focused on the single, clearly discernible G1 trackway while the G2/3 trackway has been largely dismissed due to its composite nature. Here we report the use of a new technique that allows us to decouple the G2 and G3 tracks for the first time. In so doing we are able to quantify the mean footprint topology of the G3 trackway and render it useable for subsequent data analyses. By restoring the effectively 'lost' G3 track, we have doubled the available data on some of the rarest traces directly associated with our Pliocene ancestors.

  2. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    PubMed Central

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-01-01

This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system uses a front-end web-based server to interact with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud- and GPU-based computing provides an efficient real-time target recognition and tracking approach compared to methods in which the workflow runs on central processing units (CPUs) alone. The simultaneous tracking and recognition results demonstrate that the GC-MTT approach provides drastically improved tracking at low frame rates under realistic conditions. PMID:28208684

  3. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units.

    PubMed

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-02-12

This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system uses a front-end web-based server to interact with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud- and GPU-based computing provides an efficient real-time target recognition and tracking approach compared to methods in which the workflow runs on central processing units (CPUs) alone. The simultaneous tracking and recognition results demonstrate that the GC-MTT approach provides drastically improved tracking at low frame rates under realistic conditions.
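The association step at the heart of any multiple-target tracker can be sketched as a greedy nearest-neighbour update (a minimal single-threaded stand-in; the GC-MTT system distributes this work across Hadoop and CUDA kernels, and all coordinates below are invented):

```python
import numpy as np

def associate(tracks, detections, gate=5.0):
    """Greedy nearest-neighbour data association.

    tracks:      dict track_id -> last known (x, y) position
    detections:  list of (x, y) detections in the current frame
    Returns an updated dict; unmatched detections start new tracks.
    """
    tracks = dict(tracks)
    unmatched = list(range(len(detections)))
    for tid, pos in list(tracks.items()):
        if not unmatched:
            break
        dists = [np.hypot(detections[i][0] - pos[0], detections[i][1] - pos[1])
                 for i in unmatched]
        j = int(np.argmin(dists))
        if dists[j] <= gate:  # only link detections inside the gating radius
            tracks[tid] = detections[unmatched.pop(j)]
    next_id = max(tracks, default=-1) + 1
    for i in unmatched:  # leftover detections become new tracks
        tracks[next_id] = detections[i]
        next_id += 1
    return tracks

tracks = {0: (10.0, 10.0), 1: (50.0, 50.0)}
tracks = associate(tracks, [(11.0, 10.5), (49.0, 51.0), (100.0, 100.0)])
print(tracks)  # targets 0 and 1 updated; (100, 100) spawns track 2
```

Real WAMI pipelines replace the greedy loop with global assignment (e.g., Hungarian matching) and add motion prediction, but the gating-and-linking structure is the same.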

  4. Positional Awareness Map 3D (PAM3D)

    NASA Technical Reports Server (NTRS)

    Hoffman, Monica; Allen, Earl L.; Yount, John W.; Norcross, April Louise

    2012-01-01

The Western Aeronautical Test Range of the National Aeronautics and Space Administration's Dryden Flight Research Center needed to address the aging software and hardware of its current situational awareness display application, the Global Real-Time Interactive Map (GRIM). GRIM was initially developed in the late 1980s and executes on older PC architectures using a Linux operating system that is no longer supported. Additionally, the software is difficult to maintain due to its complexity and loss of developer knowledge. It was decided that a replacement application must be developed or acquired in the near future. The replacement must provide the functionality of the original system, the ability to monitor test flight vehicles in real-time, and add improvements such as high resolution imagery and true 3-dimensional capability. This paper will discuss the process of determining the best approach to replace GRIM, and the functionality and capabilities of the first release of the Positional Awareness Map 3D.

  5. The development of a 4D treatment planning methodology to simulate the tracking of central lung tumors in an MRI-linac.

    PubMed

    Al-Ward, Shahad M; Kim, Anthony; McCann, Claire; Ruschin, Mark; Cheung, Patrick; Sahgal, Arjun; Keller, Brian M

    2018-01-01

    Targeting and tracking of central lung tumors may be feasible on the Elekta MRI-linac (MRL) due to the soft-tissue visualization capabilities of MRI. The purpose of this work is to develop a novel treatment planning methodology to simulate tracking of central lung tumors with the MRL and to quantify the benefits in OAR sparing compared with the ITV approach. Full 4D-CT datasets for five central lung cancer patients were selected to simulate the condition of having 4D-pseudo-CTs derived from 4D-MRI data available on the MRL with real-time tracking capabilities. We used the MRL treatment planning system to generate two plans: (a) with a set of MLC-defined apertures around the target at each phase of the breathing ("4D-MRL" method); (b) with a fixed set of fields encompassing the maximum inhale and exhale of the breathing cycle ("ITV" method). For both plans, dose accumulation was performed onto a reference phase. To further study the potential benefits of a 4D-MRL method, the results were stratified by tumor motion amplitude, OAR-to-tumor proximity, and the relative OAR motion (ROM). With the 4D-MRL method, the reduction in mean doses was up to 3.0 Gy and 1.9 Gy for the heart and the lung. Moreover, the lung's V12.5 Gy was spared by a maximum of 300 cc. Maximum doses to serial organs were reduced by up to 6.1 Gy, 1.5 Gy, and 9.0 Gy for the esophagus, spinal cord, and the trachea, respectively. OAR dose reduction with our method depended on the tumor motion amplitude and the ROM. Some OARs with large ROMs and in close proximity to the tumor benefited from tracking despite small tumor amplitudes. We developed a novel 4D tracking methodology for the MRL for central lung tumors and quantified the potential dosimetric benefits compared with our current ITV approach. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  6. Wireless physiological monitoring and ocular tracking: 3D calibration in a fully-immersive virtual health care environment.

    PubMed

    Zhang, Lelin; Chi, Yu Mike; Edelstein, Eve; Schulze, Jurgen; Gramann, Klaus; Velasquez, Alvaro; Cauwenberghs, Gert; Macagno, Eduardo

    2010-01-01

    Wireless physiological/neurological monitoring in virtual reality (VR) offers a unique opportunity for unobtrusively quantifying human responses to precisely controlled and readily modulated VR representations of health care environments. Here we present such a wireless, light-weight head-mounted system for measuring electrooculogram (EOG) and electroencephalogram (EEG) activity in human subjects interacting with and navigating in the Calit2 StarCAVE, a five-sided immersive 3-D visualization VR environment. The system can be easily expanded to include other measurements, such as cardiac activity and galvanic skin responses. We demonstrate the capacity of the system to track focus of gaze in 3-D and report a novel calibration procedure for estimating eye movements from responses to the presentation of a set of dynamic visual cues in the StarCAVE. We discuss cyber and clinical applications that include a 3-D cursor for visual navigation in VR interactive environments, and the monitoring of neurological and ocular dysfunction in vision/attention disorders.

  7. Assessment of Left Ventricular Myocardial Viability by 3-Dimensional Speckle-Tracking Echocardiography in Patients With Myocardial Infarction.

    PubMed

    Ran, Hong; Zhang, Ping-Yang; Zhang, You-Xiang; Zhang, Jian-Xin; Wu, Wen-Fang; Dong, Jing; Ma, Xiao-Wu

    2016-08-01

To determine whether 3-dimensional (3D) speckle-tracking echocardiography could provide a new way to assess myocardial viability in patients with myocardial infarction (MI). Forty-five patients with MI underwent routine echocardiography, 2-dimensional (2D) speckle-tracking echocardiography, and 3D speckle-tracking echocardiography. Radionuclide myocardial perfusion/metabolic imaging was used as a reference standard to define viable and nonviable myocardia. Among 720 myocardial segments in 45 patients, 368 showed abnormal motion on routine echocardiography; 204 of 368 were categorized as viable on single-photon emission computed tomography/positron emission tomography (SPECT/PET), whereas 164 were defined as nonviable; 300 normal segments on SPECT/PET among 352 segments without abnormal motion on routine echocardiography were categorized as a control group. The radial, longitudinal, 3D, and area strain on 3D speckle-tracking echocardiography had significant differences between control and nonviable groups (P < .001), whereas none of the parameters had significant differences between control and viable groups. There were no significant differences in circumferential, radial, and longitudinal peak systolic strain from 2D speckle-tracking echocardiography between viable and nonviable groups. Although there was no significant difference in circumferential strain between the groups, radial and longitudinal strain from 3D speckle-tracking echocardiography decreased significantly in the nonviable group. Moreover, 3D and area strain values were lower in the nonviable segments than the viable segments. By receiver operating characteristic analysis, radial strain from 3D speckle-tracking echocardiography with a cutoff of 11.1% had sensitivity of 95.1% and specificity of 53.4% for viable segments; longitudinal strain with a cutoff of 14.3% had sensitivity of 65.2% and specificity of 65.7%; 3D strain with a cutoff of 17.4% had sensitivity of 70.6% and specificity of 77.2%; and

  8. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery

    PubMed Central

    Rottmann, Joerg; Keall, Paul; Berbeco, Ross

    2013-01-01

    Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time. PMID:24007146
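A brute-force normalised cross-correlation search illustrates the basic idea of markerless template tracking on BEV images (a toy sketch on synthetic data; the authors' soft-tissue localization algorithm is more robust than plain NCC):

```python
import numpy as np

def ncc_track(frame, template):
    """Locate a template in a frame by normalised cross-correlation.

    Returns the (row, col) of the best-matching top-left corner.
    """
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_rc = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.linalg.norm(w) * tn
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

rng = np.random.default_rng(0)
frame = rng.random((40, 40))
template = frame[12:20, 18:26].copy()   # "tumour" patch cut from the frame
print(ncc_track(frame, template))       # -> (12, 18)
```

At 12.8 Hz on 512 x 384 images, a production tracker would restrict the search to a window around the last estimate and run the correlation in the Fourier domain or on a GPU.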

  9. Feathered Detectives: Real-Time GPS Tracking of Scavenging Gulls Pinpoints Illegal Waste Dumping.

    PubMed

    Navarro, Joan; Grémillet, David; Afán, Isabel; Ramírez, Francisco; Bouten, Willem; Forero, Manuela G

    2016-01-01

Urban waste impacts human and environmental health, and waste management has become one of the major challenges of humanity. Concurrently with new directives intended to manage this human by-product, illegal dumping has become one of the most lucrative activities of organized crime. Beyond economic fraud, illegal waste disposal strongly enhances uncontrolled dissemination of human pathogens, pollutants and invasive species. Here, we demonstrate the potential of novel real-time GPS tracking of scavenging species to detect environmental crime. Specifically, we were able to detect illegal activities at an officially closed dump, which was visited recurrently by 5 of 19 GPS-tracked yellow-legged gulls (Larus michahellis). In comparison with conventional land-based surveys, GPS tracking allows a much wider and cost-efficient spatiotemporal coverage, even of the most hazardous sites, while GPS data accessibility through the internet enables rapid intervention. Our results suggest that multi-species guilds of feathered detectives equipped with GPS and cameras could help fight illegal dumping at continental scales. We encourage further experimental studies, to infer waste detection thresholds in gulls and other scavenging species exploiting human waste dumps.
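Flagging recurrent visits to a known site from GPS fixes reduces to a distance test against each fix; a minimal sketch with the haversine formula and invented coordinates:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in kilometres."""
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def visits_near(fixes, site, radius_km=0.5):
    """Count GPS fixes falling within radius_km of a suspected dump site."""
    return sum(1 for lat, lon in fixes if haversine_km(lat, lon, *site) <= radius_km)

site = (37.25, -6.05)                       # hypothetical closed dump
fixes = [(37.2501, -6.0502), (37.3, -6.1), (37.2499, -6.0497)]
print(visits_near(fixes, site))             # -> 2
```

A real analysis would cluster dwell time rather than raw fix counts, but the proximity test above is the basic building block.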

  10. Handheld pose tracking using vision-inertial sensors with occlusion handling

    NASA Astrophysics Data System (ADS)

    Li, Juan; Slembrouck, Maarten; Deboeverie, Francis; Bernardos, Ana M.; Besada, Juan A.; Veelaert, Peter; Aghajan, Hamid; Casar, José R.; Philips, Wilfried

    2016-07-01

Tracking of a handheld device's three-dimensional (3-D) position and orientation is fundamental to various application domains, including augmented reality (AR), virtual reality, and interaction in smart spaces. Existing systems still offer limited performance in terms of accuracy, robustness, computational cost, and ease of deployment. We present a low-cost, accurate, and robust system for handheld pose tracking using fused vision and inertial data. The integration of measurements from embedded accelerometers reduces the number of unknown parameters in the six-degree-of-freedom pose calculation. The proposed system requires two light-emitting diode (LED) markers to be attached to the device, which are tracked by external cameras using an algorithm robust to illumination changes. Three data fusion methods are proposed: a triangulation-based stereo-vision system, a constraint-based stereo-vision system with occlusion handling, and a triangulation-based multivision system. Real-time demonstrations of the proposed system applied to AR and 3-D gaming are also included. The accuracy of the proposed system is assessed by comparison with data generated by the state-of-the-art commercial motion-tracking system OptiTrack. Experimental results show that the proposed system achieves an accuracy of a few centimeters in position estimation and a few degrees in orientation estimation.
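The triangulation-based fusion relies on recovering a 3-D point from two calibrated camera views; a standard linear (DLT) triangulation sketch with invented camera parameters:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3-D point from two pixel observations.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points.
    Solves A X = 0 in homogeneous coordinates via SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose and a 1-unit baseline along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.2, -0.1, 4.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)  # ~ [0.2, -0.1, 4.0]
```

With two LED markers triangulated this way and gravity from the accelerometer, the remaining rotation about the marker axis is the only unresolved degree of freedom.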

  11. RPC based 5D tracking concept for high multiplicity tracking trigger

    NASA Astrophysics Data System (ADS)

    Aielli, G.; Camarri, P.; Cardarelli, R.; Di Ciaccio, A.; Distante, L.; Liberti, B.; Paolozzi, L.; Pastori, E.; Santonico, R.

    2017-01-01

The recently approved High Luminosity LHC project (HL-LHC) and the proposed future colliders present a challenging experimental scenario, dominated by high pileup, radiation background and a bunch crossing time possibly shorter than 5 ns. This holds as well for muon systems, where RPCs can play a fundamental role in the design of future experiments. Thanks to their high space-time granularity, RPCs allow a sparse representation of the particle hits in a very large parametric space containing, in addition to 3D spatial localization, also the pulse time and width associated with the avalanche charge. This 5D representation of the hits can be exploited to improve the performance of complex detectors such as muon systems and increase the discovery potential of a future experiment, by allowing better track pileup rejection and sharper momentum resolution, an effective measurement of the particle velocity to tag and trigger non-ultrarelativistic particles, and the unambiguous detection of multiple tracks in close proximity. Moreover, because the RPC response is fast, typically of the order of a few ns, this information can be provided promptly to the lowest level trigger. We discuss theoretically and experimentally the principles and performance of this original method.
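The velocity measurement enabled by precise hit timing is simple in principle: two time-stamped hits on layers a known distance apart give beta = L / (c Δt). A minimal sketch with invented numbers:

```python
# Speed measurement from time-stamped hits on two detector layers, the idea
# behind using RPC time resolution to tag non-ultrarelativistic particles.
C = 299_792_458.0  # speed of light, m/s

def beta_from_hits(path_m, t1_ns, t2_ns):
    """v/c from the flight time between two layers separated by path_m metres."""
    dt_s = (t2_ns - t1_ns) * 1e-9
    return path_m / dt_s / C

# A particle crossing a 1.5 m lever arm in 10 ns travels at about half light speed.
print(round(beta_from_hits(1.5, 0.0, 10.0), 3))
```

An ultrarelativistic particle would cover the same 1.5 m in about 5 ns, so ns-level timing is comfortably enough to separate the two populations at trigger level.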

  12. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion.

    PubMed

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H; Meeks, Sanford L; Kupelian, Patrick A

    2010-09-07

In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan generated in the Pinnacle Treatment Planning System (Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and of the surrounding tissues was simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.
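The dose summation idea, accumulating the per-step dose onto tissue coordinates as the target shifts, can be sketched in 1-D (a toy illustration; the actual framework works on deforming 3D volumes on the GPU):

```python
import numpy as np

def accumulate_dose(dose_per_step, shifts, grid_len):
    """Sum dose delivered to a 1-D tissue grid as the target shifts each step.

    dose_per_step: dose profile delivered in beam coordinates each motion step
    shifts:        integer target displacement (in voxels) at each step
    Returns the accumulated dose in tissue coordinates.
    """
    total = np.zeros(grid_len)
    for s in shifts:
        # shift the beam-frame dose into the tissue frame, then accumulate
        total += np.roll(np.pad(dose_per_step, (0, grid_len - len(dose_per_step))), s)
    return total

beam = np.array([1.0, 2.0, 1.0])          # small dose kernel delivered per step
shifts = [0, 1, 2, 1, 0]                  # breathing-like back-and-forth motion
print(accumulate_dose(beam, shifts, 8))
```

The blurring of the accumulated profile relative to the static kernel is exactly the motion-induced dose smearing the framework quantifies per breathing phase.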

  13. Atmospheric Motion Vectors from INSAT-3D: Initial quality assessment and its impact on track forecast of cyclonic storm NANAUK

    NASA Astrophysics Data System (ADS)

    Deb, S. K.; Kishtawal, C. M.; Kumar, Prashant; Kiran Kumar, A. S.; Pal, P. K.; Kaushik, Nitesh; Sangar, Ghansham

    2016-03-01

The advanced Indian meteorological geostationary satellite INSAT-3D was launched on 26 July 2013 with an improved imager and an infrared sounder, and is placed at 82°E over the Indian Ocean region. Advances in the retrieval techniques for different atmospheric parameters, together with the improved imager data, have enhanced the scope for better understanding of tropical atmospheric processes over this region. The retrieval technique and accuracy of one such parameter, Atmospheric Motion Vectors (AMVs), have improved significantly with the availability of higher spatial resolution data and more spectral channel options in the INSAT-3D imager. The present work is mainly focused on providing brief descriptions of INSAT-3D data and the AMV derivation process using these data. It also discusses an initial quality assessment of INSAT-3D AMVs for a six-month period from 01 February 2014 to 31 July 2014 against other independent observations: i) Meteosat-7 AMVs available over this region, ii) in-situ radiosonde wind measurements, iii) cloud-tracked winds from the Multi-angle Imaging Spectro-Radiometer (MISR) and iv) numerical model analysis. This study shows that the quality of the newly derived INSAT-3D AMVs is comparable with the two existing versions of Meteosat-7 AMVs over this region. To demonstrate an initial application, INSAT-3D AMVs were assimilated in the Weather Research and Forecasting (WRF) model, and the assimilation of the newly derived AMVs helped reduce track forecast errors for the recent cyclonic storm NANAUK over the Arabian Sea. Although the present study is limited to one case, it provides some guidance to operational agencies for implementing this new AMV dataset in future Numerical Weather Prediction (NWP) applications over the South Asia region.
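AMV derivation ultimately rests on tracking a cloud patch between successive images; a minimal exhaustive-search sketch on synthetic data (operational retrievals add quality control, height assignment, and much more):

```python
import numpy as np

def amv_displacement(img0, img1, top, left, size):
    """Track a cloud patch between two images by exhaustive matching.

    Returns the (drow, dcol) displacement minimising the sum of squared
    differences for the patch cut from img0 at (top, left). Dividing by the
    frame interval and multiplying by pixel size would give a wind vector.
    """
    patch = img0[top:top + size, left:left + size]
    best, best_d = np.inf, (0, 0)
    for r in range(img1.shape[0] - size + 1):
        for c in range(img1.shape[1] - size + 1):
            diff = img1[r:r + size, c:c + size] - patch
            score = (diff * diff).sum()
            if score < best:
                best, best_d = score, (r - top, c - left)
    return best_d

rng = np.random.default_rng(1)
img0 = rng.random((30, 30))
img1 = np.roll(img0, (2, 3), axis=(0, 1))   # whole scene drifts 2 px down, 3 px right
print(amv_displacement(img0, img1, 10, 10, 8))  # -> (2, 3)
```

A displacement of (2, 3) pixels over a 30-minute image pair at 4 km/pixel, for instance, would correspond to a wind of roughly 8 m/s.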

  14. ESC-Track: A computer workflow for 4-D segmentation, tracking, lineage tracing and dynamic context analysis of ESCs.

    PubMed

    Fernández-de-Manúel, Laura; Díaz-Díaz, Covadonga; Jiménez-Carretero, Daniel; Torres, Miguel; Montoya, María C

    2017-05-01

    Embryonic stem cells (ESCs) can be established as permanent cell lines, and their potential to differentiate into adult tissues has led to widespread use for studying the mechanisms and dynamics of stem cell differentiation and exploring strategies for tissue repair. Imaging live ESCs during development is now feasible due to advances in optical imaging and engineering of genetically encoded fluorescent reporters; however, a major limitation is the low spatio-temporal resolution of long-term 3-D imaging required for generational and neighboring reconstructions. Here, we present the ESC-Track (ESC-T) workflow, which includes an automated cell and nuclear segmentation and tracking tool for 4-D (3-D + time) confocal image data sets as well as a manual editing tool for visual inspection and error correction. ESC-T automatically identifies cell divisions and membrane contacts for lineage tree and neighborhood reconstruction and computes quantitative features from individual cell entities, enabling analysis of fluorescence signal dynamics and tracking of cell morphology and motion. We use ESC-T to examine Myc intensity fluctuations in the context of mouse ESC (mESC) lineage and neighborhood relationships. ESC-T is a powerful tool for evaluation of the genealogical and microenvironmental cues that maintain ESC fitness.
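Frame-to-frame linking with division detection, one building block of such a lineage-tracing workflow, can be sketched with nearest-neighbour assignment (toy centroids; ESC-T's actual segmentation and tracking are far more involved):

```python
import numpy as np

def link_frames(prev, curr, max_dist=10.0):
    """Link cell centroids in consecutive frames and detect divisions.

    Each current-frame cell is assigned to its nearest previous-frame cell
    within max_dist. Two daughters claiming the same parent are recorded
    as a division event, extending the lineage tree.
    """
    links = {}
    for j, c in enumerate(curr):
        dists = [np.hypot(c[0] - p[0], c[1] - p[1]) for p in prev]
        i = int(np.argmin(dists))
        if dists[i] <= max_dist:
            links.setdefault(i, []).append(j)
    divisions = [i for i, kids in links.items() if len(kids) == 2]
    return links, divisions

prev = [(0.0, 0.0), (20.0, 20.0)]
curr = [(1.0, -1.0), (2.0, 1.0), (21.0, 20.0)]   # cell 0 has divided
links, divisions = link_frames(prev, curr)
print(links, divisions)  # {0: [0, 1], 1: [2]} [0]
```

Chaining such links over all time points yields the lineage trees, onto which per-cell fluorescence features (e.g., Myc intensity) can then be mapped.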

  15. Active contour configuration model for estimating the posterior ablative margin in image fusion of real-time ultrasound and 3D ultrasound or magnetic resonance images for radiofrequency ablation: an experimental study.

    PubMed

    Lee, Junkyo; Lee, Min Woo; Choi, Dongil; Cha, Dong Ik; Lee, Sunyoung; Kang, Tae Wook; Yang, Jehoon; Jo, Jaemoon; Bang, Won-Chul; Kim, Jongsik; Shin, Dongkuk

    2017-12-21

    The purpose of this study was to evaluate the accuracy of an active contour model for estimating the posterior ablative margin in images obtained by the fusion of real-time ultrasonography (US) and 3-dimensional (3D) US or magnetic resonance (MR) images of an experimental tumor model for radiofrequency ablation. Chickpeas (n=12) and bovine rump meat (n=12) were used as an experimental tumor model. Grayscale 3D US and T1-weighted MR images were pre-acquired for use as reference datasets. US and MR/3D US fusion was performed for one group (n=4), and US and 3D US fusion only (n=8) was performed for the other group. Half of the models in each group were completely ablated, while the other half were incompletely ablated. Hyperechoic ablation areas were extracted using an active contour model from real-time US images, and the posterior margin of the ablation zone was estimated from the anterior margin. After the experiments, the ablated pieces of bovine rump meat were cut along the electrode path and the cut planes were photographed. The US images with the estimated posterior margin were compared with the photographs and post-ablation MR images. The extracted contours of the ablation zones from 12 US fusion videos and post-ablation MR images were also matched. In the four models fused under real-time US with MR/3D US, compression from the transducer and the insertion of an electrode resulted in misregistration between the real-time US and MR images, making the estimation of the ablation zones less accurate than was achieved through fusion between real-time US and 3D US. Eight of the 12 post-ablation 3D US images were graded as good when compared with the sectioned specimens, and 10 of the 12 were graded as good in a comparison with nicotinamide adenine dinucleotide staining and histopathologic results. Estimating the posterior ablative margin using an active contour model is a feasible way of predicting the ablation area, and US/3D US fusion was more accurate than US

  16. Precise Head Tracking in Hearing Applications

    NASA Astrophysics Data System (ADS)

    Helle, A. M.; Pilinski, J.; Luhmann, T.

    2015-05-01

The paper gives an overview of two research projects, both dealing with optical head tracking in hearing applications. As part of the project "Development of a real-time low-cost tracking system for medical and audiological problems (ELCoT)" a cost-effective single camera 3D tracking system has been developed which enables the detection of arm and head movements of human patients. Amongst others, the measuring system is designed for a new hearing test (based on the "Mainzer Kindertisch"), which analyzes the directional hearing capabilities of children in cooperation with the research project ERKI (Evaluation of acoustic sound source localization for children). As part of the research project framework "Hearing in everyday life (HALLO)" a stereo tracking system is being used for analyzing the head movement of human patients during complex acoustic events. Together with the consideration of biosignals like skin conductance the speech comprehension and listening effort of persons with reduced hearing ability, especially in situations with background noise, is evaluated. For both projects the system design, accuracy aspects and results of practical tests are discussed.

  17. Real-Time 3D Ultrasound for Physiological Monitoring 22258.

    DTIC Science & Technology

    1999-10-01

their software to acquire positioning information using a high precision mechanical arm (MicroScribe arm from Immersion Corp., San Jose, CA) instead of...mechanical arm (Immersion MicroScribe™) for 3D data acquisition, also adopted by EchoTech for 3D FreeScan. • Medical quality video capture by a...MHz Dell Dimension XPS computer (under desk), MUSTPAC-2 Virtual Ultrasound Probe based on the MicroScribe 3D articulated arm (on table

  18. Real-Time Tracking of Selective Auditory Attention From M/EEG: A Bayesian Filtering Approach

    PubMed Central

    Miran, Sina; Akram, Sahar; Sheikhattar, Alireza; Simon, Jonathan Z.; Zhang, Tao; Babadi, Behtash

    2018-01-01

    Humans are able to identify and track a target speaker amid a cacophony of acoustic interference, an ability which is often referred to as the cocktail party phenomenon. Results from several decades of studying this phenomenon have culminated in recent years in various promising attempts to decode the attentional state of a listener in a competing-speaker environment from non-invasive neuroimaging recordings such as magnetoencephalography (MEG) and electroencephalography (EEG). To this end, most existing approaches compute correlation-based measures by either regressing the features of each speech stream to the M/EEG channels (the decoding approach) or vice versa (the encoding approach). To produce robust results, these procedures require multiple trials for training purposes. Also, their decoding accuracy drops significantly when operating at high temporal resolutions. Thus, they are not well-suited for emerging real-time applications such as smart hearing aid devices or brain-computer interface systems, where training data might be limited and high temporal resolutions are desired. In this paper, we close this gap by developing an algorithmic pipeline for real-time decoding of the attentional state. Our proposed framework consists of three main modules: (1) Real-time and robust estimation of encoding or decoding coefficients, achieved by sparse adaptive filtering, (2) Extracting reliable markers of the attentional state, and thereby generalizing the widely-used correlation-based measures thereof, and (3) Devising a near real-time state-space estimator that translates the noisy and variable attention markers to robust and statistically interpretable estimates of the attentional state with minimal delay. Our proposed algorithms integrate various techniques including forgetting factor-based adaptive filtering, ℓ1-regularization, forward-backward splitting algorithms, fixed-lag smoothing, and Expectation Maximization. We validate the performance of our proposed

  19. Real-Time Tracking of Selective Auditory Attention From M/EEG: A Bayesian Filtering Approach.

    PubMed

    Miran, Sina; Akram, Sahar; Sheikhattar, Alireza; Simon, Jonathan Z; Zhang, Tao; Babadi, Behtash

    2018-01-01

Humans are able to identify and track a target speaker amid a cacophony of acoustic interference, an ability which is often referred to as the cocktail party phenomenon. Results from several decades of studying this phenomenon have culminated in recent years in various promising attempts to decode the attentional state of a listener in a competing-speaker environment from non-invasive neuroimaging recordings such as magnetoencephalography (MEG) and electroencephalography (EEG). To this end, most existing approaches compute correlation-based measures by either regressing the features of each speech stream to the M/EEG channels (the decoding approach) or vice versa (the encoding approach). To produce robust results, these procedures require multiple trials for training purposes. Also, their decoding accuracy drops significantly when operating at high temporal resolutions. Thus, they are not well-suited for emerging real-time applications such as smart hearing aid devices or brain-computer interface systems, where training data might be limited and high temporal resolutions are desired. In this paper, we close this gap by developing an algorithmic pipeline for real-time decoding of the attentional state. Our proposed framework consists of three main modules: (1) Real-time and robust estimation of encoding or decoding coefficients, achieved by sparse adaptive filtering, (2) Extracting reliable markers of the attentional state, and thereby generalizing the widely-used correlation-based measures thereof, and (3) Devising a near real-time state-space estimator that translates the noisy and variable attention markers to robust and statistically interpretable estimates of the attentional state with minimal delay. Our proposed algorithms integrate various techniques including forgetting factor-based adaptive filtering, ℓ1-regularization, forward-backward splitting algorithms, fixed-lag smoothing, and Expectation Maximization. We validate the performance of our proposed
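The flavour of the forgetting-factor filtering plus fixed-lag smoothing modules can be conveyed with a scalar toy version (invented marker values; the paper's method uses a full Bayesian state-space model rather than this simple running mean):

```python
import numpy as np

def smooth_markers(markers, lam=0.9, lag=3):
    """Turn noisy per-window attention markers into a lagged, smoothed estimate.

    markers: sequence of scalar markers (>0 favours speaker A, <0 speaker B).
    A forgetting factor lam gives an exponentially weighted running mean,
    and a fixed lag trades a short delay for extra stability.
    """
    out, state = [], 0.0
    for m in markers:
        state = lam * state + (1 - lam) * m   # forgetting-factor update
        out.append(state)
    # fixed-lag step: report the average of the last `lag` filtered values
    return [np.mean(out[max(0, k - lag + 1):k + 1]) for k in range(len(out))]

markers = [1.0, 0.8, -0.2, 1.1, 0.9, 1.0]   # noisy evidence for speaker A
est = smooth_markers(markers)
print([round(e, 3) for e in est])
```

Even with one contradictory window, the smoothed estimate stays positive, illustrating why such temporal integration is essential at high temporal resolution.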

  20. A navigation system for flexible endoscopes using abdominal 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Hoffmann, R.; Kaar, M.; Bathia, Amon; Bathia, Amar; Lampret, A.; Birkfellner, W.; Hummel, J.; Figl, M.

    2014-09-01

A navigation system for flexible endoscopes equipped with ultrasound (US) scan heads is presented. In contrast to similar systems, abdominal 3D-US is used for image fusion of the pre-interventional computed tomography (CT) to the endoscopic US. A 3D-US scan, tracked with an optical tracking system (OTS), is taken pre-operatively together with the CT scan. The CT is calibrated using the OTS, providing the transformation from CT to 3D-US. Immediately before intervention a 3D-US tracked with an electromagnetic tracking system (EMTS) is acquired and registered intra-modally to the preoperative 3D-US. The endoscopic US is calibrated using the EMTS and registered to the pre-operative CT by an intra-modal 3D-US/3D-US registration. Phantom studies showed a registration error for the US to CT registration of 5.1 mm ± 2.8 mm. 3D-US/3D-US registration of patient data gave an error of 4.1 mm compared to 2.8 mm with the phantom. From this we estimate an error of 5.6 mm for patient experiments.
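The registration chain amounts to composing rigid transforms: a CT point reaches endoscopic-US coordinates via CT to pre-op 3D-US, pre-op to intra-op 3D-US, then intra-op 3D-US to endoscopic US. A sketch with invented (purely translational) calibration results:

```python
import numpy as np

def T(R=np.eye(3), t=(0.0, 0.0, 0.0)):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Hypothetical calibration/registration results in the chain described above:
T_ct_us3d   = T(t=(5.0, 0.0, 0.0))    # CT -> pre-operative 3D-US (OTS calibration)
T_pre_intra = T(t=(0.0, -2.0, 0.0))   # pre-op 3D-US -> intra-op 3D-US registration
T_us3d_endo = T(t=(0.0, 0.0, 1.5))    # intra-op 3D-US -> endoscopic US (EMTS calib.)

# Composing the chain maps a CT point into endoscopic-US coordinates.
T_ct_endo = T_us3d_endo @ T_pre_intra @ T_ct_us3d
p_ct = np.array([10.0, 20.0, 30.0, 1.0])
print(T_ct_endo @ p_ct)  # -> [15, 18, 31.5, 1]
```

The reported errors compound along exactly this chain, which is why the phantom registration error (5.1 mm) and the patient 3D-US/3D-US error (4.1 mm) combine into the 5.6 mm patient estimate.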

  1. A New Approach to Time-Resolved 3D-PTV

    NASA Astrophysics Data System (ADS)

    Boomsma, Aaron; Troolin, Dan; Bjorkquist, Dan; TSI Inc Team

    2017-11-01

    Volumetric three-component velocimetry via particle tracking is a powerful alternative to TomoPIV. It has been thoroughly documented that, compared to TomoPIV, particle tracking velocimetry (PTV) methods (whether 2D or 3D) better resolve regions of high velocity gradient, identify fewer ghost particles, and are less computationally demanding, which results in shorter processing times. Recently, 3D-PTV has seen renewed interest in the PIV community with the availability of time-resolved data. Advances in hardware are partly to thank for that availability: higher-speed cameras, more effective memory management, and higher-speed lasers. But in software, algorithms that use time-resolved data to improve 3D particle reconstruction and particle tracking are also under active development (e.g. shake-the-box, neighbor tracking reconstruction). In the current study, we present a new 3D-PTV method that incorporates time-resolved data. We detail the method, its performance in terms of particle identification and reconstruction error and their relation to varying seeding densities, as well as its computational performance.

  2. True-3D Strain Mapping for Assessment of Material Deformation by Synchrotron X-Ray Microtomography

    NASA Astrophysics Data System (ADS)

    Ahn, J. J.; Toda, H.; Niinomi, M.; Kobayashi, T.; Akahori, T.; Uesugi, K.

    2005-04-01

    Downsizing of products with complex shapes has been accelerated by the rapid development of electro-device manufacturing technology; micro-electromechanical systems (MEMS) are one typical example. 3D strain measurement of such miniature products is needed to ensure their reliability. In the present study, as a preliminary trial, the 3D tensile deformation behavior of a pure aluminum wire is examined using the synchrotron X-ray microtomography technique at SPring-8, Japan. A multipurpose in-situ tester is used to investigate the real-time tensile deformation behavior of the Al wire. Tensile tests are carried out at strokes of 0, 0.005, 0.01 and 0.015 mm. The method measures 3D local deformation in a region of interest by tracking the relative movement of a pair of particles at each point. The local deformation behavior of the Al wire is found to differ from the macroscopic deformation behavior, which may be closely associated with the underlying microstructure.
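
    The pair-tracking strain measurement described above reduces to comparing particle separations before and after loading; a minimal sketch (function name and units are illustrative, not from the paper) is:

```python
import numpy as np

def local_strain(p1_before, p2_before, p1_after, p2_after):
    """Normal strain along a particle pair from its change in separation.

    Each argument is a 3D coordinate of a tracked particle (e.g. in
    micrometres, from the tomographic reconstruction).
    Returns the engineering strain (dimensionless) along the pair axis.
    """
    l0 = np.linalg.norm(np.asarray(p2_before) - np.asarray(p1_before))
    l1 = np.linalg.norm(np.asarray(p2_after) - np.asarray(p1_after))
    return (l1 - l0) / l0
```

    Evaluating this over many pairs throughout the volume yields the local 3D strain field that can then be compared against the macroscopic stroke.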

  3. True-3D Strain Mapping for Assessment of Material Deformation by Synchrotron X-Ray Microtomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, J.J.; Toda, H.; Niinomi, M.

    2005-04-09

    Downsizing of products with complex shapes has been accelerated by the rapid development of electro-device manufacturing technology; micro-electromechanical systems (MEMS) are one typical example. 3D strain measurement of such miniature products is needed to ensure their reliability. In the present study, as a preliminary trial, the 3D tensile deformation behavior of a pure aluminum wire is examined using the synchrotron X-ray microtomography technique at SPring-8, Japan. A multipurpose in-situ tester is used to investigate the real-time tensile deformation behavior of the Al wire. Tensile tests are carried out at strokes of 0, 0.005, 0.01 and 0.015 mm. The method measures 3D local deformation in a region of interest by tracking the relative movement of a pair of particles at each point. The local deformation behavior of the Al wire is found to differ from the macroscopic deformation behavior, which may be closely associated with the underlying microstructure.

  4. The first clinical implementation of a real-time six degree of freedom target tracking system during radiation therapy based on Kilovoltage Intrafraction Monitoring (KIM).

    PubMed

    Nguyen, Doan Trang; O'Brien, Ricky; Kim, Jung-Ha; Huang, Chen-Yu; Wilton, Lee; Greer, Peter; Legge, Kimberley; Booth, Jeremy T; Poulsen, Per Rugaard; Martin, Jarad; Keall, Paul J

    2017-04-01

    We present the first clinical implementation of a real-time six degree-of-freedom (6DoF) Kilovoltage Intrafraction Monitoring (KIM) system, which tracks the translational and rotational motion of the cancer target during treatment. The method was applied to measure and correct for target motion during stereotactic body radiotherapy (SBRT) for prostate cancer. Patient: A patient with prostate adenocarcinoma undergoing SBRT with 36.25 Gy delivered in 5 fractions was enrolled in the study. 6DoF KIM technology: 2D positions of three implanted gold markers in each of the kV images (125 kV, 10 mA, at 11 Hz) were acquired continuously during treatment. The 2D→3D target position estimation was based on a probability distribution function. The 3D→6DoF target rotation was calculated using an iterative closest point algorithm. The accuracy and precision of the KIM method were measured by comparing the real-time results with kV-MV triangulation. Of the five treatment fractions, KIM was utilised successfully in four. The intrafraction prostate motion resulted in three couch shifts in two fractions, when the prostate motion exceeded the pre-set action threshold of 2 mm for more than 5 s. KIM translational accuracy and precision were 0.3±0.6 mm, -0.2±0.3 mm and 0.2±0.7 mm in the Left-Right (LR), Superior-Inferior (SI) and Anterior-Posterior (AP) directions, respectively. The KIM rotational accuracy and precision were 0.8°±2.0°, -0.5°±3.3° and 0.3°±1.6° in the roll, pitch and yaw directions, respectively. This treatment represents, to the best of our knowledge, the first time a cancer patient's tumour position and rotation have been monitored in real-time during treatment. The 6DoF KIM system has sub-millimetre accuracy and precision in all three translational axes, and less than 1° accuracy and 4° precision in all three rotational axes. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary widely across platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle

  6. Efficient feature-based 2D/3D registration of transesophageal echocardiography to x-ray fluoroscopy for cardiac interventions

    NASA Astrophysics Data System (ADS)

    Hatt, Charles R.; Speidel, Michael A.; Raval, Amish N.

    2014-03-01

    We present a novel 2D/3D registration algorithm for fusion between transesophageal echocardiography (TEE) and X-ray fluoroscopy (XRF). The TEE probe is modeled as a subset of 3D gradient and intensity point features, which facilitates efficient 3D-to-2D perspective projection. A novel cost function, based on a combination of intensity and edge features, evaluates the registration cost value without the need for time-consuming generation of digitally reconstructed radiographs (DRRs). Validation experiments were performed with simulations and phantom data. For simulations, in silico XRF images of a TEE probe were generated in a number of different pose configurations using a previously acquired CT image. Random misregistrations were applied, and our method was used to recover the TEE probe pose and compare the result to the ground truth. Phantom experiments were performed by attaching fiducial markers externally to a TEE probe, imaging the probe with an interventional cardiac angiographic x-ray system, and comparing the pose estimated from the external markers to that estimated from the TEE probe using our algorithm. Simulations found a 3D target registration error of 1.08 (1.92) mm for biplane (monoplane) geometries, while the phantom experiment found a 2D target registration error of 0.69 mm. For phantom experiments, we demonstrated a monoplane tracking frame-rate of 1.38 fps. The proposed feature-based registration method is computationally efficient, resulting in accurate, near real-time image-based registration between TEE and XRF.

  7. Evaluation of an image-based tracking workflow with Kalman filtering for automatic image plane alignment in interventional MRI.

    PubMed

    Neumann, M; Cuvillon, L; Breton, E; de Matheli, M

    2013-01-01

    Recently, a workflow for magnetic resonance (MR) image plane alignment based on tracking in real-time MR images was introduced. The workflow is based on a tracking device composed of 2 resonant micro-coils and a passive marker, and allows for tracking of the passive marker in clinical real-time images and automatic (re-)initialization using the micro-coils. As the Kalman filter has proven its benefit as an estimator and predictor, it is well suited for use in tracking applications. In this paper, a Kalman filter is integrated into the previously developed workflow in order to predict the position and orientation of the tracking device. The measurement noise covariances of the Kalman filter are changed dynamically in order to take into account that, depending on the image plane orientation, only a subset of the 3D pose components is available. The improved tracking performance of the Kalman-extended workflow was quantified in simulation. A first experiment in the MRI scanner was also performed, though without quantitative results yet.
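
    The trick of inflating the measurement noise for pose components the current image plane cannot observe can be sketched with a generic linear Kalman step (identity state and measurement models; all matrices here are illustrative assumptions, not the paper's tuning):

```python
import numpy as np

def kalman_step(x, P, z, Q, R):
    """One predict/update cycle of a linear Kalman filter with a
    constant-position model (F = H = I).

    x, P : current state estimate and covariance
    z    : measurement of the pose components
    Q    : process noise covariance
    R    : measurement noise covariance for THIS step; components that the
           current image plane cannot observe get a very large variance,
           so the update effectively ignores them.
    """
    n = x.size
    # predict
    x_pred = x
    P_pred = P + Q
    # update
    S = P_pred + R                      # innovation covariance
    K = P_pred @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - x_pred)
    P_new = (np.eye(n) - K) @ P_pred
    return x_new, P_new
```

    With this structure, an unobservable component keeps its predicted value and retains a large covariance until an image plane that measures it comes around.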

  8. Real-time Avatar Animation from a Single Image.

    PubMed

    Saragih, Jason M; Lucey, Simon; Cohn, Jeffrey F

    2011-01-01

    A real-time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames per second), and requires only a single image of the avatar and user. The user's facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person-specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters.

  9. Real-time Avatar Animation from a Single Image

    PubMed Central

    Saragih, Jason M.; Lucey, Simon; Cohn, Jeffrey F.

    2014-01-01

    A real-time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames per second), and requires only a single image of the avatar and user. The user’s facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person-specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters. PMID:24598812

  10. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    PubMed

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
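
    A toy version of fusing the bottom-up object saliency map with top-down context can illustrate the idea (the linear weighting and all names below are simplifying assumptions; the paper's actual fusion is more elaborate):

```python
def attended_object(bottom_up, top_down, weight=0.5):
    """Pick the most plausibly attended object from two cue maps.

    bottom_up : dict object_id -> stimulus-driven saliency in [0, 1]
    top_down  : dict object_id -> goal-directed weight in [0, 1]
                (e.g. inferred from the user's navigation behavior)
    weight    : contribution of the top-down term.
    Returns the object id with the highest combined score.
    """
    scores = {k: (1 - weight) * bottom_up[k] + weight * top_down.get(k, 0.0)
              for k in bottom_up}
    return max(scores, key=scores.get)
```

    A merely salient object can thus be overridden by one the user's behavior suggests is task-relevant, which is the effect the user experiment credits for the accuracy gain.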

  11. Laser vision seam tracking system based on image processing and continuous convolution operator tracker

    NASA Astrophysics Data System (ADS)

    Zou, Yanbiao; Chen, Tao

    2018-06-01

    To address the problem of low welding precision caused by the poor real-time tracking performance of common welding robots, a novel seam tracking system with excellent real-time tracking performance and high accuracy is designed based on morphological image processing and the continuous convolution operator tracker (CCOT) object tracking algorithm. The system consists of a six-axis welding robot, a line laser sensor, and an industrial computer. This work also studies the measurement principle involved in the designed system. Through the CCOT algorithm, weld feature points are determined in real time from the noisy images acquired during the welding process, and the 3D coordinates of these points are obtained according to the measurement principle to control the movement of the robot and the torch in real time. Experimental results show that the sensor runs at 50 Hz and that tracking remains smooth under strong arc light and splash interference. The tracking error is within ±0.2 mm, and the minimal distance between the laser stripe and the welding molten pool can reach 15 mm, which satisfies actual welding requirements.

  12. Locomotive track detection for underground

    NASA Astrophysics Data System (ADS)

    Ma, Zhonglei; Lang, Wenhui; Li, Xiaoming; Wei, Xing

    2017-08-01

    In order to improve on PC-based track detection systems, this paper proposes a DSP + FPGA method to detect linear tracks for underground locomotives. Firstly, the analog signal output from the camera is sampled by an A/D chip. The collected digital signal is then preprocessed by the FPGA. Secondly, the output signal of the FPGA is transmitted to the DSP via the EMIF port. Subsequently, adaptive-threshold edge detection and a Hough transform constrained in polar angle and radius are implemented on the DSP. Lastly, the detected track information is transmitted to the host computer through an Ethernet interface. The experimental results show that the system not only meets the requirements of real-time detection but also has good robustness.
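
    A Hough transform constrained in polar angle and radius can be sketched as below (the window values and step sizes are illustrative assumptions; the point is that restricting the accumulator to where a rail can physically appear cuts both memory and search time):

```python
import numpy as np

def hough_lines(edge_points, theta_range=(80.0, 100.0), rho_range=(0.0, 200.0),
                theta_step=1.0, rho_step=2.0):
    """Hough transform restricted to a polar-angle and radius window.

    edge_points : iterable of (x, y) pixel coordinates from edge detection.
    Lines are parameterized as x*cos(theta) + y*sin(theta) = rho, with
    theta in degrees and rho in pixels, both limited to the given windows.
    Returns (theta_deg, rho) of the strongest line.
    """
    thetas = np.deg2rad(np.arange(*theta_range, theta_step))
    rhos = np.arange(*rho_range, rho_step)
    acc = np.zeros((len(thetas), len(rhos)), dtype=np.int32)
    for x, y in edge_points:
        r = x * np.cos(thetas) + y * np.sin(thetas)   # rho at every theta
        idx = np.round((r - rho_range[0]) / rho_step).astype(int)
        ok = (idx >= 0) & (idx < len(rhos))
        acc[np.nonzero(ok)[0], idx[ok]] += 1          # vote
    ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return float(np.rad2deg(thetas[ti])), float(rhos[ri])
```

    On the DSP the same accumulator would typically live in fixed-size on-chip memory, which is exactly what the angle/radius constraints make possible.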

  13. Real-time visual simulation of APT system based on RTW and Vega

    NASA Astrophysics Data System (ADS)

    Xiong, Shuai; Fu, Chengyu; Tang, Tao

    2012-10-01

    The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that running the C code gives the same simulation result as running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, an APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on the programmable GPU. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation results.

  14. 4D Near Real-Time Environmental Monitoring Using Highly Temporal LiDAR

    NASA Astrophysics Data System (ADS)

    Höfle, Bernhard; Canli, Ekrem; Schmitz, Evelyn; Crommelinck, Sophie; Hoffmeister, Dirk; Glade, Thomas

    2016-04-01

    The last decade has witnessed extensive application of 3D environmental monitoring with LiDAR technology, also referred to as laser scanning. Although several automatic methods have been developed to extract environmental parameters from LiDAR point clouds, little research has focused on highly multitemporal near real-time LiDAR (4D-LiDAR) for environmental monitoring. 4D-LiDAR has large potential for landscape objects with high and varying rates of change (e.g. plant growth) and also for phenomena with sudden unpredictable changes (e.g. geomorphological processes). In this presentation we will report on the most recent findings of the research projects 4DEMON (http://uni-heidelberg.de/4demon) and NoeSLIDE (https://geomorph.univie.ac.at/forschung/projekte/aktuell/noeslide/). The method development in both projects is based on two real-world use cases: i) surface parameter derivation for agricultural crops (e.g. crop height) and ii) change detection for landslides. Both projects exploit the "full history" contained in the LiDAR point cloud time series. One crucial initial step of 4D-LiDAR analysis is the co-registration over time, 3D-georeferencing and time-dependent quality assessment of the LiDAR point cloud time series. Due to the high number of datasets (e.g. one full LiDAR scan per day), the procedure needs to be performed fully automatically. Furthermore, the online near real-time 4D monitoring system requires triggers that can detect removal or moving of the tie reflectors (used for co-registration) or of the scanner itself. This guarantees long-term data acquisition with high quality. We will present results from a georeferencing experiment for 4D-LiDAR monitoring, which benchmarks co-registration, 3D-georeferencing and also fully automatic detection of events (e.g. removal/moving of reflectors or scanner). Secondly, we will show our empirical findings of an ongoing permanent LiDAR observation of a landslide (Gresten

  15. Left ventricular endocardial surface detection based on real-time 3D echocardiographic data

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.

    2001-01-01

    OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to the RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are established manually on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) imaged both by 2D echocardiography with off-line 3D reconstruction and by RT3DE. Volume estimates obtained by this new semi-automatic method show excellent correlation with those obtained by manual tracing (r = 0.992). The dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows navigation into the reconstructed volume and display of any section of the volume.
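
    The abstract does not reproduce the PDE itself; a standard curvature- and image-driven level set evolution of the kind used for boundary detection (the symbols follow the common geodesic active contour formulation, not necessarily the authors' exact modification) reads:

```latex
\frac{\partial \phi}{\partial t}
  = g(I)\,\lvert\nabla \phi\rvert
    \left(\operatorname{div}\!\left(\frac{\nabla \phi}{\lvert\nabla \phi\rvert}\right) + \nu\right)
  + \nabla g(I) \cdot \nabla \phi ,
```

    where φ is the level set function whose zero level set is the evolving endocardial contour, g(I) is an edge-stopping function of the image intensity (small at strong gradients, so the front halts at the cavity-tissue boundary), and ν is a constant inflation term. Equations of this hyperbolic type are what motivate the numerical schemes for conservation laws mentioned above.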

  16. 3D track reconstruction capability of a silicon hybrid active pixel detector

    NASA Astrophysics Data System (ADS)

    Bergmann, Benedikt; Pichotka, Martin; Pospisil, Stanislav; Vycpalek, Jiri; Burian, Petr; Broulim, Pavel; Jakubek, Jan

    2017-06-01

    Timepix3 detectors are the latest generation of hybrid active pixel detectors of the Medipix/Timepix family. Such detectors consist of an active sensor layer which is connected to the readout ASIC (application specific integrated circuit), segmenting the detector into a square matrix of 256 × 256 pixels (pixel pitch 55 μm). Particles interacting in the active sensor material create charge carriers, which drift towards the pixelated electrode, where they are collected. In each pixel, the time of the interaction (time resolution 1.56 ns) and the amount of created charge carriers are measured. Such a device was employed in an experiment in a 120 GeV/c pion beam. It is demonstrated how the drift time information can be used for "4D" particle tracking, with the three spatial dimensions and the energy losses along the particle trajectory (dE/dx). Since the coordinates in the detector plane are given by the pixelation (x, y), the x- and y-resolution is determined by the pixel pitch (55 μm). A z-resolution of 50.4 μm was achieved (for a 500 μm thick silicon sensor at 130 V bias), while the drift-time-model-independent z-resolution was found to be 28.5 μm.
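
    To first order, reconstructing the depth coordinate from drift time is a scaling by the drift velocity; in the sketch below the velocity value is a placeholder assumption (the real velocity depends on bias voltage and the depletion profile, which is why a model-independent calibration is also used):

```python
def z_from_drift_time(t_drift_ns, v_drift_um_per_ns=7.0, thickness_um=500.0):
    """Estimate interaction depth z (in um) from the measured drift time.

    Assumes an approximately constant drift velocity across the sensor.
    The result is clamped to the physical sensor thickness.
    """
    z = t_drift_ns * v_drift_um_per_ns
    return min(max(z, 0.0), thickness_um)
```

    Combining this per-pixel depth with the pixel coordinates (x, y) and the per-pixel charge gives the "4D" track points (x, y, z, dE/dx) described above.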

  17. Longitudinal measurement of extracellular matrix rigidity in 3D tumor models using particle-tracking microrheology.

    PubMed

    Jones, Dustin P; Hanna, William; El-Hamidi, Hamid; Celli, Jonathan P

    2014-06-10

    The mechanical microenvironment has been shown to act as a crucial regulator of tumor growth behavior and signaling, and is itself remodeled and modified as part of a set of complex, two-way mechanosensitive interactions. While the development of biologically-relevant 3D tumor models has facilitated mechanistic studies on the impact of matrix rheology on tumor growth, the inverse problem of mapping changes in the mechanical environment induced by tumors remains challenging. Here, we describe the implementation of particle-tracking microrheology (PTM) in conjunction with 3D models of pancreatic cancer as part of a robust and viable approach for longitudinally monitoring physical changes in the tumor microenvironment, in situ. The methodology described here integrates a system of preparing in vitro 3D models embedded in a model extracellular matrix (ECM) scaffold of Type I collagen with fluorescently labeled probes uniformly distributed for position- and time-dependent microrheology measurements throughout the specimen. In vitro tumors are plated and probed in parallel conditions using multiwell imaging plates. Drawing on established methods, videos of tracer probe movements are transformed via the Generalized Stokes-Einstein Relation (GSER) to report the complex frequency-dependent viscoelastic shear modulus, G*(ω). Because this approach is imaging-based, mechanical characterization is also mapped onto large transmitted-light spatial fields to simultaneously report qualitative changes in 3D tumor size and phenotype. Representative results showing contrasting mechanical responses in sub-regions associated with localized invasion-induced matrix degradation are presented, along with system calibration and validation data. Undesirable outcomes from common experimental errors and troubleshooting of these issues are also presented. The 96-well 3D culture plating format implemented in this protocol is conducive to correlation of microrheology measurements with therapeutic
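
    The GSER that converts the tracer mean-squared displacement into the shear modulus is, in its commonly quoted form (for probe radius a at absolute temperature T):

```latex
G^{*}(\omega) = \frac{k_{B} T}{\pi a \, i\omega \, \mathcal{F}\{\langle \Delta r^{2}(\tau) \rangle\}(\omega)},
```

    where ⟨Δr²(τ)⟩ is the ensemble-averaged mean-squared displacement of the tracer probes extracted from the videos, ℱ denotes the unilateral Fourier transform, and k_B is Boltzmann's constant. A stiffer (more degraded-resistant) matrix region shows a smaller MSD and hence a larger |G*(ω)|, which is how the sub-region contrasts above are read out.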

  18. An FPGA-Based Real-Time Maximum Likelihood 3D Position Estimation for a Continuous Crystal PET Detector

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Xiao, Yong; Cheng, Xinyi; Li, Deng; Wang, Liwei

    2016-02-01

    For the continuous crystal-based positron emission tomography (PET) detector built in our lab, a maximum likelihood algorithm adapted for implementation on a field programmable gate array (FPGA) is proposed to estimate the three-dimensional (3D) coordinates of the interaction position from the single-end detected scintillation light response. The row-sum and column-sum readout scheme organizes the 64 channels of the photomultiplier (PMT) into eight row signals and eight column signals, which are read out for independent X- and Y-coordinate estimation. Using reference events irradiated at a known oblique angle, the probability density function (PDF) for each depth-of-interaction (DOI) segment is generated; with these, the reference events from perpendicular irradiation are assigned to DOI segments to generate the PDFs for X and Y estimation in each DOI layer. Evaluated on experimental data, the algorithm achieves an average X resolution of 1.69 mm along the central X-axis and a DOI resolution of 3.70 mm over the whole thickness (0-10 mm) of the crystal. The performance improvements from 2D estimation to the 3D algorithm are also presented. Benefiting from the abundant resources of the FPGA and a hierarchical storage arrangement, the whole algorithm can be implemented on a middle-scale FPGA. With a parallel pipelined structure, the 3D position estimator on the FPGA achieves a processing throughput of 15 M events/s, which is sufficient for real-time PET imaging.
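
    The lookup character of such a maximum likelihood estimator can be illustrated with a software analogue (the Gaussian likelihood and the array shapes below are simplifying assumptions, not the paper's stored PDFs):

```python
import numpy as np

def ml_position(signals, pdfs, positions):
    """Maximum likelihood position lookup over a stored response table.

    signals   : (n_ch,) measured row/column sums for one event
    pdfs      : (n_pos, n_ch) mean light response stored per candidate
                position (a real table would also carry variances)
    positions : (n_pos,) candidate coordinates
    Under an equal-variance Gaussian model, maximizing the likelihood
    reduces to picking the nearest stored response.
    """
    ll = -np.sum((pdfs - signals) ** 2, axis=1)   # log-likelihood up to constants
    return positions[np.argmax(ll)]
```

    On the FPGA the same comparison runs in parallel over the table entries, which is what yields the 15 M events/s throughput quoted above.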

  19. Left atrial function in patients with light chain amyloidosis: A transthoracic 3D speckle tracking imaging study.

    PubMed

    Mohty, Dania; Petitalot, Vincent; Magne, Julien; Fadel, Bahaa M; Boulogne, Cyrille; Rouabhia, Dounia; ElHamel, Chahrazed; Lavergne, David; Damy, Thibaud; Aboyans, Victor; Jaccard, Arnaud

    2018-04-01

    Systemic light chain amyloidosis (AL) is characterized by the extracellular deposition of amyloid fibrils. Transthoracic echocardiography is the modality of choice to assess cardiac function in patients with AL. Whereas left ventricular (LV) function has been well studied in this patient population, data regarding the value of left atrial (LA) function in AL patients are lacking. In this study, we examine the impact of LA volumes and function on survival in AL patients as assessed by real-time 3D echocardiography. A total of 77 patients (67±10 years, 60% men) with confirmed AL and 39 healthy controls were included. All standard 2D echocardiographic and 3D-LA parameters were obtained. Fourteen patients (18%) were in Mayo Clinic (MC) stage I, 30 (39%) in stage II, and 33 (43%) in stage III at initial evaluation. There was no significant difference among the MC stage groups in terms of age, gender, or cardiovascular risk factors. As compared to patients in MC II and MC I, those in MC III had significantly larger indexed 3D-LA volumes (MC III: 46±15 mL/m², MC II: 38±12 mL/m², and MC I: 23±9 mL/m², p<0.0001), lower 3D-LA total emptying fraction (3D-tLAEF) (21±13% vs. 31±15% vs. 43±7%, respectively, p<0.0001), and worse 3D peak atrial longitudinal strain (3D-PALS) (11±9% vs. 18±13% vs. 20±7%, respectively, p=0.007). Two-year survival was significantly lower in patients with 3D-tLAEF <+34% (p=0.003) and in those with 3D-PALS <+14% (p=0.034). Both parameters provided incremental prognostic value over maximal LA volume in multivariate analysis. Functional LA parameters are progressively altered in AL patients according to the MC stage. A decrease in 3D-PALS is associated with worse outcome, independently of LA volume. Copyright © 2017 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  20. Neuromorphic Event-Based 3D Pose Estimation

    PubMed Central

    Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.

    2016-01-01

    Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547

  1. 3D tumor localization through real-time volumetric x-ray imaging for lung cancer radiotherapy.

    PubMed

    Li, Ruijiang; Lewis, John H; Jia, Xun; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Song, William Y; Jiang, Steve B

    2011-05-01

    To evaluate an algorithm for real-time 3D tumor localization from a single x-ray projection image for lung cancer radiotherapy. Recently, we have developed an algorithm for reconstructing volumetric images and extracting 3D tumor motion information from a single x-ray projection [Li et al., Med. Phys. 37, 2822-2826 (2010)]. We have demonstrated its feasibility using a digital respiratory phantom with regular breathing patterns. In this work, we present a detailed description and a comprehensive evaluation of the improved algorithm. The algorithm was improved by incorporating respiratory motion prediction. The accuracy and efficiency of using this algorithm for 3D tumor localization were then evaluated on (1) a digital respiratory phantom, (2) a physical respiratory phantom, and (3) five lung cancer patients. These evaluation cases include both regular and irregular breathing patterns that are different from the training dataset. For the digital respiratory phantom with regular and irregular breathing, the average 3D tumor localization error is less than 1 mm and does not appear to be affected by amplitude change, period change, or baseline shift. On an NVIDIA Tesla C1060 graphics processing unit (GPU) card, the average computation time for 3D tumor localization from each projection ranges between 0.19 and 0.26 s for both regular and irregular breathing, about a 10% improvement over previously reported results. For the physical respiratory phantom, an average tumor localization error below 1 mm was achieved with an average computation time of 0.13 and 0.16 s on the same GPU card, for regular and irregular breathing, respectively. For the five lung cancer patients, the average tumor localization error is below 2 mm in both the axial and tangential directions. The average computation time on the same GPU card ranges between 0.26 and 0.34 s. Through a comprehensive evaluation of our algorithm, we have established its accuracy in 3D

  2. Feathered Detectives: Real-Time GPS Tracking of Scavenging Gulls Pinpoints Illegal Waste Dumping

    PubMed Central

    Grémillet, David; Afán, Isabel; Ramírez, Francisco; Bouten, Willem; Forero, Manuela G.

    2016-01-01

    Urban waste impacts human and environmental health, and waste management has become one of the major challenges of humanity. Concurrently with new directives aimed at managing this human by-product, illegal dumping has become one of the most lucrative activities of organized crime. Beyond economic fraud, illegal waste disposal strongly enhances uncontrolled dissemination of human pathogens, pollutants and invasive species. Here, we demonstrate the potential of novel real-time GPS tracking of scavenging species to detect environmental crime. Specifically, we were able to detect illegal activities at an officially closed dump, which was visited recurrently by 5 of 19 GPS-tracked yellow-legged gulls (Larus michahellis). In comparison with conventional land-based surveys, GPS tracking allows a much wider and more cost-efficient spatiotemporal coverage, even of the most hazardous sites, while GPS data accessibility through the internet enables rapid intervention. Our results suggest that multi-species guilds of feathered detectives equipped with GPS and cameras could help fight illegal dumping at continental scales. We encourage further experimental studies to infer waste detection thresholds in gulls and other scavenging species exploiting human waste dumps. PMID:27448048

  3. Real-time automatic registration in optical surgical navigation

    NASA Astrophysics Data System (ADS)

    Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming

    2016-05-01

    An image-guided surgical navigation system requires the improvement of the patient-to-image registration time to enhance the convenience of the registration procedure. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on the design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers on the basis of an optical tracking system for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. The automatic localization method works by registering a point cloud sampled from the three-dimensional (3D) pedestal model surface of a fiducial marker to each fiducial pedestal detected in image space. A head phantom is constructed to evaluate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experimental results demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The averaged target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the position relative to the tracking system and of patient movement during the operation.
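
    The point-cloud-to-model alignment in this record reduces to a least-squares rigid fit between corresponding 3D point sets (fiducial centers in patient and image space). The authors' implementation is not specified; the sketch below is a minimal illustration using the standard SVD-based (Kabsch) solution, with a fiducial registration error (FRE) helper:

    ```python
    import numpy as np

    def rigid_fit(src, dst):
        """Least-squares rigid transform (R, t) mapping src onto dst.
        src, dst: (N, 3) arrays of corresponding fiducial centers."""
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        # Reflection guard: force a proper rotation with det(R) = +1.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    def fre(src, dst, R, t):
        """Fiducial registration error: RMS residual after alignment."""
        res = (R @ src.T).T + t - dst
        return np.sqrt((res ** 2).sum(axis=1).mean())
    ```

    With noise-free correspondences the FRE is zero to machine precision; in practice the reported target registration error (TRE) is measured at points not used for the fit.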

  4. Rapid fusion of 2D X-ray fluoroscopy with 3D multislice CT for image-guided electrophysiology procedures

    NASA Astrophysics Data System (ADS)

    Zagorchev, Lyubomir; Manzke, Robert; Cury, Ricardo; Reddy, Vivek Y.; Chan, Raymond C.

    2007-03-01

    Interventional cardiac electrophysiology (EP) procedures are typically performed under X-ray fluoroscopy for visualizing catheters and EP devices relative to other highly-attenuating structures such as the thoracic spine and ribs. These projections do not, however, contain information about soft-tissue anatomy, and there is a recognized need for fusion of conventional fluoroscopy with pre-operatively acquired cardiac multislice computed tomography (MSCT) volumes. Rapid 2D-3D integration in this application would allow for real-time visualization of all catheters present within the thorax in relation to the cardiovascular anatomy visible in MSCT. We present a method for rapid fusion of 2D X-ray fluoroscopy with 3D MSCT that can facilitate EP mapping and interventional procedures by reducing the need for intra-operative contrast injections to visualize heart chambers and specialized systems to track catheters within the cardiovascular anatomy. We use hardware-accelerated ray-casting to compute digitally reconstructed radiographs (DRRs) from the MSCT volume and iteratively optimize the rigid-body pose of the volumetric data to maximize the similarity between the MSCT-derived DRR and the intra-operative X-ray projection data.
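
    The DRR-based 2D-3D registration loop in this record (render a DRR at the current CT pose, compare it with the X-ray, update the pose) can be illustrated with a toy parallel-beam DRR and a normalized cross-correlation similarity metric. This is a simplified orthographic stand-in, not the hardware-accelerated perspective ray-caster the authors describe:

    ```python
    import numpy as np
    from scipy import ndimage

    def drr(ct, angle_deg):
        """Toy parallel-beam DRR: rotate the CT volume about one axis and
        integrate attenuation along the first axis. Real systems cast
        perspective rays on the GPU; this orthographic sum is illustrative."""
        rot = ndimage.rotate(ct, angle_deg, axes=(0, 1), reshape=False, order=1)
        return rot.sum(axis=0)

    def similarity(drr_img, xray):
        """Normalized cross-correlation, a common 2D/3D similarity metric
        that a pose optimizer would maximize over rigid-body parameters."""
        a = drr_img - drr_img.mean()
        b = xray - xray.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    ```

    A registration loop would wrap `similarity(drr(ct, pose), xray)` in a generic optimizer over the six rigid-body parameters.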

  5. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology.

    PubMed

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-02-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels from a CT dataset of 53 MB at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.

  6. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    PubMed Central

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-01-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels from a CT dataset of 53 MB at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches – namely so-called wobbled splatting – to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. PMID:21782399

  7. Integrated Ultra-Wideband Tracking and Carbon Dioxide Sensing System Design for International Space Station Applications

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun (David); Hafermalz, David; Dusl, John; Barton, Rick; Wagner, Ray; Ngo, Phong

    2015-01-01

    A three-dimensional (3D) Ultra-Wideband (UWB) Time-of-Arrival (TOA) tracking system has been studied at NASA Johnson Space Center (JSC) to provide tracking capability inside the International Space Station (ISS) modules for various applications. One application is to locate and report where crew members may have experienced elevated levels of carbon dioxide (CO2) and felt unwell. Recent findings indicate that frequent, short-term crew exposure to elevated CO2 levels combined with other physiological impacts of microgravity may lead to a number of detrimental effects, including loss of vision. To evaluate the risks associated with transient elevated CO2 levels and design effective countermeasures, doctors must have access to frequent CO2 measurements in the immediate vicinity of individual crew members along with simultaneous measurements of their location in the space environment. To achieve this goal, a small, low-power, wearable system that integrates an accurate CO2 sensor with an ultra-wideband (UWB) radio capable of real-time location estimation and data communication is proposed. This system would be worn by crew members or mounted on a free-flyer and would automatically gather and transmit sampled sensor data tagged with real-time, high-resolution location information. Under the current proposed effort, a breadboard prototype of such a system has been developed. Although the initial effort is targeted at CO2 monitoring, the concept is applicable to other types of sensors. For the initial effort, a micro-controller is leveraged to integrate a low-power CO2 sensor with a commercially available UWB radio system with ranging capability. Accurately locating those places in a multipath-intensive environment such as the ISS modules requires a robust real-time location system (RTLS) that can provide the required accuracy and update rate. A 3D UWB TOA tracking system with two-way ranging has been proposed and studied. 
The designed system will be tested

  8. Novel intelligent real-time position tracking system using FPGA and fuzzy logic.

    PubMed

    Soares dos Santos, Marco P; Ferreira, J A F

    2014-03-01

    The main aim of this paper is to test whether FPGAs are able to achieve better position tracking performance than software-based soft real-time platforms. For comparison purposes, the same controller design was implemented in both architectures. A Multi-state Fuzzy Logic controller (FLC) was implemented both in a Xilinx® Virtex-II FPGA (XC2v1000) and in a soft real-time platform, the NI CompactRIO®-9002. The same sampling time was used. The comparative tests were conducted using a servo-pneumatic actuation system. Steady-state errors lower than 4 μm were reached for an arbitrary vertical positioning of a 6.2 kg mass when the controller was embedded in the FPGA platform. Performance gains up to 16 times in the steady-state error, up to 27 times in the overshoot and up to 19.5 times in the settling time were achieved by using the FPGA-based controller over the software-based FLC. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Real-time 3D visualization of cellular rearrangements during cardiac valve formation

    PubMed Central

    Pestel, Jenny; Ramadass, Radhan; Gauvrit, Sebastien; Helker, Christian; Herzog, Wiebke

    2016-01-01

    During cardiac valve development, the single-layered endocardial sheet at the atrioventricular canal (AVC) is remodeled into multilayered immature valve leaflets. Most of our knowledge about this process comes from examining fixed samples that do not allow a real-time appreciation of the intricacies of valve formation. Here, we exploit non-invasive in vivo imaging techniques to identify the dynamic cell behaviors that lead to the formation of the immature valve leaflets. We find that in zebrafish, the valve leaflets consist of two sets of endocardial cells at the luminal and abluminal side, which we refer to as luminal cells (LCs) and abluminal cells (ALCs), respectively. By analyzing cellular rearrangements during valve formation, we observed that the LCs and ALCs originate from the atrium and ventricle, respectively. Furthermore, we utilized Wnt/β-catenin and Notch signaling reporter lines to distinguish between the LCs and ALCs, and also found that cardiac contractility and/or blood flow is necessary for the endocardial expression of these signaling reporters. Thus, our 3D analyses of cardiac valve formation in zebrafish provide fundamental insights into the cellular rearrangements underlying this process. PMID:27302398

  10. Real-time 3D visualization of cellular rearrangements during cardiac valve formation.

    PubMed

    Pestel, Jenny; Ramadass, Radhan; Gauvrit, Sebastien; Helker, Christian; Herzog, Wiebke; Stainier, Didier Y R

    2016-06-15

    During cardiac valve development, the single-layered endocardial sheet at the atrioventricular canal (AVC) is remodeled into multilayered immature valve leaflets. Most of our knowledge about this process comes from examining fixed samples that do not allow a real-time appreciation of the intricacies of valve formation. Here, we exploit non-invasive in vivo imaging techniques to identify the dynamic cell behaviors that lead to the formation of the immature valve leaflets. We find that in zebrafish, the valve leaflets consist of two sets of endocardial cells at the luminal and abluminal side, which we refer to as luminal cells (LCs) and abluminal cells (ALCs), respectively. By analyzing cellular rearrangements during valve formation, we observed that the LCs and ALCs originate from the atrium and ventricle, respectively. Furthermore, we utilized Wnt/β-catenin and Notch signaling reporter lines to distinguish between the LCs and ALCs, and also found that cardiac contractility and/or blood flow is necessary for the endocardial expression of these signaling reporters. Thus, our 3D analyses of cardiac valve formation in zebrafish provide fundamental insights into the cellular rearrangements underlying this process. © 2016. Published by The Company of Biologists Ltd.

  11. Development of a four-dimensional Monte Carlo dose calculation system for real-time tumor-tracking irradiation with a gimbaled X-ray head.

    PubMed

    Ishihara, Yoshitomo; Nakamura, Mitsuhiro; Miyabe, Yuki; Mukumoto, Nobutaka; Matsuo, Yukinori; Sawada, Akira; Kokubo, Masaki; Mizowaki, Takashi; Hiraoka, Masahiro

    2017-03-01

    To develop a four-dimensional (4D) dose calculation system for real-time tumor tracking (RTTT) irradiation by the Vero4DRT. First, a 6-MV photon beam delivered by the Vero4DRT was simulated using EGSnrc. The moving phantom position was directly measured by a laser displacement gauge. The pan and tilt angles, monitor units, and the indexing time indicating the phantom position were also extracted from a log file. Next, phase space data at any angle were created from both the log file and particle data under the dynamic multileaf collimator. Irradiation both with and without RTTT, with the phantom moving, was simulated using several treatment field sizes. Each case was compared with the corresponding film measurement. Finally, dose calculation for each computed tomography dataset of 10 respiratory phases with the X-ray head rotated was performed to simulate RTTT irradiation (4D plan) for lung, liver, and pancreatic cancer patients. Dose-volume histograms of the 4D plans were compared with those calculated on a single reference respiratory phase without gimbal rotation [three-dimensional (3D) plans]. Differences between the simulated and measured doses were less than 3% for RTTT irradiation in most areas, except in high-dose-gradient regions. For the clinical cases, the target coverage in the 4D plans was almost identical to that of the 3D plans. However, the doses to organs at risk in the 4D plans varied at intermediate and low dose levels. Our proposed system has acceptable accuracy for RTTT irradiation with the Vero4DRT and is capable of simulating clinical RTTT plans. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  12. A detailed study of FDIRC prototype with waveform digitizing electronics in cosmic ray telescope using 3D tracks

    NASA Astrophysics Data System (ADS)

    Nishimura, K.; Dey, B.; Aston, D.; Leith, D. W. G. S.; Ratcliff, B.; Roberts, D.; Ruckman, L.; Shtol, D.; Varner, G. S.; Va'vra, J.

    2013-02-01

    We present a detailed study of a novel Cherenkov imaging detector called the Focusing DIRC (FDIRC) with waveform digitizing electronics. In this study, the FDIRC prototype has been instrumented with seven Hamamatsu H-8500 MaPMTs. Waveforms from 384 pixels are digitized with waveform sampling electronics based on the BLAB2 ASIC, operating at a sampling speed of ∼2.5 GSa/s. The FDIRC prototype was tested in a large cosmic ray telescope (CRT) providing 3D muon tracks with ∼1.5 mrad angular resolution and muon energies E_muon > 1.6 GeV. In this study we provide a detailed analysis of the tails in the Cherenkov angle distribution as a function of various variables, compare experimental results with simulation, and identify the major contributions to the tails. We demonstrate that to see the full impact of these tails on the Cherenkov angle resolution, it is crucial to use 3D tracks and to have a full understanding of the role of reconstruction ambiguities. These issues could not be fully explored in previous FDIRC studies where the beam was perpendicular to the quartz radiator bars. This work is relevant for the final FDIRC prototype of the PID detector at SuperB, which will be tested this year in the CRT setup.

  13. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation result indicates the appropriate match. 
The program could be a core for building application programs for systems

  14. Towards real-time detection and tracking of spatio-temporal features: Blob-filaments in fusion plasma

    DOE PAGES

    Wu, Lingfei; Wu, Kesheng; Sim, Alex; ...

    2016-06-01

    A novel algorithm and implementation of real-time identification and tracking of blob-filaments in fusion reactor data is presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in a medical image. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through overlap in space. Through our extensive work in parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. Here, on a set of 30 GB fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
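
    The three-step decomposition in this record (flag feature cells locally, group them into extended features, track features by spatial overlap across frames) can be sketched serially as follows. The parallel MPI decomposition is omitted, and the simple intensity threshold is a placeholder assumption for the paper's blob-detection criterion:

    ```python
    import numpy as np
    from scipy import ndimage

    def detect_blobs(frame, thresh):
        """Steps 1-2: flag cells above a threshold, then group flagged
        cells into labeled blobs (default 4-connectivity)."""
        mask = frame > thresh
        labels, n = ndimage.label(mask)
        blobs = [np.argwhere(labels == i + 1) for i in range(n)]
        return labels, blobs

    def track_by_overlap(prev_labels, cur_labels):
        """Step 3: match blobs across consecutive frames whenever their
        footprints share at least one grid cell."""
        matches = set()
        both = (prev_labels > 0) & (cur_labels > 0)
        for p, c in zip(prev_labels[both], cur_labels[both]):
            matches.add((int(p), int(c)))
        return sorted(matches)
    ```

    A slowly moving blob keeps overlapping its previous footprint from frame to frame, which is what makes overlap-based association viable at high frame rates.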

  15. Real-Time Correction By Optical Tracking with Integrated Geometric Distortion Correction for Reducing Motion Artifacts in fMRI

    NASA Astrophysics Data System (ADS)

    Rotenberg, David J.

    Artifacts caused by head motion are a substantial source of error in fMRI that limits its use in neuroscience research and clinical settings. Real-time scan-plane correction by optical tracking has been shown to correct slice misalignment and non-linear spin-history artifacts; however, residual artifacts due to dynamic magnetic field non-uniformity may remain in the data. A recently developed correction technique, PLACE, can correct for absolute geometric distortion using the complex image data from two EPI images with slightly shifted k-space trajectories. We present a correction approach that integrates PLACE into a real-time scan-plane update system by optical tracking, applied to a tissue-equivalent phantom undergoing complex motion and an fMRI finger-tapping experiment with overt head motion to induce dynamic field non-uniformity. Experiments suggest that including volume-by-volume geometric distortion correction by PLACE can suppress dynamic geometric distortion artifacts in a phantom and in vivo and provide more robust activation maps.

  16. Correlation and 3D-tracking of objects by pointing sensors

    DOEpatents

    Griesmeyer, J. Michael

    2017-04-04

    A method and system for tracking at least one object using a plurality of pointing sensors and a tracking system are disclosed herein. In a general embodiment, the tracking system is configured to receive a series of observation data relative to the at least one object over a time base for each of the plurality of pointing sensors. The observation data may include sensor position data, pointing vector data and observation error data. The tracking system may further determine a triangulation point using a magnitude of a shortest line connecting a line of sight value from each of the series of observation data from each of the plurality of sensors to the at least one object, and perform correlation processing on the observation data and triangulation point to determine if at least two of the plurality of sensors are tracking the same object. Observation data may also be branched, associated and pruned using new incoming observation data.
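
    The triangulation step in this record, locating a point via the shortest segment connecting two sensors' lines of sight, is classical skew-line geometry. A minimal sketch (illustrative, not the patented method) that returns the midpoint of the shortest connecting segment together with the miss distance used to judge whether two sensors track the same object:

    ```python
    import numpy as np

    def triangulate(p1, d1, p2, d2):
        """Closest-approach point of two lines of sight.
        p1, p2: sensor positions; d1, d2: pointing direction vectors.
        Returns (triangulation point, miss distance)."""
        w0 = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b
        if abs(denom) < 1e-12:        # near-parallel rays: no unique point
            return None, np.inf
        s = (b * e - c * d) / denom   # parameter along line 1
        t = (a * e - b * d) / denom   # parameter along line 2
        q1, q2 = p1 + s * d1, p2 + t * d2
        return 0.5 * (q1 + q2), np.linalg.norm(q1 - q2)
    ```

    In a correlation test, a small miss distance (relative to the observation error data) is evidence that both sensors observe the same object.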

  17. Development of a real-time wave field reconstruction TEM system (II): correction of coma aberration and 3-fold astigmatism, and real-time correction of 2-fold astigmatism.

    PubMed

    Tamura, Takahiro; Kimura, Yoshihide; Takai, Yoshizo

    2018-02-01

    In this study, a function for the correction of coma aberration, 3-fold astigmatism and real-time correction of 2-fold astigmatism was newly incorporated into a recently developed real-time wave field reconstruction TEM system. The aberration correction function was developed by modifying the image-processing software previously designed for auto focus tracking, as described in the first article of this series. Using the newly developed system, the coma aberration and 3-fold astigmatism were corrected using the aberration coefficients obtained experimentally before the processing was carried out. In this study, these aberration coefficients were estimated from an apparent 2-fold astigmatism induced under tilted-illumination conditions. In contrast, 2-fold astigmatism could be measured and corrected in real time from the reconstructed wave field. Here, the measurement precision for 2-fold astigmatism was found to be ±0.4 nm and ±2°. All of these aberration corrections, as well as auto focus tracking, were performed at a video frame rate of 1/30 s. Thus, the proposed novel system is promising for quantitative and reliable in situ observations, particularly in environmental TEM applications.

  18. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its capability for hand motion tracking. The evaluation was assessed in terms of the position accuracy of the tracking trajectory in the x, y and z directions in the camera space and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study has demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
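
    The "simple version of the mean-shift algorithm" evaluated in this record amounts to iteratively moving a fixed window to the centroid of a per-pixel weight image (e.g. a hand-likelihood map derived from the range data) until the shift falls below a tolerance. A minimal 2D sketch, in which the window size and stopping tolerance are illustrative assumptions:

    ```python
    import numpy as np

    def mean_shift(weights, center, win=15, max_iter=20, eps=0.5):
        """Move a (2*win+1)-pixel window to the centroid of the weight
        image under it, repeating until convergence or max_iter."""
        cy, cx = center
        h, w = weights.shape
        for _ in range(max_iter):
            y0, y1 = max(0, int(cy) - win), min(h, int(cy) + win + 1)
            x0, x1 = max(0, int(cx) - win), min(w, int(cx) + win + 1)
            patch = weights[y0:y1, x0:x1]
            total = patch.sum()
            if total == 0:
                break                      # target lost in this window
            ys, xs = np.mgrid[y0:y1, x0:x1]
            ny = (ys * patch).sum() / total
            nx = (xs * patch).sum() / total
            shifted = np.hypot(ny - cy, nx - cx)
            cy, cx = ny, nx
            if shifted < eps:
                break
        return cy, cx
    ```

    This also illustrates the failure mode noted in the record: if the hand moves farther than the window radius between frames, the centroid no longer pulls the window toward the target.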

  19. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. Three main concerns of the algorithm are (1) imagery of the human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on expected capture conditions such as the camera-subject distance, pan and tilt angles of capture, face visibility and others. This objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.

  20. A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging.

    PubMed

    Jiang, J; Hall, T J

    2007-07-07

    Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast that provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging and that real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is a column-based algorithm (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to a row-based algorithm (a row is defined by a line of data perpendicular to the ultrasound beam direction). Displacement estimates from adjacent columns then provide good guidance for motion tracking in a significantly reduced search region, reducing computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separate tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms, and in vivo tissue data, suggest that high contrast strain images can be consistently obtained with frame rates (10 frames s⁻¹) that exceed those of our previous methods.
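
    The column-based, guided-search speckle tracking described in this record can be sketched as 1D block matching along an A-line: each kernel searches only a small lag range around the displacement suggested by the neighboring column. The kernel length and search radius below are illustrative assumptions, not the authors' parameters:

    ```python
    import numpy as np

    def track_column(pre, post, kernel=16, guide=0, search=4):
        """Axial speckle tracking for one A-line: for each depth window in
        'pre', find the lag into 'post' maximizing normalized cross-correlation,
        searching only within +/-search samples of the 'guide' displacement
        (e.g. the estimate from the adjacent, already-tracked column)."""
        n = len(pre)
        shifts = []
        for start in range(0, n - kernel, kernel):
            ref = pre[start:start + kernel]
            ref0 = ref - ref.mean()
            best, best_lag = -2.0, guide
            for lag in range(guide - search, guide + search + 1):
                s = start + lag
                if s < 0 or s + kernel > n:
                    continue
                cand = post[s:s + kernel]
                cand0 = cand - cand.mean()
                r = ref0 @ cand0 / (np.linalg.norm(ref0) * np.linalg.norm(cand0) + 1e-12)
                if r > best:
                    best, best_lag = r, lag
            shifts.append(best_lag)
        return shifts
    ```

    Seeding each column's `guide` from its neighbor is what shrinks the search region and lets columns on either side of the seed be tracked in parallel, as in the paper.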

  1. A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Hall, T. J.

    2007-07-01

    Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast that provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging and that real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is a column-based algorithm (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to a row-based algorithm (a row is defined by a line of data perpendicular to the ultrasound beam direction). Displacement estimates from adjacent columns then provide good guidance for motion tracking in a significantly reduced search region, reducing computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separate tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows® system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms, and in vivo tissue data, suggest that high contrast strain images can be consistently obtained with frame rates (10 frames s⁻¹) that exceed those of our previous methods.

  2. Efficient Spatiotemporal Clutter Rejection and Nonlinear Filtering-based Dim Resolved and Unresolved Object Tracking Algorithms

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.

    2013-09-01

    We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We
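
    The planar-homography model that the alignment iterates over can be illustrated with a minimal sketch (hypothetical example values; the actual algorithm refines the eight homography parameters iteratively in parameter space to minimize image misalignment):

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 planar homography."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide out the scale

# A pure translation is the simplest homography: x' = x + 3, y' = y - 2.
H = np.array([[1.0, 0.0, 3.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0], [10.0, 5.0]])
print(apply_homography(H, pts))  # maps to [[3, -2], [13, 3]]
```

    A general homography additionally encodes rotation, scale, shear and perspective in its other entries; the projective division by the third homogeneous coordinate is what distinguishes it from an affine warp.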

  3. Magnetic marker monitoring: high resolution real-time tracking of oral solid dosage forms in the gastrointestinal tract.

    PubMed

    Weitschies, Werner; Blume, Henning; Mönnikes, Hubert

    2010-01-01

    Knowledge about the performance of dosage forms in the gastrointestinal tract is essential for the development of new oral delivery systems, as well as for the choice of the optimal formulation technology. Magnetic Marker Monitoring (MMM) is an imaging technology for the investigation of the behaviour of solid oral dosage forms within the gastrointestinal tract, which is based on the labelling of solid dosage forms as a magnetic dipole and determination of the location, orientation and strength of the dipole after oral administration using measurement equipment and localization methods that are established in biomagnetism. MMM enables the investigation of the performance of solid dosage forms in the gastrointestinal tract with a temporal resolution in the range of a few milliseconds and a spatial resolution in 3D in the range of some millimetres. Thereby, MMM provides real-time tracking of dosage forms in the gastrointestinal tract. MMM is also suitable for the determination of dosage form disintegration and for quantitative measurement of in vivo drug release in the case of appropriate extended release dosage forms like hydrogel-forming matrix tablets. The combination of MMM with pharmacokinetic measurements (pharmacomagnetography) enables the determination of in vitro-in vivo correlations (IVIVC) and the delineation of absorption sites in the gastrointestinal tract.

  4. Annular dynamics of memo3D annuloplasty ring evaluated by 3D transesophageal echocardiography.

    PubMed

    Nishi, Hiroyuki; Toda, Koichi; Miyagawa, Shigeru; Yoshikawa, Yasushi; Fukushima, Satsuki; Yoshioka, Daisuke; Sawa, Yoshiki

    2018-04-01

    We assessed the mitral annular motion after mitral valve repair with the Sorin Memo 3D® (Sorin Group Italia S.r.L., Saluggia, Italy), which is a unique complete semirigid annuloplasty ring intended to restore the systolic profile of the mitral annulus while adapting to the physiologic dynamism of the annulus, using transesophageal real-time three-dimensional echocardiography. Seventeen patients (12 male; mean age 60.4 ± 14.9 years) who underwent mitral annuloplasty using the Memo 3D ring were investigated. Mitral annular motion was assessed using QLAB® version 8, allowing for a full evaluation of the mitral annulus dynamics. The mitral annular dimensions were measured throughout the cardiac cycle using 4D MV Assessment 2®, while saddle shape was assessed through sequential measurements by RealView®. The saddle-shape configuration of the mitral annulus and posterior and anterior leaflet motion could be observed during systole and diastole. The mitral annular area changed during the cardiac cycle by 5.7 ± 1.8%. The circumference length and diameter also changed throughout the cardiac cycle. The annular height was significantly higher in mid-systole than in mid-diastole (p < 0.05). The Memo 3D ring maintained a physiological saddle-shape configuration throughout the cardiac cycle. Real-time three-dimensional echocardiography analysis confirmed the motion and flexibility of the Memo 3D ring upon implantation.

  5. Protein 3D Structure and Electron Microscopy Map Retrieval Using 3D-SURFER2.0 and EM-SURFER.

    PubMed

    Han, Xusi; Wei, Qing; Kihara, Daisuke

    2017-12-08

    With the rapid growth in the number of solved protein structures stored in the Protein Data Bank (PDB) and the Electron Microscopy Data Bank (EMDB), it is essential to develop tools to perform real-time structure similarity searches against the entire structure database. Since conventional structure alignment methods need to sample different orientations of proteins in the three-dimensional space, they are time consuming and unsuitable for rapid, real-time database searches. To this end, we have developed 3D-SURFER and EM-SURFER, which utilize 3D Zernike descriptors (3DZD) to conduct high-throughput protein structure comparison, visualization, and analysis. Taking an atomic structure or an electron microscopy map of a protein or a protein complex as input, the 3DZD of a query protein is computed and compared with the 3DZD of all other proteins in PDB or EMDB. In addition, local geometrical characteristics of a query protein can be analyzed using VisGrid and LIGSITE CSC in 3D-SURFER. This article describes how to use 3D-SURFER and EM-SURFER to carry out protein surface shape similarity searches, local geometric feature analysis, and interpretation of the search results. © 2017 by John Wiley & Sons, Inc.
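
    Because the 3DZDs are rotation-invariant, fixed-length vectors, the database search reduces to nearest-neighbor ranking by vector distance, with no orientation sampling. A toy sketch (hypothetical protein names and 3-element descriptors; real 3DZDs are much longer vectors):

```python
import numpy as np

def rank_by_descriptor(query, database):
    """Rank database entries by Euclidean distance between descriptor
    vectors; rotation invariance makes orientation sampling unnecessary."""
    names = list(database)
    dist = np.array([np.linalg.norm(query - database[n]) for n in names])
    order = np.argsort(dist)
    return [(names[i], float(dist[i])) for i in order]

db = {"protA": np.array([1.0, 0.0, 2.0]),
      "protB": np.array([1.1, 0.1, 2.1]),
      "protC": np.array([5.0, 5.0, 5.0])}
query = np.array([1.0, 0.0, 2.0])
print(rank_by_descriptor(query, db)[0][0])  # protA
```

    This linear scan over precomputed descriptors is what makes searching an entire structure database feasible in real time, in contrast to pairwise structural alignment.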

  6. Real-Time Hand Posture Recognition Using a Range Camera

    NASA Astrophysics Data System (ADS)

    Lahamy, Herve

    The basic goal of human computer interaction is to improve the interaction between users and computers by making computers more usable and receptive to the user's needs. Within this context, the use of hand postures in replacement of traditional devices such as keyboards, mice and joysticks is being explored by many researchers. The goal is to interpret human postures via mathematical algorithms. Hand posture recognition has gained popularity in recent years, and could become the future tool for humans to interact with computers or virtual environments. An exhaustive description of the frequently used methods available in the literature for hand posture recognition is provided. It focuses on the different types of sensors and data used, the segmentation and tracking methods, the features used to represent the hand postures as well as the classifiers considered in the recognition process. Those methods are usually presented as highly robust with a recognition rate close to 100%. However, a couple of critical points necessary for a successful real-time hand posture recognition system require major improvement. Those points include the features used to represent the hand segment, the number of postures simultaneously recognizable, the invariance of the features with respect to rotation, translation and scale, and also the behavior of the classifiers against non-perfect hand segments, for example segments including part of the arm or missing part of the palm. A 3D time-of-flight camera named SR4000 has been chosen to develop a new methodology because of its capability to provide 3D information on the imaged scene in real time and at a high frame rate. This sensor has been described and evaluated for its capability to capture a moving hand in real time. A new recognition method that uses the 3D information provided by the range camera to recognize hand postures has been proposed. The different steps of this methodology including the segmentation, the tracking, the hand

  7. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glitzner, M; Lagendijk, J; Raaymakers, B

    Recent developments made MRI guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about changing anatomy by means of deformable image registration for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality on image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D-MR imaging data was sampled in a navigator-gated manner, acquiring one axial volume (360 × 260 × 100 mm³) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of (2.5 mm)³ for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data was downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. In kidney-liver boundaries and the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on the synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution and the displacement information is preserved even when halving the

  8. Tracking dynamic team activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tambe, M.

    1996-12-31

    AI researchers are striving to build complex multi-agent worlds with intended applications ranging from the RoboCup robotic soccer tournaments, to interactive virtual theatre, to large-scale real-world battlefield simulations. Agent tracking - monitoring other agents' actions and inferring their higher-level goals and intentions - is a central requirement in such worlds. While previous work has mostly focused on tracking individual agents, this paper goes beyond by focusing on agent teams. Team tracking poses the challenge of tracking a team's joint goals and plans. Dynamic, real-time environments add to the challenge, as ambiguities have to be resolved in real-time. The central hypothesis underlying the present work is that an explicit team-oriented perspective enables effective team tracking. This hypothesis is instantiated using the model tracing technology employed in tracking individual agents. Thus, to track team activities, team models are put to service. Team models are a concrete application of the joint intentions framework and enable an agent to track team activities, regardless of the agent's being a collaborative participant or a non-participant in the team. To facilitate real-time ambiguity resolution with team models: (i) aspects of tracking are cast as constraint satisfaction problems to exploit constraint propagation techniques; and (ii) a cost minimality criterion is applied to constrain tracking search. Empirical results from two separate tasks in real-world, dynamic environments - one collaborative and one competitive - are provided.

  9. An efficient quasi-3D particle tracking-based approach for transport through fractures with application to dynamic dispersion calculation.

    PubMed

    Wang, Lichun; Cardenas, M Bayani

    2015-08-01

    The quantitative study of transport through fractured media has continued for many decades, but has often been constrained by observational and computational challenges. Here, we developed an efficient quasi-3D random walk particle tracking (RWPT) algorithm to simulate solute transport through natural fractures based on a 2D flow field generated from the modified local cubic law (MLCL). As a reference, we also modeled the actual breakthrough curves (BTCs) through direct simulations with the 3D advection-diffusion equation (ADE) and Navier-Stokes equations. The RWPT algorithm along with the MLCL accurately reproduced the actual BTCs calculated with the 3D ADE. The BTCs exhibited non-Fickian behavior, including early arrival and long tails. Using the spatial information of particle trajectories, we further analyzed the dynamic dispersion process through moment analysis. From this, asymptotic time scales were determined for solute dispersion to distinguish non-Fickian from Fickian regimes. This analysis illustrates the advantage and benefit of using an efficient combination of flow modeling and RWPT.
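
    The core RWPT update is simple: each particle is advected by the local velocity and then perturbed by a Gaussian step whose variance is set by the diffusion coefficient. A minimal 2-D sketch with a uniform flow field (assumed parameter values, not the paper's MLCL-coupled implementation):

```python
import numpy as np

def rwpt_step(x, y, vx, vy, D, dt, rng):
    """One random-walk step: deterministic advection plus an isotropic
    Brownian displacement with standard deviation sqrt(2*D*dt)."""
    sigma = np.sqrt(2.0 * D * dt)
    x = x + vx * dt + sigma * rng.standard_normal(x.shape)
    y = y + vy * dt + sigma * rng.standard_normal(y.shape)
    return x, y

rng = np.random.default_rng(1)
n = 20000                      # particle ensemble
x = np.zeros(n)
y = np.zeros(n)
for _ in range(100):           # integrate to t = 1
    x, y = rwpt_step(x, y, vx=1.0, vy=0.0, D=1e-3, dt=0.01, rng=rng)

# The ensemble mean drifts to ~vx*t = 1.0 and the variance grows to
# ~2*D*t = 0.002, recovering the advection-diffusion moments.
print(x.mean(), x.var())
```

    Moment analysis as in the abstract then follows from the particle positions directly: the growth rate of the second spatial moment gives the (possibly time-dependent) dispersion coefficient, and departures from linear growth flag non-Fickian regimes.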

  10. Real-time interactive 3D computer stereography for recreational applications

    NASA Astrophysics Data System (ADS)

    Miyazawa, Atsushi; Ishii, Motonaga; Okuzawa, Kazunori; Sakamoto, Ryuuichi

    2008-02-01

    As the calculation costs of 3D computer stereography increase, its low-cost, high-speed implementation requires effective distribution of computing resources. In this paper, we attempt to re-classify 3D display technologies on the basis of humans' 3D perception, in order to determine what level of presence or reality is required in recreational video game systems. We then discuss the design and implementation of stereography systems in two categories of the new classification.

  11. A real-time multi-scale 2D Gaussian filter based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin

    2014-11-01

    Multi-scale 2-D Gaussian filters have been widely used in feature extraction (e.g. SIFT, edge detection), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, their computational complexity remains an issue for real-time image processing systems. To address this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on FPGA in this paper. Firstly, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Secondly, in order to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicated first-in first-out memory named CAFIFO (Column Addressing FIFO) was designed to avoid error propagation induced by clock glitches. Finally, a shared memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute a multi-scale 2-D Gaussian filter within one pixel clock period and is therefore suitable for real-time image processing. Moreover, the main principle can be generalized to other convolution-based operators, such as the Gabor filter, the Sobel operator and so on.
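
    The multiplier-saving separation in the second step relies on the 2-D Gaussian kernel being the outer product of two 1-D kernels, so a k×k convolution (k² multiplies per pixel) becomes two length-k convolutions (2k multiplies). A software sketch of the identity, with zero-padded boundaries assumed:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius+1."""
    t = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-t * t / (2.0 * sigma * sigma))
    return k / k.sum()

def separable_gauss(img, sigma, radius=4):
    """2-D Gaussian filtering as two 1-D convolutions (rows, then columns)."""
    k = gaussian_kernel(sigma, radius)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))

# Reference: direct 2-D convolution with the outer-product kernel.
k = gaussian_kernel(1.0, 4)
k2d = np.outer(k, k)
pad = np.pad(img, 4)
direct = np.zeros_like(img)
for i in range(32):
    for j in range(32):
        direct[i, j] = (pad[i:i + 9, j:j + 9] * k2d).sum()

print(np.allclose(separable_gauss(img, 1.0), direct))  # True
```

    (The symmetric kernel makes convolution and correlation coincide, which is why the reference loop needs no kernel flip.) In hardware the same identity lets one 1-D multiplier array be reused for both passes, which is the saving the abstract refers to.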

  12. Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish

    PubMed Central

    Maaswinkel, Hans; Zhu, Liqun; Weng, Wei

    2013-01-01

    Like many aquatic animals, zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single and groups of adult zebrafish. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. The step-by-step protocols for calibration and using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about vertical and horizontal distribution of the zebrafish, about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle) and it provides the data necessary to calculate parameters for social cohesions when testing shoals. PMID:24336189
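
    The water-to-air correction that the calibration must absorb can be illustrated, in the paraxial (small-angle) limit, by the classic apparent-depth relation (a simplified sketch with assumed refractive indices; the actual calibration procedure also handles oblique rays and the mirror geometry):

```python
def true_depth(apparent_depth_cm, n_water=1.333, n_air=1.000):
    """Paraxial approximation: an underwater object viewed from air
    appears shallower by the ratio n_air/n_water, so the true depth
    is the apparent depth scaled back up by n_water/n_air."""
    return apparent_depth_cm * n_water / n_air

# A fish whose image appears 6 cm below the surface is really ~8 cm deep.
print(true_depth(6.0))
```

    Ignoring this refraction would bias every vertical coordinate by roughly 25%, which is why the calibration step is required before the 3D paths are reconstructed.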

  13. Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments

    PubMed Central

    Mossel, Annette

    2015-01-01

    In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388
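
    The 3D point reconstruction step from a calibrated stereo rig can be sketched with linear (DLT) triangulation: each pixel observation contributes two linear constraints on the homogeneous world point, and the SVD null vector solves the stacked system. Hypothetical camera parameters below; the deployed system adds robust target identification and works at far longer ranges:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views,
    given 3x4 projection matrices P1, P2 and pixel observations x1, x2."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector = homogeneous solution
    return X[:3] / X[3]

# Two hypothetical rectified cameras with a 0.5 m baseline along x.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

    With noisy blob centroids the same solve gives the least-squares ray intersection, and the depth error grows with range over baseline, which is why the reported accuracy degrades from sub-centimeter at 30 m to low-centimeter at 100 m.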

  14. A Real-Time Orbit Determination Method for Smooth Transition from Optical Tracking to Laser Ranging of Debris

    PubMed Central

    Li, Bin; Sang, Jizhang; Zhang, Zhongping

    2016-01-01

    A critical requirement to achieve high efficiency of debris laser tracking is to have sufficiently accurate orbit predictions (OP) in both the pointing direction (better than 20 arc seconds) and the distance from the tracking station to the debris objects, with the former more important than the latter because of the narrow laser beam. When the two-line element (TLE) is used to provide the orbit predictions, the resultant pointing errors are usually on the order of tens to hundreds of arc seconds. In practice, therefore, angular observations of debris objects are first collected using an optical tracking sensor, and then used to guide the laser beam pointing to the objects. The manual guidance may cause interruptions to the laser tracking, and consequently loss of valuable laser tracking data. This paper presents a real-time orbit determination (OD) and prediction method to realize smooth and efficient debris laser tracking. The method uses TLE-computed positions and angles over a short arc of less than 2 min as observations in an OD process where simplified force models are considered. After the OD convergence, the OP is performed from the last observation epoch to the end of the tracking pass. Simulation and real tracking data processing results show that the pointing prediction errors are usually less than 10″ and the distance errors less than 100 m; therefore, the prediction accuracy is sufficient for blind laser tracking. PMID:27347958

  15. A GPU-Accelerated 3-D Coupled Subsample Estimation Algorithm for Volumetric Breast Strain Elastography.

    PubMed

    Peng, Bo; Wang, Yuqi; Hall, Timothy J; Jiang, Jingfeng

    2017-04-01

    The primary objective of this paper was to extend a previously published 2-D coupled subsample tracking algorithm for 3-D speckle tracking in the framework of ultrasound breast strain elastography. In order to overcome the heavy computational cost, we investigated the use of a graphics processing unit (GPU) to accelerate the 3-D coupled subsample speckle tracking method. The performance of the proposed GPU implementation was tested using a tissue-mimicking phantom and in vivo breast ultrasound data. The performance of this 3-D subsample tracking algorithm was compared with the conventional 3-D quadratic subsample estimation algorithm. On the basis of these evaluations, we concluded that the GPU implementation of this 3-D subsample estimation algorithm can provide high-quality strain data (i.e., high correlation between the predeformation and the motion-compensated postdeformation radio frequency echo data and high contrast-to-noise ratio strain images), as compared with the conventional 3-D quadratic subsample algorithm. Using the GPU implementation of the 3-D speckle tracking algorithm, volumetric strain data can be achieved relatively fast (approximately 20 s per volume [2.5 cm × 2.5 cm × 2.5 cm]).
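
    The conventional quadratic subsample step used as the baseline fits a parabola through the correlation peak and its two integer-lag neighbors; the vertex of the parabola gives the fractional displacement. A generic 1-D sketch of that interpolation (not the GPU kernel itself):

```python
def parabolic_subsample(c_prev, c_peak, c_next):
    """Fit a parabola through three correlation samples around the integer
    peak and return the fractional offset of its vertex, in samples."""
    denom = c_prev - 2.0 * c_peak + c_next
    return 0.0 if denom == 0 else 0.5 * (c_prev - c_next) / denom

# Correlation samples drawn from a parabola peaking at +0.3 samples:
corr = lambda d: 1.0 - (d - 0.3) ** 2
offset = parabolic_subsample(corr(-1), corr(0), corr(1))
print(round(offset, 6))  # 0.3
```

    The coupled subsample method the paper advocates instead estimates the axial, lateral and elevational fractional offsets jointly, which avoids the axis-by-axis bias of this independent quadratic fit; that joint solve is the computationally heavy part offloaded to the GPU.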

  16. SU-E-T-154: Establishment and Implement of 3D Image Guided Brachytherapy Planning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, S; Zhao, S; Chen, Y

    2014-06-01

    Purpose: The inability to observe the dose distribution intuitively is a limitation of the existing 2D pre-implantation dose planning. Meanwhile, a navigation module is essential to improve the accuracy and efficiency of the implantation. Hence, a 3D Image Guided Brachytherapy Planning System conducting dose planning and intra-operative navigation based on 3D multi-organ reconstruction is developed. Methods: Multiple organs, including the tumor, are reconstructed in one sweep of all the segmented images using the multi-organ reconstruction method. The reconstructed organ group establishes a three-dimensional visualized operative environment. The 3D dose maps of the three-dimensional conformal localized dose planning are calculated with the Monte Carlo method, while the corresponding isodose lines and isodose surfaces are displayed in a stereo view. The real-time intra-operative navigation is based on an electromagnetic tracking system (ETS) and the fusion between MRI and ultrasound images. Applying the least squares method, the coordinate registration between the 3D models and the patient is realized by the ETS, which is calibrated by a laser tracker. The system is validated by working on eight patients with prostate cancer. The navigation has passed the precision measurement in the laboratory. Results: The traditional marching cubes (MC) method reconstructs one organ at a time and assembles them together. Compared to MC, the presented multi-organ reconstruction method has superiorities in preserving the integrity and connectivity of reconstructed organs. The 3D conformal localized dose planning, realizing the 'exfoliation display' of different isodose surfaces, helps ensure that the dose distribution encompasses the nidus and avoids injury to healthy tissues. During the navigation, surgeons can observe the coordinates of instruments in real time employing the ETS. After the calibration, the accuracy error of the needle position is less than 2.5 mm according to the experiments. Conclusion: The speed

  17. Tracking a head-mounted display in a room-sized environment with head-mounted cameras

    NASA Astrophysics Data System (ADS)

    Wang, Jih-Fang; Azuma, Ronald T.; Bishop, Gary; Chi, Vernon; Eyles, John; Fuchs, Henry

    1990-10-01

    This paper presents our efforts to accurately track a Head-Mounted Display (HMD) in a large environment. We review our current benchtop prototype (introduced in [WCF90]), then describe our plans for building the full-scale system. Both systems use an inside-out optical tracking scheme, where lateral-effect photodiodes mounted on the user's helmet view flashing infrared beacons placed in the environment. Church's method uses the measured 2D image positions and the known 3D beacon locations to recover the 3D position and orientation of the helmet in real-time. We discuss the implementation and performance of the benchtop prototype. The full-scale system design includes ceiling panels that hold the infrared beacons and a new sensor arrangement of two photodiodes with holographic lenses. In the full-scale system, the user can walk almost anywhere under the grid of ceiling panels, making the working volume nearly as large as the room.
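
    Church's method solves the resection from 2D image measurements and known 3D beacons iteratively; a common linear stand-in for the same problem, sketched below with hypothetical beacon positions, is DLT camera resection, which recovers the full 3x4 projection (and hence pose) from six or more 2D-3D correspondences:

```python
import numpy as np

def dlt_resection(X, x):
    """Direct linear transform: recover the 3x4 projection matrix (up to
    scale) from >=6 known 3-D points X (Nx3) and their 2-D images x (Nx2)."""
    rows = []
    for Xw, xi in zip(X, x):
        Xh = np.append(Xw, 1.0)
        rows.append(np.hstack([Xh, np.zeros(4), -xi[0] * Xh]))
        rows.append(np.hstack([np.zeros(4), Xh, -xi[1] * Xh]))
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)      # null vector, reshaped

rng = np.random.default_rng(0)
P_true = np.hstack([np.eye(3), np.array([[0.1], [0.2], [2.0]])])
X = rng.uniform(-1.0, 1.0, (8, 3))               # 8 beacons in front of camera
xh = (P_true @ np.hstack([X, np.ones((8, 1))]).T).T
x = xh[:, :2] / xh[:, 2:3]                       # projected 2-D observations

P = dlt_resection(X, x)
P = P / P[2, 3] * P_true[2, 3]                   # fix the arbitrary DLT scale
print(np.allclose(P, P_true, atol=1e-8))  # True
```

    In the HMD system this solve must run at display rates from the photodiode measurements, which is why the beacon layout (the ceiling grid) is engineered so enough beacons are always in view.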

  18. 3D wide field-of-view Gabor-domain optical coherence microscopy advancing real-time in-vivo imaging and metrology

    NASA Astrophysics Data System (ADS)

    Canavesi, Cristina; Cogliati, Andrea; Hayes, Adam; Tankam, Patrice; Santhanam, Anand; Rolland, Jannick P.

    2017-02-01

    Real-time volumetric high-definition wide-field-of-view in-vivo cellular imaging requires micron-scale resolution in 3D. Compactness of the handheld device and distortion-free images with cellular resolution are also critically required for onsite use in clinical applications. By integrating a custom liquid lens-based microscope and a dual-axis MEMS scanner in a compact handheld probe, Gabor-domain optical coherence microscopy (GD-OCM) breaks the lateral resolution limit of optical coherence tomography through depth, overcoming the tradeoff between numerical aperture and depth of focus, enabling advances in biotechnology. Furthermore, distortion-free imaging with no post-processing is achieved with a compact, lightweight handheld MEMS scanner that obtained a 12-fold reduction in volume and 17-fold reduction in weight over a previous dual-mirror galvanometer-based scanner. Approaching the holy grail of medical imaging - noninvasive real-time imaging with histologic resolution - GD-OCM demonstrates invariant resolution of 2 μm throughout a volume of 1 × 1 × 0.6 mm³, acquired and visualized in less than 2 minutes with parallel processing on graphics processing units. Results on the metrology of manufactured materials and imaging of human tissue with GD-OCM are presented.

  19. Integrated optical 3D digital imaging based on DSP scheme

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently without PC support. This scheme is based on a parallel hardware structure with the aid of a DSP and a field programmable gate array (FPGA) to realize 3-D imaging. In this integrated scheme of 3-D imaging, phase measurement profilometry is adopted. To realize pipeline processing of the fringe projection, image acquisition and fringe pattern analysis, we present a multi-threaded application program developed under the environment of the DSP/BIOS RTOS (real-time operating system). The RTOS provides a preemptive kernel and a powerful configuration tool, with which we are able to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme can reach a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and can implement fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.

  20. A Comparison of Accuracy of Image- versus Hardware-based Tracking Technologies in 3D Fusion in Aortic Endografting.

    PubMed

    Rolls, A E; Maurel, B; Davis, M; Constantinou, J; Hamilton, G; Mastracci, T M

    2016-09-01

    Fusion of three-dimensional (3D) computed tomography and intraoperative two-dimensional imaging in endovascular surgery relies on manual rigid co-registration of bony landmarks and tracking of hardware to provide a 3D overlay (hardware-based tracking, HWT). An alternative technique (image-based tracking, IMT) uses image recognition to register and place the fusion mask. We present preliminary experience with an agnostic fusion technology that uses IMT, with the aim of comparing the accuracy of overlay for this technology with HWT. Data were collected prospectively for 12 patients. All devices were deployed using both IMT and HWT fusion assistance concurrently. Postoperative analysis of both systems was performed by three blinded expert observers, from selected time-points during the procedures, using the displacement of fusion rings, the overlay of vascular markings and the true ostia of the renal arteries. The mean overlay error and the deviation from the mean error were derived using image analysis software. Comparison of the mean overlay error was made between IMT and HWT. The validity of the point-picking technique was assessed. IMT was successful in all of the first 12 cases, whereas technical learning curve challenges thwarted HWT in four cases. When independent operators assessed the degree of accuracy of the overlay, the median error for IMT was 3.9 mm (IQR 2.89-6.24, max 9.5) versus 8.64 mm (IQR 6.1-16.8, max 24.5) for HWT (p = .001). Variance per observer was 0.69 mm² and the 95% limit of agreement ±1.63. In this preliminary study, the error of magnitude of displacement from the "true anatomy" during image overlay in IMT was less than for HWT. This confirms that ongoing manual re-registration, as recommended by the manufacturer, should be performed for HWT systems to maintain accuracy. The error in position of the fusion markers for IMT was consistent, and thus may be considered predictable.

  1. Fusion of current technologies with real-time 3D MEMS ladar for novel security and defense applications

    NASA Astrophysics Data System (ADS)

    Siepmann, James P.

    2006-05-01

    Through the utilization of scanning MEMS mirrors in ladar devices, a whole new range of potential military, Homeland Security, law enforcement, and civilian applications is now possible. Currently, ladar devices are typically large (>15,000 cc), heavy (>15 kg), and expensive (>$100,000), while current MEMS ladar designs are more than an order of magnitude less in each respect, opening up a myriad of potential new applications. One such application with current technology is a GPS-integrated MEMS ladar unit, which could be used for real-time border monitoring or the creation of virtual 3D battlefields after being dropped or propelled into hostile territory. Another current technology that can be integrated into a MEMS ladar unit is digital video, which can give high resolution and true color to a picture that is then enhanced with range information in a real-time display format that is easier for the user to understand and assimilate than typical gray-scale or false color images. The problem with using 2-axis MEMS mirrors in ladar devices is that in order to have a resonance frequency capable of practical real-time scanning, they must either be quite small and/or have a low maximum tilt angle. Typically, this value has been less than or equal to 10 mg·mm²·kHz²-degrees. We have been able to solve this problem by using angle amplification techniques that utilize a series of MEMS mirrors and/or a specialized set of optics to achieve a broad field of view. These techniques and some of their novel applications mentioned will be explained and discussed herein.

  2. 2-D Versus 3-D Cross-Correlation-Based Radial and Circumferential Strain Estimation Using Multiplane 2-D Ultrafast Ultrasound in a 3-D Atherosclerotic Carotid Artery Model.

    PubMed

    Fekkes, Stein; Swillens, Abigail E S; Hansen, Hendrik H G; Saris, Anne E C M; Nillesen, Maartje M; Iannaccone, Francesco; Segers, Patrick; de Korte, Chris L

    2016-10-01

    Three-dimensional (3-D) strain estimation might improve the detection and localization of high-strain regions in the carotid artery (CA) for identification of vulnerable plaques. This paper compares 2-D versus 3-D displacement estimation in terms of radial and circumferential strain, using simulated ultrasound (US) images of a patient-specific 3-D atherosclerotic CA model at the bifurcation, embedded in surrounding tissue, generated with ABAQUS software. Global longitudinal motion was superimposed on the model based on literature data. A Philips L11-3 linear array transducer was simulated, which transmitted plane waves at three alternating angles at a pulse repetition rate of 10 kHz. Interframe (IF) radio-frequency US data were simulated in Field II for 191 equally spaced longitudinal positions of the internal CA. Accumulated radial and circumferential displacements were estimated by tracking the IF displacements estimated by a two-step normalized cross-correlation method and displacement compounding. Least-squares strain estimation was performed to determine accumulated radial and circumferential strain. The performance of the 2-D and 3-D methods was compared by calculating the root-mean-squared error of the estimated strains with respect to the reference strains obtained from the model. More accurate strain images were obtained using the 3-D displacement estimation for the entire cardiac cycle. The 3-D technique clearly outperformed the 2-D technique in phases with high IF longitudinal motion. In fact, the large IF longitudinal motion rendered it impossible to accurately track the tissue and accumulate strains over the entire cardiac cycle with the 2-D technique.
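The normalized cross-correlation displacement estimation described above can be illustrated with a minimal, single-step sketch. This is an illustrative stand-in, not the authors' two-step implementation: the function names, the exhaustive integer-pixel search, and the patch sizes are assumptions, and the paper's displacement compounding and subsample refinement are omitted.

```python
import numpy as np

def ncc(a, b):
    # zero-mean normalized cross-correlation of two equal-sized patches
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def estimate_displacement(pre, post, center, half=8, search=5):
    # exhaustive integer-pixel search for the best-matching patch in `post`
    cy, cx = center
    ref = pre[cy - half:cy + half + 1, cx - half:cx + half + 1]
    best, best_d = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = post[cy + dy - half:cy + dy + half + 1,
                        cx + dx - half:cx + dx + half + 1]
            score = ncc(ref, cand)
            if score > best:
                best, best_d = score, (dy, dx)
    return best_d
```

Interframe displacements estimated this way can then be accumulated over the cardiac cycle, as the abstract describes, before least-squares strain estimation.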

  3. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  4. SU-D-18A-06: Variation of Controlled Breath Hold From CT Simulation to Treatment and Its Dosimetric Impact for Left-Sided Breast Radiotherapy with a Real-Time Optical Tracking System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittauer, K; Deraniyagala, R; Li, J

    2014-06-01

    Purpose: Different breath-hold (BH) maneuvers (abdominal breathing vs. chest breathing) during CT simulation and treatment can lead to chest wall positional variation. The purpose of this study is to quantify the variation of active breathing control (ABC)-assisted BH and estimate its dosimetric impact for left-sided whole-breast radiotherapy with a real-time optical tracking system (OTS). Methods: Seven breast cancer patients were included. An in-house OTS tracked an infrared (IR) marker affixed over the xiphoid process of the patient at CT simulation and throughout the treatment course to measure BH variations. Correlation between the IR marker and the breast was studied for dosimetric purposes. The positional variations of 860 BHs were retrospectively incorporated into treatment plans to assess their dosimetric impact on the breast and cardiac organs (heart and left anterior descending artery [LAD]). Results: The mean intrafraction variations were 2.8 mm, 2.7 mm, and 1.6 mm in the anteroposterior (AP), craniocaudal (CC), and mediolateral (ML) directions, respectively. Mean stability in any direction was within 1.5 mm. A general trend of BH undershoot at treatment relative to CT simulation was observed, with averages of 4.4 mm, 3.6 mm, and 0.1 mm in the AP, CC, and ML directions, respectively. Undershoot up to 12.6 mm was observed for individual patients. The difference between the planned and delivered dose to breast targets was negligible. The average planned/delivered mean heart doses, mean LAD doses, and max LAD doses were 1.4/2.1, 7.4/15.7, and 18.6/31.0 Gy, respectively. Conclusion: Systematic undershoot was observed in ABC-assisted BHs from CT simulation to treatment. Its dosimetric impact on breast coverage was minimized with image guidance, but the benefits of cardiac organ sparing were degraded. A real-time tracking system can be used in conjunction with the ABC device to improve BH reproducibility.

  5. Real-time, aptamer-based tracking of circulating therapeutic agents in living animals

    PubMed Central

    Ferguson, B. Scott; Hoggarth, David A.; Maliniak, Dan; Ploense, Kyle; White, Ryan J.; Woodward, Nick; Hsieh, Kuangwen; Bonham, Andrew J.; Eisenstein, Michael; Kippin, Tod; Plaxco, Kevin W.; Soh, H. Tom

    2014-01-01

    A sensor capable of continuously measuring specific molecules in the bloodstream in vivo would give clinicians a valuable window into patients’ health and their response to therapeutics. Such technology would enable truly personalized medicine, wherein therapeutic agents could be tailored with optimal doses for each patient to maximize efficacy and minimize side effects. Unfortunately, continuous, real-time measurement is currently only possible for a handful of targets, such as glucose, lactate, and oxygen, and the few existing platforms for continuous measurement are not generalizable to the monitoring of other analytes, such as small-molecule therapeutics. In response, we have developed a real-time biosensor capable of continuously tracking a wide range of circulating drugs in living subjects. Our microfluidic electrochemical detector for in vivo continuous monitoring (MEDIC) requires no exogenous reagents, operates at room temperature, and can be reconfigured to measure different target molecules by exchanging probes in a modular manner. To demonstrate the system's versatility, we measured therapeutic in vivo concentrations of doxorubicin (a chemotherapeutic) and kanamycin (an antibiotic) in live rats and in human whole blood for several hours with high sensitivity and specificity at sub-minute temporal resolution. Importantly, we show that MEDIC can also obtain pharmacokinetic parameters for individual animals in real time. Accordingly, just as continuous glucose monitoring technology is currently revolutionizing diabetes care, we believe MEDIC could be a powerful enabler for personalized medicine by ensuring delivery of optimal drug doses for individual patients based on direct detection of physiological parameters. PMID:24285484

  6. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters

    PubMed Central

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-01-01

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter (CF) methods perform favorably in robustness, accuracy, and speed. However, they fall short when dealing with pervasive target scale variation, motion blur, and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) that significantly improves performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also handles scale-variation estimation in many scenarios. We theoretically analyze how motion affects correlation filters and use the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without significant computational cost. Our algorithm preserves the properties of KCF while also handling these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs favorably against top-ranked trackers. PMID:27618046
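A minimal single-channel correlation filter, trained in the Fourier domain in the MOSSE style, illustrates the mechanism that KCF-based trackers such as this one build on: a filter learned from one patch produces a response map whose peak localizes the (circularly) shifted target. All names and parameter values here are illustrative assumptions; KCF itself adds a kernel trick and circulant-structure derivations not shown.

```python
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-4):
    # desired response g: a Gaussian peaked at the origin (circular distances)
    n = patch.shape[0]
    d = np.minimum(np.arange(n), n - np.arange(n))
    g = np.exp(-(d[:, None] ** 2 + d[None, :] ** 2) / (2 * sigma ** 2))
    # closed-form Fourier-domain filter: H = G * conj(F) / (F * conj(F) + lam)
    F = np.fft.fft2(patch)
    G = np.fft.fft2(g)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def locate(H, frame):
    # the correlation response peaks at the target's circular displacement
    r = np.real(np.fft.ifft2(H * np.fft.fft2(frame)))
    return np.unravel_index(np.argmax(r), r.shape)
```

On the training patch itself the response peaks at (0, 0); on a shifted copy the peak moves by the shift, which is how CF trackers recover interframe motion at frame rate.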

  7. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters.

    PubMed

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-09-07

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter (CF) methods perform favorably in robustness, accuracy, and speed. However, they fall short when dealing with pervasive target scale variation, motion blur, and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) that significantly improves performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also handles scale-variation estimation in many scenarios. We theoretically analyze how motion affects correlation filters and use the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without significant computational cost. Our algorithm preserves the properties of KCF while also handling these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs favorably against top-ranked trackers.

  8. The birth of a dinosaur footprint: Subsurface 3D motion reconstruction and discrete element simulation reveal track ontogeny

    PubMed Central

    2014-01-01

    Locomotion over deformable substrates is a common occurrence in nature. Footprints represent sedimentary distortions that provide anatomical, functional, and behavioral insights into trackmaker biology. The interpretation of such evidence can be challenging, however, particularly for fossil tracks recovered at bedding planes below the originally exposed surface. Even in living animals, the complex dynamics that give rise to footprint morphology are obscured by both foot and sediment opacity, which conceals animal–substrate and substrate–substrate interactions. We used X-ray reconstruction of moving morphology (XROMM) to image and animate the hind limb skeleton of a chicken-like bird traversing a dry, granular material. Foot movement differed significantly from walking on solid ground; the longest toe penetrated to a depth of ∼5 cm, reaching an angle of 30° below horizontal before slipping backward on withdrawal. The 3D kinematic data were integrated into a validated substrate simulation using the discrete element method (DEM) to create a quantitative model of limb-induced substrate deformation. Simulation revealed that despite sediment collapse yielding poor quality tracks at the air–substrate interface, subsurface displacements maintain a high level of organization owing to grain–grain support. Splitting the substrate volume along “virtual bedding planes” exposed prints that more closely resembled the foot and could easily be mistaken for shallow tracks. DEM data elucidate how highly localized deformations associated with foot entry and exit generate specific features in the final tracks, a temporal sequence that we term “track ontogeny.” This combination of methodologies fosters a synthesis between the surface/layer-based perspective prevalent in paleontology and the particle/volume-based perspective essential for a mechanistic understanding of sediment redistribution during track formation. PMID:25489092

  9. Optimization of real-time rigid registration motion compensation for prostate biopsies using 2D/3D ultrasound

    NASA Astrophysics Data System (ADS)

    Gillies, Derek J.; Gardi, Lori; Zhao, Ren; Fenster, Aaron

    2017-03-01

    During image-guided prostate biopsy, needles are targeted at suspicious tissues to obtain specimens that are later examined histologically for cancer. Patient motion causes inaccuracies in the MR-transrectal ultrasound (TRUS) image fusion approaches used to augment the conventional biopsy procedure. Motion compensation using a single, user-initiated correction can temporarily compensate for prostate motion, but real-time continuous registration improves clinical workflow by reducing user interaction and procedure time. An automatic motion compensation method, approaching the frame rate of a TRUS-guided system, has been developed for use during fusion-based prostate biopsy to improve image guidance. 2D and 3D TRUS images of a prostate phantom were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization, with both user-initiated and continuous registration techniques. The user-initiated correction achieved computation times of 78 ± 35 ms, 74 ± 28 ms, and 113 ± 49 ms for in-plane, out-of-plane, and roll motions, respectively, corresponding to errors of 0.5 ± 0.5 mm, 1.5 ± 1.4 mm, and 1.5 ± 1.6°. The continuous correction performed significantly faster (p < 0.05) than the user-initiated method, with computation times of 31 ± 4 ms, 32 ± 4 ms, and 31 ± 6 ms for in-plane, out-of-plane, and roll motions, respectively, corresponding to errors of 0.2 ± 0.2 mm, 0.6 ± 0.5 mm, and 0.8 ± 0.4°.

  10. Real-time free-viewpoint DIBR for large-size 3DLED

    NASA Astrophysics Data System (ADS)

    Wang, NengWen; Sang, Xinzhu; Guo, Nan; Wang, Kuiru

    2017-10-01

    Three-dimensional (3D) display technologies have made great progress in recent years, and lenticular-array-based 3D display is a relatively mature technology, the one most likely to be commercialized. In naked-eye 3D display, the screen size is one of the most important factors affecting the viewing experience. In order to construct a large-size naked-eye 3D display system, an LED display is used. However, pixel misalignment is an inherent defect of the LED screen, which degrades rendering quality. To address this issue, an efficient image synthesis algorithm is proposed. The Texture-plus-Depth (T+D) format is chosen for the display content, and a modified Depth Image Based Rendering (DIBR) method is proposed to synthesize new views. To achieve real-time performance, the whole algorithm is implemented on the GPU. With state-of-the-art hardware and the efficient algorithm, a naked-eye 3D display system with an LED screen size of 6 m × 1.8 m is achieved. Experiments show that the algorithm can process 43-view 3D video with 4K × 2K resolution in real time on the GPU, and a vivid 3D experience is perceived.
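The core of DIBR view synthesis is warping texture pixels sideways by a disparity derived from the depth map (d = f·B/Z for focal length f and baseline B). A toy forward-warp sketch follows; it is not the paper's modified DIBR, and hole filling and depth-ordered occlusion handling, which any practical system needs, are omitted. All names are illustrative.

```python
import numpy as np

def dibr_forward_warp(texture, depth, focal_times_baseline):
    # shift every pixel horizontally by its disparity d = f*B / Z
    # (nearest-integer disparity; holes stay zero, occlusions unresolved)
    h, w = texture.shape
    out = np.zeros_like(texture)
    disparity = np.rint(focal_times_baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = texture[y, x]
    return out
```

Running this per virtual viewpoint (with a different effective baseline each time) is what a multi-view synthesis stage does; the GPU implementation in the paper parallelizes exactly this per-pixel warp.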

  11. An interactive display system for large-scale 3D models

    NASA Astrophysics Data System (ADS)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display of and interaction with large-scale 3D models in common 3D display software such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming through the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption is significantly decreased via an internal/external memory exchange mechanism, making it possible to display a large reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.

  12. Electromagnetic tracking of flexible robotic catheters enables "assisted navigation" and brings automation to endovascular navigation in an in vitro study.

    PubMed

    Schwein, Adeline; Kramer, Benjamin; Chinnadurai, Ponraj; Virmani, Neha; Walker, Sean; O'Malley, Marcia; Lumsden, Alan B; Bismuth, Jean

    2018-04-01

    Combining three-dimensional (3D) catheter control with electromagnetic (EM) tracking-based navigation significantly reduced fluoroscopy time and improved robotic catheter movement quality in a previous in vitro pilot study. The aim of this study was to expound on previous results and to expand the value of EM tracking with a novel feature, assisted navigation, allowing automatic catheter orientation and semiautomatic vessel cannulation. Eighteen users navigated a robotic catheter in an aortic aneurysm phantom using an EM guidewire and a modified 9F robotic catheter with EM sensors at the tips of both leader and sheath. All users cannulated two targets, the left renal artery and the posterior gate, using four visualization modes: (1) standard fluoroscopy (control); (2) 2D biplane fluoroscopy showing real-time virtual catheter localization and orientation from EM tracking; (3) 2D biplane fluoroscopy with the novel EM assisted navigation, allowing the user to define the target vessel: the robotic catheter orients itself automatically toward the target, and the user then only needs to advance the guidewire along this predefined optimized path to catheterize the vessel; while the catheter is advanced over the wire, the assisted navigation automatically modifies catheter bending and rotation to ensure smooth progression and avoid loss of wire access; (4) virtual 3D representation of the phantom showing real-time virtual catheter localization and orientation. Standard fluoroscopy was always available; cannulation and fluoroscopy times were noted for every mode and target cannulation. Quality of catheter movement was assessed by measuring the number of submovements of the catheter using the 3D coordinates of the EM sensors. A t-test was used to compare the standard fluoroscopy mode against the EM tracking modes. EM tracking significantly reduced the mean fluoroscopy time (P < .001) and the number of submovements (P < .02) for both cannulation tasks. For the posterior gate

  13. Microscopic 3D measurement of dynamic scene using optimized pulse-width-modulation binary fringe

    NASA Astrophysics Data System (ADS)

    Hu, Yan; Chen, Qian; Feng, Shijie; Tao, Tianyang; Li, Hui; Zuo, Chao

    2017-10-01

    Microscopic 3-D shape measurement can provide accurate metrology of delicate and complex MEMS components to ensure the proper performance of the final devices. Fringe projection profilometry (FPP) has the advantages of being noncontact and highly accurate, making it widely used in 3-D measurement. Recently, tremendous advances in electronics have made 3-D measurement more accurate and faster. However, research on real-time microscopic 3-D measurement is still rarely reported. In this work, we effectively combine optimized binary structured patterns with a number-theoretical phase unwrapping algorithm to realize real-time 3-D shape measurement. Slight defocusing of our proposed binary patterns considerably alleviates the measurement error of phase-shifting FPP, giving the binary patterns performance comparable to ideal sinusoidal patterns. Real-time 3-D measurement at about 120 frames per second (FPS) is achieved, and an experimental result on a vibrating earphone is presented.
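The phase-shifting analysis underlying FPP recovers a wrapped phase map from N fringe images by a least-squares combination. A minimal sketch, assuming ideal N-step sinusoidal fringes I_n = A + B·cos(φ + 2πn/N); the paper's binary-pattern defocusing and number-theoretical unwrapping steps are not shown, and the names are illustrative:

```python
import numpy as np

def wrapped_phase(frames):
    # standard N-step phase shifting: frames[n] = A + B*cos(phi + 2*pi*n/N);
    # the sums isolate -sin(phi) and cos(phi) up to a common factor N*B/2
    n = len(frames)
    deltas = 2 * np.pi * np.arange(n) / n
    num = sum(f * np.sin(d) for f, d in zip(frames, deltas))
    den = sum(f * np.cos(d) for f, d in zip(frames, deltas))
    return np.arctan2(-num, den)  # wrapped phase in (-pi, pi]
```

The wrapped phase is then unwrapped (here, by the paper's number-theoretical method over multiple fringe frequencies) and converted to height via the system calibration.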

  14. Large holographic displays for real-time applications

    NASA Astrophysics Data System (ADS)

    Schwerdtner, A.; Häussler, R.; Leister, N.

    2008-02-01

    Holography is generally accepted as the ultimate approach to display three-dimensional scenes or objects. Principally, the reconstruction of an object from a perfect hologram would appear indistinguishable from viewing the corresponding real-world object. Up to now two main obstacles have prevented large-screen Computer-Generated Holograms (CGH) from achieving a satisfactory laboratory prototype not to mention a marketable one. The reason is a small cell pitch CGH resulting in a huge number of hologram cells and a very high computational load for encoding the CGH. These seemingly inevitable technological hurdles for a long time have not been cleared limiting the use of holography to special applications, such as optical filtering, interference, beam forming, digital holography for capturing the 3-D shape of objects, and others. SeeReal Technologies has developed a new approach for real-time capable CGH using the socalled Tracked Viewing Windows technology to overcome these problems. The paper will show that today's state of the art reconfigurable Spatial Light Modulators (SLM), especially today's feasible LCD panels are suited for reconstructing large 3-D scenes which can be observed from large viewing angles. For this to achieve the original holographic concept of containing information from the entire scene in each part of the CGH has been abandoned. This substantially reduces the hologram resolution and thus the computational load by several orders of magnitude making thus real-time computation possible. A monochrome real-time prototype measuring 20 inches has been built and demonstrated at last year's SID conference and exhibition 2007 and at several other events.

  15. A fiducial detection algorithm for real-time image guided IMRT based on simultaneous MV and kV imaging

    PubMed Central

    Mao, Weihua; Riaz, Nadeem; Lee, Louis; Wiersma, Rodney; Xing, Lei

    2008-01-01

    The advantage of highly conformal dose techniques such as 3DCRT and IMRT is limited by intrafraction organ motion. A new approach to gaining near real-time 3D positions of internally implanted fiducial markers is to analyze simultaneous onboard kV beam and treatment MV beam images (from fluoroscopic or electronic portal imaging devices). Before we can use this real-time image guidance for clinical 3DCRT and IMRT treatments, four outstanding issues need to be addressed. (1) How will fiducial motion blur the image and hinder fiducial tracking? kV and MV images are acquired while the tumor is moving at various speeds. We find that a fiducial can be successfully detected at a maximum linear speed of 1.6 cm/s. (2) How does MV beam scattering affect kV imaging? We investigate this by varying MV field size and kV source-to-imager distance, and find that common treatment MV beams do not hinder fiducial detection in simultaneous kV images. (3) How can one detect fiducials on images from 3DCRT and IMRT treatment beams when the MV fields are modified by a multileaf collimator (MLC)? The presented analysis is capable of segmenting an MV field from the blocking MLC and detecting visible fiducials. This enables the calculation of nearly real-time 3D positions of markers during a real treatment. (4) Is the analysis fast enough to track fiducials in nearly real time? Multiple methods are adopted to predict marker positions and reduce search regions. The average detection time per frame for three markers in a 1024 × 768 image was reduced to 0.1 s or less. Solving these four issues paves the way to tracking moving fiducial markers throughout a 3DCRT or IMRT treatment. Altogether, these four studies demonstrate that our algorithm can track fiducials in real time, on degraded kV images (MV scatter), in rapidly moving tumors (fiducial blurring), and even provide useful information in the case when some fiducials are blocked from view by the MLC. This technique can provide a gating signal
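Detection of bright fiducial markers in a projection image can be sketched as thresholding followed by connected-component centroiding. This is a simplified stand-in for the paper's detection pipeline, not the authors' algorithm; the names, the 4-connectivity choice, and the fixed threshold are illustrative assumptions.

```python
import numpy as np

def detect_fiducials(img, thresh):
    # label 4-connected bright pixels and return one centroid per blob
    mask = img > thresh
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # pixel already belongs to a labeled blob
        current += 1
        stack = [seed]
        while stack:  # iterative flood fill
            y, x = stack.pop()
            if not (0 <= y < img.shape[0] and 0 <= x < img.shape[1]):
                continue
            if not mask[y, x] or labels[y, x]:
                continue
            labels[y, x] = current
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    centroids = []
    for k in range(1, current + 1):
        ys, xs = np.nonzero(labels == k)
        centroids.append((ys.mean(), xs.mean()))
    return centroids
```

In the scenario the abstract describes, the search region for each marker would additionally be restricted by a motion prediction, and blobs falling under the MLC-blocked portion of the MV field would be excluded.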

  16. WE-G-BRD-03: Development of a Real-Time Optical Tracking Goggle System (OTGS) for Intracranial Stereotactic Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittauer, K; Yan, G; Lu, B

    2014-06-15

    Purpose: Optical tracking systems (OTS) are an acceptable alternative to frame-based stereotactic radiotherapy (SRT). However, current surface-based OTS lack the ability to target exclusively rigid/bony anatomical features. We propose a novel marker-based optical tracking goggle system (OTGS) that provides real-time guidance based on the nose and facial bony anatomy. This ongoing study involves the development and characterization of the OTGS for clinical implementation in intracranial stereotactic radiotherapy. Methods: The OTGS consists of eye goggles, a custom thermoplastic nosepiece, and 6 infrared markers pre-attached to the goggles. A phantom and four healthy volunteers were used to evaluate the calibration/registration accuracy, intrafraction accuracy, interfraction reproducibility, and end-to-end accuracy of the OTGS. The performance of the OTGS was compared with that of the frameless SonArray system and cone-beam computed tomography (CBCT) for the volunteer and phantom cases, respectively. The performance of the OTGS with commercial immobilization devices and under treatment conditions (i.e., couch rotation and translation range) was also evaluated. Results: The difference in the calibration/registration accuracy of 24 translation or rotation combinations between CBCT and the in-house OTS software was within 0.5 mm/0.4°. The mean intrafraction and interfraction accuracy among the volunteers was 0.004 ± 0.4 mm with −0.09 ± 0.5° (n=6,170) and −0.26 ± 0.8 mm with 0.15 ± 0.8° (n=11), respectively. The difference in end-to-end accuracy between the OTGS and CBCT was within 1.3 mm/1.1°. The predetermined marker pattern (1) minimized marker occlusions, (2) allowed for continuous tracking for couch angles of ±90°, and (3) eliminated individual marker misplacement. The device was feasible with open and half masks for immobilization. Conclusion: Bony anatomical localization eliminated potential errors due to facial hair changes and/or soft tissue

  17. Real-time ultrasound-guided spinal anesthesia using the SonixGPS® needle tracking system: a case report.

    PubMed

    Wong, Simon W; Niazi, Ahtsham U; Chin, Ki J; Chan, Vincent W

    2013-01-01

    The SonixGPS® is an electromagnetic needle tracking system for ultrasound-guided needle intervention. Both current and predicted needle tip position are displayed on the ultrasound screen in real-time, facilitating needle-beam alignment and guidance to the target. This case report illustrates the use of the SonixGPS system for successful performance of real-time ultrasound-guided spinal anesthesia in a patient with difficult spinal anatomy. A 67-yr-old male was admitted to our hospital to undergo revision of total right hip arthroplasty. His four previous arthroplasties for hip revision were performed under general anesthesia because he had undergone L3-L5 instrumentation for spinal stenosis. The L4-L5 interspace was viewed with the patient in the left lateral decubitus position. A 19G 80-mm proprietary needle (Ultrasonix Medical Corp, Richmond, BC, Canada) was inserted and directed through the paraspinal muscles to the ligamentum flavum in plane to the ultrasound beam. A 120-mm 25G Whitacre spinal needle was then inserted through the introducer needle in a conventional fashion. Successful dural puncture was achieved on the second attempt, as indicated by a flow of clear cerebrospinal fluid. The patient tolerated the procedure well, and the spinal anesthetic was adequate for the duration of the surgery. The SonixGPS is a novel technology that can reduce the technical difficulty of real-time ultrasound-guided neuraxial blockade. It may also have applications in other advanced ultrasound-guided regional anesthesia techniques where needle-beam alignment is critical.

  18. SU-G-JeP1-07: Development of a Programmable Motion Testbed for the Validation of Ultrasound Tracking Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shepard, A; Matrosic, C; Zagzebski, J

    Purpose: To develop an advanced testbed that combines a 3D motion stage and an ultrasound phantom to optimize and validate 2D and 3D tracking algorithms for real-time motion management during radiation therapy. Methods: A Siemens S2000 ultrasound scanner utilizing a 9L4 transducer was coupled with the Washington University 4D Phantom to simulate patient motion. The transducer was securely fastened to the 3D stage and positioned to image three cylinders of varying contrast in a Gammex 404GS LE phantom. The transducer was placed within a water bath above the phantom in order to maintain sufficient coupling over the entire range of simulated motion. A programmed motion sequence was used to move the transducer during image acquisition, and a cine video was acquired for one minute to allow for long-sequence tracking. Images were analyzed using a normalized cross-correlation block-matching tracking algorithm and compared to the known motion of the transducer relative to the phantom. Results: The setup produced stable ultrasound motion traces consistent with those programmed into the 3D motion stage. The acquired ultrasound images showed minimal artifacts and an image quality that was more than suitable for tracking algorithm verification. Comparison of the block-matching tracking algorithm with the known motion trace for the three features resulted in an average tracking error of 0.59 mm. Conclusion: The high accuracy and programmability of the 4D phantom allow for the acquisition of highly customizable ultrasound motion sequences, enabling focused analysis of common pitfalls of tracking algorithms such as partial feature occlusion or feature disappearance, among others. The design can easily be modified to adapt to any probe, so the process can be extended to 3D acquisition. Further development of an anatomy-specific phantom better resembling true anatomical landmarks could lead to even more robust validation.

  19. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    PubMed

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images captured by integral photography to 8K images: one increases the number of pixels of each elemental image, and the other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using the fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs, with the FFTs performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system performing all processing from capture to reconstruction of 3D images using these components, and successfully used this system to reconstruct a 3D live scene at 12 frames per second.

  20. Registration of fast cine cardiac MR slices to 3D preprocedural images: toward real-time registration for MRI-guided procedures

    NASA Astrophysics Data System (ADS)

    Smolikova, Renata; Wachowiak, Mark P.; Drangova, Maria

    2004-05-01

    Interventional cardiac magnetic resonance (MR) procedures are the subject of an increasing number of research studies. Typically, during the procedure only two-dimensional images of oblique slices can be presented to the interventionalist in real time. There is a clear benefit to being able to register the real-time 2D slices to a previously acquired 3D computed tomography (CT) or MR image of the heart. Results from a study of the accuracy of registering 2D cardiac images of an anesthetized pig to a 3D volume obtained in diastole are presented. Fast cine MR images representing twenty phases of the cardiac cycle were obtained of a 2D slice in a known oblique orientation. The 2D images were initially misoriented at distances ranging from 2 to 20 mm and rotations of ±10° about all three axes. Images from all 20 cardiac phases were registered to examine the effect of timing between the 2D image and the 3D pre-procedural image. Linear registration using mutual information computed with 64 histogram bins yielded the highest accuracy. For the diastolic phases, mean translation and rotation errors ranged between 0.91 and 1.32 mm and between 1.73 and 2.10 degrees. Scans acquired at other phases also had high accuracy. These results are promising for the use of real-time MR in image-guided cardiac interventions, and demonstrate the feasibility of registering 2D oblique MR slices to previously acquired single-phase volumes without preprocessing.
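The registration metric used above, mutual information over a 64-bin joint histogram, can be computed directly from the histogram of corresponding intensities. A minimal sketch (names are illustrative; a registration loop would evaluate this metric while varying the pose of the 2D slice within the 3D volume):

```python
import numpy as np

def mutual_information(a, b, bins=64):
    # joint histogram of corresponding intensities -> joint probability pxy
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0) terms
    # MI = sum pxy * log(pxy / (px * py)), over nonzero joint bins
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Perfectly dependent images score their shared entropy (here up to ln 64 ≈ 4.16 nats), while unrelated images score near zero, which is what makes MI usable across modalities such as MR-to-CT.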

  1. Real-time quasi-3D tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    Buurlage, Jan-Willem; Kohr, Holger; Palenstijn, Willem Jan; Joost Batenburg, K.

    2018-06-01

    Developments in acquisition technology and a growing need for time-resolved experiments pose great computational challenges in tomography. In addition, access to reconstructions in real time is a highly demanded feature but has so far been out of reach. We show that by exploiting the mathematical properties of filtered backprojection-type methods, having access to real-time reconstructions of arbitrarily oriented slices becomes feasible. Furthermore, we present software for visualization and on-demand reconstruction of slices. A user can interactively shift and rotate slices in a GUI, while the software updates the slice in real time. For certain use cases, the possibility to study arbitrarily oriented slices in real time directly from the measured data provides sufficient visual and quantitative insight. Two such applications are discussed in this article.

  2. Tracking a Non-Cooperative Target Using Real-Time Stereovision-Based Control: An Experimental Study.

    PubMed

    Shtark, Tomer; Gurfil, Pini

    2017-03-31

    Tracking a non-cooperative target is a challenge, because in unfamiliar environments most targets are unknown and unspecified. Stereovision is suited to this task, because it allows passive scanning of large areas and estimation of the relative position, velocity and shape of objects. This research is an experimental effort aimed at developing, implementing and evaluating real-time non-cooperative target tracking methods using stereovision measurements only. A computer-vision feature detection and matching algorithm was developed to identify and locate the target in the captured images. Three different filters were designed for estimating the relative position and velocity, and their performance was compared. A line-of-sight control algorithm was used to keep the target within the field-of-view. Extensive analytical and numerical investigations were conducted on the multi-view stereo projection equations and their solutions, which were used to initialize the different filters. This research shows, using an experimental and numerical evaluation, the benefits of using the unscented Kalman filter and the total least squares technique in the stereovision-based tracking problem. These findings offer a general and more accurate method for solving the static and dynamic stereovision triangulation problems and the concomitant line-of-sight control.
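
    As an illustrative sketch (not the authors' filters or their total-least-squares solver), the static stereovision triangulation problem can be solved in closed form by a least-squares "midpoint" method: find the point closest to the two back-projected camera rays. The camera centres and ray directions below are hypothetical.

```python
# Midpoint triangulation: each camera contributes a ray x = c + t*d.
# Minimising |(c1 + t1*d1) - (c2 + t2*d2)|^2 gives 2x2 normal equations
# for (t1, t2); the estimate is the midpoint of the closest points.

def triangulate_midpoint(c1, d1, c2, d2):
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    w = [a - b for a, b in zip(c1, c2)]          # offset between centres
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    det = a * c - b * b                          # 0 only for parallel rays
    t1 = (b * dot(d2, w) - c * dot(d1, w)) / det
    t2 = (a * dot(d2, w) - b * dot(d1, w)) / det
    p1 = [ci + t1 * di for ci, di in zip(c1, d1)]
    p2 = [ci + t2 * di for ci, di in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]

# Two rays that intersect exactly at (1, 2, 3).
point = triangulate_midpoint([0, 0, 0], [1, 2, 3], [1, 2, 0], [0, 0, 1])
```

    A solution of this kind is what seeds the recursive filters before dynamic tracking takes over.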

  3. Tracking a Non-Cooperative Target Using Real-Time Stereovision-Based Control: An Experimental Study

    PubMed Central

    Shtark, Tomer; Gurfil, Pini

    2017-01-01

    Tracking a non-cooperative target is a challenge, because in unfamiliar environments most targets are unknown and unspecified. Stereovision is suited to this task, because it allows passive scanning of large areas and estimation of the relative position, velocity and shape of objects. This research is an experimental effort aimed at developing, implementing and evaluating real-time non-cooperative target tracking methods using stereovision measurements only. A computer-vision feature detection and matching algorithm was developed to identify and locate the target in the captured images. Three different filters were designed for estimating the relative position and velocity, and their performance was compared. A line-of-sight control algorithm was used to keep the target within the field-of-view. Extensive analytical and numerical investigations were conducted on the multi-view stereo projection equations and their solutions, which were used to initialize the different filters. This research shows, using an experimental and numerical evaluation, the benefits of using the unscented Kalman filter and the total least squares technique in the stereovision-based tracking problem. These findings offer a general and more accurate method for solving the static and dynamic stereovision triangulation problems and the concomitant line-of-sight control. PMID:28362338

  4. Quality assurance for clinical implementation of an electromagnetic tracking system.

    PubMed

    Santanam, Lakshmi; Noel, Camille; Willoughby, Twyla R; Esthappan, Jacqueline; Mutic, Sasa; Klein, Eric E; Low, Daniel A; Parikh, Parag J

    2009-08-01

    The Calypso Medical 4D localization system utilizes alternating current electromagnetics for accurate, real-time tumor tracking. A quality assurance program to clinically implement this system is described here. Testing of the continuous electromagnetic tracking system (Calypso Medical Technologies, Seattle, WA) was performed using an in-house developed four-dimensional stage and a quality assurance fixture containing three radiofrequency transponders at independently measured locations. The following tests were performed to validate the Calypso system: (a) Localization and tracking accuracy, (b) system reproducibility, (c) measurement of the latency of the tracking system, and (d) measurement of transmission through the Calypso table overlay and the electromagnetic array. The translational and rotational localization accuracies were found to be within 0.01 cm and 1.0 degree, respectively. The reproducibility was within 0.1 cm. The average system latency was measured to be within 303 ms. The attenuation by the Calypso overlay was measured to be 1.0% for both 6 and 18 MV photons. The attenuations by the Calypso array were measured to be 2% and 1.5% for 6 and 18 MV photons, respectively. For oblique angles, the transmission was measured to be 3% for 6 MV, while it was 2% for 18 MV photons. A quality assurance process has been developed for the clinical implementation of an electromagnetic tracking system in radiation therapy.

  5. Real-Time Two-Dimensional Magnetic Particle Imaging for Electromagnetic Navigation in Targeted Drug Delivery.

    PubMed

    Le, Tuan-Anh; Zhang, Xingming; Hoshiar, Ali Kafash; Yoon, Jungwon

    2017-09-07

    Magnetic nanoparticles (MNPs) are effective drug carriers. By using electromagnetic actuated systems, MNPs can be controlled noninvasively in a vascular network for targeted drug delivery (TDD). Although drugs can reach their target location through capturing schemes of MNPs by permanent magnets, drugs delivered to non-target regions can affect healthy tissues and cause undesirable side effects. Real-time monitoring of MNPs can improve the targeting efficiency of TDD systems. In this paper, a two-dimensional (2D) real-time monitoring scheme has been developed for an MNP guidance system. Resovist particles 45 to 65 nm in diameter (5 nm core) can be monitored in real-time (update rate = 2 Hz) in 2D. The proposed 2D monitoring system allows dynamic tracking of MNPs during TDD and renders magnetic particle imaging-based navigation more feasible.

  6. GPS-Based Navigation and Orbit Determination for the AMSAT Phase 3D Satellite

    NASA Technical Reports Server (NTRS)

    Davis, George; Carpenter, Russell; Moreau, Michael; Bauer, Frank H.; Long, Anne; Kelbel, David; Martin, Thomas

    2002-01-01

    This paper summarizes the results of processing GPS data from the AMSAT Phase 3D (AP3) satellite for real-time navigation and post-processed orbit determination experiments. AP3 was launched into a geostationary transfer orbit (GTO) on November 16, 2000 from Kourou, French Guiana, and then was maneuvered into its HEO over the next several months. It carries two Trimble TANS Vector GPS receivers for signal reception at apogee and at perigee. Its spin stabilization mode currently makes it favorable to track GPS satellites from the backside of the constellation while at apogee, and to track GPS satellites from below while at perigee. To date, the experiment has demonstrated that it is feasible to use GPS for navigation and orbit determination in HEO, which will be of great benefit to planned and proposed missions that will utilize such orbits for science observations. It has also shown that there are many important operational considerations to take into account. For example, GPS signals can be tracked above the constellation at altitudes as high as 58000 km, but sufficient amplification of those weak signals is needed. Moreover, GPS receivers can track up to 4 GPS satellites at perigee while moving as fast as 9.8 km/sec, but unless the receiver can maintain lock on the signals long enough, point solutions will be difficult to generate. The spin stabilization of AP3, for example, appears to cause signal levels to fluctuate as other antennas on the satellite block the signals. As a result, its TANS Vectors have been unable to lock on to the GPS signals long enough to download the broadcast ephemeris and then generate position and velocity solutions. AP3 is currently in its eclipse season, and thus most of the spacecraft subsystems have been powered off. In Spring 2002, they will again be powered up and AP3 will be placed into a three-axis stabilization mode. This will significantly enhance the likelihood that point solutions can be generated, and perhaps more

  7. D3D augmented reality imaging system: proof of concept in mammography.

    PubMed

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called "depth 3-dimensional (D3D) augmented reality". A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice.

  8. A real-time tracking system of infrared dim and small target based on FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Rong, Sheng-hui; Zhou, Hui-xin; Qin, Han-lin; Wang, Bing-jian; Qian, Kun

    2014-11-01

    A core technology in infrared warning systems is the detection and tracking of dim and small targets against complicated backgrounds. Consequently, running the detection algorithm on a hardware platform has high practical value in the military field. In this paper, a real-time detection and tracking system for infrared dim and small targets, built around an FPGA (Field Programmable Gate Array) and a DSP (Digital Signal Processor), was designed, and the corresponding detection and tracking algorithm and signal flow are elaborated. In the first stage, the FPGA obtains the infrared image sequence from the sensor, suppresses background clutter with a mathematical morphology method, and enhances the target intensity with a Laplacian of Gaussian operator. In the second stage, the DSP obtains both the original image and the filtered image from the FPGA via the video port. It then segments the target from the filtered image with an adaptive threshold segmentation method and rejects false targets with a pipeline filter. Experimental results show that the system achieves a higher detection rate and a lower false alarm rate.
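
    As an illustrative sketch (not the FPGA implementation), the morphological background-suppression step can be modelled as a grey-scale top-hat filter: the image minus its morphological opening. Structures smaller than the structuring element, such as a dim point target, survive while slowly varying clutter is removed. The 3×3 element and the toy image are assumptions for illustration.

```python
# Grey-scale top-hat filtering on a tiny image: opening = dilation of the
# erosion; the residual image keeps only structures smaller than the
# structuring element (here a 3x3 square, with border clamping).

def erode(img, k=3):
    h, w, r = len(img), len(img[0]), k // 2
    return [[min(img[i][j]
                 for i in range(max(0, y - r), min(h, y + r + 1))
                 for j in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)] for y in range(h)]

def dilate(img, k=3):
    h, w, r = len(img), len(img[0]), k // 2
    return [[max(img[i][j]
                 for i in range(max(0, y - r), min(h, y + r + 1))
                 for j in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)] for y in range(h)]

def top_hat(img, k=3):
    opened = dilate(erode(img, k), k)            # morphological opening
    return [[img[y][x] - opened[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]

# Flat background of 10 with a single-pixel "target" of 30 at the centre.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 30
residual = top_hat(img)  # only the target survives, as 30 - 10 = 20
```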

  9. The ultrasound brain helmet: feasibility study of multiple simultaneous 3D scans of cerebral vasculature.

    PubMed

    Smith, Stephen W; Ivancevich, Nikolas M; Lindsey, Brooks D; Whitman, John; Light, Edward; Fronheiser, Matthew; Nicoletto, Heather A; Laskowitz, Daniel T

    2009-02-01

    We describe early stage experiments to test the feasibility of an ultrasound brain helmet to produce multiple simultaneous real-time three-dimensional (3D) scans of the cerebral vasculature from temporal and suboccipital acoustic windows of the skull. The transducer hardware and software of the Volumetrics Medical Imaging (Durham, NC, USA) real-time 3D scanner were modified to support dual 2.5 MHz matrix arrays of 256 transmit elements and 128 receive elements which produce two simultaneous 64 degrees pyramidal scans. The real-time display format consists of two coronal B-mode images merged into a 128 degrees sector, two simultaneous parasagittal images merged into a 128 degrees x 64 degrees C-mode plane and a simultaneous 64 degrees axial image. Real-time 3D color Doppler scans from a skull phantom with latex blood vessel were obtained after contrast agent injection as a proof of concept. The long-term goal is to produce real-time 3D ultrasound images of the cerebral vasculature from a portable unit capable of internet transmission thus enabling interactive 3D imaging, remote diagnosis and earlier therapeutic intervention. We are motivated by the urgency for rapid diagnosis of stroke due to the short time window of effective therapeutic intervention.

  10. Hardware accelerator design for tracking in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart Cameras are important components in video analysis. For video analysis, smart cameras needs to detect interesting moving objects, track such objects from frame to frame, and perform analysis of object track in real time. Therefore, the use of real-time tracking is prominent in smart cameras. The software implementation of tracking algorithm on a general purpose processor (like PowerPC) could achieve low frame rate far from real-time requirements. This paper presents the SIMD approach based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on Xilinx XUP Virtex-IIPro FPGA. Resulted frame rate is 30 frames per second for 250x200 resolution video in gray scale.

  11. A real-time 3D range image sensor based on a novel tip-tilt-piston micromirror and dual frequency phase shifting

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Schumann-Olsen, Henrik; Thorstensen, Jostein; Kim, Anna N.; Lacolle, Matthieu; Haugholt, Karl-Henrik; Bakke, Thor

    2015-03-01

    Structured light is a robust and accurate method for 3D range imaging in which one or more light patterns are projected onto the scene and observed with an off-axis camera. Commercial sensors typically utilize DMD- or LCD-based LED projectors, which produce good results but have a number of drawbacks, e.g. limited speed, limited depth of focus, large sensitivity to ambient light and somewhat low light efficiency. We present a 3D imaging system based on a laser light source and a novel tip-tilt-piston micro-mirror. Optical interference is utilized to create sinusoidal fringe patterns. The setup allows fast and easy control of both the frequency and the phase of the fringe patterns by altering the axes of the micro-mirror. For 3D reconstruction we have adapted a Dual Frequency Phase Shifting method which gives robust range measurements with sub-millimeter accuracy. The use of interference for generating sine patterns provides high light efficiency and good focusing properties. The use of a laser and a bandpass filter allows easy removal of ambient light. The fast response of the micro-mirror in combination with a high-speed camera and real-time processing on the GPU allows highly accurate 3D range image acquisition at video rates.
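
    As an illustrative sketch (not the authors' pipeline), the core of an N-step phase-shifting measurement is recovering the wrapped phase at each pixel from the shifted fringe intensities. For intensities I_n = A + B·cos(φ + 2πn/N), the phase follows from the sine- and cosine-weighted sums; in the dual-frequency scheme above, this is done at a low and a high fringe frequency, with the coarse phase disambiguating the fine one. The synthetic pixel values below are assumptions for illustration.

```python
# N-step phase shifting: for I_n = A + B*cos(phi + 2*pi*n/N),
# sum(I_n*sin) = -(N/2)*B*sin(phi) and sum(I_n*cos) = (N/2)*B*cos(phi),
# so phi = atan2(-sum_sin, sum_cos), independent of offset A and modulation B.
import math

def wrapped_phase(intensities):
    n_steps = len(intensities)
    s = sum(I * math.sin(2 * math.pi * n / n_steps)
            for n, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * n / n_steps)
            for n, I in enumerate(intensities))
    return math.atan2(-s, c)  # wrapped phase in (-pi, pi]

# Synthesise 4 phase-shifted samples for a pixel with true phase 0.7 rad.
true_phi, A, B = 0.7, 5.0, 2.0
samples = [A + B * math.cos(true_phi + 2 * math.pi * n / 4) for n in range(4)]
phi = wrapped_phase(samples)
```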

  12. Real-Time Feature Tracking Using Homography

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel S.; Cheng, Yang; Ansar, Adnan I.; Trotz, David C.; Padgett, Curtis W.

    2010-01-01

    This software finds feature point correspondences in sequences of images. It is designed for feature matching in aerial imagery. Feature matching is a fundamental step in a number of important image processing operations: calibrating the cameras in a camera array, stabilizing images in aerial movies, geo-registration of images, and generating high-fidelity surface maps from aerial movies. The method uses a Shi-Tomasi corner detector and normalized cross-correlation. This process is likely to produce some mismatches. The feature set is cleaned up using the assumption that there is a large planar patch visible in both images. At high altitude, this assumption is often reasonable. A mathematical transformation, called a homography, is developed that allows us to predict the position in image 2 of any point on the plane in image 1. Any feature pair that is inconsistent with the homography is thrown out. The output of the process is a set of feature pairs and the homography. The algorithms in this innovation are well known, but the new implementation improves the process in several ways. It runs in real time at 2 Hz on 64-megapixel imagery. The new Shi-Tomasi corner detector tries to produce the requested number of features by automatically adjusting the minimum distance between found features. The homography-finding code now uses an implementation of the RANSAC algorithm that adjusts the number of iterations automatically to achieve a pre-set probability of missing a set of inliers. The new interface allows the caller to pass in a set of predetermined points in one of the images. This makes it possible to track the same set of points through multiple frames.
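
    As an illustrative sketch (not the flight code), the consistency test described above can be written directly: once a homography H is in hand, a feature pair is kept only if projecting the point from image 1 lands within a pixel tolerance of its match in image 2. The 3×3 matrix, tolerance, and feature pairs below are hypothetical.

```python
# Apply a homography (row-major 3x3, as a flat list of 9 numbers) to a
# point, then reject any feature pair whose projection disagrees with its
# match by more than `tol` pixels.

def apply_homography(H, pt):
    x, y = pt
    w = H[6] * x + H[7] * y + H[8]          # homogeneous scale factor
    return ((H[0] * x + H[1] * y + H[2]) / w,
            (H[3] * x + H[4] * y + H[5]) / w)

def filter_pairs(H, pairs, tol=2.0):
    """pairs: list of ((x1, y1), (x2, y2)) candidate feature matches."""
    kept = []
    for p1, p2 in pairs:
        px, py = apply_homography(H, p1)
        if (px - p2[0]) ** 2 + (py - p2[1]) ** 2 <= tol ** 2:
            kept.append((p1, p2))
    return kept

# Pure translation by (5, 0) as the planar model.
H = [1, 0, 5, 0, 1, 0, 0, 0, 1]
pairs = [((0, 0), (5, 0)),      # consistent with H
         ((10, 10), (15, 10)),  # consistent with H
         ((3, 3), (30, 3))]     # gross mismatch, rejected
inliers = filter_pairs(H, pairs)
```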

  13. Real-time 3D visualization of volumetric video motion sensor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, J.; Stansfield, S.; Shawver, D.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.

  14. Adaptive radiation therapy for postprostatectomy patients using real-time electromagnetic target motion tracking during external beam radiation therapy.

    PubMed

    Zhu, Mingyao; Bharat, Shyam; Michalski, Jeff M; Gay, Hiram A; Hou, Wei-Hsien; Parikh, Parag J

    2013-03-15

    Using real-time electromagnetic (EM) transponder tracking data recorded by the Calypso 4D Localization System, we report inter- and intrafractional target motion of the prostate bed, describe a strategy to evaluate treatment adequacy in postprostatectomy patients receiving intensity modulated radiation therapy (IMRT), and propose an adaptive workflow. Tracking data recorded by Calypso EM transponders was analyzed for postprostatectomy patients who underwent step-and-shoot IMRT. Rigid target motion parameters during beam delivery were calculated from recorded transponder positions in 16 patients with rigid transponder geometry. The delivered doses to the clinical target volume (CTV) were estimated from the planned dose matrix and the target motion for the first 3, 5, 10, and all fractions. Treatment adequacy was determined by comparing the delivered minimum dose (Dmin) with the planned Dmin to the CTV. Treatments were considered adequate if the delivered CTV Dmin is at least 95% of the planned CTV Dmin. Translational target motion was minimal for all 16 patients (mean: 0.02 cm; range: -0.12 cm to 0.07 cm). Rotational motion was patient-specific, and maximum pitch, yaw, and roll were 12.2, 4.1, and 10.5°, respectively. We observed inadequate treatments in 5 patients. In these treatments, we observed greater target rotations along with large distances between the CTV centroid and transponder centroid. The treatment adequacy from the initial 10 fractions successfully predicted the overall adequacy in 4 of 5 inadequate treatments and 10 of 11 adequate treatments. Target rotational motion could cause underdosage to partial volume of the postprostatectomy targets. Our adaptive treatment strategy is applicable to post-prostatectomy patients receiving IMRT to evaluate and improve radiation therapy delivery.
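
    As a minimal sketch, the adequacy criterion stated above is a direct comparison: a treatment is adequate when the delivered CTV Dmin reaches at least 95% of the planned CTV Dmin. The dose values below are hypothetical.

```python
# Treatment-adequacy test from the abstract: delivered Dmin must be at
# least 95% of the planned Dmin to the clinical target volume (CTV).

def treatment_adequate(delivered_dmin, planned_dmin, threshold=0.95):
    return delivered_dmin >= threshold * planned_dmin

# Hypothetical CTV Dmin values in Gy.
adequate = treatment_adequate(64.0, 66.0)    # ~97% of plan -> adequate
inadequate = treatment_adequate(60.0, 66.0)  # ~91% of plan -> inadequate
```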

  15. A GPU-accelerated 3D Coupled Sub-sample Estimation Algorithm for Volumetric Breast Strain Elastography

    PubMed Central

    Peng, Bo; Wang, Yuqi; Hall, Timothy J; Jiang, Jingfeng

    2017-01-01

    The primary objective of this work was to extend a previously published 2D coupled sub-sample tracking algorithm for 3D speckle tracking in the framework of ultrasound breast strain elastography. In order to overcome heavy computational cost, we investigated the use of a graphics processing unit (GPU) to accelerate the 3D coupled sub-sample speckle tracking method. The performance of the proposed GPU implementation was tested using a tissue-mimicking (TM) phantom and in vivo breast ultrasound data. The performance of this 3D sub-sample tracking algorithm was compared with the conventional 3D quadratic sub-sample estimation algorithm. On the basis of these evaluations, we concluded that the GPU implementation of this 3D sub-sample estimation algorithm can provide high-quality strain data (i.e. high correlation between the pre- and the motion-compensated post-deformation RF echo data and high contrast-to-noise ratio strain images), as compared to the conventional 3D quadratic sub-sample algorithm. Using the GPU implementation of the 3D speckle tracking algorithm, volumetric strain data can be achieved relatively fast (approximately 20 seconds per volume [2.5 cm × 2.5 cm × 2.5 cm]). PMID:28166493
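
    As an illustrative sketch (not the GPU kernel), the "conventional quadratic sub-sample estimation" the authors compare against fits a parabola through the correlation values at the integer-lag peak and its two neighbours; the vertex gives the sub-sample displacement offset. The correlation values below are synthetic.

```python
# Parabolic (quadratic) sub-sample peak interpolation: for correlation
# values c_left, c_peak, c_right at lags -1, 0, +1, the vertex of the
# fitted parabola lies at
#   delta = (c_left - c_right) / (2 * (c_left - 2*c_peak + c_right)).

def quadratic_subsample(c_left, c_peak, c_right):
    denom = 2 * (c_left - 2 * c_peak + c_right)
    if denom == 0:
        return 0.0  # degenerate (flat) neighbourhood
    return (c_left - c_right) / denom

# Synthetic correlation profile: a parabola peaking at +0.25 samples.
c = [1 - (x - 0.25) ** 2 for x in (-1, 0, 1)]
delta = quadratic_subsample(*c)  # recovers the 0.25-sample offset
```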

  16. 3D ultrafast ultrasound imaging in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-07

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  17. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are a high detection accuracy and an adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to meet both requirements by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) together with the CHT algorithm to find a suitable colour for the tracked target. The target was attached to the end-effector of a six degree of freedom (DOF) robot performing a pick-and-place task. Two cooperating eye-to-hand cameras, each with an image averaging filter, are used to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot
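
    As an illustrative sketch (not the authors' selection pipeline), the Delta E colour-difference measure mentioned above is, in its simplest CIE76 form, the Euclidean distance between two colours in CIELAB space; a target colour with a large Delta E to the background segments more reliably. The CIELAB triples below are hypothetical.

```python
# CIE76 Delta E: Euclidean distance between two (L*, a*, b*) colours.
# Larger values mean more perceptually distinct colours.
import math

def delta_e_cie76(lab1, lab2):
    """lab1, lab2: (L*, a*, b*) triples."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(lab1, lab2)))

# A saturated red target against a mid-grey background (hypothetical values).
red = (53.2, 80.1, 67.2)
grey = (50.0, 0.0, 0.0)
de = delta_e_cie76(red, grey)  # large separation -> easy to segment
```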

  18. 3D and 4D echo--applications in EP laboratory procedures.

    PubMed

    Kautzner, Josef; Peichl, Petr

    2008-08-01

    3D echocardiography allows imaging and analysis of cardiovascular structures as they move in time and space, thus creating possibility for creation of 4D datasets (3D + time). Intracardiac echocardiography (ICE) further broadens the spectrum of echocardiographic techniques by allowing detailed imaging of intracardiac anatomy with 3D reconstructions. The paper reviews the current status of development of 3D and 4D echocardiography in electrophysiology. In ablation area, 3D echocardiography can enhance the performance of catheter ablation for complex arrhythmias such as atrial fibrillation. Currently, several strategies to obtain 3D reconstructions from ICE are available. One involves combination with electroanatomical mapping system; others create reconstruction from standard phased-array or single-element ICE catheter using special rotational or pull-back devices. Secondly, 3D echocardiography may be used for precise assessment of cardiac dyssynchrony before cardiac resynchronization therapy. Its reliable detection is expected to minimize number of non-responders to this treatment and optimize left ventricular lead positioning to get maximum hemodynamic benefit. The main potential benefit of 3D and 4D echocardiography in electrophysiology lie in real-time guidance of complex ablation procedures and precise assessment of cardiac dyssynchrony.

  19. Compressed multi-block local binary pattern for object tracking

    NASA Astrophysics Data System (ADS)

    Li, Tianwen; Gao, Yun; Zhao, Lei; Zhou, Hao

    2018-04-01

    Both robustness and real-time performance are very important for object tracking in real environments. Trackers based on deep learning have difficulty satisfying the real-time requirement of tracking, whereas compressive sensing provides technical support for real-time tracking. In this paper, an object is tracked via a multi-block local binary pattern feature. The feature vector extracted from the multi-block local binary pattern is compressed via a sparse random Gaussian matrix used as the measurement matrix. The experiments showed that the proposed tracker runs in real time and outperforms existing compressive trackers based on Haar-like features on many challenging video sequences in terms of accuracy and robustness.
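
    As an illustrative sketch (not the paper's tracker), the compression step projects a high-dimensional feature vector through a sparse random measurement matrix: each measurement touches only a few random entries, so the projection is cheap yet, with high probability, distance-preserving. For simplicity this sketch uses random ±1 weights in place of the Gaussian entries the paper describes; the feature vector and sizes are hypothetical.

```python
# Sparse random projection: each of n_meas measurements sums `nnz`
# randomly chosen feature entries with random signs. Using a fixed seed
# makes the "matrix" reproducible across frames, as a tracker requires.
import random

def sparse_measure(features, n_meas, nnz=4, seed=0):
    rng = random.Random(seed)
    dim = len(features)
    out = []
    for _ in range(n_meas):
        idx = rng.sample(range(dim), nnz)
        signs = [rng.choice((-1.0, 1.0)) for _ in idx]
        out.append(sum(s * features[i] for s, i in zip(signs, idx)))
    return out

# A 100-dimensional (hypothetical) multi-block LBP feature vector
# compressed to 10 measurements.
feat = [float(i % 7) for i in range(100)]
y = sparse_measure(feat, n_meas=10)
```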

  20. Real-time heart rate measurement for multi-people using compressive tracking

    NASA Astrophysics Data System (ADS)

    Liu, Lingling; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Dong, Liquan; Ma, Feilong; Pang, Zongguang; Cai, Zhi; Zhang, Yachu; Hua, Peng; Yuan, Ruifeng

    2017-09-01

    The rise of the aging population has created a demand for inexpensive, unobtrusive, automated health care solutions. Image photoplethysmography (IPPG) aids in the development of these solutions by allowing the extraction of physiological signals from video data. However, the main deficiencies of recent IPPG methods are that they are non-automated, non-real-time and susceptible to motion artifacts (MA). In this paper, a real-time heart rate (HR) detection method for multiple subjects simultaneously was proposed and realized using the Open Computer Vision (OpenCV) library. It consists of acquiring facial video of multiple subjects automatically through a webcam, detecting the region of interest (ROI) in the video, reducing the false detection rate with our improved AdaBoost algorithm, reducing MA with our improved compressive tracking (CT) algorithm, a wavelet noise-suppression algorithm for denoising, and multi-threading for higher detection speed. For comparison, HR was measured simultaneously using a medical pulse oximetry device for every subject during all sessions. Experimental results on a data set of 30 subjects show that the maximum average absolute error of heart rate estimation is less than 8 beats per minute (BPM), and per-frame processing almost reaches real time: experiments with video recordings of ten subjects at a pixel resolution of 600 × 800 show that the average processing speed was about 17 frames per second (fps).
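
    As an illustrative sketch (not the authors' pipeline, which adds wavelet denoising and compressive tracking), the final step of an IPPG measurement can be reduced to estimating heart rate from a pulse waveform by counting local maxima over a known duration. The synthetic waveform below is an assumption for illustration.

```python
# Naive HR estimate: count local maxima of the pulse waveform and convert
# peaks per second into beats per minute (BPM).
import math

def heart_rate_bpm(signal, fps):
    peaks = sum(1 for i in range(1, len(signal) - 1)
                if signal[i - 1] < signal[i] >= signal[i + 1])
    duration_s = len(signal) / fps
    return 60.0 * peaks / duration_s

# Synthetic 10 s pulse waveform at 30 fps with a 1.2 Hz (72 BPM) beat.
fps, secs, f = 30, 10, 1.2
sig = [math.sin(2 * math.pi * f * n / fps) for n in range(fps * secs)]
bpm = heart_rate_bpm(sig, fps)
```

    On real video data the waveform is noisy, which is why the method above denoises and tracks the ROI before this counting step.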