Science.gov

Sample records for real-time 3D tracking

  1. Parameterization of real-time 3D speckle tracking framework for cardiac strain assessment.

    PubMed

    Lorsakul, Auranuch; Duan, Qi; Po, Ming Jack; Angelini, Elsa; Homma, Shunichi; Laine, Andrew F

    2011-01-01

    A cross-correlation-based 3D speckle tracking algorithm can be used to automatically track myocardial motion on real-time three-dimensional echocardiography (RT3DE). The goal of this study was to experimentally investigate the effects of different parameters associated with such an algorithm to ensure accurate cardiac strain measurements. The investigation was performed on 10 chronic obstructive pulmonary disease RT3DE cardiac ultrasound images. The following two parameters were investigated: 1) the gradient threshold of the anisotropic diffusion pre-filtering and 2) the window size of the cross-correlation template matching in the speckle tracking. Results suggest that the optimal gradient threshold of the anisotropic filter depends on the average gradient of the background speckle noise, and that an optimal pair of template size and search window size can be identified that determines the cross-correlation level and computational cost. PMID:22254887
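    A minimal sketch of the cross-correlation template matching at the core of such a speckle tracker is given below; it is illustrative only, and the kernel and search-window sizes are placeholders rather than the values optimized in the paper.

```python
# Illustrative 3D speckle tracking by normalized cross-correlation (NCC).
# Kernel and search sizes are hypothetical, not the paper's optimized values.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized 3D blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_voxel(frame0, frame1, center, kernel=(7, 7, 7), search=3):
    """Displacement of the speckle pattern around `center` (an interior voxel),
    found by exhaustively matching a frame0 kernel inside a search window in frame1."""
    kz, ky, kx = (k // 2 for k in kernel)
    z, y, x = center
    template = frame0[z-kz:z+kz+1, y-ky:y+ky+1, x-kx:x+kx+1]
    best, best_shift = -np.inf, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = frame1[z+dz-kz:z+dz+kz+1,
                              y+dy-ky:y+dy+ky+1,
                              x+dx-kx:x+dx+kx+1]
                score = ncc(template, cand)
                if score > best:
                    best, best_shift = score, (dz, dy, dx)
    return best_shift, best
```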

  2. Real-time tracking with a 3D-Flow processor array

    SciTech Connect

    Crosetto, D.

    1993-06-01

    Real-time track finding has to date been performed with CAMs (content addressable memories) or with fast coincidence logic, because a processor-based scheme was thought to be much slower. Advances in technology, together with a new architectural approach, make it feasible to also explore a computing technique for real-time track finding, which offers the advantage over the CAM approach of implementing algorithms that can extract additional parameters such as the sagitta, curvature, and transverse momentum (pt). This report describes real-time track finding using a new computing technique based on the 3D-Flow array processor system. This system consists of a fixed interconnection architecture scheme, allowing flexible algorithm implementation on a scalable platform. The 3D-Flow parallel processing system for track finding is scalable in size and performance by increasing the number of processors, the processor speed, or the number of pipelined stages. The present article describes the conceptual idea and the design stage of the project.

  3. Automatic alignment of standard views in 3D echocardiograms using real-time tracking

    NASA Astrophysics Data System (ADS)

    Orderud, Fredrik; Torp, Hans; Rabben, Stein Inge

    2009-02-01

    In this paper, we present an automatic approach for aligning standard apical and short-axis slices, and for correcting them for out-of-plane motion, in 3D echocardiography. This is enabled by using real-time Kalman tracking to perform automatic left ventricle segmentation with a coupled deformable model, consisting of a left ventricle model as well as structures for the right ventricle and left ventricle outflow tract. Landmark points from the segmented model are then used to generate standard apical and short-axis slices. The slices are automatically updated after tracking in each frame to correct for out-of-plane motion caused by longitudinal shortening of the left ventricle. Results from a dataset of 35 recordings demonstrate the potential for automating apical slice initialization and dynamic short-axis slices. Apical 4-chamber, 2-chamber and long-axis slices are generated based on an assumption of a fixed angle between the slices, and short-axis slices are generated so that they follow the same myocardial tissue over the entire cardiac cycle. The error compared to manual annotation was 8.4 +/- 3.5 mm for the apex, 3.6 +/- 1.8 mm for the mitral valve and 8.4 +/- 7.4 for the apical 4-chamber view. The high computational efficiency and automatic behavior of the method enable it to operate in real time, potentially during image acquisition.

  4. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    NASA Astrophysics Data System (ADS)

    Kanphet, J.; Suriyapee, S.; Dumrongkijudom, N.; Sanghangthum, T.; Kumkhwao, J.; Wisetrintong, M.

    2016-03-01

    The purpose of this study was to determine patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. The interfraction motion was more pronounced than the intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, and has less impact due to the stability of organ movement under DIBH. The systematic error is likewise about half of the random error, because the high reproducibility of a modern linac can reduce the systematic uncertainty effectively, while the random error is uncontrollable.
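    For readers who want to reproduce this kind of analysis, the sketch below shows one common convention for separating systematic and random setup errors from per-fraction displacement measurements; the patient arrays are hypothetical, and the paper's raw data are not reproduced here.

```python
# One common convention: systematic error = SD of per-patient mean errors,
# random error = RMS of per-patient SDs.  Data below are made-up placeholders.
import numpy as np

displacements = {  # per-fraction setup errors (mm) in one direction, per patient
    "patient1": np.array([0.4, 0.6, -0.1, 0.3]),
    "patient2": np.array([-0.2, 0.1, 0.0, -0.4]),
}

patient_means = np.array([d.mean() for d in displacements.values()])
patient_sds = np.array([d.std(ddof=1) for d in displacements.values()])

group_mean = patient_means.mean()                # overall systematic offset M
systematic = patient_means.std(ddof=1)           # Sigma: SD of patient means
random_err = np.sqrt((patient_sds ** 2).mean())  # sigma: RMS of patient SDs

print(f"M = {group_mean:.2f} mm, Sigma = {systematic:.2f} mm, sigma = {random_err:.2f} mm")
```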

  5. Real-time visual sensing system achieving high-speed 3D particle tracking with nanometer resolution.

    PubMed

    Cheng, Peng; Jhiang, Sissy M; Menq, Chia-Hsiang

    2013-11-01

    This paper presents a real-time visual sensing system created to achieve high-speed three-dimensional (3D) motion tracking of microscopic spherical particles in aqueous solutions with nanometer resolution. The system comprises a complementary metal-oxide-semiconductor (CMOS) camera, a field programmable gate array (FPGA), and real-time image processing programs. The CMOS camera has high photosensitivity and a superior SNR. It acquires images of 128×120 pixels at a frame rate of up to 10,000 frames per second (fps) under white light illumination from a standard 100 W halogen lamp. The real-time image stream is downloaded from the camera directly to the FPGA, wherein a 3D particle-tracking algorithm is implemented to calculate the 3D positions of the target particle in real time. Two important objectives are achieved: real-time estimation of the 3D position at the maximum frame rate of the camera, and precise control of the timing of the system's output data stream. Two sets of experiments were conducted to demonstrate the performance of the system. First, the visual sensing system was used to track the motion of a 2 μm polystyrene bead whose motion was controlled by a three-axis piezo motion stage. The ability to track long-range motion with nanometer resolution in all three axes is demonstrated. Second, it was used to measure the Brownian motion of the 2 μm polystyrene bead, which was stabilized in aqueous solution by a laser trapping system. PMID:24216655

  6. Optimal transcostal high-intensity focused ultrasound with combined real-time 3D movement tracking and correction

    NASA Astrophysics Data System (ADS)

    Marquet, F.; Aubry, J. F.; Pernot, M.; Fink, M.; Tanter, M.

    2011-11-01

    Recent studies have demonstrated the feasibility of transcostal high intensity focused ultrasound (HIFU) treatment in the liver. However, two factors limit thermal necrosis of the liver through the ribs: the energy deposition at the focus is decreased by the respiratory movement of the liver, and the energy deposition on the skin is increased by the presence of highly absorbing bone structures. Ex vivo ablations were conducted to validate the feasibility of a transcostal real-time 3D movement tracking and correction mode. Experiments were conducted through a chest phantom made of three human ribs immersed in water, placed in front of a 300-element array working at 1 MHz. A binarized apodization law, introduced recently in order to spare the rib cage during treatment, has been extended here with real-time electronic steering of the beam. Thermal simulations were conducted to determine the steering limits. In vivo 3D movement detection was performed on pigs using an ultrasonic sequence. The maximum error of the transcostal motion detection was measured to be 0.09 ± 0.097 mm on the anterior-posterior axis. Finally, a complete sequence was developed combining real-time 3D transcostal movement correction and a spiral trajectory of the HIFU beam, allowing the system to treat larger areas with optimized efficiency. Lesions as large as 1 cm in diameter were produced at the focus in excised liver, whereas no necroses could be obtained at the same emitted power without correcting for the movement of the tissue sample.

  7. Real-time estimation of FLE statistics for 3-D tracking with point-based registration.

    PubMed

    Wiles, Andrew D; Peters, Terry M

    2009-09-01

    Target registration error (TRE) has become a widely accepted error metric in point-based registration since the metric was introduced in the 1990s. It is particularly prominent in image-guided surgery (IGS) applications, where point-based registration is used in both image registration and optical tracking. In point-based registration, the TRE is a function of the fiducial marker geometry, the location of the target and the fiducial localizer error (FLE). While the first two items are easily obtained, the FLE is usually estimated using an a priori technique and applied without any knowledge of real-time information. However, if the FLE can be estimated in real time, particularly as it pertains to optical tracking, then the TRE can be estimated more robustly. In this paper, a method is presented in which the FLE statistics are estimated from the latest measurement of the fiducial registration error (FRE) statistics. The solution is obtained by solving a linear system of equations of the form Ax=b for each marker at each time frame, where x contains the six independent FLE covariance parameters and b the six independent estimated FRE covariance parameters. The A matrix is only a function of the tool geometry, and hence the inverse of the matrix can be computed a priori and used at each instant in which the FLE estimation is required, minimizing the level of computation at each frame. When using a good estimate of the FRE statistics, Monte Carlo simulations demonstrate that the root mean square of the FLE can be computed within a range of 70-90 μm. Robust estimation of the TRE for an optically tracked tool, using a good estimate of the FLE, will provide two enhancements in IGS. First, better patient-to-image registration will be obtained by using the TRE of the optical tool as a weighting factor in the point-based registration used to map the patient to image space. Second, the directionality of the TRE can be relayed back to the surgeon giving the surgeon the option
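    The per-frame update described above reduces to a small linear solve; the sketch below illustrates the idea of precomputing the inverse of the geometry matrix A and reusing it every frame. The matrix and parameter values are stand-ins, not those of the paper, and the mapping from covariance parameters to an RMS value is an assumption about parameter ordering.

```python
# Sketch of the per-marker FLE-from-FRE update: x = A^-1 b, with A fixed by
# the tool geometry.  A and b here are placeholders, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 6 * np.eye(6)  # stand-in for the geometry matrix
A_inv = np.linalg.inv(A)                         # computed once, offline

def estimate_fle(fre_covariance_params):
    """Per-frame update mapping six FRE covariance parameters to six FLE ones."""
    b = np.asarray(fre_covariance_params, dtype=float)
    return A_inv @ b

x = estimate_fle([0.020, 0.015, 0.030, 0.001, 0.002, 0.0015])
# Assuming the first three parameters are the axis variances (mm^2):
fle_rms = np.sqrt(abs(x[:3]).sum())
```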

  8. Four-directional stereo-microscopy for 3D particle tracking with real-time error evaluation.

    PubMed

    Hay, R F; Gibson, G M; Lee, M P; Padgett, M J; Phillips, D B

    2014-07-28

    High-speed video stereo-microscopy relies on illumination from two distinct angles to create two views of a sample from different directions. The 3D trajectory of a microscopic object can then be reconstructed using parallax to combine 2D measurements of its position in each image. In this work, we evaluate the accuracy of 3D particle tracking using this technique by extending the number of views from two to four directions. This allows us to record two independent sets of measurements of the 3D coordinates of tracked objects, and comparison of these enables measurement and minimisation of the tracking error in all dimensions. We demonstrate the method by tracking the motion of an optically trapped microsphere 5 μm in diameter, and find an accuracy of 2-5 nm laterally and 5-10 nm axially, representing a relative error of less than 2.5% of its range of motion in each dimension. PMID:25089484
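    The error-evaluation idea lends itself to a compact sketch: with four views grouped into two independent stereo pairs, each pair yields its own 3D estimate from parallax, and their disagreement bounds the tracking error. The symmetric-illumination geometry below is a simplification and the numbers are invented.

```python
# Simplified parallax reconstruction for two independent stereo pairs and a
# per-axis error estimate from their disagreement (illustrative values only).
import numpy as np

def stereo_xyz(x_left, x_right, y_mean, theta_deg):
    """3D position (microns) from one stereo pair of 2D centroids, assuming
    two views inclined symmetrically at +/- theta to the optical axis."""
    theta = np.radians(theta_deg)
    x = 0.5 * (x_left + x_right)
    z = (x_right - x_left) / (2 * np.tan(theta))  # parallax -> axial position
    return np.array([x, y_mean, z])

p_pair1 = stereo_xyz(10.02, 10.10, 5.0, theta_deg=15)
p_pair2 = stereo_xyz(10.03, 10.09, 5.0, theta_deg=15)
error_estimate = np.abs(p_pair1 - p_pair2)        # per-axis disagreement
```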

  9. Performance and suitability assessment of a real-time 3D electromagnetic needle tracking system for interstitial brachytherapy

    PubMed Central

    Boutaleb, Samir; Fillion, Olivier; Bonillas, Antonio; Hautvast, Gilion; Binnekamp, Dirk; Beaulieu, Luc

    2015-01-01

    Purpose: Accurate insertion and overall needle positioning are key requirements for effective brachytherapy treatments. This work aims at demonstrating the accuracy performance and the suitability of the Aurora® V1 Planar Field Generator (PFG) electromagnetic tracking system (EMTS) for real-time treatment assistance in interstitial brachytherapy procedures. Material and methods: The system's performance was characterized in two distinct studies. First, in an environment free of EM disturbance, the boundaries of the detection volume of the EMTS were characterized and a tracking error analysis was performed. Secondly, a distortion analysis was conducted as a means of assessing the tracking accuracy of the system in the presence of potential EM disturbance generated by the proximity of standard brachytherapy components. Results: The tracking accuracy experiments showed that positional errors were typically 2 ± 1 mm in a zone restricted to the first 30 cm of the detection volume. However, at the edges of the detection volume, sensor position errors of up to 16 mm were recorded. On the other hand, orientation errors remained low, at ± 2° for most of the measurements. The EM distortion analysis showed that the presence of typical brachytherapy components in the vicinity of the EMTS had little influence on tracking accuracy. Position errors of less than 1 mm were recorded with all components except a metallic arm support, which induced a mean absolute error of approximately 1.4 mm when located 10 cm away from the needle sensor. Conclusions: The Aurora® V1 PFG EMTS possesses great potential for real-time treatment assistance in general interstitial brachytherapy. In view of our experimental results, we nevertheless recommend that the needle axis remain as parallel as possible to the generator surface during treatment and that the tracking zone be restricted to the first 30 cm from the generator surface. PMID:26622231

  10. 3D camera tracking from disparity images

    NASA Astrophysics Data System (ADS)

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from the known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between the lenses of the 3D camera and the intrinsic parameters are known. The proposed method reduces the camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between the initial lenses using a normalized correlation method. In conjunction with the matched features, we compute disparity images. When the camera moves, the corresponding feature points obtained from each lens of the 3D camera are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via essential matrices. The essential matrices are computed from the fundamental matrix, which is calculated using the normalized 8-point algorithm with a RANSAC scheme. Then, we determine the scale factor of the translation by d-motion; this is required because the camera motion obtained from the essential matrix is only defined up to scale. Finally, we optimize the camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from the disparity images. The proposed method can be widely adopted in augmented reality (AR) applications, 3D reconstruction using a 3D camera, and surveillance systems that need not only depth information but also camera motion parameters in real time.
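    A hedged sketch of the pose-from-correspondences step is shown below, using OpenCV routines as stand-ins for the authors' implementation: the fundamental matrix is estimated with RANSAC, the essential matrix is formed from the known intrinsics, and the relative rotation and up-to-scale translation are recovered. The scale fixing from the disparity images ("d-motion") is not reproduced.

```python
# Relative pose from matched features between two lens views (sketch only;
# K, pts0, pts1 are assumed to come from calibration and KLT tracking).
import cv2
import numpy as np

def relative_pose(pts0, pts1, K):
    """pts0, pts1: Nx2 float arrays of matched pixel coordinates in two views."""
    F, inliers = cv2.findFundamentalMat(pts0, pts1, cv2.FM_RANSAC, 1.0, 0.999)
    E = K.T @ F @ K                       # essential matrix from intrinsics
    _, R, t, _ = cv2.recoverPose(E, pts0, pts1, K)
    return R, t, inliers                  # t is only defined up to scale
```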

  11. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    SciTech Connect

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-15

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limitations, either in registration speed or in performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they attach a 3D US transducer to the patient's body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via rigid registration of these US images to the 3D preoperative images in the 4D image, the pose of the fixed 3D US transducer is determined with respect to the preoperative image coordinates. As the feature(s) used for the rigid registration, they may choose either internal liver vessels or the inferior vena cava; since the latter is especially useful in patients with diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  12. Improved image guidance technique for minimally invasive mitral valve repair using real-time tracked 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry

    2016-03-01

    In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. However, these techniques often complicate interventions by requiring additional steps to manually define and initialize the virtual models. Furthermore, overlaying virtual elements onto real-time image data can also obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools, presented in an augmented virtuality environment, to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics achieved performance comparable to previous work in augmented virtuality techniques, and considerable improvement over standard-of-care ultrasound guidance.

  13. Use of MV and kV imager correlation for maintaining continuous real-time 3D internal marker tracking during beam interruptions

    NASA Astrophysics Data System (ADS)

    Wiersma, R. D.; Riaz, N.; Dieterich, Sonja; Suh, Yelin; Xing, L.

    2009-01-01

    The integration of onboard kV imaging together with an MV electronic portal imaging device (EPID) on linear accelerators (LINACs) can provide an easy-to-implement real-time 3D organ position monitoring solution for treatment delivery. Currently, real-time MV-kV tracking has only been demonstrated with simultaneous imaging by both the MV and kV imaging devices. However, modalities such as step-and-shoot IMRT (SS-IMRT), which inherently contain MV beam interruptions, can lead to loss of the target information necessary for 3D localization. Additionally, continuous kV imaging throughout the treatment delivery can lead to high levels of imaging dose to the patient. This work demonstrates for the first time how full 3D target tracking can be maintained even in the presence of such beam interruption, or MV/kV beam interleave, by use of a relatively simple correlation model together with MV-kV tracking. A moving correlation model was constructed using both present and prior positions of the marker in the available MV or kV image to compute the position of the marker on the interrupted imager. A commercially available radiotherapy system, equipped with both MV and kV imaging devices, was used to deliver typical SS-IMRT lung treatment plans to a 4D phantom containing internally embedded metallic markers. To simulate actual lung tumor motion, previously recorded 4D lung patient motion data were used. Lung tumor motion data from five separate patients were input into the 4D phantom, and typical SS-IMRT lung plans were delivered to simulate actual clinical deliveries. Application of the correlation model to SS-IMRT lung treatment deliveries was found to be an effective solution for maintaining continuous 3D tracking during 'step' beam interruptions. For deliveries involving five or more gantry angles with 50 or more fields per plan, the positional errors were found to have <=1 mm root mean squared error (RMSE) in all three spatial directions. In addition to increasing the robustness of
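    One way to picture the moving correlation model is as a sliding-window fit between the marker's kV and MV image coordinates, used to predict the projection on whichever imager is interrupted. The straight-line model and window length below are assumptions for illustration, not the form published in the paper.

```python
# Illustrative moving correlation model for one image axis: fit a linear
# relation over the most recent paired kV/MV samples, then predict the MV
# marker position during an MV beam hold (model form and window length are
# assumptions).
import numpy as np
from collections import deque

pairs = deque(maxlen=20)                  # recent (kv_u, mv_u) paired samples

def update(kv_u, mv_u):
    """Call whenever both imagers see the marker."""
    pairs.append((kv_u, mv_u))

def predict_mv(kv_u):
    """Estimate the MV-image marker position while the MV beam is interrupted."""
    kv, mv = np.array(pairs).T
    slope, intercept = np.polyfit(kv, mv, 1)
    return slope * kv_u + intercept
```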

  14. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study investigating the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees of freedom (DOF) using either the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could be accurately extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 +/- 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.

  15. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV kV imaging

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wiersma, R. D.; Mao, W.; Luxton, G.; Xing, L.

    2008-12-01

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize a real-time image-guided scheme in external beam radiation therapy is the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinic, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery, and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and an electronic portal imaging device (EPID) was used for the study. A custom-built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, the spatial accuracy of MV-kV fiducial tracking during the arc delivery process for a normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ~0.5 mm for the normal adult breathing pattern to ~1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general, highly accurate real-time
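    Once each imager has been calibrated to a projection matrix, the triangulation itself is a small linear least-squares problem; the sketch below shows the standard DLT form, with the projection matrices and detections as hypothetical inputs rather than the paper's calibration results.

```python
# DLT triangulation of one fiducial from an MV and a kV detection (sketch;
# P_mv, P_kv are assumed 3x4 projection matrices from system calibration).
import numpy as np

def triangulate(P_mv, P_kv, uv_mv, uv_kv):
    rows = []
    for P, (u, v) in ((P_mv, uv_mv), (P_kv, uv_kv)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)                  # 4x4 homogeneous system
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                 # 3D marker position in world coordinates
```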

  16. Three-Dimensional Rotation, Twist and Torsion Analyses Using Real-Time 3D Speckle Tracking Imaging: Feasibility, Reproducibility, and Normal Ranges in Pediatric Population

    PubMed Central

    Han, Wei; Gao, Jun; He, Lin; Yang, Yali; Yin, Ping; Xie, Mingxing; Ge, Shuping

    2016-01-01

    Background and Objective: The specific aim of this study was to evaluate the feasibility, reproducibility and maturational changes of LV rotation, twist and torsion variables by real-time 3D speckle-tracking echocardiography (RT3DSTE) in children. Methods: A prospective study was conducted in 347 consecutive healthy subjects (181 males/156 females, mean age 7.12 ± 5.3 years, range from birth to 18 years) using real-time 3D echocardiography (3DE). The LV rotation, twist and torsion measurements were made off-line using TomTec software. Manual landmark selection and endocardial border editing were performed in 3 planes (apical 2-, 4-, and 3-chamber views) and semi-automated tracking yielded LV rotation, twist and torsion measurements. LV rotation, twist and torsion analysis by RT3DSTE was feasible in 307 out of 347 subjects (88.5%). Results: There was no correlation between rotation or twist and age, height, weight, BSA or heart rate, respectively. However, there was a statistically significant but very modest correlation between LV torsion and age (R2 = 0.036, P < 0.001). Normal ranges were defined for rotation and twist in this cohort, and for torsion for each age group. The intra-observer and inter-observer variabilities for apical and basal rotation, twist and torsion ranged from 7.3% ± 3.8% to 12.3% ± 8.8% and from 8.8% ± 4.6% to 15.7% ± 10.1%, respectively. Conclusions: We conclude that analysis of LV rotation, twist and torsion by this new RT3DSTE is feasible and reproducible in a pediatric population. There is no maturational change in rotation and twist, but torsion decreases with age in this cohort. Further refinement is warranted to validate the utility of this new methodology in more sensitive and quantitative evaluation of congenital and acquired heart diseases in children. PMID:27427968

  17. Automatic respiration tracking for radiotherapy using optical 3D camera

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with no patient contact or radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) treatments for accurate patient repositioning and respiration management. However, applications of O3D imaging techniques to image-guided radiotherapy have been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within any treatment session due to voluntary/involuntary physiologic processes (e.g. respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of markers or positioning devices, etc. Presently, a viable solution is lacking for in-vivo tracking of target motion and anatomy changes during beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas in the 3D surface to track surface motion. The configuration of these marks or areas may change with time, which makes quantifying and interpreting the respiration patterns inconsistent. To meet the challenge of performing real-time respiration tracking using O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on a linear dimensionality reduction technique, PCA (principal component analysis). The optical 3D image sequence is decomposed with principal component analysis into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigen-space spanned by eigen-vectors). New
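    The dimensionality-reduction idea can be illustrated with a few lines of code: each surface frame is flattened to one row, PCA extracts a handful of orthogonal motion patterns, and the leading component score serves as the respiratory trace. The frame layout and component count below are assumptions, not the authors' settings.

```python
# PCA decomposition of an optical 3D surface sequence into motion patterns.
import numpy as np
from sklearn.decomposition import PCA

def respiration_trace(surface_frames, n_modes=3):
    """surface_frames: (T, H, W) array of depth maps from the optical 3D camera."""
    T = surface_frames.shape[0]
    X = surface_frames.reshape(T, -1)       # one flattened surface per row
    pca = PCA(n_components=n_modes)
    scores = pca.fit_transform(X)           # (T, n_modes) pattern weights over time
    return scores[:, 0], pca.components_    # leading score ~ breathing signal
```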

  18. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2012-08-29

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  19. Light driven micro-robotics with holographic 3D tracking

    NASA Astrophysics Data System (ADS)

    Glückstad, Jesper

    2016-04-01

    We recently pioneered the concept of light-driven micro-robotics, including new and disruptive 3D-printed micro-tools coined Wave-guided Optical Waveguides that can be optically trapped in real time and "remote-controlled" in a volume with six degrees of freedom. Exploring the full potential of this new drone-like 3D light robotics approach in challenging microscopic geometries requires a versatile and real-time reconfigurable light coupling that can dynamically track a plurality of "light robots" in 3D to ensure continuous optimal light coupling on the fly. Our latest developments in this new and exciting area are reviewed in this invited paper.

  20. 3D hand tracking using Kalman filter in depth space

    NASA Astrophysics Data System (ADS)

    Park, Sangheon; Yu, Sunjin; Kim, Joongrock; Kim, Sungjin; Lee, Sangyoun

    2012-12-01

    Hand gestures are an important type of natural language used in many research areas such as human-computer interaction and computer vision. Hand gesture recognition requires the prior determination of the hand position through detection and tracking. One of the most efficient strategies for hand tracking is to use 2D visual information such as color and shape. However, visual-sensor-based hand tracking methods are very sensitive when tracking is performed under variable light conditions. Also, as hand movements are made in 3D space, the recognition performance of hand gestures using 2D information is inherently limited. In this article, we propose a novel real-time 3D hand tracking method in depth space using a 3D depth sensor and employing a Kalman filter. We detect hand candidates using motion clusters and a predefined wave motion, and track hand locations using the Kalman filter. To verify the effectiveness of the proposed method, we compare its performance with that of a visual-based method. Experimental results show that the proposed method outperforms the visual-based method.
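    As a sketch of the tracking stage described here, the snippet below runs a constant-velocity Kalman filter on 3D hand detections from a depth sensor; the time step and noise levels are illustrative assumptions, and the detection stage (motion clustering and wave detection) is not reproduced.

```python
# Constant-velocity Kalman filter for a 3D hand position (illustrative).
import numpy as np

dt = 1 / 30.0                                  # depth-camera frame period (assumed)
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)      # state: [x y z vx vy vz]
H = np.zeros((3, 6)); H[:, :3] = np.eye(3)     # only position is measured
Q = 1e-3 * np.eye(6)                           # process noise (assumed)
R = 4e-2 * np.eye(3)                           # measurement noise (assumed)

x = np.zeros(6)
P = np.eye(6)

def kalman_step(z):
    """One predict/update cycle given a 3D hand detection z (metres)."""
    global x, P
    x = F @ x                                  # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                        # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.asarray(z) - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x[:3]                               # filtered 3D position
```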

  1. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then-NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  2. Speeding up 3D speckle tracking using PatchMatch

    NASA Astrophysics Data System (ADS)

    Zontak, Maria; O'Donnell, Matthew

    2016-03-01

    Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4-D (3-D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlation of a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame rates, however, this search region will remain large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To solve this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions [1]. This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch [2], a powerful method to find correspondences between images. Here we adopt PatchMatch for 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successive matches among neighboring voxels. We show that: 1) inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand; 2) for typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.

  3. Real-time monitoring of 3D cell culture using a 3D capacitance biosensor.

    PubMed

    Lee, Sun-Mi; Han, Nalae; Lee, Rimi; Choi, In-Hong; Park, Yong-Beom; Shin, Jeon-Soo; Yoo, Kyung-Hwa

    2016-03-15

    Three-dimensional (3D) cell cultures have recently received attention because they represent a more physiologically relevant environment compared to conventional two-dimensional (2D) cell cultures. However, 2D-based imaging techniques or cell sensors are insufficient for real-time monitoring of cellular behavior in 3D cell culture. Here, we report investigations conducted with a 3D capacitance cell sensor consisting of vertically aligned pairs of electrodes. When GFP-expressing human breast cancer cells (GFP-MCF-7) encapsulated in alginate hydrogel were cultured in a 3D cell culture system, cellular activities, such as cell proliferation and apoptosis at different heights, could be monitored non-invasively and in real-time by measuring the change in capacitance with the 3D capacitance sensor. Moreover, we were able to monitor cell migration of human mesenchymal stem cells (hMSCs) with our 3D capacitance sensor. PMID:26386332

  4. Real-time catheter tracking for high-dose-rate prostate brachytherapy using an electromagnetic 3D-guidance device: A preliminary performance study

    SciTech Connect

    Zhou Jun; Sebastian, Evelyn; Mangona, Victor; Yan Di

    2013-02-15

    Purpose: In order to increase the accuracy and speed of catheter reconstruction in a high-dose-rate (HDR) prostate implant procedure, an automatic tracking system has been developed using an electromagnetic (EM) device (trakSTAR, Ascension Technology, VT). The performance of the system, including the accuracy and noise level with various tracking parameters and conditions, was investigated. Methods: A direct current (dc) EM transmitter (midrange model) and a sensor with a diameter of 1.3 mm (Model 130) were used in the trakSTAR system for tracking catheter position during HDR prostate brachytherapy. Localization accuracy was assessed under both static and dynamic conditions. For the static analysis, a calibration phantom was used to investigate error dependency on operating room (OR) table height (bottom vs midposition vs top), sensor position (distal tip of catheter vs connector end of catheter), direction [left-right (LR) vs anterior-posterior (AP) vs superior-inferior (SI)], sampling frequency (40 vs 80 vs 120 Hz), and interference from OR equipment (present vs absent). The mean and standard deviation of the localization offset in each direction and the corresponding error vectors were calculated. For the dynamic analysis, the paths of five straight catheters were tracked to study the effects of direction, sampling frequency, and interference of the EM field. Statistical analysis was conducted to compare the results in different configurations. Results: When interference was present in the static analysis, the error vectors were significantly higher at the top table position (3.3 ± 1.3 vs 1.8 ± 0.9 mm at bottom and 1.7 ± 1.0 mm at middle, p < 0.001), at the catheter end position (3.1 ± 1.1 vs 1.4 ± 0.7 mm at the tip position, p < 0.001), and at 40 Hz sampling frequency (2.6 ± 1.1 vs 2.4 ± 1.5 mm at 80 Hz and 1.8 ± 1.1 at 160 Hz, p < 0.001). So were the mean offset errors in the LR direction (-1.7 ± 1.4 vs 0.4 ± 0.5 mm in AP and 0

  5. Electrically tunable lens speeds up 3D orbital tracking

    PubMed Central

    Annibale, Paolo; Dvornikov, Alexander; Gratton, Enrico

    2015-01-01

    3D orbital particle tracking is a versatile and effective microscopy technique that allows following fast-moving fluorescent objects within living cells and reconstructing complex 3D shapes using laser scanning microscopes. We demonstrated notable improvements in the range, speed and accuracy of 3D orbital particle tracking by replacing the commonly used piezoelectric stages with an Electrically Tunable Lens (ETL) that eliminates mechanical movement of the objective lens. This allowed tracking and reconstructing the shape of structures extending 500 microns in the axial direction. Using the ETL, we tracked fluorescently labeled genomic loci within the nucleus of living cells at high speed, with an unprecedented temporal resolution of 8 ms, using a 1.42 NA oil-immersion objective. The presented technology is cost-effective and allows easy upgrade of scanning microscopes for fast 3D orbital tracking. PMID:26114037

  6. Track and trap in 3D

    NASA Astrophysics Data System (ADS)

    Glückstad, Jesper; Rodrigo, Peter J.; Nielsen, Ivan P.; Alonzo, Carlo A.

    2007-04-01

    Three-dimensional light structures can be created by modulating the spatial phase and polarization properties of an expanded laser beam. A particularly promising technique is the Generalized Phase Contrast (GPC) method, invented and patented at Risø National Laboratory. Based on the combination of programmable spatial light modulator devices and an advanced graphical user interface, the GPC method enables real-time, interactive and arbitrary control over the dynamics and geometry of synthesized light patterns. Recent experiments have shown that GPC-driven micro-manipulation provides a unique technology platform for fully user-guided assembly of a plurality of particles in a plane, control of particle stacking along the beam axis, manipulation of multiple hollow beads, and the organization of living cells into three-dimensional colloidal structures. Here we present GPC-based optical micromanipulation in a microfluidic system where trapping experiments are computer-automated and thereby capable of running with only limited supervision. The system is able to dynamically detect living yeast cells using a computer-interfaced CCD camera, and respond by instantly creating traps at the positions of the spotted cells, which stream at flow velocities that would be difficult for a human operator to handle.

  7. Real-time structured light intraoral 3D measurement pipeline

    NASA Astrophysics Data System (ADS)

    Gheorghe, Radu; Tchouprakov, Andrei; Sokolov, Roman

    2013-02-01

    Computer-aided design and manufacturing (CAD/CAM) is increasingly becoming a standard feature and service provided to patients in dentist offices and denture manufacturing laboratories. Although the quality of the tools and data has slowly improved in recent years, due to various surface measurement challenges, practical, accurate, in-vivo, real-time 3D high-quality data acquisition and processing still needs improving. Advances in GPU computational power have allowed near real-time 3D intraoral in-vivo scanning of patients' teeth. We explore in this paper, from a real-time perspective, a hardware-software-GPU solution that addresses all the requirements mentioned before. Moreover, we exemplify and quantify the hard and soft deadlines required by such a system and illustrate how they are supported in our implementation.

  8. Multiview 3-D Echocardiography Fusion with Breath-Hold Position Tracking Using an Optical Tracking System.

    PubMed

    Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; McNulty, Alexander; Biamonte, Marina; He, Allen; Noga, Michelle; Boulanger, Pierre; Becher, Harald

    2016-08-01

    Recent advances in echocardiography allow real-time 3-D dynamic image acquisition of the heart. However, one of the major limitations of 3-D echocardiography is the limited field of view, which results in an acquisition insufficient to cover the whole geometry of the heart. This study proposes the novel approach of fusing multiple 3-D echocardiography images using an optical tracking system that incorporates breath-hold position tracking to infer that the heart remains at the same position during different acquisitions. In six healthy male volunteers, 18 pairs of apical/parasternal 3-D ultrasound data sets were acquired during a single breath-hold as well as in subsequent breath-holds. The proposed method yielded a field of view improvement of 35.4 ± 12.5%. To improve the quality of the fused image, a wavelet-based fusion algorithm was developed that computes pixelwise likelihood values for overlapping voxels from multiple image views. The proposed wavelet-based fusion approach yielded significant improvement in contrast (66.46 ± 21.68%), contrast-to-noise ratio (49.92 ± 28.71%), signal-to-noise ratio (57.59 ± 47.85%) and feature count (13.06 ± 7.44%) in comparison to individual views. PMID:27166019

  9. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O Serdar; Alatan, A Aydn

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, with accurate pose estimates needed to increase reliability and decrease jitter overall. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape index map data of the 3-D point cloud, increases 2-D as well as 3-D tracking performance significantly. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and a superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes. PMID:23955795

  10. A non-disruptive technology for robust 3D tool tracking for ultrasound-guided interventions.

    PubMed

    Mung, Jay; Vignon, Francois; Jain, Ameet

    2011-01-01

    In the past decade ultrasound (US) has become the preferred modality for a number of interventional procedures, offering excellent soft tissue visualization. Its main limitation, however, is poor visualization of surgical tools. A new method is proposed for robust 3D tracking and US image enhancement of surgical tools under US guidance. Small US sensors are mounted on existing surgical tools. As the imager emits acoustic energy, the electrical signal from the sensor is analyzed to reconstruct its 3D coordinates. These coordinates can then be used for 3D surgical navigation, similar to current-day tracking systems. A system with real-time 3D tool tracking and image enhancement was implemented on a commercial ultrasound scanner and 3D probe. Extensive water tank experiments with a tracked 0.2 mm sensor show robust performance in a wide range of imaging conditions and tool positions/orientations. The 3D tracking accuracy was 0.36 +/- 0.16 mm throughout the imaging volume of 55 degrees x 27 degrees x 150 mm. Additionally, the tool was successfully tracked inside a beating heart phantom. This paper proposes an image enhancement and tool tracking technology with sub-mm accuracy for US-guided interventions. The technology is non-disruptive, both in terms of existing clinical workflow and commercial considerations, showing promise for large-scale clinical impact. PMID:22003612

  11. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive real-time 3D graphical display. In a program, 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
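    A small program in the spirit described above looks like the following: pure computation updates an object's position while the graphics module renders the scene in real time. This sketch uses the current `vpython` package rather than the older Visual module referenced in the article.

```python
# Minimal VPython example: a ball bouncing on a floor, rendered in real time.
from vpython import sphere, box, vector, rate, color

floor = box(pos=vector(0, -1, 0), size=vector(4, 0.1, 4))
ball = sphere(pos=vector(0, 3, 0), radius=0.3, color=color.red)

velocity = vector(0, 0, 0)
g = vector(0, -9.8, 0)
dt = 0.01

while True:
    rate(100)                                   # cap the loop at 100 iterations/second
    velocity = velocity + g * dt
    ball.pos = ball.pos + velocity * dt
    if ball.pos.y < floor.pos.y + ball.radius:  # bounce off the floor
        velocity.y = -velocity.y
```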

  12. Monocular 3-D gait tracking in surveillance scenes.

    PubMed

    Rogez, Grégory; Rihan, Jonathan; Guerrero, Jose J; Orrite, Carlos

    2014-06-01

    Gait recognition can potentially provide noninvasive and effective biometric authentication from a distance. However, the performance of gait recognition systems suffers in real surveillance scenarios with multiple interacting individuals, where the camera is usually placed at a significant angle and distance from the floor. We present a methodology for view-invariant monocular 3-D human pose tracking in man-made environments in which we assume that observed people move on a known ground plane. First, we model 3-D body poses and camera viewpoints with a low-dimensional manifold and learn a generative model of the silhouette from this manifold to a reduced set of training views. During the online stage, 3-D body poses are tracked using recursive Bayesian sampling conducted jointly over the scene's ground plane and the pose-viewpoint manifold. For each sample, the homography that relates the corresponding training plane to the image points is calculated using the dominant 3-D directions of the scene, the sampled location on the ground plane and the sampled camera view. Each regressed silhouette shape is projected using this homographic transformation and is matched in the image to estimate its likelihood. Our framework successfully tracks 3-D human walking poses in a 3-D environment while exploring only a 4-D state space. In our experimental evaluation, we demonstrate the significant improvements of the homographic alignment over a commonly used similarity transformation and provide quantitative pose tracking results for monocular sequences with a high perspective effect from the CAVIAR dataset. PMID:23955796

  13. Real-time rendering method and performance evaluation of composable 3D lenses for interactive VR.

    PubMed

    Borst, Christoph W; Tiesel, Jan-Phillip; Best, Christopher M

    2010-01-01

    We present and evaluate a new approach for real-time rendering of composable 3D lenses for polygonal scenes. Such lenses, usually called "volumetric lenses," are an extension of 2D Magic Lenses to 3D volumes in which effects are applied to scene elements. Although the composition of 2D lenses is well known, 3D composition was long considered infeasible due to both geometric and semantic complexity. Nonetheless, for a scene with multiple interactive 3D lenses, the problem of intersecting lenses must be considered. Intersecting 3D lenses in meaningful ways supports new interfaces such as hierarchical 3D windows, 3D lenses for managing and composing visualization options, or interactive shader development by direct manipulation of lenses providing component effects. Our 3D volumetric lens approach differs from other approaches and is one of the first to address efficient composition of multiple lenses. It is well-suited to head-tracked VR environments because it requires no view-dependent generation of major data structures, allowing caching and reuse of full or partial results. A Composite Shader Factory module composes shader programs for rendering composite visual styles and geometry of intersection regions. Geometry is handled by Boolean combinations of region tests in fragment shaders, which allows both convex and nonconvex CSG volumes for lens shape. Efficiency is further addressed by a Region Analyzer module and by broad-phase culling. Finally, we consider the handling of order effects for composed 3D lenses. PMID:20224135

  14. 3D deformable organ model based liver motion tracking in ultrasound videos

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong

    2013-03-01

    This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is made possible by a 3D deformable organ model we have developed. The method consists of three successive processes. The first process is organ modeling, where we generate a personalized 3D organ model from high-quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels and tumor, which can all deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we resolve the candidate that best matches the 3D US images according to vessel centerlines and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to the respiratory phase and is fitted to the image by considering the positions of the vessels. The tumor's 3D position is then inferred based on the respiratory phase. Testing our method on real patient data, we found the accuracy of the 3D position to be within 3.79 mm and the processing time to be 5.4 ms during tracking.

  15. 3D imaging of neutron tracks using confocal microscopy

    NASA Astrophysics Data System (ADS)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects of monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors, and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose the radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post-etch, the plastics were treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2 mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm2). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  16. Real-time, 3-D ultrasound with multiple transducer arrays.

    PubMed

    Fronheiser, Matthew P; Light, Edward D; Idriss, Salim F; Wolf, Patrick D; Smith, Stephen W

    2006-01-01

    Modifications were made to a commercial real-time, three-dimensional (3-D) ultrasound system for near-simultaneous 3-D scanning with two matrix array transducers. As a first illustration, a transducer cable assembly was modified to incorporate two independent 3-D intra-cardiac echo catheters: a 7 Fr (2.3 mm O.D.) side-scanning catheter and a 14 Fr (4.7 mm O.D.) forward-viewing catheter with an accessory port, each catheter using 85 channels operating at 5 MHz. For applications in the treatment of atrial fibrillation, the goal is to place the side-viewing catheter within the coronary sinus to view the whole left atrium, including a pulmonary vein. Meanwhile, the forward-viewing catheter inserted within the left atrium is directed toward the ostium of a pulmonary vein for therapy using the integrated accessory port. Using preloaded phasing data, the scanner switches between catheters automatically, at the push of a button, with a delay of about 1 second, so that the clinician can view the therapy catheter with the coronary sinus catheter and vice versa. Preliminary imaging studies in a tissue phantom and in vivo show that our system successfully guided the forward-viewing catheter toward a target while being imaged with the side-viewing catheter. The forward-viewing catheter was then activated to monitor the target while we mimicked therapy delivery. In the future, the system will switch between 3-D probes on a line-by-line basis and display both volumes simultaneously. PMID:16471436

  17. Real-time 3D change detection of IEDs

    NASA Astrophysics Data System (ADS)

    Wathen, Mitch; Link, Norah; Iles, Peter; Jinkerson, John; Mrstik, Paul; Kusevic, Kresimir; Kovats, David

    2012-06-01

    Road-side bombs are a real and continuing threat to soldiers in theater. CAE USA recently developed a prototype Volume-based Intelligence Surveillance Reconnaissance (VISR) sensor platform for IED detection. This vehicle-mounted, prototype sensor system uses a high data rate LiDAR (1.33 million range measurements per second) to generate a 3D mapping of roadways. The mapped data is used as a reference to generate real-time change detection on future trips on the same roadways. The prototype VISR system is briefly described. The focus of this paper is the methodology used to process the 3D LiDAR data, in real-time, to detect small changes on and near the roadway ahead of a vehicle traveling at moderate speeds with sufficient warning to stop the vehicle at a safe distance from the threat. The system relies on accurate navigation equipment to geo-reference the reference run and the change-detection run. Since it was recognized early in the project that detection of small changes could not be achieved with accurate navigation solutions alone, a scene alignment algorithm was developed to register the reference run with the change detection run prior to applying the change detection algorithm. Good success was achieved in simultaneous real-time processing of scene alignment and change detection.
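
    The abstract describes the change-detection step only at a high level. As a rough illustration of the underlying idea (not CAE's algorithm), the sketch below compares an already registered reference point cloud with a new scan on a voxel-occupancy grid and reports voxels that appear only in the new scan; all names and the voxel size are hypothetical.

        import numpy as np

        def voxelize(points, voxel_size=0.1):
            # Map Nx3 points (metres) to a set of integer voxel indices.
            return set(map(tuple, np.floor(points / voxel_size).astype(int)))

        def detect_changes(reference_pts, current_pts, voxel_size=0.1):
            # Voxels occupied in the current scan but empty in the reference
            # run are candidate "new object" locations.
            return voxelize(current_pts, voxel_size) - voxelize(reference_pts, voxel_size)

        # Synthetic check: a small box added to a flat road patch.
        rng = np.random.default_rng(0)
        road = rng.uniform([0, 0, 0], [10, 4, 0.05], size=(5000, 3))
        box = rng.uniform([5, 2, 0], [5.3, 2.3, 0.3], size=(200, 3))
        print(len(detect_changes(road, np.vstack([road, box]))), "changed voxels")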

  18. Real-time 3D ultrasound imaging on a next-generation media processor

    NASA Astrophysics Data System (ADS)

    Pagoulatos, Niko; Noraz, Frederic; Kim, Yongmin

    2001-05-01

    3D ultrasound (US) provides physicians with a better understanding of human anatomy. By manipulating the 3D US data set, physicians can observe the anatomy in 3D from a number of different view directions and obtain 2D US images that would not be possible to acquire directly with the US probe. In order for 3D US to be in widespread clinical use, creation and manipulation of the 3D US data should be done at interactive rates. This is a challenging task due to the large amount of data to be processed. Our group previously reported interactive 3D US imaging using a programmable mediaprocessor, the Texas Instruments TMS320C80, which has been in clinical use. In this work, we present the algorithms we have developed for real-time 3D US using a newer and more powerful mediaprocessor, called MAP-CA. MAP-CA is a very long instruction word (VLIW) processor developed for multimedia applications. It has multiple execution units, a 32-kbyte data cache and a programmable DMA controller called the data streamer (DS). A forward-mapping, six-degree-of-freedom reconstruction algorithm with zero-order interpolation (for a freehand 3D US system that uses a magnetic position sensor to track the US probe) runs in 11.8 msec (84.7 frames/sec) per 512x512 8-bit US image. For 3D visualization of the reconstructed 3D US data sets, we used volume rendering, in particular the shear-warp factorization with maximum intensity projection (MIP) rendering. 3D visualization is achieved in 53.6 msec (18.6 frames/sec) for a 128x128x128 8-bit volume and in 410.3 msec (2.4 frames/sec) for a 256x256x256 8-bit volume.
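
    As context for the forward-mapping reconstruction step, the minimal NumPy sketch below places each pixel of a tracked 2D frame into its nearest voxel (zero-order interpolation), assuming a 4x4 pose matrix from the position sensor. It illustrates the general technique, not the MAP-CA implementation, and all names are hypothetical.

        import numpy as np

        def insert_frame(volume, frame, pose, pixel_spacing, voxel_spacing):
            # volume        : (Z, Y, X) uint8 reconstruction grid
            # frame         : (H, W) uint8 B-mode image
            # pose          : 4x4 matrix mapping frame coords (mm) to volume coords (mm)
            # pixel_spacing : mm per pixel; voxel_spacing : mm per voxel
            h, w = frame.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            # Homogeneous pixel positions in the image plane (z = 0 in frame coords).
            pts = np.stack([u.ravel() * pixel_spacing,
                            v.ravel() * pixel_spacing,
                            np.zeros(u.size),
                            np.ones(u.size)])
            xyz = (pose @ pts)[:3] / voxel_spacing
            idx = np.round(xyz).astype(int)            # zero-order (nearest-voxel) interpolation
            zdim, ydim, xdim = volume.shape
            ok = ((idx >= 0).all(axis=0) &
                  (idx[0] < xdim) & (idx[1] < ydim) & (idx[2] < zdim))
            volume[idx[2, ok], idx[1, ok], idx[0, ok]] = frame.ravel()[ok]
            return volume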

  19. Tracking people and cars using 3D modeling and CCTV.

    PubMed

    Edelman, Gerda; Bijhold, Jurrien

    2010-10-10

    The aim of this study was to find a method for the reconstruction of movements of people and cars using CCTV footage and a 3D model of the environment. A procedure is proposed, in which video streams are synchronized and displayed in a 3D model, by using virtual cameras. People and cars are represented by cylinders and boxes, which are moved in the 3D model, according to their movements as shown in the video streams. The procedure was developed and tested in an experimental setup with test persons who logged their GPS coordinates as a recording of the ground truth. Results showed that it is possible to implement this procedure and to reconstruct movements of people and cars from video recordings. The procedure was also applied to a forensic case. In this work we found that the 3D model created more situational awareness, which made it easier to track people across multiple video streams. Based on the experience gained from the experimental setup and the case, recommendations are formulated for use in practice. PMID:20439141

  20. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but the technology to make a viable virtual reality system did not yet exist. Scientists had theories and ideas, and they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling that he is operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  1. Coverage Assessment and Target Tracking in 3D Domains

    PubMed Central

    Boudriga, Noureddine; Hamdi, Mohamed; Iyengar, Sitharama

    2011-01-01

    Recent advances in integrated electronic devices motivated the use of Wireless Sensor Networks (WSNs) in many applications including domain surveillance and mobile target tracking, where a number of sensors are scattered within a sensitive region to detect the presence of intruders and forward related events to some analysis center(s). Obviously, sensor deployment should guarantee an optimal event detection rate and should reduce coverage holes. Most of the coverage control approaches proposed in the literature deal with two-dimensional zones and do not develop strategies to handle coverage in three-dimensional domains, which is becoming a requirement for many applications including water monitoring, indoor surveillance, and projectile tracking. This paper proposes efficient techniques to detect coverage holes in a 3D domain using a finite set of sensors, repair the holes, and track hostile targets. To this end, we use the concepts of Voronoi tessellation, Vietoris complex, and retract by deformation. We show in particular that, through a set of iterative transformations of the Vietoris complex corresponding to the deployed sensors, the number of coverage holes can be computed with a low complexity. Mobility strategies are also proposed to repair holes by moving sensors appropriately towards the uncovered zones. The tracking objective is to set a non-uniform WSN coverage within the monitored domain to allow detecting the target(s) by the set of sensors. We show, in particular, how the proposed algorithms adapt to cope with obstacles. Simulation experiments are carried out to analyze the efficiency of the proposed models. To our knowledge, repairing and tracking are addressed for the first time in 3D spaces with different sensor coverage schemes. PMID:22163733
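
    The paper's hole-detection machinery (Voronoi tessellation and Vietoris complexes) is not reproduced here; as a much simpler, brute-force stand-in for the same notion, the sketch below estimates the uncovered fraction of a 3D box by Monte Carlo sampling against spherical sensing ranges. Names and parameters are hypothetical.

        import numpy as np

        def uncovered_fraction(sensors, sensing_radius, bounds, n_samples=20_000, seed=0):
            # Fraction of random sample points in the box not covered by any sensor ball.
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
            pts = rng.uniform(lo, hi, size=(n_samples, 3))
            d2 = ((pts[:, None, :] - np.asarray(sensors, float)[None, :, :]) ** 2).sum(axis=2)
            covered = (d2 <= sensing_radius ** 2).any(axis=1)
            return 1.0 - covered.mean()

        sensors = np.random.default_rng(1).uniform(0, 10, size=(40, 3))
        print(uncovered_fraction(sensors, 2.0, ([0, 0, 0], [10, 10, 10])))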

  2. 3-D model-based tracking for UAV indoor localization.

    PubMed

    Teulière, Céline; Marchand, Eric; Eck, Laurent

    2015-05-01

    This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of the standard model-based approach lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple hypotheses tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem, where the GPS signal is not available, we validate the algorithm on real image sequences from UAV flights. PMID:25099967

  3. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system.

    PubMed

    Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Because the patient is not immobilized, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D to 3D and 3D to orthogonal 2D slices registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5 s with an accuracy of 1.41 mm (r.m.s.) and 3.84 mm (max). The 3D to slices method yielded a success rate of 88.9% in 2.3 s with an accuracy of 1.37 mm (r.m.s.) and 4.3 mm (max). PMID:18044549
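
    The combination of a coarse global search over a low-dimensional motion model with local refinement can be illustrated with a short, generic sketch; this is not the authors' attribute-vector similarity measure, it assumes SciPy is available, and the similarity function, grid, and parameters are hypothetical placeholders.

        import numpy as np
        from itertools import product
        from scipy.optimize import minimize

        def register(similarity, param_grid):
            # Coarse global grid search over the motion-model parameters,
            # followed by gradient-free local refinement of the best candidate.
            best = max(product(*param_grid), key=lambda p: similarity(np.array(p)))
            res = minimize(lambda p: -similarity(p), np.array(best), method="Nelder-Mead")
            return res.x

        # Toy usage: recover a 3-parameter offset from a synthetic similarity peak.
        true_offset = np.array([2.0, -1.0, 0.5])
        sim = lambda p: float(np.exp(-np.sum((p - true_offset) ** 2)))
        grid = [np.linspace(-5, 5, 11)] * 3
        print(register(sim, grid))          # ~ [2.0, -1.0, 0.5]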

  4. Design and Performance Evaluation on Ultra-Wideband Time-Of-Arrival 3D Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Dusl, John

    2012-01-01

    A three-dimensional (3D) Ultra-Wideband (UWB) Time-of-Arrival (TOA) tracking system has been studied at NASA Johnson Space Center (JSC) to provide the tracking capability inside the International Space Station (ISS) modules for various applications. One application is to locate and report places where crew members experienced possibly high levels of carbon dioxide and felt unwell. Accurately locating those places in a multipath-intensive environment like the ISS modules requires a robust real-time location system (RTLS) that can provide the required accuracy and update rate. A 3D UWB TOA tracking system with two-way ranging has been proposed and studied. The designed system will be tested in the Wireless Habitat Testbed, which simulates the ISS module environment. In this presentation, we discuss the 3D TOA tracking algorithm and the performance evaluation based on different tracking baseline configurations. The simulation results show that two configurations of the tracking baseline are feasible. With a 100-picosecond standard deviation (STD) of the TOA estimates, an average tracking error of 0.2392 feet (about 7 centimeters) can be achieved for the "Twisted Rectangle" configuration, while an average tracking error of 0.9183 feet (about 28 centimeters) can be achieved for the "Slightly-Twisted Top Rectangle" configuration. The tracking accuracy can be further improved as the STD of the TOA estimates improves. With a 10-picosecond STD of the TOA estimates, an average tracking error of 0.0239 feet (less than 1 centimeter) can be achieved for the "Twisted Rectangle" configuration.
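
    TOA tracking ultimately reduces to estimating a 3D position from ranges (TOA times the speed of light) to known baseline anchors. The sketch below is a minimal Gauss-Newton least-squares solver for that step, with hypothetical names and a synthetic four-anchor example; it is not NASA's implementation.

        import numpy as np

        def locate_3d(anchors, ranges, x0=None, iters=20):
            # Estimate a 3D tag position from ranges to known anchors via Gauss-Newton.
            anchors = np.asarray(anchors, float)
            x = np.mean(anchors, axis=0) if x0 is None else np.asarray(x0, float)
            for _ in range(iters):
                diff = x - anchors                       # (M, 3)
                pred = np.linalg.norm(diff, axis=1)      # predicted ranges
                J = diff / pred[:, None]                 # Jacobian of range wrt position
                dx, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
                x = x + dx
                if np.linalg.norm(dx) < 1e-9:
                    break
            return x

        # Synthetic check: four-anchor baseline, noiseless ranges.
        anchors = np.array([[0, 0, 0], [3, 0, 0.2], [0, 3, 0.1], [3, 3, 2.0]])
        truth = np.array([1.2, 2.1, 0.9])
        ranges = np.linalg.norm(anchors - truth, axis=1)
        print(locate_3d(anchors, ranges))    # ~ [1.2, 2.1, 0.9]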

  5. 3D virtual colonoscopy with real-time volume rendering

    NASA Astrophysics Data System (ADS)

    Wan, Ming; Li, Wei J.; Kreeger, Kevin; Bitter, Ingmar; Kaufman, Arie E.; Liang, Zhengrong; Chen, Dongqing; Wax, Mark R.

    2000-04-01

    In our previous work, we developed a virtual colonoscopy system on a high-end 16-processor SGI Challenge with an expensive hardware graphics accelerator. The goal of this work is to port the system to a low cost PC in order to increase its availability for mass screening. Recently, Mitsubishi Electric has developed a volume-rendering PC board, called VolumePro, which includes 128 MB of RAM and a vg500 rendering chip. The vg500 chip, based on Cube-4 technology, can render a 256³ volume at 30 frames per second. High image quality of volume rendering inside the colon is guaranteed by the full lighting model and 3D interpolation supported by the vg500 chip. However, the VolumePro board is lacking some features required by our interactive colon navigation. First, VolumePro currently does not support perspective projection, which is paramount for interior colon navigation. Second, the patient colon data is usually much larger than 256³ and cannot be rendered in real-time. In this paper, we present our solutions to these problems, including simulated perspective projection and axis-aligned boxing techniques, and demonstrate the high performance of our virtual colonoscopy system on low cost PCs.

  6. 3D harmonic phase tracking with anatomical regularization.

    PubMed

    Zhou, Yitian; Bernard, Olivier; Saloux, Eric; Manrique, Alain; Allain, Pascal; Makram-Ebeid, Sherif; De Craene, Mathieu

    2015-12-01

    This paper presents a novel algorithm that extends HARP to handle 3D tagged MRI images. HARP results were regularized by an original regularization framework defined in an anatomical space of coordinates. In addition, myocardial incompressibility was incorporated in order to correct the radial strain, which is reported to be more challenging to recover. Both the tracking and regularization of LV displacements were done on a volumetric mesh to be computationally efficient. Also, a window-weighted regression method was extended to cardiac motion tracking, which helps maintain low complexity even at finer scales. On healthy volunteers, the tracking was found to be as accurate as the best candidates of a recent benchmark. Strain accuracy was evaluated on synthetic data, showing low bias and strain errors under 5% (excluding outliers) for longitudinal and circumferential strains, while the second and third quartiles of the radial strain errors are in the (-5%,5%) range. In clinical data, strain dispersion was shown to correlate with the extent of transmural fibrosis. Also, reduced deformation values were found inside infarcted segments. PMID:26363844

  7. 3D visualisation and analysis of single and coalescing tracks in Solid state Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, David; Gillmore, Gavin; Brown, Louise; Petford, Nick

    2010-05-01

    Exposure to radon gas (222Rn) and associated ionising decay products can cause lung cancer in humans (1). Solid state Nuclear Track Detectors (SSNTDs) can be used to monitor radon concentrations (2). Alpha particles from radon and its decay products form tracks in the detectors, and these tracks can be etched in order to enable 2D surface image analysis. We have previously shown that confocal microscopy can be used for 3D visualisation of etched SSNTDs (3). The aim of the study was to further investigate track angles and patterns in SSNTDs. A 'LEXT' confocal laser scanning microscope (Olympus Corporation, Japan) was used to acquire 3D image datasets of five CR-39 plastic SSNTDs. The resultant 3D visualisations were analysed by eye and inclination angles assessed on selected tracks. From visual assessment, single isolated tracks as well as coalescing tracks were observed on the etched detectors. In addition, varying track inclination angles were observed. Several different patterns of track formation were seen, such as single isolated and double coalescing tracks. The observed track angles of inclination may help to assess the angle at which alpha particles hit the detector. (1) Darby, S., et al. Radon in homes and risk of lung cancer: collaborative analysis of individual data from 13 European case-control studies. British Medical Journal 2005; 330: 223-226. (2) Phillips, P.S., Denman, A.R., Crockett, R.G.M., Gillmore, G., Groves-Kirkby, C.J., Woolridge, A. Comparative analysis of weekly vs. three-monthly radon measurements in dwellings. DEFRA Report No. DEFRA/RAS/03.006 (2004). (3) Wertheim, D., Gillmore, G., Brown, L., and Petford, N. A new method of imaging particle tracks in Solid State Nuclear Track Detectors. Journal of Microscopy 2010; 237: 1-6.

  8. Multisensor fusion for 3D target tracking using track-before-detect particle filter

    NASA Astrophysics Data System (ADS)

    Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.

    2015-05-01

    This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particle states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (the system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters, which are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
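
    To make the projection-and-weight idea concrete, the sketch below is a minimal, single-step track-before-detect particle filter: 3D constant-velocity particles are projected into each sensor's image with a 3x4 camera matrix and weighted by the raw pixel intensity there. The motion model, likelihood, and all names are simplified placeholders, not the paper's formulation.

        import numpy as np

        def pf_step(particles, weights, images, cameras, q_std=1.0, dt=1.0, rng=None):
            # particles: (N, 6) states [x, y, z, vx, vy, vz]; weights: (N,)
            # images: list of 2D intensity arrays; cameras: list of 3x4 projection matrices
            rng = np.random.default_rng() if rng is None else rng
            particles[:, :3] += particles[:, 3:] * dt                 # constant-velocity predict
            particles += rng.normal(0.0, q_std, particles.shape)      # process noise
            for img, P in zip(images, cameras):
                hom = np.c_[particles[:, :3], np.ones(len(particles))] @ P.T
                uv = (hom[:, :2] / hom[:, 2:3]).astype(int)
                h, w = img.shape
                inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
                lik = np.full(len(particles), 1e-6)                   # tiny floor off-image
                lik[inside] = img[uv[inside, 1], uv[inside, 0]] + 1e-6
                weights = weights * lik                               # joint multi-sensor likelihood
            weights = weights / weights.sum()
            if 1.0 / np.sum(weights ** 2) < len(particles) / 2:       # resample if ESS collapses
                idx = rng.choice(len(particles), len(particles), p=weights)
                particles = particles[idx]
                weights = np.full(len(particles), 1.0 / len(particles))
            return particles, weights, np.average(particles, axis=0, weights=weights)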

  9. Real-time face tracking

    NASA Astrophysics Data System (ADS)

    Liang, Yufeng; Wilder, Joseph

    1998-10-01

    A real-time face tracker is presented in this paper. The system has achieved 15 frames/second tracking using a Pentium 200 PC with a Datacube MaxPCI image processing board and a Panasonic RGB color camera. It tracks human faces in the camera's field of view while people move freely. A stochastic model characterizing the color distribution of human skin is used to segment the face and other skin areas from the background. Median filtering is then used to clean up the background noise. Geometric constraints are applied to the segmented image to extract the face from the background. To reduce computation and achieve real-time tracking, 1D projections (horizontal and vertical) of the image are analyzed instead of the 2D image. Run-length encoding and frequency domain analysis algorithms are used to separate faces from other skin-like blobs. The system is robust to illumination intensity variations and different skin colors. It can be applied to many human-computer interaction applications such as sound locating, lip-reading, gaze tracking and face recognition.
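
    As a rough sketch of the 1D-projection idea (not the paper's trained skin-color model or run-length analysis), the code below segments skin-like pixels with crude chromaticity thresholds and then locates a face box from the row and column sums of the binary mask. Thresholds and names are placeholders.

        import numpy as np

        def skin_mask(rgb):
            # Very crude skin segmentation on normalized r-g chromaticity;
            # a stand-in for the paper's stochastic skin-color model.
            rgb = rgb.astype(float)
            s = rgb.sum(axis=2) + 1e-6
            r, g = rgb[..., 0] / s, rgb[..., 1] / s
            return (r > 0.35) & (r < 0.55) & (g > 0.25) & (g < 0.37)

        def face_box_from_projections(mask, min_count=5):
            # Locate the face as the span where the 1D row/column projections of the
            # skin mask exceed a count threshold (cheaper than full 2D analysis).
            cols = mask.sum(axis=0)   # horizontal projection
            rows = mask.sum(axis=1)   # vertical projection
            xs = np.where(cols > min_count)[0]
            ys = np.where(rows > min_count)[0]
            if xs.size == 0 or ys.size == 0:
                return None
            return xs[0], ys[0], xs[-1], ys[-1]   # (x0, y0, x1, y1)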

  10. Holographic microscopy for 3D tracking of bacteria

    NASA Astrophysics Data System (ADS)

    Nadeau, Jay; Cho, Yong Bin; El-Kholy, Marwan; Bedrossian, Manuel; Rider, Stephanie; Lindensmith, Christian; Wallace, J. Kent

    2016-03-01

    Understanding when, how, and if bacteria swim is key to understanding critical ecological and biological processes, from carbon cycling to infection. Imaging motility by traditional light microscopy is limited by focus depth, requiring cells to be constrained in z. Holographic microscopy offers an instantaneous 3D snapshot of a large sample volume, and is therefore ideal in principle for quantifying unconstrained bacterial motility. However, resolving and tracking individual cells is difficult due to the low amplitude and phase contrast of the cells; the index of refraction of typical bacteria differs from that of water only at the second decimal place. In this work we present a combination of optical and sample-handling approaches to facilitating bacterial tracking by holographic phase imaging. The first is the design of the microscope: an off-axis design with the optics along a common path, which minimizes alignment issues while providing all of the advantages of off-axis holography. Second, we use anti-reflective coated etalon glass in the design of sample chambers, which reduces internal reflections. The improvement from the anti-reflective coating appears primarily in phase imaging, and its quantification is presented here. Finally, dyes may be used to increase phase contrast according to the Kramers-Kronig relations. Results using three test strains are presented, illustrating the different types of bacterial motility characterized by an enteric organism (Escherichia coli), an environmental organism (Bacillus subtilis), and a marine organism (Vibrio alginolyticus). Data processing steps to increase the quality of the phase images and facilitate tracking are also discussed.

  11. 3D whiteboard: collaborative sketching with 3D-tracked smart phones

    NASA Astrophysics Data System (ADS)

    Lue, James; Schulze, Jürgen P.

    2014-02-01

    We present the results of our investigation of the feasibility of a new approach for collaborative drawing in 3D, based on Android smart phones. Our approach utilizes a number of fiducial markers, placed in the working area where they can be seen by the smart phones' cameras, in order to estimate the pose of each phone in the room. Our prototype allows two users to draw 3D objects with their smart phones by moving their phones around in 3D space. For example, 3D lines are drawn by recording the path of the phone as it is moved around in 3D space, drawing line segments on the screen along the way. Each user can see the virtual drawing space on their smart phone's display, as if the display were a window into this space. Besides lines, our prototype application also supports 3D geometry creation, geometry transformation operations, and it shows the location of the other user's phone.

  12. 3D track initiation in clutter using 2D measurements

    NASA Astrophysics Data System (ADS)

    Lin, Lin; Kirubarajan, Thiagalingam; Bar-Shalom, Yaakov

    2001-11-01

    In this paper we present an algorithm for initiating 3-D tracks using range and azimuth (bearing) measurements from a 2-D radar on a moving platform. The work is motivated by the need to track possibly low-flying targets, e.g., cruise missiles, using reports from an aircraft-based surveillance radar. Previous work on this problem considered simple linear motion in a flat earth coordinate frame. Our research extends this to a more realistic scenario where the earth's curvature is also considered. The target is assumed to be moving along a great circle at a constant altitude. After the necessary coordinate transformations, the measurements are nonlinear functions of the target state and the observability of target altitude is severely limited. The observability, quantified by the Cramer-Rao Lower Bound (CRLB), is very sensitive to the sensor-to-target geometry. The paper presents a Maximum Likelihood (ML) estimator for estimating the target motion parameters in the Earth Centered Earth Fixed coordinate frame from 2-D range and angle measurements. In order to handle the possibility of false measurements and missed detections, which was not considered in earlier work, we use the Probabilistic Data Association (PDA) algorithm to weight the detections in a frame. The PDA-based modified global likelihood is optimized using a numerical search. The accuracies obtained by the resulting ML-PDA estimator are quantified using the CRLB for different sensor-target configurations. It is shown that the proposed estimator is efficient, that is, it meets the CRLB. Of particular interest is the achievable accuracy for estimating the target altitude, which is not observed directly by the 2-D radar, but can only be inferred from the range and bearing observations.

  13. 3D imaging of particle tracks in Solid State Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2009-04-01

    Inhalation of radon gas (222Rn) and associated ionizing decay products is known to cause lung cancer in humans. In the U.K., it has been suggested that 3 to 5% of total lung cancer deaths can be linked to elevated radon concentrations in the home and/or workplace. Radon monitoring in buildings is therefore routinely undertaken in areas of known risk. Indeed, some organisations, such as the Radon Council in the UK and the Environmental Protection Agency in the USA, advocate a 'to test is best' policy. Radon gas occurs naturally, emanating from the decay of 238U in rock and soils. Its concentration can be measured using CR-39 plastic detectors, which conventionally are assessed by 2D image analysis of the surface; however, there can be some variation in outcomes / readings even in closely spaced detectors. A number of radon measurement methods are currently in use (for example, activated carbon and electrets) but the most widely used are CR-39 solid state nuclear track-etch detectors (SSNTDs). In this technique, heavily ionizing alpha particles leave tracks in the form of radiation damage (via interaction between alpha particles and the atoms making up the CR-39 polymer). 3D imaging of the tracks has the potential to provide information relating to the angle and energy of alpha particles, but this could be time consuming. Here we describe a new method for rapid high resolution 3D imaging of SSNTDs. A 'LEXT' OLS3100 confocal laser scanning microscope was used in confocal mode to successfully obtain 3D image data on four CR-39 plastic detectors. 3D visualisation and image analysis enabled characterisation of track features. This method may provide a means of rapid and detailed 3D analysis of SSNTDs. Keywords: Radon; SSNTDs; confocal laser scanning microscope; 3D imaging; LEXT

  14. Ion track reconstruction in 3D using alumina-based fluorescent nuclear track detectors.

    PubMed

    Niklas, M; Bartz, J A; Akselrod, M S; Abollahi, A; Jäkel, O; Greilich, S

    2013-09-21

    Fluorescent nuclear track detectors (FNTDs) based on Al2O3:C,Mg single crystals combined with confocal microscopy provide 3D information on ion tracks with a resolution only limited by light diffraction. FNTDs are also ideal substrates to be coated with cells to engineer cell-fluorescent ion track hybrid detectors (Cell-Fit-HD). This radiobiological tool enables a novel platform linking cell responses to physical dose deposition on a sub-cellular level in proton and heavy ion therapies. To achieve spatial correlation between single ion hits in the cell coating and its biological response, the ion traversals have to be reconstructed in 3D using the depth information gained by the FNTD read-out. FNTDs were coated with a confluent human lung adenocarcinoma epithelial (A549) cell layer. Carbon ion irradiation of the hybrid detector was performed perpendicular and angular to the detector surface. In situ imaging of the fluorescently labeled cell layer and the FNTD was performed in a sequential read-out. Making use of the trajectory information provided by the FNTD, the accuracy of 3D track reconstruction of single particles traversing the hybrid detector was studied. The accuracy is strongly influenced by the irradiation angle and therefore by the complexity of the FNTD signal. Perpendicular irradiation results in the highest accuracy, with an error smaller than 0.10°. The ability of FNTD technology to provide accurate 3D ion track reconstruction makes it a powerful tool for radiobiological investigations in clinical ion beams, either being used as a substrate to be coated with living tissue or being implanted in vivo. PMID:23965401

  15. Ion track reconstruction in 3D using alumina-based fluorescent nuclear track detectors

    NASA Astrophysics Data System (ADS)

    Niklas, M.; Bartz, J. A.; Akselrod, M. S.; Abollahi, A.; Jäkel, O.; Greilich, S.

    2013-09-01

    Fluorescent nuclear track detectors (FNTDs) based on Al2O3:C,Mg single crystals combined with confocal microscopy provide 3D information on ion tracks with a resolution only limited by light diffraction. FNTDs are also ideal substrates to be coated with cells to engineer cell-fluorescent ion track hybrid detectors (Cell-Fit-HD). This radiobiological tool enables a novel platform linking cell responses to physical dose deposition on a sub-cellular level in proton and heavy ion therapies. To achieve spatial correlation between single ion hits in the cell coating and its biological response, the ion traversals have to be reconstructed in 3D using the depth information gained by the FNTD read-out. FNTDs were coated with a confluent human lung adenocarcinoma epithelial (A549) cell layer. Carbon ion irradiation of the hybrid detector was performed perpendicular and angular to the detector surface. In situ imaging of the fluorescently labeled cell layer and the FNTD was performed in a sequential read-out. Making use of the trajectory information provided by the FNTD, the accuracy of 3D track reconstruction of single particles traversing the hybrid detector was studied. The accuracy is strongly influenced by the irradiation angle and therefore by the complexity of the FNTD signal. Perpendicular irradiation results in the highest accuracy, with an error smaller than 0.10°. The ability of FNTD technology to provide accurate 3D ion track reconstruction makes it a powerful tool for radiobiological investigations in clinical ion beams, either being used as a substrate to be coated with living tissue or being implanted in vivo.

  16. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention to synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we will present our system architecture and component designs, hardware/software implementations, and experimental results. We will elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  17. Oblique needle segmentation and tracking for 3D TRUS guided prostate brachytherapy

    SciTech Connect

    Wei Zhouping; Gardi, Lori; Downey, Donal B.; Fenster, Aaron

    2005-09-15

    An algorithm was developed to segment and track brachytherapy needles inserted along oblique trajectories. Three-dimensional (3D) transrectal ultrasound (TRUS) images of a rigid rod simulating the needle inserted into tissue-mimicking agar and chicken breast phantoms were obtained to test the accuracy of the algorithm under ideal conditions. Because the robot possesses high positioning and angulation accuracies, we used the robot as a "gold standard" and compared the results of algorithm segmentation to the values measured by the robot. Our testing results showed that the accuracy of the needle segmentation algorithm depends on the needle insertion distance into the 3D TRUS image and the angulation with respect to the TRUS transducer; e.g., at a 10 deg. insertion angulation in agar phantoms, the error of the algorithm in determining the needle tip position was less than 1 mm when the insertion distance was greater than 15 mm. Near real-time needle tracking was achieved by scanning a small volume containing the needle. Our tests also showed that the segmentation time was less than 60 ms, and the scanning time was less than 1.2 s, when the insertion distance into the 3D TRUS image was less than 55 mm. In our needle tracking tests in chicken breast phantoms, the errors in determining the needle orientation were less than 2 deg. in robot yaw and 0.7 deg. in robot pitch orientations, for up to 20 deg. needle insertion angles with the TRUS transducer in the horizontal plane when the needle insertion distance was greater than 15 mm.

  18. LayTracks3D: A new approach for meshing general solids using medial axis transform

    SciTech Connect

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high-quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desirable meshes with near-cube-shaped elements at the boundary, a structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  19. Real-Time Camera Guidance for 3d Scene Reconstruction

    NASA Astrophysics Data System (ADS)

    Schindler, F.; Förstner, W.

    2012-07-01

    We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots as well as online flight planning of unmanned aerial vehicles.

  20. Study of a viewer tracking system with multiview 3D display

    NASA Astrophysics Data System (ADS)

    Yang, Jinn-Cherng; Wu, Chang-Shuo; Hsiao, Chuan-Heng; Yang, Ming-Chieh; Liu, Wen-Chieh; Hung, Yi-Ping

    2008-02-01

    An autostereoscopic display provides users great enjoyment of stereo visualization without the uncomfortable and inconvenient drawbacks of wearing stereo glasses. However, bandwidth constraints of current multi-view 3D displays severely restrict the number of views that can be simultaneously displayed without degrading resolution or increasing display cost unacceptably. An alternative to presenting multiple views is to measure the observer's position with a viewer-tracking sensor; this viewer-tracking component is essential for fluently rendering and accurately projecting the stereo video. In order to render stereo content with respect to the user's view point and to optically project the content onto the left and right eyes of the user accurately, a real-time viewer-tracking technique that allows the user to move around freely while watching the autostereoscopic display is developed in this study. It comprises face detection using multiple eigenspaces for various lighting conditions and fast block matching for tracking four motion parameters of the user's face region. The Edge Orientation Histogram (EOH) with Real AdaBoost is also applied to improve the performance of the original AdaBoost algorithm. The AdaBoost algorithm with Haar features from Intel's OpenCV library is used to detect human faces, and detection accuracy is enhanced by rotating the image. The frame rate of the viewer-tracking process can reach up to 15 Hz. Since the performance of the viewer-tracking autostereoscopic display is still affected by varying environmental conditions, the accuracy, robustness and efficiency of the viewer-tracking system are evaluated in this study.
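
    The abstract names OpenCV's Haar-cascade (AdaBoost) face detector as the detection building block. A minimal usage sketch is given below, assuming the opencv-python package, which ships the cascade files under cv2.data.haarcascades; the webcam index and window handling are illustrative only.

        import cv2

        # Haar-cascade frontal-face detector bundled with OpenCV.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        cap = cv2.VideoCapture(0)           # webcam stands in for the display's tracking camera
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in faces:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imshow("viewer tracking", frame)
            if cv2.waitKey(1) == 27:        # Esc to quit
                break
        cap.release()
        cv2.destroyAllWindows()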

  1. A 3D feature point tracking method for ion radiation.

    PubMed

    Kouwenberg, Jasper J M; Ulrich, Leonie; Jäkel, Oliver; Greilich, Steffen

    2016-06-01

    A robust and computationally efficient algorithm for automated tracking of high densities of particles travelling in (semi-) straight lines is presented. It extends the implementation of (Sbalzarini and Koumoutsakos 2005) and is intended for use in the analysis of single ion track detectors. By including information of existing tracks in the exclusion criteria and a recursive cost minimization function, the algorithm is robust to variations on the measured particle tracks. A trajectory relinking algorithm was included to resolve the crossing of tracks in high particle density images. Validation of the algorithm was performed using fluorescent nuclear track detectors (FNTD) irradiated with high- and low (heavy) ion fluences and showed less than 1% faulty trajectories in the latter. PMID:27163162
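
    The abstract summarizes the linker at a high level; as a much-simplified illustration of frame-to-frame trajectory building (a greedy nearest-neighbour gate, without the published method's direction information or recursive cost minimization), a sketch with hypothetical names follows.

        import numpy as np

        def link_frames(tracks, detections, max_dist=5.0):
            # tracks: list of trajectories, each a list of (x, y) points
            # detections: (M, 2) points found in the next frame (z-slice)
            detections = np.asarray(detections, float)
            if detections.size == 0:
                return tracks
            claimed = np.zeros(len(detections), dtype=bool)
            for tr in tracks:
                d = np.linalg.norm(detections - np.asarray(tr[-1]), axis=1)
                d[claimed] = np.inf                        # one detection per track
                j = int(np.argmin(d))
                if d[j] <= max_dist:
                    tr.append(tuple(detections[j]))
                    claimed[j] = True
            tracks.extend([[tuple(p)] for p in detections[~claimed]])  # start new tracks
            return tracks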

  2. A 3D feature point tracking method for ion radiation

    NASA Astrophysics Data System (ADS)

    Kouwenberg, Jasper J. M.; Ulrich, Leonie; Jäkel, Oliver; Greilich, Steffen

    2016-06-01

    A robust and computationally efficient algorithm for automated tracking of high densities of particles travelling in (semi-) straight lines is presented. It extends the implementation of (Sbalzarini and Koumoutsakos 2005) and is intended for use in the analysis of single ion track detectors. By including information of existing tracks in the exclusion criteria and a recursive cost minimization function, the algorithm is robust to variations on the measured particle tracks. A trajectory relinking algorithm was included to resolve the crossing of tracks in high particle density images. Validation of the algorithm was performed using fluorescent nuclear track detectors (FNTD) irradiated with high- and low (heavy) ion fluences and showed less than 1% faulty trajectories in the latter.

  3. 3D-Net: the development of a new real-time photogrammetric system

    NASA Astrophysics Data System (ADS)

    Clarke, Timothy A.; Gooch, R. M.; Ariyawansa, Dambakumbure D.; Wang, Xinchi

    1997-07-01

    There are three essential requirements for real-time 3D measurement using targeted points: fast 2D image processing; a fast solution to the correspondence problem; and fast computation of 3D coordinates. This paper brings together research work to produce such solutions and considers other work which has appeared during the project duration.

  4. A 3D diamond detector for particle tracking

    NASA Astrophysics Data System (ADS)

    Artuso, M.; Bachmair, F.; Bäni, L.; Bartosik, M.; Beacham, J.; Bellini, V.; Belyaev, V.; Bentele, B.; Berdermann, E.; Bergonzo, P.; Bes, A.; Brom, J.-M.; Bruzzi, M.; Cerv, M.; Chau, C.; Chiodini, G.; Chren, D.; Cindro, V.; Claus, G.; Collot, J.; Costa, S.; Cumalat, J.; Dabrowski, A.; D`Alessandro, R.; de Boer, W.; Dehning, B.; Dobos, D.; Dünser, M.; Eremin, V.; Eusebi, R.; Forcolin, G.; Forneris, J.; Frais-Kölbl, H.; Gan, K. K.; Gastal, M.; Goffe, M.; Goldstein, J.; Golubev, A.; Gonella, L.; Gorišek, A.; Graber, L.; Grigoriev, E.; Grosse-Knetter, J.; Gui, B.; Guthoff, M.; Haughton, I.; Hidas, D.; Hits, D.; Hoeferkamp, M.; Hofmann, T.; Hosslet, J.; Hostachy, J.-Y.; Hügging, F.; Jansen, H.; Janssen, J.; Kagan, H.; Kanxheri, K.; Kasieczka, G.; Kass, R.; Kassel, F.; Kis, M.; Kramberger, G.; Kuleshov, S.; Lacoste, A.; Lagomarsino, S.; Lo Giudice, A.; Maazouzi, C.; Mandic, I.; Mathieu, C.; McFadden, N.; McGoldrick, G.; Menichelli, M.; Mikuž, M.; Morozzi, A.; Moss, J.; Mountain, R.; Murphy, S.; Oh, A.; Olivero, P.; Parrini, G.; Passeri, D.; Pauluzzi, M.; Pernegger, H.; Perrino, R.; Picollo, F.; Pomorski, M.; Potenza, R.; Quadt, A.; Re, A.; Riley, G.; Roe, S.; Sapinski, M.; Scaringella, M.; Schnetzer, S.; Schreiner, T.; Sciortino, S.; Scorzoni, A.; Seidel, S.; Servoli, L.; Sfyrla, A.; Shimchuk, G.; Smith, D. S.; Sopko, B.; Sopko, V.; Spagnolo, S.; Spanier, S.; Stenson, K.; Stone, R.; Sutera, C.; Taylor, A.; Traeger, M.; Tromson, D.; Trischuk, W.; Tuve, C.; Uplegger, L.; Velthuis, J.; Venturi, N.; Vittone, E.; Wagner, S.; Wallny, R.; Wang, J. C.; Weilhammer, P.; Weingarten, J.; Weiss, C.; Wengler, T.; Wermes, N.; Yamouni, M.; Zavrtanik, M.

    2016-07-01

    In the present study, results towards the development of a 3D diamond sensor are presented. Conductive channels are produced inside the sensor bulk using a femtosecond laser. This electrode geometry allows full charge collection even for low quality diamond sensors. Results from testbeam show that charge is collected by these electrodes. In order to understand the channel growth parameters, with the goal of producing low resistivity channels, the conductive channels produced with a different laser setup are evaluated by Raman spectroscopy.

  5. An optical real-time 3D measurement for analysis of facial shape and movement

    NASA Astrophysics Data System (ADS)

    Zhang, Qican; Su, Xianyu; Chen, Wenjing; Cao, Yiping; Xiang, Liqun

    2003-12-01

    Optical non-contact 3-D shape measurement provides a novel and useful tool for the analysis of facial shape and movement in regular presurgical and postsurgical checks. In this article we present a system which allows a precise 3-D visualization of the patient's face before and after craniofacial surgery. We discuss, in this paper, real-time 3-D image capture and processing, and the 3-D phase unwrapping method used to recover complex shape deformation during movement of the mouth. The results of real-time measurement of facial shape and movement will be helpful in achieving better outcomes in plastic surgery.

  6. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework could track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  7. 3-D tracking in a miniature time projection chamber

    NASA Astrophysics Data System (ADS)

    Vahsen, S. E.; Hedges, M. T.; Jaegle, I.; Ross, S. J.; Seong, I. S.; Thorpe, T. N.; Yamaoka, J.; Kadyk, J. A.; Garcia-Sciveres, M.

    2015-07-01

    The three-dimensional (3-D) detection of millimeter-scale ionization trails is of interest for detecting nuclear recoils in directional fast neutron detectors and in direction-sensitive searches for weakly interacting massive particles (WIMPs), which may constitute the Dark Matter of the universe. We report on performance characterization of a miniature gas target Time Projection Chamber (TPC) where the drift charge is avalanche-multiplied with Gas Electron Multipliers (GEMs) and detected with the ATLAS FE-I3 Pixel Application Specific Integrated Circuit (ASIC). We report on measurements of gain, gain resolution, point resolution, diffusion, angular resolution, and energy resolution with low-energy X-rays, cosmic rays, and alpha particles, using the gases Ar:CO2 (70:30) and He:CO2 (70:30) at atmospheric pressure. We discuss the implications for future, larger directional neutron and Dark Matter detectors. With an eye to designing and selecting components for these, we generalize our results into analytical expressions for detector performance whenever possible. We conclude by demonstrating the 3-D directional detection of a fast neutron source.

  8. THE THOMSON SURFACE. III. TRACKING FEATURES IN 3D

    SciTech Connect

    Howard, T. A.; DeForest, C. E.; Tappin, S. J.; Odstrcil, D.

    2013-03-01

    In this, the final installment in a three-part series on the Thomson surface, we present simulated observations of coronal mass ejections (CMEs) observed by a hypothetical polarizing white light heliospheric imager. Thomson scattering yields a polarization signal that can be exploited to locate observed features in three dimensions relative to the Thomson surface. We consider how the appearance of the CME changes with the direction of trajectory, using simulations of a simple geometrical shape and also of a more realistic CME generated using the ENLIL model. We compare the appearance in both unpolarized B and polarized pB light, and show that there is a quantifiable difference in the measured brightness of a CME between unpolarized and polarized observations. We demonstrate a technique for using this difference to extract the three-dimensional (3D) trajectory of large objects such as CMEs. We conclude with a discussion on how a polarizing heliospheric imager could be used to extract 3D trajectory information about CMEs or other observed features.

  9. Preparation and 3D Tracking of Catalytic Swimming Devices.

    PubMed

    Campbell, Andrew; Archer, Richard; Ebbens, Stephen

    2016-01-01

    We report a method to prepare catalytically active Janus colloids that "swim" in fluids and describe how to determine their 3D motion using fluorescence microscopy. One commonly deployed way for catalytically active colloids to produce enhanced motion is via an asymmetrical distribution of catalyst. Here this is achieved by spin coating a dispersed layer of fluorescent polymeric colloids onto a flat planar substrate, and then using directional platinum vapor deposition to half coat the exposed colloid surface, making a two-faced "Janus" structure. The Janus colloids are then re-suspended from the planar substrate into an aqueous solution containing hydrogen peroxide. Hydrogen peroxide serves as a fuel that the platinum catalyst decomposes into water and oxygen, but only on one side of the colloid. The asymmetry results in gradients that produce enhanced motion, or "swimming". A fluorescence microscope, together with a video camera, is used to record the motion of individual colloids. The center of the fluorescent emission is found using image analysis to provide an x and y coordinate for each frame of the video. While keeping the microscope focal position fixed, the fluorescence emission from the colloid produces a characteristic concentric ring pattern, which is subject to image analysis to determine the particle's relative z position. In this way 3D trajectories for the swimming colloid are obtained, allowing the swimming velocity to be accurately measured, and physical phenomena such as gravitaxis, which may bias the colloid's motion, to be detected. PMID:27404327

  10. Tensor3D: A computer graphics program to simulate 3D real-time deformation and visualization of geometric bodies

    NASA Astrophysics Data System (ADS)

    Pallozzi Lavorante, Luca; Dirk Ebert, Hans

    2008-07-01

    Tensor3D is a geometric modeling program with the capacity to simulate and visualize in real time the deformation, specified through a tensor matrix and applied to triangulated models representing geological bodies. 3D visualization allows the study of deformational processes that are traditionally conducted in 2D, such as simple and pure shears. Besides geometric objects that are immediately available in the program window, the program can read other models from disk, thus being able to import objects created with different open-source or proprietary programs. A strain ellipsoid and a bounding box are simultaneously shown and instantly deformed with the main object. The principal axes of strain are visualized as well to provide graphical information about the orientation of the tensor's normal components. The deformed models can also be saved, retrieved later and deformed again, in order to study different steps of progressive strain, or to make this data available to other programs. The shape of stress ellipsoids and the corresponding Mohr circles defined by any stress tensor can also be represented. The application was written using the Visualization ToolKit, a powerful scientific visualization library in the public domain. This development choice, allied to the use of the Tcl/Tk programming language, which is independent of the host computational platform, makes the program a useful tool for the study of geometric deformations directly in three dimensions in teaching as well as research activities.
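
    The core operation the program visualizes, applying a constant deformation-gradient tensor to every vertex of a triangulated body and deriving the strain ellipsoid, fits in a few lines of NumPy. The sketch below uses a simple-shear example; it is an illustration of the mathematics, not Tensor3D's code.

        import numpy as np

        def deform(vertices, F):
            # vertices : (N, 3) vertex positions; F : 3x3 deformation-gradient tensor
            return vertices @ np.asarray(F).T

        # Simple shear in the x-z plane with shear strain gamma = 0.5.
        gamma = 0.5
        F = np.array([[1.0, 0.0, gamma],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
        cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
        print(deform(cube, F))

        # Semi-axes of the strain ellipsoid are the square roots of the eigenvalues
        # of the left Cauchy-Green tensor B = F F^T (i.e. the singular values of F).
        eigvals, eigvecs = np.linalg.eigh(F @ F.T)
        print(np.sqrt(eigvals))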

  11. Study on basic problems in real-time 3D holographic display

    NASA Astrophysics Data System (ADS)

    Jia, Jia; Liu, Juan; Wang, Yongtian; Pan, Yijie; Li, Xin

    2013-05-01

    In recent years, real-time three-dimensional (3D) holographic display has attracted more and more attention. Since a holographic display can entirely reconstruct the wavefront of an actual 3D scene, it can provide all the depth cues for observation and perception by the human eye, and it is believed to be the most promising technology for future 3D display. However, there are several unsolved basic problems in realizing large-size real-time 3D holographic display with a wide field of view. For example, commercial pixelated spatial light modulators (SLMs) always lead to zero-order intensity distortion; 3D holographic display needs a huge number of sampling points for the actual objects or scenes, resulting in enormous computational time; the size and the viewing zone of the reconstructed 3D optical image are limited by the space-bandwidth product of the SLM; noise from the coherent light source as well as from the system severely degrades the quality of the 3D image; and so on. Our work is focused on these basic problems, and some initial results are presented, including a technique derived theoretically and verified experimentally to eliminate the zero-order beam caused by a pixelated phase-only SLM; a method to enlarge the reconstructed 3D image and shorten the reconstruction distance using a concave reflecting mirror; and several algorithms to speed up the calculation of computer generated holograms (CGHs) for the display.

  12. FPGA-based real-time anisotropic diffusion filtering of 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Castro-Pareja, Carlos R.; Dandekar, Omkar S.; Shekhar, Raj

    2005-02-01

    Three-dimensional ultrasonic imaging, especially the emerging real-time version of it, is particularly valuable in medical applications such as echocardiography, obstetrics and surgical navigation. A known problem with ultrasound images is their high level of speckle noise. Anisotropic diffusion filtering has been shown to be effective in enhancing the visual quality of 3D ultrasound images and as preprocessing prior to advanced image processing. However, due to its arithmetic complexity and the sheer size of 3D ultrasound images, it is not possible to perform online, real-time anisotropic diffusion filtering using standard software implementations. We present an FPGA-based architecture that allows performing anisotropic diffusion filtering of 3D images at acquisition rates, thus enabling the use of this filtering technique in real-time applications, such as visualization, registration and volume rendering.
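
    As background for the filtering workload the FPGA accelerates, the sketch below is a plain NumPy version of Perona-Malik anisotropic diffusion on a 3D volume; it is not the authors' hardware architecture, and the parameter values and the periodic (np.roll) boundary handling are simplifications.

        import numpy as np

        def anisotropic_diffusion_3d(vol, n_iter=10, kappa=30.0, dt=1.0 / 6.0):
            # Perona-Malik diffusion: the conduction term g = exp(-(|grad|/kappa)^2)
            # smooths speckle while preserving strong edges.
            u = vol.astype(float)
            for _ in range(n_iter):
                total = np.zeros_like(u)
                for axis in range(3):
                    fwd = np.roll(u, -1, axis=axis) - u   # forward difference (wraps at edges)
                    bwd = np.roll(u, 1, axis=axis) - u    # backward difference
                    total += np.exp(-(fwd / kappa) ** 2) * fwd
                    total += np.exp(-(bwd / kappa) ** 2) * bwd
                u += dt * total
            return u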

  13. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here concerns multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated contemporary views of the same object can simultaneously be seen on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactic animations and movies have been realized as well.

  14. Tracking 3-D body motion for docking and robot control

    NASA Technical Reports Server (NTRS)

    Donath, M.; Sorensen, B.; Yang, G. B.; Starr, R.

    1987-01-01

    An advanced method of tracking the three-dimensional motion of bodies has been developed. This system has the potential to dynamically characterize machine and other structural motion, even in the presence of structural flexibility, thus facilitating closed-loop structural motion control. The system's operation is based on the concept that the intersection of three planes defines a point. Three rotating planes of laser light, fixed and moving photovoltaic diode targets, and a pipelined architecture of analog and digital electronics are used to locate multiple targets whose number is limited only by available computer memory. Data collection rates are a function of the laser scan rotation speed and are currently selectable up to 480 Hz. The tested performance of a preliminary prototype designed for 0.1 in. (2.54 mm) accuracy (for tracking human motion) at a 480 Hz data rate includes a worst-case resolution of 0.8 mm (0.03 inches), a repeatability of plus or minus 0.635 mm (plus or minus 0.025 inches), and an absolute accuracy of plus or minus 2.0 mm (plus or minus 0.08 inches) within an eight-cubic-meter volume, with all results applicable at the 95 percent level of confidence along each coordinate region. The full six degrees of freedom of a body can be computed by attaching three or more target detectors to the body of interest.
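
    The geometric core stated in the abstract, that the intersection of three planes defines a point, reduces to a 3x3 linear solve once each scanned laser plane is written as n·x = d. The sketch below shows that solve with a synthetic example; the names are hypothetical, and in the real system the plane parameters would come from the calibrated scanner geometry.

        import numpy as np

        def intersect_planes(normals, offsets):
            # Solve n_i . x = d_i for the single point common to three planes.
            # normals : (3, 3), one plane normal per row; offsets : (3,) offsets d_i
            return np.linalg.solve(np.asarray(normals, float), np.asarray(offsets, float))

        # Example: three mutually tilted planes meeting at (1, 2, 3).
        target = np.array([1.0, 2.0, 3.0])
        normals = np.array([[1.0, 0.0, 0.1],
                            [0.0, 1.0, -0.2],
                            [0.3, 0.1, 1.0]])
        offsets = normals @ target
        print(intersect_planes(normals, offsets))   # -> [1. 2. 3.]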

  15. Modeling cell migration on filamentous tracks in 3D

    NASA Astrophysics Data System (ADS)

    Schwarz, J. M.

    2014-03-01

    Cell motility is integral to a number of physiological processes ranging from wound healing to immune response to cancer metastasis. Many studies of cell migration, both experimental and theoretical, have addressed various aspects of it in two dimensions, including protrusion and retraction at the level of single cells. However, the in vivo environment for a crawling cell is typically a three-dimensional environment, consisting of the extracellular matrix (ECM) and surrounding cells. Recent experiments demonstrate that some cells crawling along fibers of the ECM mimic the geometry of the fibers to become long and thin, as opposed to fan-like in two dimensions, and can remodel the ECM. Inspired by these experiments, a model cell consisting of beads and springs that moves along a tense semiflexible filamentous track is constructed and studied, paying particular attention to the mechanical feedback between the model cell and the track, as mediated by the active myosin-driven contractility and the catch/slip bond behavior of the focal adhesions, as the model cell crawls. This simple construction can then be scaled up to a model cell moving along a three-dimensional filamentous network, with a prescribed microenvironment, in order to make predictions for proposed experiments.

  16. Track of Right-Wheel Drag (3-D)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [Figures 1 and 2 removed for brevity; see original site]

    This 360-degree stereo panorama combines several frames taken by the navigation camera on NASA's Mars Exploration Rover Spirit during the rover's 313th martian day (Nov. 19, 2004). The site, labeled Spirit site 93, is in the 'Columbia Hills' inside Gusev Crater. The rover tracks point westward. Spirit had driven eastward, in reverse and dragging its right front wheel, for about 30 meters (100 feet) on the day the picture was taken. Driving backwards while dragging that wheel is a precautionary strategy to extend the usefulness of the wheel for when it is most needed, because it has developed more friction than the other wheels. The right-hand track in this look backwards shows how the dragging disturbed the soil. This view is presented in a cylindrical-perspective projection with geometric seam correction.

    Figure 1 is the left-eye view of a stereo pair and Figure 2 is the right-eye view of a stereo pair.

  17. Processing 3D flash LADAR point-clouds in real-time for flight applications

    NASA Astrophysics Data System (ADS)

    Craig, R.; Gravseth, I.; Earhart, R. P.; Bladt, J.; Barnhill, S.; Ruppert, L.; Centamore, C.

    2007-04-01

    Ball Aerospace & Technologies Corp. has demonstrated real-time processing of 3D imaging LADAR point-cloud data to produce the industry's first time-of-flight (TOF) 3D video capability. This capability is uniquely suited to the rigorous demands of space and airborne flight applications and holds great promise in the area of autonomous navigation. It will provide long-range, three-dimensional video information to autonomous flight software or pilots for immediate use in rendezvous and docking, proximity operations, landing, surface vision systems, and automatic target recognition and tracking. This is enabled by our new generation of FPGA-based "pixel-tube" processors, coprocessors and their associated algorithms, which have led to a number of advancements in high-speed wavefront processing along with additional advances in dynamic camera control and space laser designs based on Ball's CALIPSO LIDAR. This evolution in LADAR is made possible by moving the mechanical complexity required for a scanning system into the electronics, where production, integration, testing and life-cycle costs can be significantly reduced. This technique requires a state-of-the-art TOF read-out integrated circuit (ROIC) attached to a sensor array to collect high-resolution temporal data, which is then processed through FPGAs. The number of calculations required to process the data is greatly reduced by the fact that all points are captured at the same time and thus correlated. This correlation allows extremely efficient FPGA processing. This capability has been demonstrated in prototype form at both Marshall Space Flight Center and Langley Research Center on targets that represent docking and landing scenarios. This report outlines many aspects of this work, as well as our recent testing at Marshall's Flight Robotics Laboratory.

  18. CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor

    2004-05-01

    Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and in conjunction with other modalities like CT or MRI. Three-dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, the current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo, exist today. Although these systems have been found to be useful in real clinical settings, it is difficult to augment their functionality and integrate them into versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool for the image-guided surgery community. Other researchers could straightforwardly combine the generic CISUS system with other functionalities (e.g., dual-view visualization, registration, real-time tracking, segmentation, etc.) to rapidly create their own medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is being fully integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.

  19. Ultra-Wideband Time-Difference-of-Arrival High Resolution 3D Proximity Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Dekome, Kent; Dusl, John

    2010-01-01

    This paper describes a research and development effort for a prototype ultra-wideband (UWB) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being studied for use in tracking of lunar/Mars rovers and astronauts during early exploration missions when satellite navigation systems are not available. UWB impulse radio (UWB-IR) technology is exploited in the design and implementation of the prototype location and tracking system. A three-dimensional (3D) proximity tracking prototype design using commercially available UWB products is proposed to implement the Time-Difference-Of-Arrival (TDOA) tracking methodology in this research effort. The TDOA tracking algorithm is utilized for location estimation in the prototype system, not only to exploit the precise time resolution possible with UWB signals, but also to eliminate the need for synchronization between the transmitter and the receiver. Simulations show that the TDOA algorithm can achieve fine tracking resolution with low-noise TDOA estimates for close-in tracking. Field tests demonstrated that this prototype UWB TDOA High Resolution 3D Proximity Tracking System is feasible for providing positioning-awareness information in a 3D space to a robotic control system. This 3D tracking system was developed for a robotic control system in a facility called "Moonyard" at Honeywell Defense & System in Arizona under a Space Act Agreement.
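    As background to the TDOA methodology mentioned above, the sketch below estimates an emitter position from time differences of arrival by nonlinear least squares. The receiver geometry, emitter position and noise-free measurements are hypothetical, and the code is a generic illustration rather than the JSC prototype's algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299792458.0  # propagation speed (m/s); speed of light for UWB radio

def tdoa_residuals(pos, receivers, tdoas, ref=0):
    """Residuals (in metres) between the range differences implied by the
    measured TDOAs and those predicted for a candidate emitter position."""
    dists = np.linalg.norm(receivers - pos, axis=1)
    predicted_rd = dists - dists[ref]      # predicted range differences
    measured_rd = C * tdoas                # measured range differences
    return predicted_rd - measured_rd

# hypothetical receiver geometry (metres) and a true emitter position
receivers = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
true_pos = np.array([3.0, 4.0, 2.0])
d = np.linalg.norm(receivers - true_pos, axis=1)
tdoas = (d - d[0]) / C                     # noise-free TDOAs for the demo

est = least_squares(tdoa_residuals, x0=np.array([1.0, 1.0, 1.0]),
                    args=(receivers, tdoas))
print(est.x)                               # should be close to true_pos
```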

  20. A new 3D tracking method exploiting the capabilities of digital holography in microscopy

    NASA Astrophysics Data System (ADS)

    Miccio, L.; Memmolo, P.; Merola, F.; Fusco, S.; Embrione, V.; Netti, P. A.; Ferraro, P.

    2013-04-01

    A method for 3D tracking has been developed exploiting Digital Holographic Microscopy (DHM) features. In the framework of a self-consistent platform for manipulation and measurement of biological specimens, we use DHM for quantitative and completely label-free analysis of specimens with low amplitude contrast. The tracking capability extends the potential of DHM, allowing the motion of appropriate probes to be monitored and correlated with sample properties. Complete 3D tracking has been obtained for the probes while avoiding the amplitude-refocusing issue of traditional tracking processing. Our technique belongs to the video tracking methods which, unlike the quadrant photodiode method, open the possibility of tracking multiple probes. Commonly used video tracking algorithms are based on numerical analysis of amplitude images in the focus plane, and the shift of the maxima in the image plane is measured after application of an appropriate threshold. Our approach to video tracking has a different theoretical basis: a set of interferograms is recorded and the complex wavefields are processed numerically to obtain the three-dimensional displacements of the probes. The procedure works properly on a higher number of probes and independently of their size. This method overcomes traditional video tracking issues such as the inability to measure axial movement and the choice of a suitable threshold mask. The configuration allows 3D tracking of micro-particles and can simultaneously furnish quantitative phase-contrast maps of the tracked micro-objects by interference microscopy, without changing the setup. In this paper, we show a new concept for a compact interferometric microscope that ensures this multifunctionality, accomplishing accurate 3D tracking and quantitative phase-contrast analysis. Experimental results are presented and discussed for in vitro cells. Through a very simple and compact optical arrangement we show how these two functionalities can be provided by a single instrument.

  1. LV motion tracking from 3D echocardiography using textural and structural information.

    PubMed

    Myronenko, Andriy; Song, Xubo; Sahn, David J

    2007-01-01

    Automated motion reconstruction of the left ventricle (LV) from 3D echocardiography provides insight into myocardial architecture and function. Low image quality and artifacts make 3D ultrasound image processing a challenging problem. We introduce an LV tracking method which combines textural and structural information to overcome the image quality limitations. Our method automatically reconstructs the motion of the LV contour (endocardium and epicardium) from a sequence of 3D ultrasound images. PMID:18044597

  2. Real-time 3D-surface-guided head refixation useful for fractionated stereotactic radiotherapy

    SciTech Connect

    Li Shidong; Liu Dezhi; Yin Gongjie; Zhuang Ping; Geng, Jason

    2006-02-15

    Accurate and precise head refixation in fractionated stereotactic radiotherapy has been achieved through alignment of real-time 3D-surface images with a reference surface image. The reference surface image is either a 3D optical surface image taken at simulation with the desired treatment position, or a CT/MRI-surface rendering in the treatment plan with corrections for patient motion during CT/MRI scans and partial volume effects. The real-time 3D surface images are rapidly captured by using a 3D video camera mounted on the ceiling of the treatment vault. Any facial expression such as mouth opening that affects surface shape and location can be avoided using a new facial monitoring technique. The image artifacts on the real-time surface can generally be removed by setting a threshold of jumps at the neighboring points while preserving detailed features of the surface of interest. Such a real-time surface image, registered in the treatment machine coordinate system, provides a reliable representation of the patient head position during the treatment. A fast automatic alignment between the real-time surface and the reference surface using a modified iterative-closest-point method leads to an efficient and robust surface-guided target refixation. Experimental and clinical results demonstrate the excellent efficacy of <2 min set-up time, the desired accuracy and precision of <1 mm in isocenter shifts, and <1 deg. in rotation.
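    The abstract above refers to a modified iterative-closest-point (ICP) alignment between the real-time surface and the reference surface. Below is a minimal, generic point-to-point ICP sketch (KD-tree correspondences plus an SVD rigid fit) for illustration; it assumes unorganized point clouds and omits the paper's modifications, artifact thresholding and facial-expression monitoring.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(surface, reference, n_iter=30):
    """Align a captured surface point cloud (N x 3) to a reference surface."""
    tree = cKDTree(reference)
    aligned = surface.copy()
    for _ in range(n_iter):
        _, idx = tree.query(aligned)                  # closest reference points
        R, t = best_rigid_transform(aligned, reference[idx])
        aligned = aligned @ R.T + t
    return aligned
```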

  3. Holovideo: Real-time 3D range video encoding and decoding on GPU

    NASA Astrophysics Data System (ADS)

    Karpinsky, Nikolaus; Zhang, Song

    2012-02-01

    We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos, and then decoding the 2D videos back into 3D rapidly without significant loss of quality. Due to the nature of the algorithm, 2D video compression such as JPEG encoding with QuickTime Run Length Encoding (QTRLE) can be applied with little quality loss, resulting in an effective way to store 3D video at very small file sizes. We found that under a compression ratio of 134:1, Holovideo to OBJ file format, the 3D geometry quality drops to a negligible degree. Several sets of 3D videos were captured using a structured light scanner, compressed using the Holovideo codec, and then uncompressed and displayed to demonstrate the effectiveness of the codec. With the use of OpenGL shaders (GLSL), the 3D video codec can encode and decode in real time. We demonstrated that for a video size of 512×512, the decoding speed is 28 frames per second (FPS) on a laptop computer using an embedded NVIDIA GeForce 9400M graphics processing unit (GPU). Encoding can be done with the same setup at 18 FPS, making this technology suitable for applications such as interactive 3D video games and 3D video conferencing.

  4. Real-Time Head Pose Tracking with Online Face Template Reconstruction.

    PubMed

    Li, Songnan; Ngan, King Ngi; Paramesran, Raveendran; Sheng, Lu

    2016-09-01

    We propose a real-time method to accurately track the human head pose in the 3-dimensional (3D) world. Using a RGB-Depth camera, a face template is reconstructed by fitting a 3D morphable face model, and the head pose is determined by registering this user-specific face template to the input depth video. PMID:26584487

  5. A Framework for 3D Model-Based Visual Tracking Using a GPU-Accelerated Particle Filter.

    PubMed

    Brown, J A; Capson, D W

    2012-01-01

    A novel framework for acceleration of particle filtering approaches to 3D model-based, markerless visual tracking in monocular video is described. Specifically, we present a methodology for partitioning and mapping the computationally expensive weight-update stage of a particle filter to a graphics processing unit (GPU) to achieve particle- and pixel-level parallelism. Nvidia CUDA and Direct3D are employed to harness the massively parallel computational power of modern GPUs for simulation (3D model rendering) and evaluation (segmentation, feature extraction, and weight calculation) of hundreds of particles at high speeds. The proposed framework addresses the computational intensity that is intrinsic to all particle filter approaches, including those that have been modified to minimize the number of particles required for a particular task. Performance and tracking quality results for rigid object and articulated hand tracking experiments demonstrate markerless, model-based visual tracking on consumer-grade graphics hardware with pixel-level accuracy up to 95 percent at 60+ frames per second. The framework accelerates particle evaluation up to 49 times over a comparable CPU-only implementation, providing an increased particle count while maintaining real-time frame rates. PMID:21301027
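    To make the weight-update stage discussed above concrete, here is a plain-CPU sketch of the likelihood weighting and systematic resampling steps of a generic particle filter. The GPU rendering and segmentation-based likelihood of the paper are replaced by a placeholder likelihood function, and all numbers are illustrative.

```python
import numpy as np

def update_weights(particles, weights, likelihood_fn):
    """Weight-update stage: score every pose hypothesis against the current
    frame (placeholder likelihood here) and renormalise the weights."""
    w = weights * np.array([likelihood_fn(p) for p in particles])
    total = w.sum()
    return w / total if total > 0 else np.full_like(w, 1.0 / len(w))

def systematic_resample(particles, weights, rng):
    """Systematic resampling: concentrate particles on high-weight hypotheses."""
    n = len(particles)
    cum = np.cumsum(weights)
    cum[-1] = 1.0                                  # guard against round-off
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(cum, positions)
    return particles[idx], np.full(n, 1.0 / n)

# toy usage: scalar "pose" particles scored against a fake observation at 2.0
rng = np.random.default_rng(0)
particles = rng.normal(0.0, 3.0, size=500)
weights = np.full(500, 1.0 / 500)
weights = update_weights(particles, weights,
                         lambda p: np.exp(-0.5 * (p - 2.0) ** 2))
particles, weights = systematic_resample(particles, weights, rng)
print(particles.mean())                            # should move toward 2.0
```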

  6. High-resolution real-time 3D shape measurement on a portable device

    NASA Astrophysics Data System (ADS)

    Karpinsky, Nikolaus; Hoke, Morgan; Chen, Vincent; Zhang, Song

    2013-09-01

    Recent advances in technology have enabled the acquisition of high-resolution 3D models in real time through the use of structured light scanning techniques. While these advances are impressive, they require large amounts of computing power, and are thus limited to large desktop computers with high-end CPUs and sometimes GPUs. This is undesirable for making high-resolution real-time 3D scanners ubiquitous in our mobile lives. To address this issue, this work describes and demonstrates a real-time 3D scanning system that is realized on a mobile device, namely a laptop computer, which can achieve 3D acquisition at 20 fps with a resolution of 640x480 per frame. By utilizing a graphics processing unit (GPU) as a multipurpose parallel processor, along with a parallel phase-shifting technique, we are able to realize the entire 3D processing pipeline in parallel. To mitigate high-speed camera transfer problems, which typically require a dedicated frame grabber, we make use of USB 3.0 along with direct memory access (DMA) to transfer camera images to the GPU. To demonstrate the effectiveness of the technique, we test the scanner on both the static geometry of a statue and the dynamic geometry of a deforming material sample in front of the system.
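    The phase-shifting technique mentioned above typically starts from a wrapped-phase computation such as the standard three-step formula sketched below. This is a generic illustration (phase unwrapping and phase-to-depth calibration are omitted) and not the authors' GPU implementation.

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with 120-degree phase shifts.

    phi = atan2(sqrt(3) * (I1 - I3), 2*I2 - I1 - I3), returned in (-pi, pi].
    Phase unwrapping and phase-to-depth calibration follow in a full pipeline.
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```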

  7. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because 3D video data contains depth as well as color information. Our goal is to explore the 3D compression space from a different angle, considering factors such as complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
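    As an illustration of the first compression scheme described above (colour reduction followed by zlib on the colour-plus-depth payload), here is a toy single-frame encoder. The frame size, bit depths and quantization level are assumptions, and the real TEEVE pipeline is not reproduced.

```python
import zlib
import numpy as np

def compress_frame(color, depth, color_bits=5):
    """Toy version of scheme 1: reduce colour precision, then zlib-compress
    the colour plane together with the depth plane."""
    reduced = (color >> (8 - color_bits)).astype(np.uint8)   # colour reduction
    payload = reduced.tobytes() + depth.astype(np.uint16).tobytes()
    return zlib.compress(payload, level=6)

# hypothetical 320x240 frame: 8-bit RGB colour plus 16-bit depth
rng = np.random.default_rng(1)
color = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
depth = rng.integers(0, 4000, size=(240, 320), dtype=np.uint16)
blob = compress_frame(color, depth)
print(len(blob) / (color.nbytes + depth.nbytes))   # achieved compression ratio
```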

  8. 3D model-based catheter tracking for motion compensation in EP procedures

    NASA Astrophysics Data System (ADS)

    Brost, Alexander; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

    2010-02-01

    Atrial fibrillation is the most common sustained heart arrhythmia and a leading cause of stroke. Its treatment by radio-frequency catheter ablation, performed under fluoroscopic image guidance, is becoming increasingly important. Two-dimensional fluoroscopic navigation can take advantage of overlay images derived from pre-operative 3-D data to add anatomical details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair the utility of these static overlay images for catheter navigation. We developed an approach for image-based 3-D motion compensation as a solution to this problem. A bi-plane C-arm system is used to take X-ray images of a special circumferential mapping catheter from two directions. In the first step of the method, a 3-D model of the device is reconstructed. Three-dimensional respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter model in 3-D. This step involves bi-plane fluoroscopy and 2-D/3-D registration. Phantom data and clinical data were used to assess our model-based catheter tracking method. Experiments involving a moving heart phantom yielded an average 2-D tracking error of 1.4 mm and an average 3-D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2-D tracking error of 1.0 mm +/- 0.4 mm and an average 3-D tracking error of 0.8 mm +/- 0.5 mm. These results demonstrate that model-based motion compensation based on 2-D/3-D registration is both feasible and accurate.

  9. LayTracks3D: A new approach for meshing general solids using medial axis transform

    DOE PAGESBeta

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  10. Real-Time 3D Contrast-Enhanced Transcranial Ultrasound and Aberration Correction

    PubMed Central

    Ivancevich, Nikolas M.; Pinton, Gianmarco F.; Nicoletto, Heather A.; Bennett, Ellen; Laskowitz, Daniel T.; Smith, Stephen W.

    2008-01-01

    Contrast-enhanced (CE) transcranial ultrasound (US) and reconstructed 3D transcranial ultrasound have shown advantages over traditional methods in a variety of cerebrovascular diseases. We present the results from a novel ultrasound technique, namely real-time 3D contrast-enhanced transcranial ultrasound. Using real-time 3D (RT3D) ultrasound and micro-bubble contrast agent, we scanned 17 healthy volunteers via a single temporal window and 9 via the sub-occipital window and report our detection rates for the major cerebral vessels. In 71% of subjects, both of our observers identified the ipsilateral circle of Willis from the temporal window, and in 59% we imaged the entire circle of Willis. From the sub-occipital window, both observers detected the entire vertebrobasilar circulation in 22% of subjects, and in 44% the basilar artery. After performing phase aberration correction on one subject, we were able to increase the diagnostic value of the scan, detecting a vessel not present in the uncorrected scan. These preliminary results suggest that RT3D CE transcranial US and RT3D CE transcranial US with phase aberration correction have the potential to greatly impact the field of neurosonology. PMID:18395321

  11. High-resolution, real-time simultaneous 3D surface geometry and temperature measurement.

    PubMed

    An, Yatong; Zhang, Song

    2016-06-27

    This paper presents a method to simultaneously measure three-dimensional (3D) surface geometry and temperature in real time. Specifically, we developed 1) a holistic approach to calibrate both a structured light system and a thermal camera under exactly the same world coordinate system, even though these two sensors do not share the same wavelength; and 2) a computational framework to determine the sub-pixel corresponding temperature for each 3D point and to discard occluded points. Since the thermal 2D imaging and 3D visible imaging systems do not share the same spectrum of light, they can perform sensing simultaneously in real time: we developed a hardware system that achieves real-time 3D geometry and temperature measurement at 26 Hz with 768 × 960 points per frame. PMID:27410608

  12. Confocal fluorometer for diffusion tracking in 3D engineered tissue constructs

    NASA Astrophysics Data System (ADS)

    Daly, D.; Zilioli, A.; Tan, N.; Buttenschoen, K.; Chikkanna, B.; Reynolds, J.; Marsden, B.; Hughes, C.

    2016-03-01

    We present results of the development of a non-contacting instrument, called fScan, based on scanning confocal fluorometry for assessing the diffusion of materials through a tissue matrix. In many areas of healthcare diagnostics and screening it is now widely accepted that the need for new quantitative monitoring technologies is a major pinch point in patient diagnostics and in vitro testing. With the increasing need to interpret 3D responses, this commonly means tracking the diffusion of compounds, pharmacologically active species and cells through a 3D matrix of tissue. Methods are available, but to support the advances that are currently only promised, this monitoring needs to be real-time, non-invasive, and economical. At the moment commercial meters tend to be invasive and usually require a sample of the medium to be removed and processed prior to testing. This methodology clearly has a number of significant disadvantages. fScan combines a fiber-based optical arrangement with a compact, free-space optical front end integrated so that the sample's diffusion can be measured without interference. This architecture is particularly important given the "wet" nature of the samples. fScan is designed to measure constructs located within standard well plates, and a 2-D motion stage positions the required sample with respect to the measurement system. Results are presented that show how the meter has been used to evaluate the movement of samples through collagen constructs in situ without disturbing their kinetic characteristics. These kinetics were little understood prior to these measurements.

  13. Multi-modality fusion of CT, 3D ultrasound, and tracked strain images for breast irradiation planning

    NASA Astrophysics Data System (ADS)

    Foroughi, Pezhman; Csoma, Csaba; Rivaz, Hassan; Fichtinger, Gabor; Zellars, Richard; Hager, Gregory; Boctor, Emad

    2009-02-01

    Breast irradiation significantly reduces the risk of recurrence of cancer. There is growing evidence suggesting that irradiation of only the involved area of the breast, partial breast irradiation (PBI), is as effective as whole breast irradiation. Benefits of PBI include shortened treatment time and perhaps fewer side effects, as less tissue is treated. However, these benefits cannot be realized without precise and accurate localization of the lumpectomy cavity. Several studies have shown that accurate delineation of the cavity in CT scans is very challenging and that the delineated volumes differ dramatically over time and among users. In this paper, we propose utilizing 3D ultrasound (3D-US) and tracked strain images as complementary modalities to reduce uncertainties associated with the current CT planning workflow. We present an early version of an integrated system that fuses 3D-US and real-time strain images. For the first time, we employ tracking information to reduce the noise in the calculation of strain images, by selecting properly compressed frames, and to position the strain image within the ultrasound volume. Using this system, we provide the tools to retrieve additional information from 3D-US and strain images alongside the CT scan. We have preliminarily evaluated our proposed system in a step-by-step fashion using a breast phantom and clinical experiments.

  14. Sketch on dynamic gesture tracking and analysis exploiting vision-based 3D interface

    NASA Astrophysics Data System (ADS)

    Woo, Woontack; Kim, Namgyu; Wong, Karen; Tadenuma, Makoto

    2000-12-01

    In this paper, we propose a vision-based 3D interface exploiting invisible 3D boxes, arranged in the personal space (i.e., the space reachable by the body without traveling), which allows robust yet simple dynamic gesture tracking and analysis without complicated sensor-based motion tracking systems. Vision-based gesture tracking and analysis is still a challenging problem, even though we have witnessed rapid advances in computer vision over the last few decades. The proposed framework consists of three main parts: (1) object segmentation without a bluescreen and 3D box initialization with depth information, (2) movement tracking by observing how the body passes through the 3D boxes in the personal space, and (3) movement feature extraction based on Laban's Effort theory and movement analysis by mapping features to meaningful symbols using time-delay neural networks. Exploiting depth information from multiview images improves the performance of gesture analysis by reducing the errors introduced by simple 2D interfaces. In addition, the proposed box-based 3D interface lessens the difficulties in both tracking movement in 3D space and extracting low-level features of the movement. Furthermore, the time-delay neural networks lessen the difficulties in movement analysis through training. Due to its simplicity and robustness, the framework will provide interactive systems, such as the ATR I-cubed Tangible Music System or the ATR Interactive Dance system, with an improved 3D interface. The proposed simple framework can also be extended to other applications requiring dynamic gesture tracking and analysis on the fly.

  15. Nearly automatic motion capture system for tracking octopus arm movements in 3D space.

    PubMed

    Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar

    2009-08-30

    Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects. PMID:19505502

  16. Computational Graph Model for 3D Cells Tracking in Zebra Fish Datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Lelin; Xiong, Hongkai; Zhao, Yang; Zhang, Kai; Zhou, Xiaobo

    2007-11-01

    This paper presents a novel technique for tracking and identification of zebrafish cells in 3D image sequences, extending a graph-based multi-object tracking algorithm to 3D applications. As in previous work on the 2D graph-based method, separated cells are modeled as vertices connected by edges, so that the tracking task is reduced to matching vertices between graphs generated from consecutive frames. Graph-based tracking is composed of three steps: graph generation, initial source vertex selection and graph saturation. To satisfy the demands of this work, separated cell records are segmented from the original datasets using 3D level-set algorithms. In addition, improvements are made in each step, including graph regulations, multiple restrictions on source vertices and enhanced flow quantification. These strategies compensate well for the limitations of the graph-based multi-object tracking method in 2D space. Experiments carried out on 3D datasets sampled from zebrafish show that this enhanced method could potentially be applied to tracking of objects with diverse features.
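    The graph-based matching step described above associates cell detections across consecutive frames. As a simplified stand-in for that vertex-matching step, the sketch below links centroids between two frames with the Hungarian algorithm on pairwise distances; the distance gate and the toy coordinates are assumptions, and the full flow-based graph saturation of the paper is not shown.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def link_frames(cells_prev, cells_next, max_dist=15.0):
    """Match cell centroids between consecutive frames by minimising total
    displacement; pairs farther apart than max_dist are left unmatched."""
    cost = cdist(cells_prev, cells_next)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

# toy 3D centroids from two consecutive frames
prev_c = np.array([[10.0, 10.0, 5.0], [40.0, 12.0, 8.0]])
next_c = np.array([[41.0, 13.0, 8.5], [11.0, 9.0, 5.2]])
print(link_frames(prev_c, next_c))   # [(0, 1), (1, 0)]
```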

  17. Real-time multispectral 3-D photoacoustic imaging of blood phantoms

    NASA Astrophysics Data System (ADS)

    Kosik, Ivan; Carson, Jeffrey J. L.

    2013-03-01

    Photoacoustic imaging is exquisitely sensitive to blood and can infer blood oxygenation based on multispectral images. In this work we present multispectral real-time 3D photoacoustic imaging of blood phantoms. We used a custom-built 128-channel hemispherical transducer array coupled to two Nd:YAG pumped OPO laser systems synchronized to provide double pulse excitation at 680 nm and 1064 nm wavelengths, all during a triggered series of ultrasound pressure measurements lasting less than 300 μs. The results demonstrated that 3D PAI is capable of differentiating between oxygenated and deoxygenated blood at high speed at mm-level resolution.

  18. Medical applications of fast 3D cameras in real-time image-guided radiotherapy (IGRT) of cancer

    NASA Astrophysics Data System (ADS)

    Li, Shidong; Li, Tuotuo; Geng, Jason

    2013-03-01

    Dynamic volumetric medical imaging (4DMI) has reduced motion artifacts, increased early diagnosis of small mobile tumors, and improved target definition for treatment planning. High-speed cameras for video, X-ray, or other forms of sequential imaging allow live tracking of external or internal movement useful for real-time image-guided radiation therapy (IGRT). However, no 4DMI technique can track organ motion in real time, and no camera has been correlated with 4DMI to show volumetric changes. With a brief review of various IGRT techniques, we propose a fast 3D camera for live-video stereovision, an automatic surface-motion identifier to classify body or respiratory motion, a mechanical model for synchronizing the external surface movement with the internal target displacement by combined use of real-time stereovision and pre-treatment 4DMI, and dynamic multi-leaf collimation for adaptively aiming at the moving target. Our preliminary results demonstrate that the technique is feasible and efficient in IGRT of mobile targets. A clinical trial has been initiated to validate its spatial and temporal accuracies and dosimetric impact for intensity-modulated RT (IMRT), volumetric-modulated arc therapy (VMAT), and stereotactic body radiotherapy (SBRT) of mobile tumors. The technique can be extended to surface-guided stereotactic needle insertion for biopsy of small lung nodules.

  19. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the camera pose measured using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 s per pose estimate), which can be improved by implementation in C++. Error analysis yielded an average distance error of 3 mm and an average orientation error of 2.5 degrees. These errors stem from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
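    The work above recovers camera pose with a constrained bundle adjustment; as a simpler, commonly used stand-in, the sketch below recovers pose from matched 3D model points and 2D image features with OpenCV's PnP solver. The function name and the choice of solver flag are illustrative assumptions, not the authors' method.

```python
import numpy as np
import cv2

def estimate_camera_pose(points_3d, points_2d, K):
    """Recover camera rotation R and translation t from matched 3D model
    points (N x 3) and their 2D image observations (N x 2), given the
    camera intrinsic matrix K. Needs at least four well-spread
    correspondences; more points give a more stable estimate."""
    ok, rvec, tvec = cv2.solvePnP(points_3d.astype(np.float64),
                                  points_2d.astype(np.float64),
                                  K, None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)      # rotation vector -> rotation matrix
    return ok, R, tvec
```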

  20. High-throughput 3D tracking of bacteria on a standard phase contrast microscope

    NASA Astrophysics Data System (ADS)

    Taute, K. M.; Gude, S.; Tans, S. J.; Shimizu, T. S.

    2015-11-01

    Bacteria employ diverse motility patterns in traversing complex three-dimensional (3D) natural habitats. 2D microscopy misses crucial features of 3D behaviour, but the applicability of existing 3D tracking techniques is constrained by their performance or ease of use. Here we present a simple, broadly applicable, high-throughput 3D bacterial tracking method for use in standard phase contrast microscopy. Bacteria are localized at micron-scale resolution over a range of 350 × 300 × 200 μm by maximizing image cross-correlations between their observed diffraction patterns and a reference library. We demonstrate the applicability of our technique to a range of bacterial species and exploit its high throughput to expose hidden contributions of bacterial individuality to population-level variability in motile behaviour. The simplicity of this powerful new tool for bacterial motility research renders 3D tracking accessible to a wider community and paves the way for investigations of bacterial motility in complex 3D environments.
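    The localization principle described above, matching an observed diffraction pattern against a pre-recorded reference library, can be sketched as a simple normalized cross-correlation lookup over the library's z steps. The scoring and data layout below are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def estimate_z(observed, library, z_values):
    """Return the library z whose reference diffraction pattern best matches
    the observed pattern (normalised cross-correlation score).

    observed: 2D image patch around one bacterium.
    library:  sequence of reference patches, one per z step.
    z_values: z coordinate associated with each library entry.
    """
    obs = (observed - observed.mean()) / (observed.std() + 1e-12)
    scores = []
    for ref in library:
        r = (ref - ref.mean()) / (ref.std() + 1e-12)
        scores.append(np.mean(obs * r))
    return z_values[int(np.argmax(scores))]
```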

  2. Tracking 3D Picometer-Scale Motions of Single Nanoparticles with High-Energy Electron Probes

    PubMed Central

    Ogawa, Naoki; Hoshisashi, Kentaro; Sekiguchi, Hiroshi; Ichiyanagi, Kouhei; Matsushita, Yufuku; Hirohata, Yasuhisa; Suzuki, Seiichi; Ishikawa, Akira; Sasaki, Yuji C.

    2013-01-01

    We observed the high-speed anisotropic motion of an individual gold nanoparticle in 3D at the picometer scale using a high-energy electron probe. Diffracted electron tracking (DET) using the electron back-scattered diffraction (EBSD) patterns of labeled nanoparticles under wet-SEM allowed us to super-accurately measure the time-resolved 3D motion of individual nanoparticles in aqueous conditions. The highly precise DET data corresponded to the 3D anisotropic log-normal Gaussian distributions over time at the millisecond scale. PMID:23868465

  3. Real-time 3D surface-image-guided beam setup in radiotherapy of breast cancer

    SciTech Connect

    Djajaputra, David; Li Shidong

    2005-01-01

    We describe an approach for external beam radiotherapy of breast cancer that utilizes the three-dimensional (3D) surface information of the breast. The surface data of the breast are obtained from a 3D optical camera that is rigidly mounted on the ceiling of the treatment vault. This 3D camera uses light in the visible range; therefore, it introduces no ionizing radiation to the patient. In addition to the surface topographical information of the treated area, the camera also captures gray-scale information that is overlaid on the 3D surface image. This allows us to visualize the skin markers and automatically determine the isocenter position and the beam angles in the breast tangential fields. The field sizes and shapes of the tangential, supraclavicular, and internal mammary gland fields can all be determined from the 3D surface image of the target. A least-squares method is first introduced for the tangential-field setup, which is useful for compensating for changes in target shape. The entire process of capturing the 3D surface data and calculating the beam parameters typically requires less than 1 min. Our tests on phantom experiments and patient images have achieved an accuracy of 1 mm in shift and 0.5 deg. in rotation. Importantly, target shape and position changes in each treatment session can both be corrected through this real-time image-guided system.

  4. 3D real-time measurement system of seam with laser

    NASA Astrophysics Data System (ADS)

    Huang, Min-shuang; Huang, Jun-fen

    2014-02-01

    A 3-D real-time measurement system for seam outlines based on Moiré projection is proposed and designed. The system is composed of an LD, a grating, a CCD, a video A/D converter, an FPGA, a DSP and an output interface. The principle and hardware makeup of the high-speed, real-time image processing circuit, based on a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA), are introduced. The noise generation mechanism under poor welding field conditions is analyzed for Moiré stripes projected on a welding workpiece surface. A median filter is adopted to smooth the acquired laser image of the seam, and measurement results for the 3-D outline image of the weld groove are then provided.
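    The median filtering step mentioned above can be prototyped in a couple of lines; the kernel size and image dimensions below are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import median_filter

# hypothetical noisy laser-stripe image; a 3x3 median suppresses spot noise
# while preserving the stripe edges used for seam-profile extraction
image = np.random.rand(480, 640)
smoothed = median_filter(image, size=3)
```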

  5. Detailed Evaluation of Five 3D Speckle Tracking Algorithms Using Synthetic Echocardiographic Recordings.

    PubMed

    Alessandrini, Martino; Heyde, Brecht; Queiros, Sandro; Cygan, Szymon; Zontak, Maria; Somphone, Oudom; Bernard, Olivier; Sermesant, Maxime; Delingette, Herve; Barbosa, Daniel; De Craene, Mathieu; O'Donnell, Matthew; D'hooge, Jan

    2016-08-01

    A plethora of techniques for cardiac deformation imaging with 3D ultrasound, typically referred to as 3D speckle tracking techniques, are available from academia and industry. Although the benefits of single methods over alternative ones have been reported in separate publications, the intrinsic differences in the data and definitions used make it hard to compare the relative performance of different solutions. To address this issue, we recently proposed a framework to simulate realistic 3D echocardiographic recordings and used it to generate a common set of ground-truth data for 3D speckle tracking algorithms, which was made available online. The aim of this study was therefore to use the newly developed database to contrast non-commercial speckle tracking solutions from research groups with leading expertise in the field. The five techniques involved cover the most representative families of existing approaches, namely block matching, radio-frequency tracking, optical flow and elastic image registration. The techniques were contrasted in terms of tracking and strain accuracy. The feasibility of using the obtained strain measurements to diagnose pathology was also tested for ischemia and dyssynchrony. PMID:26960220

  6. Vision-Based Long-Range 3D Tracking, applied to Underground Surveying Tasks

    NASA Astrophysics Data System (ADS)

    Mossel, Annette; Gerstweiler, Georg; Vonach, Emanuel; Kaufmann, Hannes; Chmelina, Klaus

    2014-04-01

    To address the need for highly automated positioning systems in underground construction, we present a long-range 3D tracking system based on infrared optical markers. It provides continuous 3D position estimation of static or kinematic targets with low latency over a tracking volume of 12 m x 8 m x 70 m (width x height x depth). Over the entire volume, relative 3D point accuracy with a maximal deviation ≤ 22 mm is ensured for target rotations of yaw and pitch from 0 to 45° and roll from 0 to 360°. No preliminary sighting of the target(s) is necessary, since the system automatically locks onto a target without user intervention and autonomously starts tracking as soon as a target is within the system's view. The proposed system needs a minimal hardware setup, consisting of two machine vision cameras and a standard workstation for data processing. This allows quick installation with minimal disturbance of construction work. The data processing pipeline ensures camera calibration and tracking during ongoing underground activities. Tests in real underground scenarios prove the system's capability to act as a 3D position measurement platform for multiple underground tasks that require long range, low latency and high accuracy. Those tasks include simultaneous tracking of personnel, machines or robots.
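    With two calibrated machine vision cameras as described above, a marker's 3D position can be recovered by linear (DLT) triangulation. The sketch below assumes the 3x4 projection matrices are known from calibration and is a generic illustration, not the system's actual pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from pixel coordinates
    x1, x2 observed in two calibrated cameras with 3x4 projection matrices
    P1, P2 (homogeneous least-squares solution via SVD)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector, homogeneous 3D point
    return X[:3] / X[3]
```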

  7. 3D target tracking in infrared imagery by SIFT-based distance histograms

    NASA Astrophysics Data System (ADS)

    Yan, Ruicheng; Cao, Zhiguo

    2011-11-01

    The SIFT tracking algorithm is an excellent point-based tracking algorithm, with high tracking performance and accuracy owing to its robustness to rotation, scale change and occlusion. However, when tracking a large 3D target in complicated real scenarios in a forward-looking infrared (FLIR) image sequence taken from an airborne moving platform, a tracked point located on a vertical surface usually drifts away from its correct position. In this paper, we propose a novel algorithm for 3D target tracking in FLIR image sequences. Our approach uses SIFT keypoints detected in consecutive frames for point correspondence. The candidate position of the tracked point is first estimated by computing an affine transformation from local corresponding SIFT keypoints. The correct position is then located via an optimization step: Euclidean distances between a candidate point and nearby SIFT keypoints are calculated and formed into a SIFT-based distance histogram, which defines a cost of associating each candidate point with the correct tracked point, using a constraint based on the topology of each candidate point and its surrounding SIFT keypoints. Minimization of the cost is formulated as a combinatorial optimization problem. Experiments demonstrate that the proposed algorithm efficiently improves tracking performance and accuracy.
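    The candidate-position step described above fits an affine transformation to local SIFT correspondences. A minimal least-squares 2D affine fit is sketched below for illustration (the histogram-based cost and the combinatorial optimization are not shown); the function names are assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform (A, t) mapping src keypoints (N x 2)
    to dst keypoints (N x 2); needs at least three non-collinear matches."""
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2], M[0::2, 4] = src, 1.0     # rows for the x equations
    M[1::2, 2:4], M[1::2, 5] = src, 1.0     # rows for the y equations
    params, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
    A = np.array([[params[0], params[1]], [params[2], params[3]]])
    t = params[4:6]
    return A, t

def predict_point(A, t, p):
    """Propagate a tracked point to the next frame with the fitted affine."""
    return A @ p + t
```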

  8. Real-time visual tracking of less textured three-dimensional objects on mobile platforms

    NASA Astrophysics Data System (ADS)

    Seo, Byung-Kuk; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2012-12-01

    Natural feature-based approaches are still challenging for mobile applications (e.g., mobile augmented reality), because they are feasible only in limited environments such as highly textured and planar scenes/objects, and they need powerful mobile hardware for fast and reliable tracking. In many cases where conventional approaches are not effective, three-dimensional (3-D) knowledge of target scenes would be beneficial. We present a well-established framework for real-time visual tracking of less textured 3-D objects on mobile platforms. Our framework is based on model-based tracking that efficiently exploits partially known 3-D scene knowledge such as object models and a background's distinctive geometric or photometric knowledge. Moreover, we elaborate on implementation in order to make it suitable for real-time vision processing on mobile hardware. The performance of the framework is tested and evaluated on recent commercially available smartphones, and its feasibility is shown by real-time demonstrations.

  9. Note: Time-gated 3D single quantum dot tracking with simultaneous spinning disk imaging

    NASA Astrophysics Data System (ADS)

    DeVore, M. S.; Stich, D. G.; Keller, A. M.; Cleyrat, C.; Phipps, M. E.; Hollingsworth, J. A.; Lidke, D. S.; Wilson, B. S.; Goodwin, P. M.; Werner, J. H.

    2015-12-01

    We describe recent upgrades to a 3D tracking microscope to include simultaneous Nipkow spinning disk imaging and time-gated single-particle tracking (SPT). Simultaneous 3D molecular tracking and spinning disk imaging enable the visualization of cellular structures and proteins around a given fluorescently labeled target molecule. The addition of photon time-gating to the SPT hardware improves signal to noise by discriminating against Raman scattering and short-lived fluorescence. In contrast to camera-based SPT, single-photon arrival times are recorded, enabling time-resolved spectroscopy (e.g., measurement of fluorescence lifetimes and photon correlations) to be performed during single molecule/particle tracking experiments.

  10. Rapid 3D Track Reconstruction with the BaBar Trigger Upgrade

    SciTech Connect

    Bailey, S

    2004-05-24

    As the PEP-II luminosity increases the BaBar trigger and dataflow systems must accommodate the increasing data rate. A significant source of background events at the first trigger level comes from beam particle interactions with the beampipe and synchrotron masks, which are separated from the interaction region by more than 20 cm. The BaBar trigger upgrade will provide 3D tracking capabilities at the first trigger level in order to remove background events by distinguishing the origin of particle tracks. Each new z0/pT Discriminator (ZPD) board processes over 1 gigabyte of data per second in order to reconstruct the tracks and make trigger decisions based upon the 3D track parameters.

  12. Real-Time 3D Reconstruction from Images Taken from an UAV

    NASA Astrophysics Data System (ADS)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

    We designed a method for creating 3D models of objects and areas from two aerial images acquired from an UAV. The models are generated automatically and in real time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. Given these characteristics, the designed method is suitable for video surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.

  13. Feasibility of low-dose single-view 3D fiducial tracking concurrent with external beam delivery

    SciTech Connect

    Speidel, Michael A.; Wilfley, Brian P.; Hsu, Annie; Hristov, Dimitre

    2012-04-15

    Purpose: In external-beam radiation therapy, existing on-board x-ray imaging chains orthogonal to the delivery beam cannot recover 3D target trajectories from a single view in real-time. This limits their utility for real-time motion management concurrent with beam delivery. To address this limitation, the authors propose a novel concept for on-board imaging based on the inverse-geometry Scanning-Beam Digital X-ray (SBDX) system and evaluate its feasibility for single-view 3D intradelivery fiducial tracking. Methods: A chest phantom comprising a posterior wall, a central lung volume, and an anterior wall was constructed. Two fiducials were placed along the mediastinal ridge between the lung cavities: a 1.5 mm diameter steel sphere superiorly and a gold cylinder (2.6 mm length x 0.9 mm diameter) inferiorly. The phantom was placed on a linear motion stage that moved sinusoidally. Fiducial motion was along the source-detector (z) axis of the SBDX system with ±10 mm amplitude and a programmed period of either 3.5 s or 5 s. The SBDX system was operated at 15 frames per second, 100 kVp, providing good apparent conspicuity of the fiducials. With the stage moving, detector data were acquired and subsequently reconstructed into 15 planes with a 12 mm plane-to-plane spacing using digital tomosynthesis. A tracking algorithm was applied to the image planes for each temporal frame to determine the position of each fiducial in (x,y,z)-space versus time. A 3D time-sinusoidal motion model was fit to the measured 3D coordinates and root mean square (RMS) deviations about the fitted trajectory were calculated. Results: Tracked motion was sinusoidal and primarily along the source-detector (z) axis. The RMS deviation of the tracked z-coordinate ranged from 0.53 to 0.71 mm. The motion amplitude derived from the model fit agreed with the programmed amplitude to within 0.28 mm for the steel sphere and within -0.77 mm for the gold seed. The model fit periods agreed with the programmed
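    The time-sinusoidal model fit mentioned above can be illustrated with a simple one-axis curve fit. The sampling rate, amplitude, period and noise level below mirror the programmed values in the abstract, but the data are simulated for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def sinusoid(t, amp, period, phase, offset):
    """Time-sinusoidal motion model used to summarise the tracked z(t)."""
    return amp * np.sin(2.0 * np.pi * t / period + phase) + offset

# simulated 15 fps track of a fiducial moving +/-10 mm with a 3.5 s period
t = np.arange(0, 10, 1.0 / 15.0)
z = 10.0 * np.sin(2.0 * np.pi * t / 3.5) + np.random.normal(0, 0.6, t.size)

popt, _ = curve_fit(sinusoid, t, z, p0=[8.0, 3.0, 0.0, 0.0])
residual_rms = np.sqrt(np.mean((z - sinusoid(t, *popt)) ** 2))
print(popt[0], popt[1], residual_rms)   # fitted amplitude, period, RMS deviation
```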

  14. 3D-Pathology: a real-time system for quantitative diagnostic pathology and visualisation in 3D

    NASA Astrophysics Data System (ADS)

    Gottrup, Christian; Beckett, Mark G.; Hager, Henrik; Locht, Peter

    2005-02-01

    This paper presents the results of the 3D-Pathology project conducted under the European EC Framework 5. The aim of the project was, through the application of 3D image reconstruction and visualization techniques, to improve the diagnostic and prognostic capabilities of medical personnel when analyzing pathological specimens using transmitted light microscopy. A fully automated, computer-controlled microscope system has been developed to capture 3D images of specimen content. 3D image reconstruction algorithms have been implemented and applied to the acquired volume data in order to facilitate the subsequent 3D visualization of the specimen. Three potential application fields, immunohistology, chromogenic in situ hybridization (CISH) and cytology, have been tested using the prototype system. For both immunohistology and CISH, use of the system furnished significant additional information to the pathologist.

  15. Moving Human Path Tracking Based on Video Surveillance in 3d Indoor Scenarios

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Zlatanova, Sisi; Wang, Zhe; Zhang, Yeting; Liu, Liu

    2016-06-01

    Video surveillance systems are increasingly used for a variety of 3D indoor applications. We can analyse human behaviour, discover and avoid crowded areas, monitor human traffic and so forth. In this paper we concentrate on the use of surveillance cameras to track and reconstruct the path a person has followed. For this purpose we integrated video surveillance data with a 3D indoor model of the building and developed a single-human moving-path tracking method. We process the surveillance videos to detect single-human moving traces; we then match the depth information of the 3D scenes to the constructed 3D indoor network model and define the human traces in the 3D indoor space. Finally, the single-human traces extracted from multiple cameras are connected with the help of the connectivity provided by the 3D network model. Using this approach, we can reconstruct the entire walking path. Experiments with a single person verified the effectiveness and robustness of the method.

  16. A full-parallax 3D display with restricted viewing zone tracking viewer's eye

    NASA Astrophysics Data System (ADS)

    Beppu, Naoto; Yendo, Tomohiro

    2015-03-01

    Three-dimensional (3D) vision has become a widely familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. Among the various approaches to displaying 3D images, we focused on ray reproduction. This approach needs many viewpoint images to achieve full parallax, because a different viewpoint image is displayed depending on the viewing position. We proposed to reduce wasted rays by limiting the projector's rays to the area around the viewer using a spinning mirror, thereby increasing the effectiveness of the display device and achieving a full-parallax 3D display. The proposed method uses viewer eye tracking, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array with different vertical slopes arranged circumferentially, and a cylindrical mirror. In simulation, we confirmed the scanning range and the locus of horizontal ray movement, as well as the switching of viewpoints and the convergence performance of rays in the vertical direction. We therefore confirmed that a full-parallax display can be realized.

  17. The BaBar Level 1 Drift-Chamber Trigger Upgrade With 3D Tracking

    SciTech Connect

    Chai, X.D.; /Iowa U.

    2005-11-29

    At BABAR, the Level 1 Drift Chamber trigger is being upgraded to reduce increasing background rates as the PEP-II luminosity continues to improve. The upgrade uses the drift time information and stereo wires in the drift chamber to perform a 3D track reconstruction that effectively rejects background events spread out along the beam line.

  18. 3D single molecule tracking in thick cellular specimens using multifocal plane microscopy

    NASA Astrophysics Data System (ADS)

    Ram, Sripad; Ward, E. Sally; Ober, Raimund J.

    2011-03-01

    One of the major challenges in single molecule microscopy concerns 3D tracking of single molecules in cellular specimens. This has been a major impediment to studying many fundamental cellular processes, such as protein transport across thick cellular specimens (e.g. a cell monolayer). Here we show that multifocal plane microscopy (MUM), an imaging modality developed by our group, provides the much-needed solution to this longstanding problem. While MUM was previously used for 3D single molecule tracking at shallow depths (~1 micron) in live cells, the question arises whether MUM can also live up to the significant challenge of tracking single molecules in thick samples. Here, by substantially expanding the capabilities of MUM, we demonstrate 3D tracking of quantum-dot labeled molecules in a ~10 micron thick cell monolayer. In this way we have reconstructed the complete 3D intracellular trafficking itinerary of single molecules at high spatial and temporal precision in a thick cell sample. Funding support: NIH and the National MS Society.

  19. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional image is 3D volume data that varies with time. It is used to represent deforming or moving objects in applications such as virtual surgery or 4D ultrasound. It is difficult to render 4D images with conventional ray-casting or shear-warp factorization methods because of their long rendering times or the pre-processing stage required whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by exploiting coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. Volume data are divided into small bricks, and each brick to be loaded is tested for similarity to the one already in memory. If the brick passes the test, it is defined as a 3D texture using OpenGL functions. The texture slices of each brick are then mapped onto polygons and blended using OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes were rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
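
    A minimal sketch of the brick-coherence idea follows: a brick is re-uploaded as a 3D texture only if it differs noticeably from the brick already resident in memory. The mean-absolute-difference test, the threshold, and the upload_fn placeholder are illustrative assumptions, not the authors' implementation.

        # Hedged sketch: skip texture re-upload for bricks that are coherent with the cache.
        import numpy as np

        def update_bricks(new_volume_bricks, cached_bricks, upload_fn, tol=1.0):
            """new_volume_bricks / cached_bricks: dicts of brick_id -> ndarray."""
            for brick_id, brick in new_volume_bricks.items():
                cached = cached_bricks.get(brick_id)
                if cached is not None and np.mean(np.abs(brick.astype(np.float32)
                                                         - cached.astype(np.float32))) < tol:
                    continue                  # coherent brick: keep the existing 3D texture
                upload_fn(brick_id, brick)    # e.g. a glTexSubImage3D call in a real renderer
                cached_bricks[brick_id] = brick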

  20. Mesoscopic in vivo 3-D tracking of sparse cell populations using angular multiplexed optical projection tomography

    PubMed Central

    Chen, Lingling; Alexandrov, Yuriy; Kumar, Sunil; Andrews, Natalie; Dallman, Margaret J.; French, Paul M. W.; McGinty, James

    2015-01-01

    We describe an angular multiplexed imaging technique for 3-D in vivo cell tracking of sparse cell distributions and optical projection tomography (OPT) with superior time-lapse resolution and a significantly reduced light dose compared to volumetric time-lapse techniques. We demonstrate that using dual axis OPT, where two images are acquired simultaneously at different projection angles, can enable localization and tracking of features in 3-D with a time resolution equal to the camera frame rate. This is achieved with a 200x reduction in light dose compared to an equivalent volumetric time-lapse single camera OPT acquisition with 200 projection angles. We demonstrate the application of this technique to mapping the 3-D neutrophil migration pattern observed over ~25.5 minutes in a live 2 day post-fertilisation transgenic LysC:GFP zebrafish embryo following a tail wound. PMID:25909009

  1. Real-Time Large Scale 3d Reconstruction by Fusing Kinect and Imu Data

    NASA Astrophysics Data System (ADS)

    Huai, J.; Zhang, Y.; Yilmaz, A.

    2015-08-01

    Kinect-style RGB-D cameras have been used to build large-scale dense 3D maps of indoor environments. These maps can serve many purposes, such as robot navigation and augmented reality. However, generating dense 3D maps of large-scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) large-scale consistent 3D reconstruction realized by volume shifting and loop closure; (ii) the coarse-to-fine iterative closest point (ICP) algorithm, SIFT odometry, and IMU odometry combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides an incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles, which reduces long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images into the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures and has smaller drift than state-of-the-art systems in large-scale reconstruction.
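
    The fallback cascade described above (ICP, then SIFT odometry, then IMU integration) can be summarized in a few lines. The sketch below is schematic; the icp, sift_odometry and imu objects and their method signatures are hypothetical placeholders, not the authors' API.

        # Schematic sketch of the incremental-motion fallback cascade (placeholder objects).
        def estimate_incremental_motion(frame, icp, sift_odometry, imu):
            pose, ok = icp.track(frame.depth)          # routine case: geometric alignment
            if ok:
                return pose
            pose, ok = sift_odometry.track(frame.rgb)  # fallback, e.g. in planar areas
            if ok:
                return pose
            return imu.integrate(frame.timestamp)      # last resort: inertial dead reckoning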

  2. A volumetric sensor for real-time 3D mapping and robot navigation

    NASA Astrophysics Data System (ADS)

    Fournier, Jonathan; Ricard, Benoit; Laurendeau, Denis

    2006-05-01

    The use of robots for (semi-)autonomous operations in complex terrain such as urban environments poses difficult mobility, mapping, and perception challenges. To work efficiently, a robot should be provided with sensors and software such that it can perceive and analyze the world in 3D; real-time 3D sensing and perception in this operational context are paramount. To address these challenges, DRDC Valcartier has over the past years developed a compact sensor that combines a wide-baseline stereo camera and a laser scanner with a full 360 degree azimuth and 55 degree elevation field of view, allowing the robot to view and manage overhanging obstacles as well as obstacles at ground level. Sensing in 3D is common, but to navigate and work efficiently in complex terrain the robot should also perceive, decide and act in three dimensions; therefore, 3D information should be preserved and exploited in all steps of the process. To achieve this, we use a multiresolution octree to store the acquired data, allowing mapping of large environments while keeping the representation compact and memory efficient. Ray tracing is used to build and update the 3D occupancy model. This model is used, via a temporary 2.5D map, for navigation, obstacle avoidance and efficient frontier-based exploration. This paper describes the volumetric sensor concept, describes its design features and presents an overview of the 3D software framework that preserves 3D information through all computation steps. Simulation and real-world experiments are presented at the end of the paper to demonstrate the key elements of our approach.
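
    A hedged sketch of the occupancy-update idea follows: each range measurement is integrated by marching a ray from the sensor origin to the hit point, decreasing the occupancy log-odds of traversed voxels and increasing that of the end voxel. A flat dictionary keyed by voxel index stands in for the multiresolution octree, and the log-odds increments are illustrative values.

        # Simplified ray-traced occupancy update (flat grid in place of an octree).
        import numpy as np

        def integrate_ray(occupancy, origin, hit, voxel_size=0.1, l_free=-0.4, l_occ=0.85):
            direction = hit - origin
            length = np.linalg.norm(direction)
            steps = int(np.ceil(length / (0.5 * voxel_size)))
            for s in range(steps):
                p = origin + direction * (s / steps)
                key = tuple(np.floor(p / voxel_size).astype(int))
                occupancy[key] = occupancy.get(key, 0.0) + l_free   # traversed: likely free
            end_key = tuple(np.floor(hit / voxel_size).astype(int))
            occupancy[end_key] = occupancy.get(end_key, 0.0) + l_occ  # end point: occupied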

  3. Towards real-time 2D/3D registration for organ motion monitoring in image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Gendrin, C.; Spoerk, J.; Bloch, C.; Pawiro, S. A.; Weber, C.; Figl, M.; Markelj, P.; Pernus, F.; Georg, D.; Bergmann, H.; Birkfellner, W.

    2010-02-01

    Modern radiation therapy systems incorporate kV imaging units which allow real-time acquisition of intra-fractional X-ray images of the patient with high detail and contrast. An application of this technology is tumor motion monitoring during irradiation. For tumor tracking, implanted markers or position sensors are typically used, which requires an intervention. 2D/3D intensity-based registration is an alternative, non-invasive method, but the procedure must be accelerated to the update rate of the imaging device, which lies in the range of 5 Hz. In this paper we investigate fast registration of a CT volume to a single kV X-ray image using a new porcine reference phantom with seven implanted fiducial markers. Several parameters influencing the speed and accuracy of the registrations are investigated. First, four intensity-based merit functions, namely cross-correlation, rank correlation, mutual information and correlation ratio, are compared. Secondly, wobbled splatting and ray casting rendering techniques are implemented on the GPU and the influence of each algorithm on the performance of 2D/3D registration is evaluated; rendering times of 20 ms for a single DRR were achieved. Different thresholds of the CT volume were also examined for rendering to find the setting that achieves the best possible correspondence with the X-ray images. Fast registrations below 4 s became possible with an in-plane accuracy down to 0.8 mm.
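
    As an example of the merit functions compared above, the snippet below computes normalized cross-correlation between a rendered DRR and the kV X-ray image. This is the generic formulation of NCC, not the authors' GPU implementation.

        # Generic normalized cross-correlation between a DRR and an X-ray image.
        import numpy as np

        def normalized_cross_correlation(drr, xray):
            a = drr.astype(np.float64).ravel()
            b = xray.astype(np.float64).ravel()
            a -= a.mean()
            b -= b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0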

  4. Particle Filters and Occlusion Handling for Rigid 2D-3D Pose Tracking.

    PubMed

    Lee, Jehoon; Sandhu, Romeil; Tannenbaum, Allen

    2013-08-01

    In this paper, we address the problem of 2D-3D pose estimation. Specifically, we propose an approach to jointly track a rigid object in a 2D image sequence and estimate its pose (position and orientation) in 3D space. We revisit a joint 2D segmentation/3D pose estimation technique, and then extend the framework by incorporating a particle filter to robustly track the object in a challenging environment, and by developing an occlusion detection and handling scheme to continuously track the object in the presence of occlusions. In particular, we focus on partial occlusions that prevent the tracker from extracting exact region properties of the object, which play a pivotal role in maintaining the track for region-based tracking methods. To this end, a dynamic choice of how to invoke the objective functional is made online, based on the degree of dependency between predictions and measurements of the system, in accordance with the degree of occlusion and the variation of the object's pose. This scheme provides the robustness to deal with occlusions by an obstacle with statistical properties different from those of the object of interest. Experimental results demonstrate the practical applicability and robustness of the proposed method in several challenging scenarios. PMID:24058277

  5. 3D model-based detection and tracking for space autonomous and uncooperative rendezvous

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Zhang, Yueqiang; Liu, Haibo

    2015-10-01

    In order to navigate fully using a vision sensor, a 3D edge-model-based detection and tracking technique was developed. Firstly, we proposed a target detection strategy over a sequence of several images from the 3D model to initialize tracking. The overall purpose of this approach is to robustly match each image with the model views of the target; to this end we designed a line segment detection and matching method based on multi-scale space technology. Experiments on real images showed that our method is highly robust under various image changes. Secondly, we proposed a method based on a 3D particle filter (PF) coupled with M-estimation to track and efficiently estimate the pose of the target. In the proposed approach, a similarity observation model was designed according to a new distance function for line segments. Then, based on the tracking results of the PF, the pose was optimized using M-estimation. Experiments indicated that the proposed method can effectively track and accurately estimate the pose of a freely moving target in unconstrained environments.

  6. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system

    SciTech Connect

    Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk

    2015-03-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system, with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time of less than 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheter or applicator.
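
    The reported error measures can be illustrated with a short sketch that compares matched points along each reconstructed catheter path. The array layout (one (N, 3) array of matched 3D points per catheter, tip first) is an assumption made for the example.

        # Hedged sketch: tip identification error and 3D distance error between
        # EM-reconstructed and reference (e.g. μCT) catheter paths.
        import numpy as np

        def reconstruction_errors(em_paths, ref_paths):
            """em_paths, ref_paths: lists of (N_i, 3) arrays of matched 3D points (mm)."""
            tip_errors, point_errors = [], []
            for em, ref in zip(em_paths, ref_paths):
                tip_errors.append(np.linalg.norm(em[0] - ref[0]))      # catheter tip
                point_errors.extend(np.linalg.norm(em - ref, axis=1))  # along the path
            return (np.mean(tip_errors), np.std(tip_errors),
                    np.mean(point_errors), np.std(point_errors))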

  7. Extracting, Tracking, and Visualizing Magnetic Flux Vortices in 3D Complex-Valued Superconductor Simulation Data.

    PubMed

    Guo, Hanqi; Phillips, Carolyn L; Peterka, Tom; Karpeyev, Dmitry; Glatz, Andreas

    2016-01-01

    We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible. PMID:26529730
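
    The core detection step can be illustrated on a structured grid: a flux vortex threads a grid plaquette when the phase of the complex order parameter winds by ±2π around it. The simplified 2D-slice version below is for illustration only and is not the authors' graph-based extraction.

        # Winding-number test for vortices in a 2D slice of a complex order parameter field.
        import numpy as np

        def vortex_plaquettes(psi):
            """psi: 2D complex array; returns the integer winding number per plaquette."""
            phase = np.angle(psi)

            def delta(a, b):
                # phase difference wrapped to (-pi, pi]
                return np.angle(np.exp(1j * (b - a)))

            w = (delta(phase[:-1, :-1], phase[:-1, 1:]) +
                 delta(phase[:-1, 1:],  phase[1:, 1:]) +
                 delta(phase[1:, 1:],   phase[1:, :-1]) +
                 delta(phase[1:, :-1],  phase[:-1, :-1]))
            return np.rint(w / (2.0 * np.pi)).astype(int)   # ±1 where a vortex pierces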

  8. Eye Tracking to Explore the Impacts of Photorealistic 3d Representations in Pedestrian Navigation Performance

    NASA Astrophysics Data System (ADS)

    Dong, Weihua; Liao, Hua

    2016-06-01

    Despite the now-ubiquitous two-dimensional (2D) map, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation: whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency and lower cognitive workload compared to traditional symbolic 2D maps remains unknown. This study explores whether photorealistic 3D representations can facilitate map reading and navigation in digital environments, using a lab-based eye tracking approach. We show the differences between symbolic 2D maps and photorealistic 3D representations based on users' eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective, less efficient and required a higher cognitive workload than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. These empirical results can help improve the usability of pedestrian navigation maps in future designs.

  9. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    NASA Astrophysics Data System (ADS)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.

  10. Towards robust 3D visual tracking for motion compensation in beating heart surgery.

    PubMed

    Richa, Rogério; Bó, Antônio P L; Poignet, Philippe

    2011-06-01

    In the context of minimally invasive cardiac surgery, active vision-based motion compensation schemes have been proposed to mitigate problems related to physiological motion. However, robust and accurate visual tracking remains a difficult task. This paper presents a robust visual tracking method that estimates the 3D temporal and spatial deformation of the heart surface using stereo endoscopic images. The novelty is the combination of a visual tracking method based on a Thin-Plate Spline (TPS) model, which represents the heart surface deformations, with a temporal heart motion model based on a time-varying dual Fourier series, which overcomes tracking disturbances or failures. The considerable improvements in tracking robustness in the presence of specular reflections and occlusions are demonstrated through experiments using images of in vivo porcine and human beating hearts. PMID:21277821
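
    In the spirit of the temporal model used to bridge tracking disturbances, the sketch below fits a truncated Fourier series to a quasi-periodic displacement signal and predicts future samples. A single series with a fixed fundamental frequency is assumed here; the paper's time-varying dual Fourier series is more elaborate.

        # Hedged sketch: least-squares fit of a truncated Fourier series to a
        # quasi-periodic signal, usable to predict motion during tracking dropouts.
        import numpy as np

        def fourier_design_matrix(t, f0, n_harmonics=3):
            cols = [np.ones_like(t)]
            for k in range(1, n_harmonics + 1):
                cols += [np.cos(2 * np.pi * k * f0 * t), np.sin(2 * np.pi * k * f0 * t)]
            return np.stack(cols, axis=1)

        def fit_fourier_series(t, y, f0, n_harmonics=3):
            A = fourier_design_matrix(t, f0, n_harmonics)
            coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
            return coeffs

        def predict(t_new, coeffs, f0, n_harmonics=3):
            return fourier_design_matrix(t_new, f0, n_harmonics) @ coeffs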

  11. A real-time 3D scanning system for pavement distortion inspection

    NASA Astrophysics Data System (ADS)

    Li, Qingguang; Yao, Ming; Yao, Xun; Xu, Bugao

    2010-01-01

    Pavement distortions, such as rutting and shoving, are common pavement distress problems that need to be inspected and repaired in a timely manner to ensure ride quality and traffic safety. This paper introduces a real-time, low-cost inspection system for detecting these distress features using high-speed 3D transverse scanning techniques. The detection principle is the dynamic generation and characterization of the 3D pavement profile based on structured light triangulation. To improve the accuracy of the system, a multi-view coplanar scheme is employed in the calibration procedure so that more feature points can be used and distributed across the field of view of the camera. A sub-pixel line extraction method is applied for laser stripe location, which includes filtering, edge detection and spline interpolation. The pavement transverse profile is then generated from the laser stripe curve and approximated by line segments. The second-order derivatives at the segment endpoints are used to identify the feature points of possible distortions. The system can output real-time measurements and 3D visualization of rutting and shoving distress in a scanned pavement.
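
    Two of the steps mentioned above can be sketched compactly: sub-pixel localization of the laser stripe in each image column (here by an intensity centroid rather than the paper's spline interpolation) and flagging of candidate distortion points where the second-order derivative of the transverse profile is large. The threshold value is illustrative.

        # Hedged sketch: sub-pixel stripe localization and second-derivative feature flagging.
        import numpy as np

        def stripe_centers(image):
            """Return the sub-pixel row of the laser stripe for every image column."""
            rows = np.arange(image.shape[0], dtype=np.float64)[:, None]
            weights = image.astype(np.float64)
            return (rows * weights).sum(axis=0) / np.maximum(weights.sum(axis=0), 1e-9)

        def distortion_candidates(profile_z, threshold=0.02):
            """Indices of transverse-profile points with a large second-order derivative."""
            d2 = np.gradient(np.gradient(profile_z))
            return np.flatnonzero(np.abs(d2) > threshold)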

  12. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.

  13. Development of a Wireless and Near Real-Time 3D Ultrasound Strain Imaging System.

    PubMed

    Chen, Zhaohong; Chen, Yongdong; Huang, Qinghua

    2016-04-01

    Ultrasound elastography is an important medical imaging tool for characterization of lesions. In this paper, we present a wireless and near real-time 3D ultrasound strain imaging system. It uses a 3D translating device to control a commercial linear ultrasound transducer to collect pre-compression and post-compression radio-frequency (RF) echo signal frames. The RF frames are wirelessly transferred to a high-performance server via a local area network (LAN). A dynamic programming strain estimation algorithm is implemented with the compute unified device architecture (CUDA) on the graphic processing unit (GPU) in the server to calculate the strain image after receiving a pre-compression RF frame and a post-compression RF frame at the same position. Each strain image is inserted into a strain volume which can be rendered in near real-time. We take full advantage of the translating device to precisely control the probe movement and compression. The GPU-based parallel computing techniques are designed to reduce the computation time. Phantom and in vivo experimental results demonstrate that our system can generate strain volumes with good quality and display an incrementally reconstructed volume image in near real-time. PMID:26954841

  14. Alignment of 3D Building Models and TIR Video Sequences with Line Tracking

    NASA Astrophysics Data System (ADS)

    Iwaszczuk, D.; Stilla, U.

    2014-11-01

    Thermal infrared imagery of urban areas has become of interest for urban climate investigations and thermal building inspections. Using a flying platform such as a UAV or a helicopter for the acquisition, and combining the thermal data with 3D building models via texturing, provides a valuable basis for large-area building inspections. However, such thermal textures are only useful for further analysis if they are extracted geometrically correctly. This requires good coregistration between the 3D building models and the thermal images, which cannot be achieved by direct georeferencing alone. Hence, this paper presents a methodology for the alignment of 3D building models and oblique TIR image sequences taken from a flying platform. In a single image, line correspondences between model edges and image line segments are found using an accumulator approach, and based on these correspondences an optimal camera pose is calculated to ensure the best match between the projected model and the image structures. Across the sequence, the linear features are tracked based on visibility prediction. The results of the proposed methodology are presented using a TIR image sequence taken from a helicopter over a densely built-up urban area. The novelty of this work lies in employing the uncertainty of the 3D building models and in an innovative tracking strategy based on a priori knowledge from the 3D building model and visibility checking.

  15. 3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.

    PubMed

    Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S

    2015-10-20

    Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates. PMID:26488641

  16. 2D array transducers for real-time 3D ultrasound guidance of interventional devices

    NASA Astrophysics Data System (ADS)

    Light, Edward D.; Smith, Stephen W.

    2009-02-01

    We describe catheter ring arrays for real-time 3D ultrasound guidance of devices such as vascular grafts, heart valves and vena cava filters. We have constructed several prototypes operating at 5 MHz and consisting of 54 elements using the W.L. Gore & Associates, Inc. micro-miniature ribbon cables. We have recently constructed a new transducer using a braided wiring technology from Precision Interconnect. This transducer consists of 54 elements at 4.8 MHz with pitch of 0.20 mm and typical -6 dB bandwidth of 22%. In all cases, the transducer and wiring assembly were integrated with an 11 French catheter of a Cook Medical deployment device for vena cava filters. Preliminary in vivo and in vitro testing is ongoing including simultaneous 3D ultrasound and x-ray fluoroscopy.

  17. 3D single-molecule tracking using one- and two-photon excitation microscopy

    NASA Astrophysics Data System (ADS)

    Liu, Cong; Perillo, Evan P.; Zhuang, Quincy; Huynh, Khang T.; Dunn, Andrew K.; Yeh, Hsin-Chih

    2014-03-01

    Three dimensional single-molecule tracking (3D-SMT) has revolutionized the way we study fundamental cellular processes. By analyzing the spatial trajectories of individual molecules (e.g. a receptor or a signaling molecule) in 3D space, one can discern the internalization or transport dynamics of these molecules, study the heterogeneity of subcellular structures, and elucidate the complex spatiotemporal regulation mechanisms. Sub-diffraction localization precision, sub-millisecond temporal resolution and tens-of-seconds observation period are the benchmarks of current 3D-SMT techniques. We have recently built two molecular tracking systems in our labs. The first system is a previously reported confocal tracking system, which we denote as the 1P-1E-4D (one-photon excitation, one excitation beam, and four fiber-coupled detectors) system. The second system is a whole new design that is based on two-photon excitation, which we denote as the 2P-4E-1D (two-photon excitation, four excitation beams, and only one detector) system. Here we compare these two systems based on Monte Carlo simulation of tracking a diffusing fluorescent molecule. Through our simulation, we have characterized the limitation of individual systems and optimized the system parameters such as magnification, z-plane separation, and feedback gains.

  18. Demonstration of digital hologram recording and 3D-scenes reconstruction in real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Kulakov, Mikhail N.; Kurbatova, Ekaterina A.; Molodtsov, Dmitriy Y.; Rodin, Vladislav G.

    2016-04-01

    Digital holography is a technique for reconstructing information about 2D objects and 3D scenes. This is achieved by registering the interference pattern formed by two beams, an object beam and a reference beam. The pattern registered by a digital camera is then processed to obtain the amplitude and phase of the object beam. Reconstruction of the shape of 2D objects and 3D scenes can be performed numerically (using a computer) or optically (using spatial light modulators, SLMs). In this work, a Megaplus II ES11000 camera was used for digital hologram recording. The camera has 4008 × 2672 pixels with sizes of 9 μm × 9 μm. For hologram recording, a 50 mW frequency-doubled Nd:YAG laser with a wavelength of 532 nm was used. A liquid crystal on silicon SLM, HoloEye PLUTO VIS, was used for optical reconstruction of the digital holograms. The SLM has 1920 × 1080 pixels with sizes of 8 μm × 8 μm. For object reconstruction, a 10 mW He-Ne laser with a wavelength of 632.8 nm was used. The setups for digital hologram recording and their optical reconstruction with the SLM were combined as follows. The MegaPlus Central Control Software allows frames registered by the camera to be displayed on the computer monitor with a small delay, and the SLM can act as an additional monitor, so the registered frames can be shown on the SLM display in near real time. Thus, recording and reconstruction of the 3D scenes was obtained in real time. The resolution of the displayed frames was chosen to equal that of the SLM; the number of pixels was limited by the SLM resolution and the frame rate by that of the camera. This holographic video setup was applied without additional program implementations that would increase time delays between hologram recording and object reconstruction. The setup was demonstrated for the reconstruction of 3D scenes.

  19. A Real-time 3D Visualization of Global MHD Simulation for Space Weather Forecasting

    NASA Astrophysics Data System (ADS)

    Murata, K.; Matsuoka, D.; Kubo, T.; Shimazu, H.; Tanaka, T.; Fujita, S.; Watari, S.; Miyachi, H.; Yamamoto, K.; Kimura, E.; Ishikura, S.

    2006-12-01

    Recently, many satellites for communication networks and scientific observation have been launched into the vicinity of the Earth (geo-space). The electromagnetic (EM) environment around these spacecraft is constantly influenced by the solar wind blowing from the Sun and by induced electromagnetic fields, which occasionally cause troubles or damage, such as charging and interference. It is therefore important to forecast the geo-space EM environment, just as it is to forecast the weather on the ground. Owing to the recent remarkable progress of supercomputer technologies, numerical simulations have become powerful research methods in solar-terrestrial physics. For space weather forecasting, NICT (National Institute of Information and Communications Technology) has developed a real-time global MHD simulation system of solar wind-magnetosphere-ionosphere coupling, which runs on an SX-6 supercomputer. Real-time solar wind parameters from the ACE spacecraft, sampled every minute, are adopted as boundary conditions for the simulation. Simulation results (2-D plots) are updated every minute on a NICT website. However, 3D visualization of the simulation results is indispensable for forecasting space weather more accurately. In the present study, we developed a real-time 3D website for the global MHD simulations. The 3-D visualizations of the simulation results are updated every 20 minutes in the following three formats: (1) streamlines of magnetic field lines, (2) isosurfaces of temperature in the magnetosphere, and (3) isolines of conductivity and orthogonal planes of potential in the ionosphere. For this study, we developed a 3-D viewer application running in the Internet Explorer browser (ActiveX), implemented on AVS/Express. Numerical data are saved in HDF5 format files every minute. Users can easily search, retrieve and plot past simulation results (3D visualization data and numerical data) by using

  20. Drogue tracking using 3D flash lidar for autonomous aerial refueling

    NASA Astrophysics Data System (ADS)

    Chen, Chao-I.; Stettner, Roger

    2011-06-01

    Autonomous aerial refueling (AAR) is an important capability for an unmanned aerial vehicle (UAV) to increase its flying range and endurance without increasing its size. This paper presents a novel tracking method that utilizes both 2D intensity and 3D point-cloud data acquired with a 3D Flash LIDAR sensor to establish relative position and orientation between the receiver vehicle and drogue during an aerial refueling process. Unlike classic, vision-based sensors, a 3D Flash LIDAR sensor can provide 3D point-cloud data in real time without motion blur, in the day or night, and is capable of imaging through fog and clouds. The proposed method segments out the drogue through 2D analysis and estimates the center of the drogue from 3D point-cloud data for flight trajectory determination. A level-set front propagation routine is first employed to identify the target of interest and establish its silhouette information. Sufficient domain knowledge, such as the size of the drogue and the expected operable distance, is integrated into our approach to quickly eliminate unlikely target candidates. A statistical analysis along with a random sample consensus (RANSAC) is performed on the target to reduce noise and estimate the center of the drogue after all 3D points on the drogue are identified. The estimated center and drogue silhouette serve as the seed points to efficiently locate the target in the next frame.
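
    The RANSAC-based center estimation can be illustrated with a robust consensus sketch: random subsets of the segmented drogue points propose a center, points within an expected radius band count as inliers, and the best consensus set defines the final center. The radius band and iteration count are assumptions, not values from the paper.

        # Hedged sketch: RANSAC-style robust center estimate for a segmented 3D point cloud.
        import numpy as np

        def robust_center(points, r_min=0.2, r_max=0.45, iters=200, rng=None):
            rng = np.random.default_rng() if rng is None else rng
            best_center, best_inliers = points.mean(axis=0), 0
            for _ in range(iters):
                sample = points[rng.choice(len(points), size=10, replace=False)]
                center = sample.mean(axis=0)
                dist = np.linalg.norm(points - center, axis=1)
                mask = (dist > r_min) & (dist < r_max)        # expected drogue radius band
                if np.count_nonzero(mask) > best_inliers:
                    best_center = points[mask].mean(axis=0)   # refine from consensus set
                    best_inliers = np.count_nonzero(mask)
            return best_center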

  1. A Comprehensive Software System for Interactive, Real-time, Visual 3D Deterministic and Stochastic Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Li, S.

    2002-05-01

    Taking advantage of recent developments in groundwater modeling research and in computer, image and graphics processing and object-oriented programming technologies, Dr. Li and his research group have recently developed a comprehensive software system for unified deterministic and stochastic groundwater modeling. Characterized by a new real-time modeling paradigm and improved computational algorithms, the software simulates 3D unsteady flow and reactive transport in general groundwater formations subject to both systematic and "randomly" varying stresses and geological and chemical heterogeneity. The software system has the following distinct features and capabilities: interactive simulation and real-time visualization and animation of flow in response to deterministic as well as stochastic stresses; interactive, visual, and real-time particle tracking, random walk, and reactive plume modeling in both systematically and randomly fluctuating flow; interactive statistical inference, scattered data interpolation, regression, and ordinary and universal kriging, conditional and unconditional simulation; real-time, visual and parallel conditional flow and transport simulations; interactive water and contaminant mass balance analysis with visual, real-time flux updates; interactive, visual, and real-time monitoring of head and flux hydrographs and concentration breakthroughs; real-time modeling and visualization of aquifer transitions from confined to unconfined to partially de-saturated or completely dry and rewetting; simultaneous and embedded subscale models with automatic, real-time regional-to-local data extraction; multiple subscale flow and transport models; real-time modeling of steady and transient vertical flow patterns on multiple arbitrarily shaped cross-sections with simultaneous visualization of aquifer stratigraphy, properties, hydrological features (rivers, lakes, wetlands, wells, drains, surface seeps), and dynamically adjusted surface flooding area

  2. A Real-time, 3D Musculoskeletal Model for Dynamic Simulation of Arm Movements

    PubMed Central

    Chadwick, Edward K.; Blana, Dimitra; van den Bogert, Antonie J.; Kirsch, Robert F.

    2010-01-01

    Neuroprostheses can be used to restore movement of the upper limb in individuals with high-level spinal cord injury. Development and evaluation of command and control schemes for such devices typically requires real-time, “patient-in-the-loop” experimentation. A real-time, three-dimensional, musculoskeletal model of the upper limb has been developed for use in a simulation environment to allow such testing to be carried out non-invasively. The model provides real-time feedback of human arm dynamics that can be displayed to the user in a virtual reality environment. The model has a three degree-of-freedom gleno-humeral joint as well as elbow flexion/extension and pronation/supination, and contains 22 muscles of the shoulder and elbow divided into multiple elements. The model is able to run in real time on modest desktop hardware and demonstrates that a large-scale, 3D model can be made to run in real time. This is a prerequisite for a real-time, whole arm model that will form part of a dynamic arm simulator for use in the development, testing and user training of neural prosthesis systems. PMID:19272926

  3. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments.

    PubMed

    Szőke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-06-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation's lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers.IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid 1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry.This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors. PMID:24727389

  4. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking.

    PubMed

    Dettmer, Simon L; Keyser, Ulrich F; Pagliara, Stefano

    2014-02-01

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces. PMID:24593372
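
    The local mean-squared-displacement idea can be sketched as follows: displacements are grouped by the spatial bin in which they start, and an MSD-versus-lag curve is built per bin so that diffusivity (and, if desired, drift) can be estimated locally. Binning along a single axis and uniform time steps are simplifying assumptions made for the example.

        # Hedged sketch: local MSD-vs-lag curves from a single 3D trajectory.
        import numpy as np

        def local_msd(positions, bin_edges, max_lag=10):
            """positions: (N, 3) track sampled at uniform intervals; bins along the x axis."""
            bins = np.digitize(positions[:, 0], bin_edges)
            curves = {}
            for b in np.unique(bins):
                idx = np.flatnonzero(bins == b)
                msd = []
                for lag in range(1, max_lag + 1):
                    valid = idx[idx + lag < len(positions)]
                    if len(valid) == 0:
                        break
                    disp = positions[valid + lag] - positions[valid]
                    msd.append(np.mean(np.sum(disp ** 2, axis=1)))
                curves[int(b)] = np.array(msd)
            return curves   # slope of each curve vs lag time ~ 6*D_local for 3D diffusion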

  5. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-01

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.

  6. Local characterization of hindered Brownian motion by using digital video microscopy and 3D particle tracking

    SciTech Connect

    Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano

    2014-02-15

    In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.

  7. Measurement Matrix Optimization and Mismatch Problem Compensation for DLSLA 3-D SAR Cross-Track Reconstruction.

    PubMed

    Bao, Qian; Jiang, Chenglong; Lin, Yun; Tan, Weixian; Wang, Zhirui; Hong, Wen

    2016-01-01

    With a short linear array configured in the cross-track direction, downward looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3-D SAR) can obtain a 3-D image of an imaging scene. To improve the cross-track resolution, sparse recovery methods have been investigated in recent years. In the compressive sensing (CS) framework, reconstruction performance depends on the properties of the measurement matrix. This paper concerns techniques to optimize the measurement matrix and to deal with the measurement-matrix mismatch problem caused by off-grid scatterers. In the model of cross-track reconstruction, the measurement matrix is mainly determined by the configuration of the antenna phase centers (APCs); thus, two mutual-coherence-based criteria are proposed to optimize the APC configuration. On the other hand, to compensate for the mismatch problem of the measurement matrix, a sparse Bayesian inference based method is introduced into the cross-track reconstruction by jointly estimating the scatterers and the off-grid error. Experiments demonstrate the performance of the proposed APC configuration schemes and the proposed cross-track reconstruction method. PMID:27556471
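
    The mutual coherence figure of merit used to compare measurement matrices is the largest absolute normalized inner product between distinct columns (lower values generally favor sparse recovery); a generic implementation is sketched below.

        # Generic mutual coherence of a (real or complex) measurement matrix.
        import numpy as np

        def mutual_coherence(A):
            cols = A / np.linalg.norm(A, axis=0, keepdims=True)
            gram = np.abs(cols.conj().T @ cols)
            np.fill_diagonal(gram, 0.0)          # ignore self-correlation of each column
            return float(gram.max())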

  8. Fast parallel interferometric 3D tracking of numerous optically trapped particles and their hydrodynamic interaction.

    PubMed

    Ruh, Dominic; Tränkle, Benjamin; Rohrbach, Alexander

    2011-10-24

    Multi-dimensional, correlated particle tracking is a key technology to reveal dynamic processes in living and synthetic soft matter systems. In this paper we present a new method for tracking micron-sized beads in parallel and in all three dimensions - faster and more precise than existing techniques. Using an acousto-optic deflector and two quadrant-photo-diodes, we can track numerous optically trapped beads at up to tens of kHz with a precision of a few nanometers by back-focal plane interferometry. By time-multiplexing the laser focus, we can calibrate individually all traps and all tracking signals in a few seconds and in 3D. We show 3D histograms and calibration constants for nine beads in a quadratic arrangement, although trapping and tracking is easily possible for more beads also in arbitrary 2D arrangements. As an application, we investigate the hydrodynamic coupling and diffusion anomalies of spheres trapped in a 3 × 3 arrangement. PMID:22109012

  9. A 3D front tracking method on a CPU/GPU system

    SciTech Connect

    Bo, Wurigen; Grove, John

    2011-01-21

    We describe the method to port a sequential 3D interface tracking code to a GPU with CUDA. The interface is represented as a triangular mesh. Interface geometry properties and point propagation are performed on a GPU. Interface mesh adaptation is performed on a CPU. The convergence of the method is assessed from the test problems with given velocity fields. Performance results show overall speedups from 11 to 14 for the test problems under mesh refinement. We also briefly describe our ongoing work to couple the interface tracking method with a hydro solver.

  10. Vision-based endoscope tracking for 3D ultrasound image-guided surgical navigation.

    PubMed

    Yang, L; Wang, J; Ando, T; Kubota, A; Yamashita, H; Sakuma, I; Chiba, T; Kobayashi, E

    2015-03-01

    This work introduces a self-contained framework for endoscopic camera tracking by combining 3D ultrasonography with endoscopy. The approach can be readily incorporated into surgical workflows without installing external tracking devices. By fusing the ultrasound-constructed scene geometry with endoscopic vision, this integrated approach addresses issues related to initialization, scale ambiguity, and interest point inadequacy that may be faced by conventional vision-based approaches when applied to fetoscopic procedures. Vision-based pose estimations were demonstrated by phantom and ex vivo monkey placenta imaging. The potential contribution of this method may extend beyond fetoscopic procedures to include general augmented reality applications in minimally invasive procedures. PMID:25263644

  11. Error control in the set-up of stereo camera systems for 3d animal tracking

    NASA Astrophysics Data System (ADS)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.

  12. 3D silicon sensors with variable electrode depth for radiation hard high resolution particle tracking

    NASA Astrophysics Data System (ADS)

    Da Vià, C.; Borri, M.; Dalla Betta, G.; Haughton, I.; Hasi, J.; Kenney, C.; Povoli, M.; Mendicino, R.

    2015-04-01

    3D sensors, with electrodes micro-processed inside the silicon bulk using Micro-Electro-Mechanical System (MEMS) technology, were industrialized in 2012 and were installed in the first detector upgrade at the LHC, the ATLAS IBL in 2014. They are the radiation hardest sensors ever made. A new idea is now being explored to enhance the three-dimensional nature of 3D sensors by processing collecting electrodes at different depths inside the silicon bulk. This technique uses the electric field strength to suppress the charge collection effectiveness of the regions outside the p-n electrodes' overlap. Evidence of this property is supported by test beam data of irradiated and non-irradiated devices bump-bonded with pixel readout electronics and simulations. Applications include High-Luminosity Tracking in the high multiplicity LHC forward regions. This paper will describe the technical advantages of this idea and the tracking application rationale.

  13. Miniature stereoscopic video system provides real-time 3D registration and image fusion for minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav

    2007-02-01

    Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Many efforts are invested in providing the surgeon with this information in an intuitive & easy to use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real-time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using Visionsense camera, as a step toward fully automated multi-modality 3D registration.

  14. Registration of 2D cardiac images to real-time 3D ultrasound volumes for 3D stress echocardiography

    NASA Astrophysics Data System (ADS)

    Leung, K. Y. Esther; van Stralen, Marijn; Voormolen, Marco M.; van Burken, Gerard; Nemes, Attila; ten Cate, Folkert J.; Geleijnse, Marcel L.; de Jong, Nico; van der Steen, Antonius F. W.; Reiber, Johan H. C.; Bosch, Johan G.

    2006-03-01

    Three-dimensional (3D) stress echocardiography is a novel technique for diagnosing cardiac dysfunction, by comparing wall motion of the left ventricle under different stages of stress. For quantitative comparison of this motion, it is essential to register the ultrasound data. We propose an intensity-based rigid registration method to retrieve two-dimensional (2D) four-chamber (4C), two-chamber, and short-axis planes from the 3D data set acquired in the stress stage, using manually selected 2D planes in the rest stage as reference. The algorithm uses the Nelder-Mead simplex optimization to find the optimal transformation of one uniform scaling, three rotation, and three translation parameters. We compared registration using the SAD, SSD, and NCC metrics, performed on four resolution levels of a Gaussian pyramid. The registration's effectiveness was assessed by comparing the 3D positions of the registered apex and mitral valve midpoints and the 4C direction with the manually selected results. The registration was tested on data from 20 patients. Best results were found using the NCC metric on data downsampled by a factor of two: mean registration errors were 8.1 mm, 5.4 mm, and 8.0° in the apex position, mitral valve position, and 4C direction, respectively. The errors were close to the interobserver (7.1 mm, 3.8 mm, 7.4°) and intraobserver variability (5.2 mm, 3.3 mm, 7.0°), and better than the error before registration (9.4 mm, 9.0 mm, 9.9°). We demonstrated that the registration algorithm visually and quantitatively improves the alignment of rest and stress data sets, performing similarly to manual alignment. This will improve automated analysis in 3D stress echocardiography.
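
    The optimization loop described above can be pictured with a short Python sketch: it searches a scaling/rotation/translation transform (reduced here to 2D for brevity) that maximizes normalized cross correlation between a reference plane and a resampled plane. This is a minimal illustration under assumed NumPy/SciPy inputs, not the authors' implementation; the function names are hypothetical.

        # Minimal sketch: rigid 2D registration by maximizing normalized
        # cross correlation (NCC) with Nelder-Mead (illustrative only).
        import numpy as np
        from scipy import ndimage, optimize

        def ncc(a, b):
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float(np.mean(a * b))

        def resample(image, params):
            scale, angle, tx, ty = params
            c, s = np.cos(angle), np.sin(angle)
            matrix = np.array([[c, -s], [s, c]]) / scale   # inverse mapping
            return ndimage.affine_transform(image, matrix, offset=(ty, tx), order=1)

        def register(reference, moving, x0=(1.0, 0.0, 0.0, 0.0)):
            cost = lambda p: -ncc(reference, resample(moving, p))
            return optimize.minimize(cost, x0, method="Nelder-Mead").x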

  15. Implementation of real-time 3D image communication system using stereoscopic imaging and display scheme

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Chul; Kim, Dong-Kyu; Ko, Jung-Hwan; Kim, Eun-Soo

    2004-11-01

    In this paper, a new stereoscopic 3D imaging communication system for real-time teleconferencing applications is implemented using IEEE 1394 digital cameras, an Intel Xeon server computer system and Microsoft's DirectShow programming library, and its performance is analyzed in terms of image-grabbing frame rate. In the proposed system, two-view images are captured by two digital cameras and processed in the Intel Xeon server computer system. Disparity data are then extracted from them and transmitted, together with the left image, to the client system through an information network, and at the recipient the two-view images are reconstructed and displayed on the stereoscopic 3D display system. The program for controlling the overall system is developed using the Microsoft DirectShow SDK. Experimental results show that the proposed system can display stereoscopic images in real time with 16-bit full color at a frame rate of 15 fps.

  16. Real-Time 3D Magnetic Resonance Imaging of the Pharyngeal Airway in Sleep Apnea

    PubMed Central

    Kim, Yoon-Chul; Lebel, R. Marc; Wu, Ziyue; Davidson Ward, Sally L.; Khoo, Michael C.K.; Nayak, Krishna S.

    2014-01-01

    Purpose To investigate the feasibility of real-time 3D magnetic resonance imaging (MRI) with simultaneous recording of physiological signals for identifying sites of airway obstruction during natural sleep in pediatric patients with sleep-disordered breathing. Methods Experiments were performed using a three-dimensional Fourier transformation (3DFT) gradient echo sequence with prospective undersampling based on golden-angle radial spokes, and L1-norm regularized iterative self-consistent parallel imaging (L1-SPIRiT) reconstruction. This technique was demonstrated in three healthy adult volunteers and five pediatric patients with sleep-disordered breathing. External airway occlusion was used to induce partial collapse of the upper airway on inspiration and test the effectiveness of the proposed imaging method. Apneic events were identified using information available from synchronized recording of mask pressure and respiratory effort. Results Acceptable image quality was obtained in seven of eight subjects. Temporary airway collapse induced via inspiratory loading was successfully imaged in all three volunteers, with average airway volume reductions of 63.3%, 52.5%, and 33.7%. Central apneic events and associated airway narrowing/closure were identified in two pediatric patients. During central apneic events, airway obstruction was observed in the retropalatal region in one pediatric patient. Conclusion Real-time 3D MRI of the pharyngeal airway with synchronized recording of physiological signals is feasible and may provide valuable information about the sites and nature of airway narrowing/collapse during natural sleep. PMID:23788203

  17. Integrated endoscope for real-time 3D ultrasound imaging and hyperthermia: feasibility study.

    PubMed

    Pua, Eric C; Qiu, Yupeng; Smith, S W

    2007-01-01

    The goal of this research is to determine the feasibility of using a single endoscopic probe for the combined purpose of real-time 3D (RT3D) ultrasound imaging of a target organ and the delivery of ultrasound therapy to facilitate the absorption of compounds for cancer treatment. Recent research in ultrasound therapy has shown that ultrasound-mediated drug delivery improves absorption of treatments for prostate, cervical and esophageal cancer. The ability to combine ultrasound hyperthermia and 3D imaging could improve visualization and targeting of cancerous tissues. In this study, numerical modeling and experimental measurements were developed to determine the feasibility of combined therapy and imaging with a 1 cm diameter endoscopic RT3D probe with 504 transmitters and 252 receive channels. This device operates at 5 MHz and has a 6.3 mm x 6.3 mm aperture to produce real time 3D pyramidal scans of 60-120 degrees incorporating 64 x 64 = 4096 image lines at 30 volumes/sec interleaved with a 3D steerable therapy beam. A finite-element mesh was constructed with over 128,000 elements in LS-DYNA to simulate the induced temperature rise from our transducer with a 3 cm deep focus in tissue. Quarter-symmetry of the transducer was used to reduce mesh size and computation time. Based on intensity values calculated in Field II using the transducer's array geometry, a minimum I(SPTA) of 3.6 W/cm2 is required from our endoscope probe in order to induce a temperature rise of 4 degrees C within five minutes. Experimental measurements of the array's power output capabilities were conducted using a PVDF hydrophone placed 3 cm away from the face of the transducer in a watertank. Using a PDA14 Signatec data acquisition board to capture full volumes of transmitted ultrasound data, it was determined that the probe can presently maintain intensity values up to 2.4 W/cm2 over indefinite times for therapeutic applications combined with intermittent 3D scanning to maintain targeting

  18. Real-time 3D flight guidance with terrain for the X-38

    NASA Astrophysics Data System (ADS)

    Delgado, Frank J.; Abernathy, Michael F.; White, Janis; Lowrey, William H.

    1999-07-01

    The NASA Johnson Space Center is developing a series of prototype flight test vehicles leading to a functional Crew Return Vehicle (CRV). The development of these prototype vehicles, designated as the X-38 program, will demonstrate which technologies are needed to build an inexpensive, safe, and reliable spacecraft that can rapidly return astronauts from onboard the International Space Station (ISS) to earth. These vehicles are being built using an incremental approach and where appropriate, are taking advantage of advanced technologies that may help improve safety, decrease development costs, reduce development time, as well as outperform traditional technologies. This paper discusses the creation of real-time 3-D displays for flight guidance and situation awareness for the X-38 program. These displays feature the incorporation of real-time GPS position data, three-dimensional terrain models, heads-up display (HUD), and landing zone designations. The X-38 crew return vehicle is unique in several ways including that it does not afford the pilot a forward view through a wind screen, and utilizes a parafoil in the final flight phase. As a result, on-board displays to enhance situation awareness face challenges. While real-time flight visualization systems limited to running on high-end workstations have been created, only flight-rated Windows are available as platforms for the X-38 3-D displays. The system has been developed to meet this constraint, as well as those of cost, ease-of-use, reliability and extensibility. Because the X-38 is unpowered, and might be required to enter its landing phase from anywhere on orbit, the display must show, in real-time, and in 3 dimensions, the terrain, ideal and actual glide path, recommended landing areas, as well as typical heads-up information. Maps, such as aeronautical charts, and satellite imagery are optionally overlaid on the 3-D terrain model to provide additional situation awareness. We will present a component

  19. 3D Visualization of Monte-Carlo Simulation's of HZE Track Structure and Initial Chemical Species

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2009-01-01

    Heavy-ion biophysics is important for space radiation risk assessment [1] and hadron therapy [2]. The characteristics of heavy-ion tracks include a very high energy deposition region close to the track (<20 nm), denoted as the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to the biomolecules and the tissue matrix, considered the most important lesions for acute and long term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte-Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy-ion biophysics.

  20. Laser 3-D measuring system and real-time visual feedback for teaching and correcting breathing

    NASA Astrophysics Data System (ADS)

    Povšič, Klemen; Fležar, Matjaž; Možina, Janez; Jezeršek, Matija

    2012-03-01

    We present a novel method for real-time 3-D body-shape measurement during breathing based on the laser multiple-line triangulation principle. The laser projector illuminates the measured surface with a pattern of 33 equally inclined light planes. Simultaneously, the camera records the distorted light pattern from a different viewpoint. The acquired images are transferred to a personal computer, where the 3-D surface reconstruction, shape analysis, and display are performed in real time. The measured surface displacements are displayed with a color palette, which enables visual feedback to the patient while breathing is being taught. The measuring range is approximately 400×600×500 mm in width, height, and depth, respectively, and the accuracy of the calibrated apparatus is +/-0.7 mm. The system was evaluated by means of its capability to distinguish between different breathing patterns. The accuracy of the measured volumes of chest-wall deformation during breathing was verified using standard methods of volume measurements. The results show that the presented 3-D measuring system with visual feedback has great potential as a diagnostic and training assistance tool when monitoring and evaluating the breathing pattern, because it offers a simple and effective method of graphical communication with the patient.

  1. Web GIS in practice V: 3-D interactive and real-time mapping in Second Life.

    PubMed

    Boulos, Maged N Kamel; Burden, David

    2007-01-01

    This paper describes technologies from Daden Limited for geographically mapping and accessing live news stories/feeds, as well as other real-time, real-world data feeds (e.g., Google Earth KML feeds and GeoRSS feeds) in the 3-D virtual world of Second Life, by plotting and updating the corresponding Earth location points on a globe or some other suitable form (in-world), and further linking those points to relevant information and resources. This approach enables users to visualise, interact with, and even walk or fly through, the plotted data in 3-D. Users can also do the reverse: put pins on a map in the virtual world, and then view the data points on the Web in Google Maps or Google Earth. The technologies presented thus serve as a bridge between mirror worlds like Google Earth and virtual worlds like Second Life. We explore the geo-data display potential of virtual worlds and their likely convergence with mirror worlds in the context of the future 3-D Internet or Metaverse, and reflect on the potential of such technologies and their future possibilities, e.g. their use to develop emergency/public health virtual situation rooms to effectively manage emergencies and disasters in real time. The paper also covers some of the issues associated with these technologies, namely user interface accessibility and individual privacy. PMID:18042275

  3. Management of three-dimensional intrafraction motion through real-time DMLC tracking.

    PubMed

    Sawant, Amit; Venkat, Raghu; Srivastava, Vikram; Carlson, David; Povzner, Sergey; Cattell, Herb; Keall, Paul

    2008-05-01

    Tumor tracking using a dynamic multileaf collimator (DMLC) represents a promising approach for intrafraction motion management in thoracic and abdominal cancer radiotherapy. In this work, we develop, empirically demonstrate, and characterize a novel 3D tracking algorithm for real-time, conformal, intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT)-based radiation delivery to targets moving in three dimensions. The algorithm obtains real-time information of target location from an independent position monitoring system and dynamically calculates MLC leaf positions to account for changes in target position. Initial studies were performed to evaluate the geometric accuracy of DMLC tracking of 3D target motion. In addition, dosimetric studies were performed on a clinical linac to evaluate the impact of real-time DMLC tracking for conformal, step-and-shoot (S-IMRT), dynamic (D-IMRT), and VMAT deliveries to a moving target. The efficiency of conformal and IMRT delivery in the presence of tracking was determined. Results show that submillimeter geometric accuracy in all three dimensions is achievable with DMLC tracking. Significant dosimetric improvements were observed in the presence of tracking for conformal and IMRT deliveries to moving targets. A gamma index evaluation with a 3%-3 mm criterion showed that deliveries without DMLC tracking exhibit between 1.7 (S-IMRT) and 4.8 (D-IMRT) times more dose points that fail the evaluation compared to corresponding deliveries with tracking. The efficiency of IMRT delivery, as measured in the lab, was observed to be significantly lower in case of tracking target motion perpendicular to MLC leaf travel compared to motion parallel to leaf travel. Nevertheless, these early results indicate that accurate, real-time DMLC tracking of 3D tumor motion is feasible and can potentially result in significant geometric and dosimetric advantages leading to more effective management of intrafraction motion. PMID
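
    As a rough illustration of the leaf-repositioning idea, the following Python sketch translates the planned leaf tips by the component of target motion parallel to leaf travel and re-indexes whole leaf pairs for motion perpendicular to leaf travel. This is a simplified, hypothetical sketch, not the published algorithm or any vendor API; all names are assumptions.

        # Simplified illustration (hypothetical names, not the published
        # algorithm): shift planned leaf positions to follow a moving target.
        import numpy as np

        def track_leaves(left_bank, right_bank, leaf_width, target_shift):
            """left_bank, right_bank: planned leaf-tip positions along leaf travel (mm);
            target_shift: (dx, dy) target displacement in the MLC plane (mm)."""
            dx, dy = target_shift
            # Motion parallel to leaf travel: translate every leaf tip by dx.
            new_left = left_bank + dx
            new_right = right_bank + dx
            # Motion perpendicular to leaf travel: re-index leaf pairs by whole
            # leaf widths (the residual sub-leaf-width shift is a known limitation).
            shift_pairs = int(round(dy / leaf_width))
            new_left = np.roll(new_left, shift_pairs)
            new_right = np.roll(new_right, shift_pairs)
            return new_left, new_right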

  4. Simultaneous real-time 3D photoacoustic tomography and EEG for neurovascular coupling study in an animal model of epilepsy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Xiao, Jiaying; Jiang, Huabei

    2014-08-01

    Objective. Neurovascular coupling in epilepsy is poorly understood; its study requires simultaneous monitoring of hemodynamic changes and neural activity in the brain. Approach. Here, for the first time, we present a combined real-time 3D photoacoustic tomography (PAT) and electrophysiology/electroencephalography (EEG) system for the study of neurovascular coupling in epilepsy, whose ability was demonstrated with a pentylenetetrazol (PTZ) induced generalized seizure model in rats. Two groups of experiments were carried out with different wavelengths to detect the changes of oxy-hemoglobin (HbO2) and deoxy-hemoglobin (HbR) signals in the rat brain. We extracted the average PAT signals of the superior sagittal sinus (SSS), and compared them with the EEG signal. Main results. Results showed that the seizure process can be divided into three stages. A ‘dip’ lasting for 1-2 min in the first stage and the subsequent hyperperfusion in the second stage were observed. The HbO2 signal and the HbR signal were generally negatively correlated. The change of blood flow was also estimated. All the results acquired here were in accordance with other published results. Significance. Compared to other existing functional neuroimaging tools, the method proposed here enables reliable tracking of hemodynamic signals with both high spatial and high temporal resolution in 3D, so it is more suitable for neurovascular coupling studies of epilepsy.

  5. Tornado-like appearance of spontaneous echo contrast assessed by real-time 3D transesophageal echocardiography.

    PubMed

    Otani, Kyoko; Takeuchi, Masaaki; Nakai, Hiromi; Kaku, Kyoko; Haruki, Nobuhiko; Yoshitani, Hidetoshi; Otsuji, Yutaka

    2009-06-01

    We report a case showing that real-time 3D transesophageal echocardiography provides unique information about the dynamic nature of spontaneous echo contrast (SEC) in 3D space and has the potential to provide better understanding of SEC. PMID:27278229

  6. 3D shape tracking of minimally invasive medical instruments using optical frequency domain reflectometry

    NASA Astrophysics Data System (ADS)

    Parent, Francois; Kanti Mandal, Koushik; Loranger, Sebastien; Watanabe Fernandes, Eric Hideki; Kashyap, Raman; Kadoury, Samuel

    2016-03-01

    We propose here a new alternative for providing real-time device tracking during minimally invasive interventions using a truly distributed strain sensor based on optical frequency domain reflectometry (OFDR) in optical fibers. The guidance of minimally invasive medical instruments such as needles or catheters (e.g. by adding a piezoelectric coating) has been the focus of extensive research in the past decades. Real-time tracking of instruments in medical interventions facilitates image guidance and helps the user reach a pre-localized target more precisely. Image-guided systems using ultrasound imaging and shape sensors based on fiber Bragg gratings (FBG) embedded in optical fibers can provide retroactive feedback to the user in order to reach the targeted areas with even more precision. However, ultrasound imaging with electromagnetic tracking cannot be used in the magnetic resonance imaging (MRI) suite, while shape sensors based on FBGs embedded in optical fibers provide discrete values of the instrument position, which requires approximations to be made to evaluate its global shape. This is why a truly distributed strain sensor based on OFDR could enhance the tracking accuracy. In both cases, since the strain is proportional to the curvature (the inverse of the radius of curvature) of the fiber, a strain sensor can provide the three-dimensional shape of medical instruments by simply inserting fibers inside the devices. To faithfully follow the shape of the needle in the tracking frame, three fibers glued in a specific geometry are used, providing three degrees of freedom along the fiber. Near real-time tracking of medical instruments is thus obtained, offering clear advantages for clinical monitoring in remotely controlled catheter or needle guidance. We present results demonstrating the promising aspects of this approach as well as the limitations of the OFDR technique.
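
    Because bending strain in a fiber offset from the needle axis is proportional to curvature, three fibers at known angles allow the curvature vector to be solved at every sample along the length and then integrated into a centerline. The Python sketch below illustrates that reconstruction under simplifying assumptions (small-angle integration, arbitrary sign conventions, hypothetical variable names); it is not the authors' processing chain.

        # Illustrative only: curvature from three fiber strains, then a
        # small-angle integration of the curvature profile into a centerline.
        import numpy as np

        def curvature_from_strains(strains, r, angles=(0.0, 2*np.pi/3, 4*np.pi/3)):
            """strains: (N, 3) strain samples along the needle; r: fiber offset
            from the neutral axis. Solves eps_i = -r*(kx*cos(a_i) + ky*sin(a_i))."""
            A = -r * np.column_stack([np.cos(angles), np.sin(angles)])
            k, *_ = np.linalg.lstsq(A, np.asarray(strains).T, rcond=None)
            return k.T  # (N, 2) curvature components at each sample

        def integrate_centerline(kappa, ds):
            """Integrate the curvature profile into a 3D curve (sign conventions
            are arbitrary here; a real implementation also needs calibration)."""
            p, t = np.zeros(3), np.array([0.0, 0.0, 1.0])
            points = [p.copy()]
            for kx, ky in kappa:
                omega = np.array([ky, -kx, 0.0]) * ds   # rotation over one arc step
                t = t + np.cross(omega, t)
                t /= np.linalg.norm(t)
                p = p + t * ds
                points.append(p.copy())
            return np.array(points)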

  7. 3D motion tracking of the heart using Harmonic Phase (HARP) isosurfaces

    NASA Astrophysics Data System (ADS)

    Soliman, Abraam S.; Osman, Nael F.

    2010-03-01

    Tags are non-invasive features induced in the heart muscle that enable the tracking of heart motion. Each tag line, in fact, corresponds to a 3D tag surface that deforms with the heart muscle during the cardiac cycle. Tracking of tag surface deformation is useful for the analysis of left ventricular motion. Cardiac material markers (Kerwin et al, MIA, 1997) can be obtained from the intersections of orthogonal surfaces, which can be reconstructed from short- and long-axis tagged images. The proposed method uses the Harmonic Phase (HARP) method to track tag lines corresponding to a specific harmonic phase value, and the reconstruction of grid tag surfaces is then achieved by a Delaunay triangulation-based interpolation of the sparse tag points. Using three different tag orientations from short- and long-axis images, the proposed method showed the deformation of 3D tag surfaces during the cardiac cycle. Previous work on tag surface reconstruction was restricted to the "dark" tag lines; however, the use of HARP as proposed enables the reconstruction of isosurfaces based on their harmonic phase values. The use of HARP also provides a fast and accurate way of identifying and tracking tag lines, and hence of generating the surfaces.
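
    The surface-reconstruction step can be illustrated with a brief Python sketch: scattered tracked tag points are interpolated onto a regular grid, with SciPy's griddata performing the Delaunay triangulation internally. Inputs and names are assumptions for illustration, not the authors' code.

        # Illustrative only: interpolate sparse tag points onto a grid
        # (griddata triangulates the scattered points with Delaunay internally).
        import numpy as np
        from scipy.interpolate import griddata

        def reconstruct_tag_surface(points, values, grid_size=64):
            """points: (N, 2) in-plane coordinates of tracked tag points;
            values: (N,) out-of-plane coordinate of the same points."""
            xi = np.linspace(points[:, 0].min(), points[:, 0].max(), grid_size)
            yi = np.linspace(points[:, 1].min(), points[:, 1].max(), grid_size)
            gx, gy = np.meshgrid(xi, yi)
            surface = griddata(points, values, (gx, gy), method="linear")
            return gx, gy, surface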

  8. Realistic 3D Terrain Roaming and Real-Time Flight Simulation

    NASA Astrophysics Data System (ADS)

    Que, Xiang; Liu, Gang; He, Zhenwen; Qi, Guang

    2014-12-01

    This paper presents an integrated method that provides access to the current flight status and a dynamic, visible scanning topography in order to enhance interactivity during terrain roaming and real-time flight simulation. An algorithm that integrates digital elevation model and digital ortho-photo map data is proposed as the basis for building a realistic 3D terrain scene. A new technique based on render-to-texture and a head-up display is used to generate the navigation pane. During flight simulation, in order to eliminate flying "jumps", we employ multidimensional linear interpolation to adjust the camera parameters dynamically and smoothly. Meanwhile, based on the principle of scanning laser imaging, we draw pseudo-color figures by scanning the topography in different directions according to the real-time flight status. Simulation results demonstrate that the proposed algorithm is promising for practical applications and that the method can improve the visual effect and enhance dynamic interaction during real-time flight.

  9. Meanie3D - a mean-shift based, multivariate, multi-scale clustering and tracking algorithm

    NASA Astrophysics Data System (ADS)

    Simon, Jürgen-Lorenz; Malte, Diederich; Silke, Troemel

    2014-05-01

    Project OASE is one of five working groups at HErZ (Hans Ertel Centre for Weather Research), an ongoing effort by the German weather service (DWD) to further weather-prediction research at universities. The goal of project OASE is to gain an object-based perspective on convective events by identifying them early in the onset of convective initiation and following them through the entire lifecycle. The ability to follow objects in this fashion requires new ways of object definition and tracking, which incorporate all the available data sets of interest, such as satellite imagery, weather radar or lightning counts. The Meanie3D algorithm provides the necessary tool for this purpose. Core features of this new approach to clustering (object identification) and tracking are the ability to identify objects using the mean-shift algorithm applied to a multitude of variables (multivariate), as well as the ability to detect objects on various scales (multi-scale) using elements of scale-space theory. The algorithm works in 2D as well as 3D without modifications. It is an extension of a method well known from the fields of computer vision and image processing, which has been tailored to serve the needs of the meteorological community. In spite of the special application demonstrated here (convective initiation), the algorithm is easily tailored to provide clustering and tracking for a wide class of data sets and problems. In this talk, the demonstration is carried out on two of the OASE group's own composite data sets: a 2D nationwide composite of Germany including C-band radar (2D) and satellite information, and a 3D local composite of the Bonn/Jülich area containing a high-resolution 3D X-band radar composite.
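
    The clustering idea can be sketched in a few lines of Python using an off-the-shelf mean-shift implementation applied to a multivariate feature vector (grid coordinates plus co-located field values). This is a hedged illustration of the principle, not the Meanie3D code; feature scaling and bandwidth choice matter in practice.

        # Illustrative only: mean-shift clustering of multivariate gridded data.
        import numpy as np
        from sklearn.cluster import MeanShift

        def cluster_features(coords, fields, bandwidth=2.0):
            """coords: (N, d) grid-point coordinates; fields: (N, k) variables.
            In practice each column should be scaled to comparable ranges."""
            features = np.hstack([coords, fields])
            return MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(features)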

  10. High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories

    PubMed Central

    Su, Ting-Wei; Xue, Liang; Ozcan, Aydogan

    2012-01-01

    Dynamic tracking of human sperms across a large volume is a challenging task. To provide a high-throughput solution to this important need, here we describe a lensfree on-chip imaging technique that can track the three-dimensional (3D) trajectories of > 1,500 individual human sperms within an observation volume of approximately 8–17 mm3. This computational imaging platform relies on holographic lensfree shadows of sperms that are simultaneously acquired at two different wavelengths, emanating from two partially-coherent sources that are placed at 45° with respect to each other. This multiangle and multicolor illumination scheme permits us to dynamically track the 3D motion of human sperms across a field-of-view of > 17 mm2 and depth-of-field of approximately 0.5–1 mm with submicron positioning accuracy. The large statistics provided by this lensfree imaging platform revealed that only approximately 4–5% of the motile human sperms swim along well-defined helices and that this percentage can be significantly suppressed under seminal plasma. Furthermore, among these observed helical human sperms, a significant majority (approximately 90%) preferred right-handed helices over left-handed ones, with a helix radius of approximately 0.5–3 μm, a helical rotation speed of approximately 3–20 rotations/s and a linear speed of approximately 20–100 μm/s. This high-throughput 3D imaging platform could in general be quite valuable for observing the statistical swimming patterns of various other microorganisms, leading to new insights in their 3D motion and the underlying biophysics. PMID:22988076

  11. Automated 3-D tracking of centrosomes in sequences of confocal image stacks.

    PubMed

    Kerekes, Ryan A; Gleason, Shaun S; Trivedi, Niraj; Solecki, David J

    2009-01-01

    In order to facilitate the study of neuron migration, we propose a method for 3-D detection and tracking of centrosomes in time-lapse confocal image stacks of live neuron cells. We combine Laplacian-based blob detection, adaptive thresholding, and the extraction of scale and roundness features to find centrosome-like objects in each frame. We link these detections using the joint probabilistic data association filter (JPDAF) tracking algorithm with a Newtonian state-space model tailored to the motion characteristics of centrosomes in live neurons. We apply our algorithm to image sequences containing multiple cells, some of which had been treated with motion-inhibiting drugs. We provide qualitative results and quantitative comparisons to manual segmentation and tracking results showing that our average motion estimates agree to within 13% of those computed manually by neurobiologists. PMID:19964725
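
    The detection stage (Laplacian-based blob detection followed by thresholding) can be sketched with scikit-image as below; the JPDAF linking stage is not shown. Parameter values and names are illustrative assumptions, not those of the paper.

        # Illustrative only: Laplacian-of-Gaussian blob detection on one
        # 3D stack; the JPDAF linking stage is not shown.
        import numpy as np
        from skimage.feature import blob_log

        def detect_centrosome_candidates(volume, min_sigma=1, max_sigma=4,
                                         threshold=0.05):
            """volume: 3D image stack for one time point.
            Returns an array of candidate blobs as (z, y, x, sigma) rows."""
            return blob_log(volume, min_sigma=min_sigma, max_sigma=max_sigma,
                            threshold=threshold)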

  12. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  13. A brain-computer interface method combined with eye tracking for 3D interaction.

    PubMed

    Lee, Eui Chul; Woo, Jin Cheol; Kim, Jong Hwa; Whang, Mincheol; Park, Kang Ryoung

    2010-07-15

    With the recent increase in the number of three-dimensional (3D) applications, the need for interfaces to these applications has increased. Although the eye tracking method has been widely used as an interaction interface for hand-disabled persons, this approach cannot be used for depth directional navigation. To solve this problem, we propose a new brain computer interface (BCI) method in which the BCI and eye tracking are combined to analyze depth navigation, including selection and two-dimensional (2D) gaze direction, respectively. The proposed method is novel in the following five ways compared to previous works. First, a device to measure both the gaze direction and an electroencephalogram (EEG) pattern is proposed with the sensors needed to measure the EEG attached to a head-mounted eye tracking device. Second, the reliability of the BCI interface is verified by demonstrating that there is no difference between the real and the imaginary movements for the same work in terms of the EEG power spectrum. Third, depth control for the 3D interaction interface is implemented by an imaginary arm reaching movement. Fourth, a selection method is implemented by an imaginary hand grabbing movement. Finally, for the independent operation of gazing and the BCI, a mode selection method is proposed that measures a user's concentration by analyzing the pupil accommodation speed, which is not affected by the operation of gazing and the BCI. According to experimental results, we confirmed the feasibility of the proposed 3D interaction method using eye tracking and a BCI. PMID:20580646

  14. Real-time 3D curved needle segmentation using combined B-mode and power Doppler ultrasound.

    PubMed

    Greer, Joseph D; Adebar, Troy K; Hwang, Gloria L; Okamura, Allison M

    2014-01-01

    This paper presents a real-time segmentation method for curved needles in biological tissue based on analysis of B-mode and power Doppler images from a tracked 2D ultrasound transducer. Mechanical vibration induced by an external voice coil results in a Doppler response along the needle shaft, which is centered around the needle section in the ultrasound image. First, B-mode image analysis is performed within regions of interest indicated by the Doppler response to create a segmentation of the needle section in the ultrasound image. Next, each needle section is decomposed into a sequence of points and transformed into a global coordinate system using the tracked transducer pose. Finally, the 3D shape is reconstructed from these points. The results of this method differ from manual segmentation by 0.71 ± 0.55 mm in needle tip location and 0.38 ± 0.27 mm along the needle shaft. This method is also fast, taking 5-10 ms to run on a standard PC, and is particularly advantageous in robotic needle steering, which involves thin, curved needles with poor echogenicity. PMID:25485402
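
    The coordinate-transformation step, mapping segmented 2D needle points into a global frame with the tracked transducer pose, can be written compactly; the Python sketch below assumes a 4x4 pose matrix and pixel spacing as inputs and uses hypothetical names.

        # Illustrative only (hypothetical names): map segmented 2D needle
        # points into the global frame using the tracked transducer pose.
        import numpy as np

        def image_points_to_world(points_px, pixel_spacing, T_world_from_image):
            """points_px: (N, 2) pixel coordinates of the needle section;
            pixel_spacing: (sx, sy) in mm/pixel; T_world_from_image: 4x4 pose."""
            n = len(points_px)
            pts = np.zeros((n, 4))
            pts[:, 0] = points_px[:, 0] * pixel_spacing[0]
            pts[:, 1] = points_px[:, 1] * pixel_spacing[1]
            pts[:, 3] = 1.0   # homogeneous coordinates; the image lies in z = 0
            return (T_world_from_image @ pts.T).T[:, :3]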

  15. Real-time 3D visualization of the thoraco-abdominal surface during breathing with body movement and deformation extraction.

    PubMed

    Povšič, K; Jezeršek, M; Možina, J

    2015-07-01

    Real-time 3D visualization of the breathing displacements can be a useful diagnostic tool in order to immediately observe the most active regions on the thoraco-abdominal surface. The developed method is capable of separating non-relevant torso movement and deformations from the deformations that are solely related to breathing. This makes it possible to visualize only the breathing displacements. The system is based on the structured laser triangulation principle, with simultaneous spatial and color data acquisition of the thoraco-abdominal region. Based on the tracking of the attached passive markers, the torso movement and deformation is compensated using rigid and non-rigid transformation models on the three-dimensional (3D) data. The total time of 3D data processing together with visualization equals 20 ms per cycle. In vitro verification of the rigid movement extraction was performed using the iterative closest point algorithm as a reference. Furthermore, a volumetric evaluation on a live subject was performed to establish the accuracy of the rigid and non-rigid model. The root mean square deviation between the measured and the reference volumes shows an error of ±0.08 dm³ for rigid movement extraction. Similarly, the error was calculated to be ±0.02 dm³ for torsional deformation extraction and ±0.11 dm³ for lateral bending deformation extraction. The results confirm that during the torso movement and deformation, the proposed method is sufficiently accurate to visualize only the displacements related to breathing. The method can be used, for example, during the breathing exercise on an indoor bicycle or a treadmill.
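
    The rigid part of such a compensation can be illustrated by fitting a rotation and translation to the tracked marker positions (Kabsch algorithm) and applying it to the measured surface; the non-rigid models are not shown. The Python sketch below is a generic illustration under assumed inputs, not the authors' implementation.

        # Illustrative only: rigid (Kabsch) fit of tracked markers to their
        # reference positions, then removal of that motion from the surface.
        import numpy as np

        def rigid_fit(markers, reference):
            """markers, reference: (N, 3) corresponding 3D marker positions."""
            cm, cr = markers.mean(axis=0), reference.mean(axis=0)
            H = (markers - cm).T @ (reference - cr)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            t = cr - R @ cm
            return R, t

        def remove_rigid_motion(surface, R, t):
            """Map measured surface points back into the reference pose."""
            return (R @ surface.T).T + t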

  16. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    SciTech Connect

    Brix, Lau; Ringgaard, Steffen; Sørensen, Thomas Sangild; Poulsen, Per Rugaard

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. Results: Axial, sagittal
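
    Step (3) of such a scheme, scoring a template library against each live 2D image by normalized cross correlation and keeping the best match, can be sketched as follows. This is a minimal Python illustration with hypothetical inputs; each template is assumed to carry the known through-plane offset of the tracked structure.

        # Illustrative only: score a template library against a live image by
        # normalized cross correlation and keep the best in-plane match.
        import numpy as np
        from skimage.feature import match_template

        def best_template_match(image, templates):
            """templates: list of (template_2d, through_plane_offset) pairs."""
            best = None
            for tmpl, offset in templates:
                ncc_map = match_template(image, tmpl)
                ij = np.unravel_index(np.argmax(ncc_map), ncc_map.shape)
                score = ncc_map[ij]
                if best is None or score > best[0]:
                    best = (score, ij, offset)
            return best  # (NCC score, in-plane position, through-plane offset)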

  17. Using natural versus artificial stimuli to perform calibration for 3D gaze tracking

    NASA Astrophysics Data System (ADS)

    Maggia, Christophe; Guyader, Nathalie; Guérin-Dugué, Anne

    2013-03-01

    The presented study tests which type of stereoscopic image, natural or artificial, is better adapted to performing efficient and reliable calibration in order to track the gaze of observers in 3D space using a classical 2D eye tracker. We measured the horizontal disparities, i.e. the difference between the x coordinates of the two eyes obtained using a 2D eye tracker. This disparity was recorded for each observer and for several target positions the observer had to fixate. Target positions were equally distributed in the 3D space, some on the screen (with a null disparity), some behind the screen (uncrossed disparity) and others in front of the screen (crossed disparity). We tested different regression models (linear and non-linear) to explain either the true disparity or the depth with the measured disparity. Models were tested and compared on their prediction error for new targets at new positions. First of all, we found that we obtained more reliable disparity measures when using natural stereoscopic images rather than artificial ones. Second, we found that overall a non-linear model was more efficient. Finally, we discuss the fact that our results were observer dependent, with variability between observers' behavior when looking at 3D stimuli. Because of this variability, we propose to compute observer-specific models to accurately predict their gaze position when exploring 3D stimuli.
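
    The model comparison described can be illustrated with a short Python sketch that fits linear and quadratic mappings from measured disparity to depth and reports the RMS prediction error on held-out targets; variable names are illustrative and the actual models tested by the authors may differ.

        # Illustrative only: compare linear and quadratic disparity-to-depth
        # models by their RMS prediction error on held-out targets.
        import numpy as np

        def fit_and_score(disparity, depth, test_disparity, test_depth):
            errors = {}
            for name, degree in (("linear", 1), ("quadratic", 2)):
                coeffs = np.polyfit(disparity, depth, degree)
                prediction = np.polyval(coeffs, test_disparity)
                errors[name] = np.sqrt(np.mean((prediction - test_depth) ** 2))
            return errors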

  18. Microfabricated collagen tracks facilitate single cell metastatic invasion in 3D.

    PubMed

    Kraning-Rush, Casey M; Carey, Shawn P; Lampi, Marsha C; Reinhart-King, Cynthia A

    2013-03-01

    While the mechanisms employed by metastatic cancer cells to migrate remain poorly understood, it has been widely accepted that metastatic cancer cells can invade the tumor stroma by degrading the extracellular matrix (ECM) with matrix metalloproteinases (MMPs). Although MMP inhibitors showed early promise in preventing metastasis in animal models, they have largely failed clinically. Recently, studies have shown that some cancer cells can use proteolysis to mechanically rearrange their ECM to form tube-like "microtracks" which other cells can follow without using MMPs themselves. We speculate that this mode of migration in the secondary cells may be one example of migration which can occur without endogenous protease activity in the secondary cells. Here we present a technique to study this migration in a 3D, collagen-based environment which mimics the size and topography of the tracks produced by proteolytically active cancer cells. Using time-lapse phase-contrast microscopy, we find that these microtracks permit the rapid and persistent migration of noninvasive MCF10A mammary epithelial cells, which are unable to otherwise migrate in 3D collagen. Additionally, while highly metastatic MDAMB231 breast cancer cells are able to invade a 3D collagen matrix, seeding within the patterned microtracks induced significantly increased cell migration speed, which was not decreased by pharmacological MMP inhibition. Together, these data suggest that microtracks within a 3D ECM may facilitate the migration of cells in an MMP-independent fashion, and may reveal novel insight into the clinical challenges facing MMP inhibitors. PMID:23388698

  19. Detection, 3-D positioning, and sizing of small pore defects using digital radiography and tracking

    NASA Astrophysics Data System (ADS)

    Lindgren, Erik

    2014-12-01

    This article presents an algorithm that handles the detection, positioning, and sizing of submillimeter-sized pores in welds using radiographic inspection and tracking. The possibility to detect, position, and size pores which have a low contrast-to-noise ratio increases the value of the nondestructive evaluation of welds by facilitating fatigue life predictions with lower uncertainty. In this article, a multiple hypothesis tracker with an extended Kalman filter is used to track an unknown number of pore indications in a sequence of radiographs as an object is rotated. Each pore is not required to be detected in all radiographs. In addition, in the tracking step, three-dimensional (3-D) positions of pore defects are calculated. To optimize, set up, and pre-evaluate the algorithm, the article explores a design-of-experiments approach in combination with synthetic radiographs of titanium laser welds containing pore defects. The pre-evaluation on synthetic radiographs at industrially reasonable contrast-to-noise ratios indicates less than 1% false detection rates at high detection rates and less than 0.1 mm positioning errors for more than 90% of the pores. A comparison between experimental results of the presented algorithm and a computerized tomography reference measurement shows qualitatively good agreement in the 3-D positions of approximately 0.1-mm diameter pores in 5-mm-thick Ti-6242.

  20. METHODS FOR USING 3-D ULTRASOUND SPECKLE TRACKING IN BIAXIAL MECHANICAL TESTING OF BIOLOGICAL TISSUE SAMPLES

    PubMed Central

    Yap, Choon Hwai; Park, Dae Woo; Dutta, Debaditya; Simon, Marc; Kim, Kang

    2014-01-01

    Being multilayered and anisotropic, biological tissues such as cardiac and arterial walls are structurally complex, making full assessment and understanding of their mechanical behavior challenging. Current standard mechanical testing uses surface markers to track tissue deformations and does not provide deformation data below the surface. In the study described here, we found that combining mechanical testing with 3-D ultrasound speckle tracking could overcome this limitation. Rat myocardium was tested with a biaxial tester and was concurrently scanned with high-frequency ultrasound in three dimensions. The strain energy function was computed from stresses and strains using an iterative non-linear curve-fitting algorithm. Because the strain energy function consists of terms for the base matrix and for embedded fibers, spatially varying fiber orientation was also computed by curve fitting. Using finite-element simulations, we first validated the accuracy of the non-linear curve-fitting algorithm. Next, we compared experimentally measured rat myocardium strain energy function values with those in the literature and found a matching order of magnitude. Finally, we retained samples after the experiments for fiber orientation quantification using histology and found that the results satisfactorily matched those computed in the experiments. We conclude that 3-D ultrasound speckle tracking can be a useful addition to traditional mechanical testing of biological tissues and may provide the benefit of enabling fiber orientation computation. PMID:25616585

  1. Ultrasonic diaphragm tracking for cardiac interventional navigation on 3D motion compensated static roadmaps

    NASA Astrophysics Data System (ADS)

    Timinger, Holger; Kruger, Sascha; Dietmayer, Klaus; Borgert, Joern

    2005-04-01

    In this paper, a novel approach to cardiac interventional navigation on 3D motion-compensated static roadmaps is presented. Current coronary interventions, e.g. percutaneous transluminal coronary angioplasties, are performed using 2D X-ray fluoroscopy. This comes along with well-known drawbacks like radiation exposure, use of contrast agent, and limited visualization, e.g. overlap and foreshortening, due to projection imaging. In the presented approach, the interventional device, i.e. the catheter, is tracked using an electromagnetic tracking system (MTS). The catheter's position is then mapped into a static 3D image of the volume of interest (VOI) by means of an affine registration. In order to compensate for respiratory motion of the catheter with respect to the static image, a parameterized affine motion model is used which is driven by a respiratory sensor signal. This signal is derived from ultrasonic diaphragm tracking. The motion compensation for the heartbeat is done using ECG-gating. The methods are validated using a heart- and diaphragm-phantom. The mean displacement of the catheter due to the simulated organ motion decreases from approximately 9 mm to 1.3 mm. This result indicates that the proposed method is able to reconstruct the catheter position within the VOI accurately and that it can help to overcome drawbacks of current interventional procedures.
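
    The motion-compensation idea, an affine model whose parameters are blended by a scalar respiratory signal from diaphragm tracking, can be sketched as below. This is a hypothetical simplification (two reference respiratory states, linear interpolation), not the parameterization used in the paper; all names are illustrative.

        # Hypothetical simplification: affine motion model driven by a scalar
        # respiratory signal s in [0, 1] from diaphragm tracking.
        import numpy as np

        def compensate(position, s, A_exh, t_exh, A_inh, t_inh):
            """position: (3,) tracked catheter position; (A, t) pairs: affine
            parameters fitted at end-expiration and end-inspiration."""
            A = (1.0 - s) * A_exh + s * A_inh
            t = (1.0 - s) * t_exh + s * t_inh
            return A @ position + t   # position mapped onto the static roadmap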

  2. IPS - a System for Real-Time Navigation and 3d Modeling

    NASA Astrophysics Data System (ADS)

    Grießbach, D.; Baumbach, D.; Börner, A.; Buder, M.; Ernst, I.; Funk, E.; Wohlfeil, J.; Zuev, S.

    2012-07-01

    Reliable navigation and 3D modeling is a necessary requirement for any autonomous system in real-world scenarios. The German Aerospace Center (DLR) has developed a system providing precise information about the local position and orientation of a mobile platform, as well as three-dimensional information about its environment, in real time. This system, called the Integral Positioning System (IPS), can be applied in indoor and outdoor environments. To achieve high precision, reliability, integrity and availability, a multi-sensor approach was chosen. The important role of sensor data synchronization, system calibration and spatial referencing is emphasized because the data from several sensors have to be fused using a Kalman filter. A hardware operating system (HW-OS) is presented that facilitates the low-level integration of different interfaces. The benefit of this approach is an increased precision of synchronization at the expense of additional engineering costs. It will be shown that the additional effort is leveraged by the new design concept, since the HW-OS methodology allows a proven, flexible and fast design process, high re-usability of common components and consequently a higher reliability within the low-level sensor fusion. Another main focus of the paper is on the IPS software. The DLR developed, implemented and tested a flexible and extensible software concept for data grabbing, efficient data handling, and data preprocessing (e.g. image rectification), which is essential for thematic data processing. Standard outputs of IPS are the trajectory of the moving platform and a high-density 3D point cloud of the current environment. This information is provided in real time. Based on these results, information processing on more abstract levels can be executed.
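
    Sensor fusion with a Kalman filter, as mentioned above, follows the standard predict/update cycle; the Python sketch below shows one linear step with generic matrices and is only an illustration of the principle, not the IPS filter design.

        # Illustrative only: one linear Kalman filter predict/update cycle
        # with generic model matrices (not the IPS filter design).
        import numpy as np

        def kalman_step(x, P, F, Q, z, H, R):
            """x, P: state and covariance; F, Q: motion model; z, H, R: measurement."""
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P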

  3. 3D measurement of the position of gold particles via evanescent digital holographic particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Satake, Shin-ichi; Unno, Noriyuki; Nakata, Shuichiro; Taniguchi, Jun

    2016-08-01

    A new technique based on digital holography and evanescent waves was developed for 3D measurements of the position of gold nanoparticles in water. In this technique, an intensity profile is taken from a holographic image of a gold particle. To detect the position of the gold particle with high accuracy, its holographic image is recorded on a nanosized step made of MEXFLON, which has a refractive index close to that of water, and the position of the particle is reconstructed by means of digital holography. The height of the nanosized step was measured by using a profilometer and the digitally reconstructed height of the glass substrate had good agreement with the measured value. Furthermore, this method can be used to accurately track the 3D position of a gold particle in water.

  4. Real-time multicamera system for measurement of 3D coordinates by pattern projection

    NASA Astrophysics Data System (ADS)

    Sainov, Ventseslav; Stoykova, Elena; Harizanova, Jana

    2007-06-01

    The report describes a real-time pattern-projection system for measurement of 3D coordinates with simultaneous illumination and recording of four phase-shifted fringe patterns, which are projected at four different wavelengths and captured by four synchronized CCD cameras. This technical solution overcomes the main drawback of temporal phase-shifting profilometry, in which the pattern acquisition is performed successively in time. The work considers the use of a sinusoidal phase grating as the projection element; the grating is chosen on the basis of an analysis of the frequency content of the projected fringes in the Fresnel diffraction zone and of test measurements of relative 3D coordinates performed with interferometrically recorded sinusoidal phase gratings on holographic plates. Finally, the operation of a four-wavelength profilometric system with four sinusoidal phase gratings, spatially phase-shifted by π/2 and illuminated with four diode lasers at wavelengths of 790 nm, 810 nm, 850 nm and 910 nm, is simulated and the systematic error of the profilometric measurement is evaluated.
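
    Systems of this kind ultimately rely on the standard four-step phase-shifting relation: with four fringe images shifted by π/2, the wrapped phase is atan2(I4 − I2, I1 − I3). A one-function Python sketch (illustrative, with assumed co-registered inputs) is:

        # Illustrative only: wrapped phase from four fringe images shifted by pi/2.
        import numpy as np

        def wrapped_phase(i1, i2, i3, i4):
            """i1..i4: co-registered fringe images; returns phase in (-pi, pi]."""
            return np.arctan2(i4 - i2, i1 - i3)   # unwrap afterwards as needed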

  5. A real-time 3D scanning system for pavement rutting and pothole detections

    NASA Astrophysics Data System (ADS)

    Li, Qingguang; Yao, Ming; Yao, Xun; Yu, Wurong; Xu, Bugao

    2009-08-01

    Rutting and potholes are common pavement distress problems that need to be inspected and repaired in a timely manner to ensure ride quality and safe traffic. This paper introduces a real-time, automated inspection system devoted to detecting these distress features using high-speed transverse scanning. The detection principle is based on the dynamic generation and characterization of 3D pavement profiles obtained from structured-light measurements. The system implementation mainly involves three tasks: multi-view coplanar calibration, sub-pixel laser stripe location, and pavement distress recognition. The multi-view coplanar scheme was employed in the calibration procedure to increase the number of feature points and to distribute the points across the field of view of the camera, which greatly improves the calibration precision. The laser stripe locating method was implemented in four steps: median filtering, coarse edge detection, fine edge adjusting, and stripe curve mending and interpolation by cubic splines. The pavement distress recognition algorithms include line-segment approximation of the profile, searching for feature points, and parameter calculations. The parameter data of a curve segment between two feature points, such as width, depth and length, were used to differentiate rutting and potholes under different constraints. The preliminary experimental results show that the system is capable of locating these pavement distresses and meets the needs for real-time and accurate pavement inspection.
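
    One common way to obtain a sub-pixel stripe location, shown here only as an illustrative Python sketch (the paper's own pipeline additionally uses coarse/fine edge detection and spline mending), is a median filter followed by an intensity-weighted centroid around the brightest pixel in each column:

        # Illustrative only: sub-pixel stripe row per image column via a median
        # filter and an intensity-weighted centroid around the brightest pixel.
        import numpy as np
        from scipy.ndimage import median_filter

        def stripe_centers(image, half_window=3):
            img = median_filter(image.astype(float), size=(3, 1))
            rows = np.arange(img.shape[0])
            centers = np.full(img.shape[1], np.nan)
            for col in range(img.shape[1]):
                peak = int(np.argmax(img[:, col]))
                lo = max(0, peak - half_window)
                hi = min(img.shape[0], peak + half_window + 1)
                w = img[lo:hi, col]
                if w.sum() > 0:
                    centers[col] = float(np.sum(rows[lo:hi] * w) / w.sum())
            return centers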

  6. An automated tool for 3D tracking of single molecules in living cells

    NASA Astrophysics Data System (ADS)

    Gardini, L.; Capitanio, M.; Pavone, F. S.

    2015-03-01

    Since the behaviour of proteins and biological molecules is tightly related to the cell's environment, more and more microscopy techniques are moving from in vitro experiments to experiments in living cells. Looking at both diffusion and active transport processes inside a cell requires three-dimensional localization over a range of a few microns, high-SNR images and high temporal resolution. Since protein dynamics inside a cell involve all three dimensions, we developed an automated routine for 3D tracking of single fluorescent molecules inside living cells with nanometer accuracy, by exploiting the properties of the point-spread-function of out-of-focus Quantum Dots bound to the protein of interest.

  7. The effect of object speed and direction on the performance of 3D speckle tracking using a 3D swept-volume ultrasound probe

    NASA Astrophysics Data System (ADS)

    Harris, Emma J.; Miller, Naomi R.; Bamber, Jeffrey C.; Symonds-Tayler, J. Richard N.; Evans, Philip M.

    2011-11-01

    Three-dimensional (3D) soft tissue tracking using 3D ultrasound is of interest for monitoring organ motion during therapy. Previously we demonstrated feature tracking of respiration-induced liver motion in vivo using a 3D swept-volume ultrasound probe. The aim of this study was to investigate how object speed affects the accuracy of tracking ultrasonic speckle in the absence of any structural information, which mimics the situation in homogenous tissue for motion in the azimuthal and elevational directions. For object motion prograde and retrograde to the sweep direction of the transducer, the spatial sampling frequency increases or decreases with object speed, respectively. We examined the effect of the direction of object motion relative to the sweep direction of the transducer on tracking accuracy. We imaged a homogenous ultrasound speckle phantom whilst moving the probe with linear motion at a speed of 0-35 mm s-1. Tracking accuracy and precision were investigated as a function of speed, depth and direction of motion for fixed displacements of 2 and 4 mm. For the azimuthal direction, accuracy was better than 0.1 and 0.15 mm for displacements of 2 and 4 mm, respectively. For a 2 mm displacement in the elevational direction, accuracy was better than 0.5 mm for most speeds. For 4 mm elevational displacement with retrograde motion, accuracy and precision reduced with speed and tracking failure was observed at speeds of greater than 14 mm s-1. Tracking failure was attributed to speckle de-correlation as a result of decreasing spatial sampling frequency with increasing speed of retrograde motion. For prograde motion, tracking failure was not observed. For inter-volume displacements greater than 2 mm, only prograde motion should be tracked, which will decrease temporal resolution by a factor of 2. Tracking errors of the order of 0.5 mm for prograde motion in the elevational direction indicate that, using the swept probe technology, speckle tracking accuracy is currently too poor to track homogenous tissue over

  8. Longitudinal Measurement of Extracellular Matrix Rigidity in 3D Tumor Models Using Particle-tracking Microrheology

    PubMed Central

    El-Hamidi, Hamid; Celli, Jonathan P.

    2014-01-01

    The mechanical microenvironment has been shown to act as a crucial regulator of tumor growth behavior and signaling, which is itself remodeled and modified as part of a set of complex, two-way mechanosensitive interactions. While the development of biologically-relevant 3D tumor models has facilitated mechanistic studies on the impact of matrix rheology on tumor growth, the inverse problem of mapping changes in the mechanical environment induced by tumors remains challenging. Here, we describe the implementation of particle-tracking microrheology (PTM) in conjunction with 3D models of pancreatic cancer as part of a robust and viable approach for longitudinally monitoring physical changes in the tumor microenvironment, in situ. The methodology described here integrates a system of preparing in vitro 3D models embedded in a model extracellular matrix (ECM) scaffold of Type I collagen with fluorescently labeled probes uniformly distributed for position- and time-dependent microrheology measurements throughout the specimen. In vitro tumors are plated and probed in parallel conditions using multiwell imaging plates. Drawing on established methods, videos of tracer probe movements are transformed via the Generalized Stokes Einstein Relation (GSER) to report the complex frequency-dependent viscoelastic shear modulus, G*(ω). Because this approach is imaging-based, mechanical characterization is also mapped onto large transmitted-light spatial fields to simultaneously report qualitative changes in 3D tumor size and phenotype. Representative results showing contrasting mechanical responses in sub-regions associated with localized invasion-induced matrix degradation are presented, along with system calibration and validation data. Undesirable outcomes from common experimental errors and troubleshooting of these issues are also presented. The 96-well 3D culture plating format implemented in this protocol is conducive to correlation of microrheology measurements with therapeutic
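
    The analysis step rests on computing the ensemble mean squared displacement (MSD) of the tracer probes and passing it through the GSER, roughly G*(ω) ≈ kBT / (π a iω F{⟨Δr²(τ)⟩}) for probe radius a. The Python sketch below covers only the MSD computation, with assumed trajectory inputs and hypothetical names.

        # Illustrative only: ensemble mean squared displacement from probe
        # trajectories; the GSER step that converts MSD to G*(w) is not shown.
        import numpy as np

        def ensemble_msd(tracks, max_lag):
            """tracks: list of (T_i, 2) arrays of probe positions sampled at a
            common frame rate; returns the MSD for lags 1..max_lag (in frames)."""
            msd = np.zeros(max_lag)
            counts = np.zeros(max_lag)
            for xy in tracks:
                for lag in range(1, min(max_lag, len(xy) - 1) + 1):
                    d = xy[lag:] - xy[:-lag]
                    msd[lag - 1] += np.sum(d ** 2)
                    counts[lag - 1] += len(d)
            return msd / np.maximum(counts, 1)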

  9. Forward-looking infrared 3D target tracking via combination of particle filter and SIFT

    NASA Astrophysics Data System (ADS)

    Li, Xing; Cao, Zhiguo; Yan, Ruicheng; Li, Tuo

    2013-10-01

    To address the problem of tracking a 3D target in forward-looking infrared (FLIR) imagery, this paper proposes a high-accuracy, robust tracking algorithm based on SIFT and a particle filter. The main contribution of this paper is a new method for estimating the affine transformation matrix parameters based on the Monte Carlo methods of particle filtering. First, we extract SIFT features from the infrared image and calculate the initial affine transformation matrix from the optimal candidate key points. We then take the affine transformation parameters as particles and use an SIR (Sequential Importance Resampling) particle filter to estimate the best position, thus implementing our algorithm. Experiments demonstrate that the algorithm is robust and achieves high accuracy.
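
    The core idea of the record above, treating affine transformation parameters as particles and filtering them with Sequential Importance Resampling, can be sketched as follows. This is a generic SIR step, not the authors' algorithm; the random-walk motion model and the placeholder likelihood function (standing in for a SIFT-match consistency score) are assumptions.

```python
import numpy as np

def sir_step(particles, weights, likelihood, process_noise, rng):
    """One Sequential Importance Resampling step over affine-parameter particles.

    particles  : (N, 6) array, each row an affine parameter vector [a, b, c, d, tx, ty]
    likelihood : callable mapping a parameter vector to a non-negative score
                 (here a stand-in for a SIFT-match consistency measure)
    """
    # propagate with a random-walk motion model
    particles = particles + rng.normal(scale=process_noise, size=particles.shape)
    # re-weight by the observation likelihood
    weights = weights * np.array([likelihood(p) for p in particles])
    weights /= weights.sum()
    # systematic resampling to avoid weight degeneracy
    positions = (rng.random() + np.arange(len(weights))) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, len(weights) - 1)
    particles = particles[idx]
    weights = np.full(len(weights), 1.0 / len(weights))
    estimate = particles.mean(axis=0)     # posterior-mean affine parameters
    return particles, weights, estimate
```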

  10. An automated tool for 3D tracking of single molecules in living cells

    NASA Astrophysics Data System (ADS)

    Gardini, L.; Capitanio, M.; Pavone, F. S.

    2015-07-01

    Recently, tremendous improvements have been achieved in the precision of localization of single fluorescent molecules, allowing localization and tracking of biomolecules at the nm level. Since the behaviour of proteins and biological molecules is tightly influenced by the cell's environment, a growing number of microscopy techniques are moving from in vitro to live-cell experiments. Looking at both diffusion and active transport processes inside a cell requires three-dimensional localization over a range of a few microns, high-SNR images and high temporal resolution (of the order of ms). To satisfy these requirements we developed an automated routine that allows 3D tracking of single fluorescent molecules in living cells with nanometer accuracy, by exploiting the properties of the point-spread-function of out-of-focus Quantum Dots bound to the protein of interest.

  11. 3D Fluorescent and Reflective Imaging of Whole Stardust Tracks in Aerogel

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2011-11-07

    The NASA Stardust mission returned to Earth in 2006 with the cometary collector having captured over 1,000 particles in an aerogel medium at a relative velocity of 6.1 km/s. Particles captured in aerogel were heated, disaggregated and dispersed along 'tracks' or cavities in the aerogel, each track representing the history of a single capture event. It has been our focus to chemically and morphologically characterize whole tracks in 3 dimensions, utilizing solely non-destructive methods. To this end, we have used a variety of methods: 3D Laser Scanning Confocal Microscopy (LSCM), synchrotron X-ray fluorescence (SXRF), and synchrotron X-ray diffraction (SXRD). In the past months we have developed two new techniques to aid in data collection. (1) We have received a new confocal microscope which has enabled autofluorescent and spectral imaging of aerogel samples. (2) We have developed a stereo-SXRF technique to chemically identify large grains in SXRF maps in 3-space. The addition of both of these methods to our analytic abilities provides a greater understanding of the mechanisms and results of track formation.

  12. Quantifying the 3D Odorant Concentration Field Used by Actively Tracking Blue Crabs

    NASA Astrophysics Data System (ADS)

    Webster, D. R.; Dickman, B. D.; Jackson, J. L.; Weissburg, M. J.

    2007-11-01

    Blue crabs and other aquatic organisms locate food and mates by tracking turbulent odorant plumes. The odorant concentration fluctuates unpredictably due to turbulent transport, and many characteristics of the fluctuation pattern have been hypothesized to be useful cues for orienting to the odorant source. To make a direct linkage between tracking behavior and the odorant concentration signal, we developed a measurement system based on the laser-induced fluorescence technique to quantify the instantaneous 3D concentration field surrounding actively tracking blue crabs. The data suggest a correlation between upstream walking speed and the concentration of the odorant signal arriving at the antennule chemosensors, which are located near the mouth region. More specifically, we note an increase in upstream walking speed when high-concentration bursts arrive at the antennule location. We also test hypotheses regarding the ability of blue crabs to steer relative to the plume centerline based on the signal contrast between the chemosensors located on their leg appendages. These chemosensors are located much closer to the substrate than the antennules and are separated by the width of the blue crab. In this case, it appears that blue crabs use the bilateral signal comparison to track along the edge of the plume.

  13. Mapping dynamic mechanical remodeling in 3D tumor models via particle tracking microrheology

    NASA Astrophysics Data System (ADS)

    Jones, Dustin P.; Hanna, William; Celli, Jonathan P.

    2015-03-01

    Particle tracking microrheology (PTM) has recently been employed as a non-destructive way to longitudinally track physical changes in 3D pancreatic tumor co-culture models concomitant with tumor growth and invasion into the extracellular matrix (ECM). While the primary goal of PTM is to quantify local viscoelasticity via the Generalized Stokes-Einstein Relation (GSER), a more simplified way of describing local tissue mechanics lies in the tabulation and subsequent visualization of the spread of probe displacements in a given field of view. Proper analysis of this largely untapped byproduct of standard PTM has the potential to yield valuable insight into the structure and integrity of the ECM. Here, we use clustering algorithms in R to analyze the trajectories of probes in 3D pancreatic tumor/fibroblast co-culture models in an attempt to differentiate between probes that are effectively constrained by the ECM and/or contractile traction forces, and those that exhibit uninhibited mobility in local water-filled pores. We also discuss the potential pitfalls of this method. Accurately and reproducibly quantifying the boundary between these two categories of probe behavior could result in an effective method for measuring the average pore size in a given region of ECM. Such a tool could prove useful for studying stromal depletion, physical impedance to drug delivery, and degradation due to cellular invasion.

  14. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging.

    PubMed

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, such as fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when near to each other. It is capable of real time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate exceeding 94% with only 1% false detections. PMID:26329642
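
    For reference, a plain circle Hough transform, the starting point the record above builds on, can be written in a few lines of Python. The sketch below votes each edge pixel into an accumulator over candidate radii; the published algorithm adds classification of local maxima and substantial performance engineering that is not reproduced here, so treat this only as the textbook baseline.

```python
import numpy as np

def circle_hough(edges, radii, n_angles=64):
    """Vote-based circle Hough transform.

    edges : 2D boolean array of edge pixels (e.g. from a gradient threshold)
    radii : iterable of candidate ring radii in pixels
    Returns an accumulator of shape (len(radii), H, W); peaks mark ring centres.
    """
    h, w = edges.shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for k, r in enumerate(radii):
        # each edge pixel votes for all centres lying a distance r away
        cy = np.rint(ys[:, None] + r * np.sin(thetas)).astype(int)
        cx = np.rint(xs[:, None] + r * np.cos(thetas)).astype(int)
        valid = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc[k], (cy[valid], cx[valid]), 1)
    return acc
```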

  15. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    PubMed Central

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, such as fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when near to each other. It is capable of real time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate exceeding 94% with only 1% false detections. PMID:26329642

  16. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, such as fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when near to each other. It is capable of real time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate exceeding 94% with only 1% false detections.

  17. Data acquisition electronics and reconstruction software for real time 3D track reconstruction within the MIMAC project

    NASA Astrophysics Data System (ADS)

    Bourrion, O.; Bosson, G.; Grignon, C.; Bouly, J. L.; Richer, J. P.; Guillaudin, O.; Mayet, F.; Billard, J.; Santos, D.

    2011-11-01

    Directional detection of non-baryonic Dark Matter requires the 3D reconstruction of low-energy nuclear recoil tracks. A gaseous micro-TPC matrix, filled with either 3He, CF4 or C4H10, has been developed within the MIMAC project. Dedicated acquisition electronics and real-time track reconstruction software have been developed to monitor a 512-channel prototype. This auto-triggered electronics uses embedded processing to reduce the data transfer to its useful part only, i.e., the decoded coordinates of hit tracks and the corresponding energy measurements. Acquisition software with on-line monitoring and 3D track reconstruction is also presented.

  18. Automatic 2D to 3D conversion implemented for real-time applications

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr; Ramos-Diaz, Eduardo; Gonzalez Huitron, Victor

    2014-05-01

    Different hardware implementations of an automatic 2D-to-3D video color conversion employing a 2D video sequence are presented. The analyzed framework jointly processes neighboring frames using the following blocks: CIELa*b* color space conversion, wavelet transform, edge detection using the high-frequency wavelet sub-bands (HL, LH and HH), color segmentation via k-means on the a*b* color plane, up-sampling, disparity map (DM) estimation, adaptive post-filtering, and finally, anaglyph 3D scene generation. During edge detection, the Donoho threshold is computed, each sub-band is then binarized according to the chosen threshold, and finally the thresholded image is formed. DM estimation is performed in the following manner: in the left stereo image (or frame), a window of varying size is used according to the information obtained from the binarized sub-band image, distinguishing different texture areas in the LL sub-band image. Stereo matching is performed between the two (left and right) LL sub-band images using windows of different sizes. An up-sampling procedure is employed in order to obtain the enhanced DM. The adaptive post-processing procedure is based on a median filter and k-means segmentation in the a*b* color plane. The SSIM and QBP criteria are applied in order to compare the performance of the proposed framework against other disparity map computation techniques. The designed technique has been implemented on a DSP TMS320DM648, in Matlab's Simulink module on a PC with Windows 7, and using a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode.
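
    The disparity-map estimation step described above is essentially window-based stereo matching. The following Python sketch shows a fixed-window sum-of-absolute-differences (SAD) matcher between a left and a right image; the paper instead varies the window size per texture region and matches LL wavelet sub-bands, so this is only a simplified stand-in under those assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def disparity_sad(left, right, max_disp=32, win=4):
    """Window-based stereo matching: for every pixel pick the horizontal shift of the
    right image minimising the windowed sum of absolute differences (SAD).
    A fixed (2*win+1)^2 window is used; border wrap-around from np.roll is ignored."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    best = np.full((h, w), np.inf)
    kernel = np.ones((2 * win + 1, 2 * win + 1))
    L = left.astype(np.float64)
    R = right.astype(np.float64)
    for d in range(max_disp):
        # shift the right image by d pixels and sum |L - R_shifted| over the window
        sad = fftconvolve(np.abs(L - np.roll(R, d, axis=1)), kernel, mode="same")
        improved = sad < best
        disp[improved] = d
        best[improved] = sad[improved]
    return disp
```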

  19. Real-time 3D Fourier-domain optical coherence tomography guided microvascular anastomosis

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Ibrahim, Zuhaib; Lee, W. P. A.; Brandacher, Gerald; Kang, Jin U.

    2013-03-01

    Vascular and microvascular anastomosis is considered to be the foundation of plastic and reconstructive surgery, hand surgery, transplant surgery, vascular surgery and cardiac surgery. In the last two decades innovative techniques, such as vascular coupling devices, thermo-reversible poloxamers and suture-less cuffs, have been introduced. Intra-operative surgical guidance using a surgical imaging modality that provides an in-depth view and 3D imaging can improve outcomes following both conventional and innovative anastomosis techniques. Optical coherence tomography (OCT) is a noninvasive, high-resolution (micron level), high-speed, 3D imaging modality that has been adopted widely in biomedical and clinical applications. In this work we performed a proof-of-concept evaluation of OCT as an intraoperative and post-operative imaging modality for microvascular anastomosis of rodent femoral vessels. The OCT imaging modality provided a lateral resolution of 12 μm, an axial resolution of 3.0 μm in air, and an imaging speed of 0.27 volumes/s, giving the surgeon a clear view of the vessel lumen wall and of the suture needle position relative to the vessel during intraoperative imaging. Graphics processing unit (GPU) accelerated phase-resolved Doppler OCT (PRDOCT) imaging of the surgical site was performed as a post-operative evaluation of the anastomosed vessels and to visualize blood flow and thrombus formation. This information could help surgeons improve surgical precision in this highly challenging anastomosis of rodent vessels with diameters of less than 0.5 mm. Our imaging modality could not only detect accidental suturing through the back wall of the lumen but also promptly diagnose and predict thrombosis immediately after reperfusion. Hence, real-time OCT can assist the decision-making process intra-operatively and help avoid post-operative complications.

  20. Prediction of 3D internal organ position from skin surface motion: results from electromagnetic tracking studies

    NASA Astrophysics Data System (ADS)

    Wong, Kenneth H.; Tang, Jonathan; Zhang, Hui J.; Varghese, Emmanuel; Cleary, Kevin R.

    2005-04-01

    An effective treatment method for organs that move with respiration (such as the lungs, pancreas, and liver) is a major goal of radiation medicine. In order to treat such tumors, we need (1) real-time knowledge of the current location of the tumor, and (2) the ability to adapt the radiation delivery system to follow this constantly changing location. In this study, we used electromagnetic tracking in a swine model to address the first challenge, and to determine if movement of a marker attached to the skin could accurately predict movement of an internal marker embedded in an organ. Under approved animal research protocols, an electromagnetically tracked needle was inserted into a swine liver and an electromagnetically tracked guidewire was taped to the abdominal skin of the animal. The Aurora (Northern Digital Inc., Waterloo, Canada) electromagnetic tracking system was then used to monitor the position of both of these sensors every 40 msec. Position readouts from the sensors were then tested to see if any of the movements showed correlation. The strongest correlations were observed between external anterior-posterior motion and internal inferior-superior motion, with many other axes exhibiting only weak correlation. We also used these data to build a predictive model of internal motion by taking segments from the data and using them to derive a general functional relationship between the internal needle and the external guidewire. For the axis with the strongest correlation, this model enabled us to predict internal organ motion to within 1 mm.
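
    A minimal version of the predictive step described above, deriving a functional relationship between the external and internal sensor traces and testing it on held-out data, might look like the following Python sketch. The linear model, the train/test split and the variable names are assumptions made for illustration; the record does not specify the form of the authors' "general functional relationship".

```python
import numpy as np

def fit_surrogate_model(external_ap, internal_si, train_frac=0.5):
    """Fit a linear surrogate model internal ~ a*external + b on the first part of
    the recording and report the RMS prediction error on the remainder
    (both traces in mm, sampled at the same rate, e.g. every 40 ms)."""
    n_train = int(len(external_ap) * train_frac)
    a, b = np.polyfit(external_ap[:n_train], internal_si[:n_train], deg=1)
    pred = a * external_ap[n_train:] + b
    rms = np.sqrt(np.mean((pred - internal_si[n_train:]) ** 2))
    return (a, b), rms
```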

  1. Nondestructive 3D confocal laser imaging with deconvolution of seven whole stardust tracks with complementary XRF and quantitative analysis

    SciTech Connect

    Greenberg, M.; Ebel, D.S.

    2009-03-19

    We present a nondestructive 3D system for analysis of whole Stardust tracks, using a combination of Laser Confocal Scanning Microscopy and synchrotron XRF. 3D deconvolution is used for optical corrections, and results of quantitative analyses of several tracks are presented. The Stardust mission to comet Wild 2 trapped many cometary and ISM particles in aerogel, leaving behind 'tracks' of melted silica aerogel on both sides of the collector. Collected particles and their tracks range in size from submicron to millimeter scale. Interstellar dust collected on the obverse of the aerogel collector is thought to have an average track length of ~15 μm. It has been our goal to perform a total non-destructive 3D textural and XRF chemical analysis on both types of tracks. To that end, we use a combination of Laser Confocal Scanning Microscopy (LCSM) and X-ray Fluorescence (XRF) spectrometry. Utilized properly, the combination of 3D optical data and chemical data provides total nondestructive characterization of full tracks, prior to flattening or other destructive analysis methods. Our LCSM techniques allow imaging at 0.075 μm/pixel, without the use of oil-based lenses. A full textural analysis of track No.82 is presented here, as well as analysis of 6 additional tracks contained within 3 keystones (No.128, No.129 and No.140). We present a method of removing the axial distortion inherent in LCSM images by means of a computational 3D deconvolution algorithm, and present some preliminary experiments with computed point spread functions. The combination of 3D LCSM data and XRF data provides invaluable information, while preserving the integrity of the samples for further analysis. It is imperative that these samples, the first extraterrestrial solids returned since the Apollo era, be fully mapped nondestructively in 3D, to preserve the maximum amount of information prior to other, destructive analysis.

  2. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    SciTech Connect

    Glitzner, M; Lagendijk, J; Raaymakers, B; Crijns, S; Senneville, B Denis de

    2015-06-15

    Recent developments have made MRI-guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about the changing anatomy by means of deformable image registration, for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality to image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D-MR imaging data was sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm³) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of 2.5 mm³ for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data was downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. In kidney-liver boundaries and the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of the high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on the synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution and the displacement information is preserved even when halving the

  3. Application of 3D WebGIS and real-time technique in earthquake information publishing and visualization

    NASA Astrophysics Data System (ADS)

    Li, Boren; Wu, Jianping; Pan, Mao; Huang, Jing

    2015-06-01

    In hazard management, earthquake researchers have utilized GIS to ease the process of managing disasters, and WebGIS is used to assess hazards and seismic risk. Although such systems provide a visual analysis platform based on GIS technology, they lack a general treatment of the extensibility of WebGIS for processing dynamic data, especially real-time data. In this paper, we propose a novel model for real-time 3D visual earthquake information publishing, based on WebGIS and a digital globe, to improve the ability of WebGIS-based systems to process real-time data. On the basis of the model, we implement a real-time 3D earthquake information publishing system, EqMap3D. The system can not only publish real-time earthquake information but also display these data and their background geoscience information in a 3D scene. It provides a powerful tool for display, analysis, and decision-making for researchers and administrators, and facilitates better communication between researchers engaged in the geosciences and the interested public.

  4. Real-time motion- and B0-correction for LASER-localized spiral-accelerated 3D-MRSI of the brain at 3T

    PubMed Central

    Bogner, Wolfgang; Hess, Aaron T; Gagoski, Borjan; Tisdall, M. Dylan; van der Kouwe, Andre J.W.; Trattnig, Siegfried; Rosen, Bruce; Andronesi, Ovidiu C

    2013-01-01

    The full potential of magnetic resonance spectroscopic imaging (MRSI) is often limited by localization artifacts, motion-related artifacts, scanner instabilities, and long measurement times. Localized adiabatic selective refocusing (LASER) provides accurate B1-insensitive spatial excitation even at high magnetic fields. Spiral encoding accelerates MRSI acquisition and thus enables 3D coverage without compromising spatial resolution. Real-time position- and shim/frequency-tracking using MR navigators corrects motion- and scanner-instability-related artifacts. Each of these three advanced MRI techniques provides superior MRSI data compared to commonly used methods. In this work, we integrated these three promising approaches in a single pulse sequence. Real-time correction of motion, shim, and frequency drifts using volumetric dual-contrast echo-planar-imaging-based navigators was implemented in an MRSI sequence that uses low-power gradient-modulated short-echo-time LASER localization and time-efficient spiral readouts, in order to provide fast and robust 3D-MRSI in the human brain at 3T. The proposed sequence was demonstrated to be insensitive to motion- and scanner-drift-related degradations of MRSI data in both phantoms and volunteers. Motion and scanner-drift artifacts were eliminated and excellent spectral quality was recovered in the presence of strong movement. Our results confirm the expected benefits of combining a spiral 3D-LASER-MRSI sequence with real-time correction. The new sequence provides accurate, fast, and robust 3D metabolic imaging of the human brain at 3T. This will further facilitate the use of 3D-MRSI for neuroscience and clinical applications. PMID:24201013

  5. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and can be transformed to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity and quantitative evaluation of 3D image's geometric accuracy have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation on the 3D image rendering performance with 2560×1600 elemental image resolution shows the rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after the calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of the image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability. PMID:25465067

  6. Segmentation and Tracking of Adherens Junctions in 3D for the Analysis of Epithelial Tissue Morphogenesis

    PubMed Central

    Cilla, Rodrigo; Mechery, Vinodh; Hernandez de Madrid, Beatriz; Del Signore, Steven; Dotu, Ivan; Hatini, Victor

    2015-01-01

    Epithelial morphogenesis generates the shape of tissues, organs and embryos and is fundamental for their proper function. It is a dynamic process that occurs at multiple spatial scales from macromolecular dynamics, to cell deformations, mitosis and apoptosis, to coordinated cell rearrangements that lead to global changes of tissue shape. Using time lapse imaging, it is possible to observe these events at a system level. However, to investigate morphogenetic events it is necessary to develop computational tools to extract quantitative information from the time lapse data. Toward this goal, we developed an image-based computational pipeline to preprocess, segment and track epithelial cells in 4D confocal microscopy data. The computational pipeline we developed, for the first time, detects the adherens junctions of epithelial cells in 3D, without the need to first detect cell nuclei. We accentuate and detect cell outlines in a series of steps, symbolically describe the cells and their connectivity, and employ this information to track the cells. We validated the performance of the pipeline for its ability to detect vertices and cell-cell contacts, track cells, and identify mitosis and apoptosis in surface epithelia of Drosophila imaginal discs. We demonstrate the utility of the pipeline to extract key quantitative features of cell behavior with which to elucidate the dynamics and biomechanical control of epithelial tissue morphogenesis. We have made our methods and data available as an open-source multiplatform software tool called TTT (http://github.com/morganrcu/TTT) PMID:25884654

  7. Theoretical assessment of a synthetic aperture beamformer for real-time 3-D imaging.

    PubMed

    Hazard, C R; Lockwood, G R

    1999-01-01

    A real-time 3-D imaging system requires the development of a beamformer that can generate many beams simultaneously. In this paper, we discuss and evaluate a suitable synthetic aperture beamformer. The proposed beamformer is based on a pipelined network of high speed digital signal processors (DSP). By using simple interpolation-based beamforming, only a few calculations per pixel are required for each channel, and an entire 2-D synthetic aperture image can be formed in the time of one transmit event. The performance of this beamformer was explored using a computer simulation of the radiation pattern. The simulations were done for a full 64-element array and a sparse array with the same receive aperture but only five transmit elements. We assessed the effects of changing the sampling rate and amplitude quantization by comparing the relative levels of secondary lobes in the radiation patterns. The results show that the proposed beamformer produces a radiation pattern equivalent to a conventional beamformer using baseband demodulation, provided that the sampling rate is approximately 10 times the center frequency of the transducer (34% bandwidth pulse). The simulations also show that the sparse array is not significantly more sensitive to delay or amplitude quantization than the full array. PMID:18238502
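
    The interpolation-based beamforming the record above evaluates amounts to computing a round-trip delay for each pixel and channel and linearly interpolating the RF samples before summing, so that only a few operations per pixel per channel are needed. The Python sketch below illustrates this for a single pixel and a single transmit event; the array geometry, variable names and sampling assumptions are illustrative rather than taken from the paper.

```python
import numpy as np

def das_pixel(rf, element_x, tx_x, px, pz, fs, c=1540.0):
    """Delay-and-sum value for one image pixel from one transmit event.

    rf        : (n_elements, n_samples) received RF data for this transmit
    element_x : receive element lateral positions (m); elements assumed at z = 0
    tx_x      : lateral position of the (point-like) transmit element (m)
    px, pz    : pixel coordinates (m); fs: sampling rate (Hz); c: sound speed (m/s)
    """
    t_tx = np.hypot(px - tx_x, pz) / c                     # transmit path delay
    t_rx = np.hypot(px - element_x, pz) / c                # per-channel receive delay
    s = (t_tx + t_rx) * fs                                  # fractional sample index
    i0 = np.clip(np.floor(s).astype(int), 0, rf.shape[1] - 2)
    frac = s - i0
    # simple linear interpolation between adjacent RF samples on each channel
    rows = np.arange(rf.shape[0])
    vals = rf[rows, i0] * (1.0 - frac) + rf[rows, i0 + 1] * frac
    return vals.sum()
```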

  8. Registration of Real-Time 3-D Ultrasound to Tomographic Images of the Abdominal Aorta.

    PubMed

    Brekken, Reidar; Iversen, Daniel Høyer; Tangen, Geir Arne; Dahl, Torbjørn

    2016-08-01

    The purpose of this study was to develop an image-based method for registration of real-time 3-D ultrasound to computed tomography (CT) of the abdominal aorta, targeting future use in ultrasound-guided endovascular intervention. We proposed a method in which a surface model of the aortic wall was segmented from CT, and the approximate initial location of this model relative to the ultrasound volume was manually indicated. The model was iteratively transformed to automatically optimize correspondence to the ultrasound data. Feasibility was studied using data from a silicon phantom and in vivo data from a volunteer with previously acquired CT. Through visual evaluation, the ultrasound and CT data were seen to correspond well after registration. Both aortic lumen and branching arteries were well aligned. The processing was done offline, and the registration took approximately 0.2 s per ultrasound volume. The results encourage further patient studies to investigate accuracy, robustness and clinical value of the approach. PMID:27156015

  9. Neural network techniques for invariant recognition and motion tracking of 3-D objects

    SciTech Connect

    Hwang, J.N.; Tseng, Y.H.

    1995-12-31

    Invariant recognition and motion tracking of 3-D objects under partial object viewing are difficult tasks. In this paper, we introduce a new neural network solution that is robust to noise corruption and partial viewing of objects. This method directly utilizes the acquired range data and requires no feature extraction. In the proposed approach, the object is first parametrically represented by a continuous distance transformation neural network (CDTNN) which is trained by the surface points of the exemplar object. When later presented with the surface points of an unknown object, this parametric representation allows the mismatch information to back-propagate through the CDTNN to gradually determine the best similarity transformation (translation and rotation) of the unknown object. The mismatch can be directly measured in the reconstructed representation domain between the model and the unknown object.

  10. A 3D Vector/Scalar Visualization and Particle Tracking Package

    Energy Science and Technology Software Center (ESTSC)

    1999-08-19

    BOILERMAKER is an interactive visualization system consisting of three components: a visualization component, a particle tracking component, and a communication layer. The software, to date, has been used primarily in the visualization of vector and scalar fields associated with computational fluid dynamics (CFD) models of flue gas flows in industrial boilers and incinerators. Users can interactively request and toggle static vector fields, dynamic streamlines, and flowing vector fields. In addition, the user can interactively place injector nozzles on boiler walls and visualize massed, evaporating sprays emanating from them. Some characteristics of the spray can be adjusted from within the visualization environment including spray shape and particle size. Also included with this release is software that supports 3D menu capabilities, scrollbars, communication and navigation.

  11. Imaging SPR combined with stereoscopic 3D tracking to study barnacle cyprid-surface interactions

    NASA Astrophysics Data System (ADS)

    Maleshlijski, S.; Sendra, G. H.; Aldred, N.; Clare, A. S.; Liedberg, B.; Grunze, M.; Ederth, T.; Rosenhahn, A.

    2016-01-01

    Barnacle larvae (cyprids) explore surfaces to identify suitable settlement sites. This process is selective, and cyprids respond to numerous surface cues. To better understand the settlement process, it is desirable to simultaneously monitor both the surface exploration behavior and any close interactions with the surface. Stereoscopic 3D tracking of the cyprids provides quantitative access to surface exploration and pre-settlement rituals. Imaging surface plasmon resonance (SPR) reveals any interactions with the surfaces, such as surface inspection during bipedal walking and deposition of temporary adhesives. We report on a combination of both techniques to bring together information on swimming behavior in the vicinity of the interface and physical interactions of the cyprid with the surface. The technical requirements are described, and we applied the setup to cyprids of Balanus amphitrite. Initial data shows the applicability of the combined instrument to correlate exploration and touchdown events on surfaces with different chemical termination.

  12. A 3D Vector/Scalar Visualization and Particle Tracking Package

    SciTech Connect

    Freitag, Lori; Disz, Terry; Papka, Mike; Heath, Daniel; Diachin, Darin; Herzog, Jim; Ryan, and Bob

    1999-08-19

    BOILERMAKER is an interactive visualization system consisting of three components: a visualization component, a particle tracking component, and a communication layer. The software, to date, has been used primarily in the visualization of vector and scalar fields associated with computational fluid dynamics (CFD) models of flue gas flows in industrial boilers and incinerators. Users can interactively request and toggle static vector fields, dynamic streamlines, and flowing vector fields. In addition, the user can interactively place injector nozzles on boiler walls and visualize massed, evaporating sprays emanating from them. Some characteristics of the spray can be adjusted from within the visualization environment including spray shape and particle size. Also included with this release is software that supports 3D menu capabilities, scrollbars, communication and navigation.

  13. Design and application of real-time visual attention model for the exploration of 3D virtual environments.

    PubMed

    Hillaire, Sébastien; Lécuyer, Anatole; Regia-Corte, Tony; Cozot, Rémi; Royan, Jérôme; Breton, Gaspard

    2012-03-01

    This paper studies the design and application of a novel visual attention model designed to compute the user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context which can compute, in real time, a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes which take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines both bottom-up and top-down components to compute a continuous gaze point position on screen that ideally matches the user's actual gaze. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with accuracy gains sometimes exceeding 100 percent. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach compared to object-based approaches. Finally, we describe different applications of our model for the exploration of virtual environments. We present different algorithms which can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that heavily relies on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system namely: depth-of-field blur, camera

  14. Application of 3D hydrodynamic and particle tracking models for better environmental management of finfish culture

    NASA Astrophysics Data System (ADS)

    Moreno Navas, Juan; Telfer, Trevor C.; Ross, Lindsay G.

    2011-04-01

    Hydrographic conditions, and particularly current speeds, have a strong influence on the management of fish cage culture. These hydrodynamic conditions can be used to predict particle movement within the water column, and the results can be used to optimise environmental conditions for effective site selection, setting of environmental quality standards, waste dispersion, and potential disease transfer. To this end, a 3D hydrodynamic model, MOHID, has been coupled to a particle tracking model to study the effects of mean current speed, quiescent water periods and bulk water circulation in Mulroy Bay, Co. Donegal, Ireland, an Irish fjard (shallow fjordic system) important to the aquaculture industry. A Lagrangian method simulated the instantaneous release of "particles" emulating discharge from finfish cages to show the behaviour of waste in terms of water circulation and water exchange. The 3D spatial models were used to identify areas of mixed and stratified water using a version of the Simpson-Hunter criteria, and to use this in conjunction with models of current flow for appropriate site selection for salmon aquaculture. The modelled outcomes for stratification were in good agreement with direct measurements of water column stratification based on observed density profiles. Calculations of the Simpson-Hunter tidal parameter indicated that most of Mulroy Bay was potentially stratified, with a well-mixed region over the shallow channels where the water flows faster. The fjard was characterised by areas of both very low and high mean current speeds, with some areas having long periods of quiescent water. The residual current and the particle tracking animations created through the models revealed an anticlockwise eddy that may influence waste dispersion and potential disease transfer among salmon cages, and which extends the retention time of waste substances from the cages. The hydrodynamic model results were incorporated into the ArcView TM GIS

  15. Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish

    PubMed Central

    Maaswinkel, Hans; Zhu, Liqun; Weng, Wei

    2013-01-01

    Like many aquatic animals, zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both individual adult zebrafish and groups. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. The step-by-step protocols for calibration and using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about vertical and horizontal distribution of the zebrafish, about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle) and it provides the data necessary to calculate parameters for social cohesion when testing shoals. PMID:24336189

  16. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1 month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which should facilitate the analysis of dynamic motion during facial animations. PMID:23218511

  17. Eulerian and Lagrangian methods for vortex tracking in 2D and 3D flows

    NASA Astrophysics Data System (ADS)

    Huang, Yangzi; Green, Melissa

    2014-11-01

    Coherent structures are a key component of unsteady flows in shear layers. Improvements in experimental techniques have led to larger amounts of data and require automated procedures for vortex tracking. Many vortex criteria are Eulerian, and identify structures by an instantaneous local swirling motion in the field, indicated by closed or spiral streamlines or pathlines in a suitable reference frame. Alternatively, a Lagrangian Coherent Structures (LCS) analysis is a Lagrangian method based on quantities calculated along fluid particle trajectories. In the current work, vortex detection is demonstrated on data from the simulation of two cases: a 2D flow with a flat plate undergoing a 45° pitch-up maneuver and a 3D wall-bounded turbulent channel flow. Vortices are visualized and tracked by their centers and boundaries using Γ1, the Q criterion, and LCS saddle points. In the 2D flow, the saddle-point trace showed a rapid acceleration of the structure, which indicates shedding from the plate. For the channel flow, the saddle-point trace shows that the average structure convection speed exhibits a similar trend as a function of wall-normal distance as the mean velocity profile, and leads to statistical quantities of vortex dynamics. Dr. Jeff Eldredge and his research group at UCLA are gratefully acknowledged for sharing the simulation database used in the current research. This work was supported by the Air Force Office of Scientific Research under AFOSR Award No. FA9550-14-1-0210.
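
    Of the Eulerian criteria mentioned above, the Q criterion is straightforward to compute on gridded velocity data: it compares the rotation-rate and strain-rate parts of the velocity-gradient tensor and marks vortex cores where rotation dominates. A minimal NumPy sketch, assuming a uniformly gridded 3D velocity field, is given below; grid spacing and array layout are assumptions of the example.

```python
import numpy as np

def q_criterion(u, v, w, dx=1.0, dy=1.0, dz=1.0):
    """Q criterion on a 3D gridded velocity field (arrays indexed [x, y, z]).
    Q = 0.5 * (||Omega||^2 - ||S||^2); vortex cores are regions where Q > 0."""
    grads = [np.gradient(comp, dx, dy, dz) for comp in (u, v, w)]
    J = np.array(grads)                             # J[i, j] = d(u_i)/d(x_j), shape (3, 3, ...)
    S = 0.5 * (J + np.swapaxes(J, 0, 1))            # strain-rate tensor (symmetric part)
    Omega = 0.5 * (J - np.swapaxes(J, 0, 1))        # rotation-rate tensor (antisymmetric part)
    return 0.5 * (np.sum(Omega**2, axis=(0, 1)) - np.sum(S**2, axis=(0, 1)))
```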

  18. A Cache Design Method for Spatial Information Visualization in 3D Real-Time Rendering Engine

    NASA Astrophysics Data System (ADS)

    Dai, X.; Xiong, H.; Zheng, X.

    2012-07-01

    A well-designed cache system has a positive impact on a 3D real-time rendering engine, and the effect becomes more obvious as the amount of visualization data grows. Caches are what allow the 3D real-time rendering engine to browse smoothly through data that resides outside core memory or comes from the internet. In this article, a new kind of cache, based on multiple threads and large files, is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache and the elimination cache. The rendering cache stores the data that is being rendered in the engine; the data that is dispatched according to the position of the view point in the horizontal and vertical directions is stored in the pre-rendering cache; the data that is eliminated from the previous caches is stored in the elimination cache and is then written to the disk cache. Multiple large files are used in the disk cache. When a disk cache file reaches the size limit (128 MB in the experiment), no item is eliminated from the file; instead a new large cache file is created. If the number of large files exceeds the pre-set maximum, the earliest file is deleted from the disk. In this way, only one file is open for writing and reading while the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of each large file is limited so that it can be mapped into core memory to save loading time. Multiple threads are used to update the cache data: they load data into the rendering cache as soon as possible for rendering, load data into the pre-rendering cache for rendering the next few frames, and load data into the elimination cache when it is not needed for the moment. In our experiment, two threads are designed. The first thread organizes the memory cache according to the view point and maintains two lists, the adding list and the deleting list; the adding list indexes the data that should be
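
    The disk-cache policy described above, append to a large file until a size limit is reached, then start a new file and delete the earliest one once a maximum file count is exceeded, can be sketched as follows. This is an illustrative Python class, not the engine's implementation; the file naming, limits and the bytes-only write interface are assumptions.

```python
import os
from collections import deque

class LargeFileDiskCache:
    """Append-only disk cache split across several large files.

    When the current file reaches max_bytes a new one is started (the finished file
    becomes read-only); when more than max_files exist, the earliest is deleted."""

    def __init__(self, directory, max_bytes=128 * 1024 * 1024, max_files=8):
        self.directory, self.max_bytes, self.max_files = directory, max_bytes, max_files
        os.makedirs(directory, exist_ok=True)
        self.files = deque()
        self.counter = 0
        self.current = None
        self._open_new()

    def _open_new(self):
        # hypothetical naming scheme; a monotonically increasing counter avoids collisions
        path = os.path.join(self.directory, f"cache_{self.counter:06d}.bin")
        self.counter += 1
        self.current = open(path, "wb")
        self.files.append(path)
        if len(self.files) > self.max_files:      # evict the earliest large file
            os.remove(self.files.popleft())

    def write_block(self, payload: bytes):
        if self.current.tell() + len(payload) > self.max_bytes:
            self.current.close()                  # the finished file stays read-only
            self._open_new()
        self.current.write(payload)
```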

  19. New fabrication techniques for ring-array transducers for real-time 3D intravascular ultrasound.

    PubMed

    Light, Edward D; Lieu, Victor; Smith, Stephen W

    2009-10-01

    We have previously described miniature 2D array transducers integrated into a Cook Medical, Inc. vena cava filter deployment device. While functional, the fabrication technique was very labor intensive and did not lend itself well to efficient fabrication of large numbers of devices. We developed two new fabrication methods that we believe can be used to efficiently manufacture these types of devices in greater than prototype numbers. One transducer consisted of 55 elements operating near 5 MHz, with an inter-element spacing of 0.20 mm. It was constructed on a flat piece of copper-clad polyimide and then wrapped around an 11 French catheter of a Cook Medical, Inc. inferior vena cava (IVC) filter deployment device. We used a braided wiring technology from Tyco Electronics Corp. to connect the elements to our real-time 3D ultrasound scanner. The typical measured transducer element bandwidth was 20% centered at 4.7 MHz, and the 50 Ω round-trip insertion loss was -82 dB. The mean nearest-neighbor cross talk was -37.0 dB. The second method used a 46-cm long single-layer flex circuit from MicroConnex that terminates in an interconnect that plugs directly into our system cable. This transducer had 70 elements at 0.157 mm inter-element spacing, operating at 4.8 MHz. The typical measured transducer element bandwidth was 29%, and the 50 Ω round-trip insertion loss was -83 dB. The mean nearest-neighbor cross talk was -33.0 dB. PMID:20458877

  20. 3-D Particle Tracking Velocimetry: Development and Applications in Small Scale Flows

    NASA Astrophysics Data System (ADS)

    Tien, Wei-Hsin

    This thesis consists of two studies. In part I, a novel volumetric velocimetry technique is developed to measure the 3-D flow field of small-scale flows. The technique utilizes a color-coded pinhole plate with multiple light sources aligned to each pinhole to achieve high particle image density and large measurable depth on a single-lens microscope system. A color separation algorithm and an improved particle identification algorithm are developed to identify individual particle images from each pinhole view. Furthermore, a calibration-based technique using an epi-polar line search method is developed to reconstruct the spatial coordinates of the particles, and a new two-frame particle-tracking algorithm is developed to calculate the velocity field. The system was set up to achieve a magnification of 2.69, resulting in an imaging volume of 3.35 × 2.5 × 1.5 mm³, and showed satisfactory measurement accuracy. The technique was then further miniaturized to achieve a magnification of 10, resulting in an imaging volume of 600 × 600 × 600 µm³. The system was applied to a backward-facing step flow to test its ability to reconstruct the unsteady flow field with two-frame tracking. Finally, the technique was applied to a steady streaming flow field in a microfluidic device used to trap particles. The results revealed a three-dimensional flow structure that had not been observed in previous studies, and provided insights for the design of a more efficient trapping device. In part II, an in-vitro study was carried out to investigate the flow around a prosthetic venous valve. Using 2-D PIV, the dynamics of the valve motion was captured and the velocity fields were measured to investigate the effect of the sinus pocket and the coupling effect of a pair of valves. The PIV and hemodynamic results showed that the sinus pocket around the valve functioned as a flow regulator to smooth the entrained velocity profile and suppress the jet width. For current prosthetic
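
    A baseline version of the two-frame tracking step mentioned above is mutual nearest-neighbour matching of reconstructed particle positions between consecutive frames. The Python sketch below uses a k-d tree for the neighbour queries; the thesis' algorithm is more elaborate, so this is only a reference implementation of the simplest variant, with the displacement bound chosen by the user.

```python
import numpy as np
from scipy.spatial import cKDTree

def two_frame_tracking(pts_a, pts_b, max_disp):
    """Match particles between two frames by mutual nearest neighbours within max_disp.

    pts_a, pts_b : (N, 3) and (M, 3) arrays of reconstructed particle positions
    Returns index pairs (i, j) and the corresponding displacement vectors (per frame)."""
    tree_b = cKDTree(pts_b)
    _, j_ab = tree_b.query(pts_a, distance_upper_bound=max_disp)
    tree_a = cKDTree(pts_a)
    _, i_ba = tree_a.query(pts_b, distance_upper_bound=max_disp)
    # keep only mutual matches; an index equal to len(pts_b) means "no neighbour found"
    pairs = [(i, j) for i, j in enumerate(j_ab)
             if j < len(pts_b) and i_ba[j] == i]
    disp = np.array([pts_b[j] - pts_a[i] for i, j in pairs])
    return pairs, disp
```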

  1. A smart homecage system with 3D tracking for long-term behavioral experiments.

    PubMed

    Byunghun Lee; Kiani, Mehdi; Ghovanloo, Maysam

    2014-01-01

    A wirelessly-powered homecage system, called the EnerCage-HC, that is equipped with multi-coil wireless power transfer, closed-loop power control, optical behavioral tracking, and a graphic user interface (GUI) is presented for long-term electrophysiology experiments. The EnerCage-HC system can wirelessly power a mobile unit attached to a small animal subject and also track its behavior in real-time as it is housed inside a standard homecage. The EnerCage-HC system is equipped with one central and four overlapping slanted wire-wound coils (WWCs) with optimal geometries to form 3-and 4-coil power transmission links while operating at 13.56 MHz. Utilizing multi-coil links increases the power transfer efficiency (PTE) compared to conventional 2-coil links and also reduces the number of power amplifiers (PAs) to only one, which significantly reduces the system complexity, cost, and dissipated heat. A Microsoft Kinect installed 90 cm above the homecage localizes the animal position and orientation with 1.6 cm accuracy. An in vivo experiment was conducted on a freely behaving rat by continuously delivering 24 mW to the mobile unit for > 7 hours inside a standard homecage. PMID:25570379

  2. A Smart Homecage System with 3D Tracking for Long-Term Behavioral Experiments

    PubMed Central

    Lee, Byunghun; Kiani, Mehdi; Ghovanloo, Maysam

    2015-01-01

    A wirelessly-powered homecage system, called the EnerCage-HC, that is equipped with multi-coil wireless power transfer, closed-loop power control, optical behavioral tracking, and a graphic user interface (GUI) is presented for long-term electrophysiology experiments. The EnerCage-HC system can wirelessly power a mobile unit attached to a small animal subject and also track its behavior in real-time as it is housed inside a standard homecage. The EnerCage-HC system is equipped with one central and four overlapping slanted wire-wound coils (WWCs) with optimal geometries to form 3- and 4-coil power transmission links while operating at 13.56 MHz. Utilizing multi-coil links increases the power transfer efficiency (PTE) compared to conventional 2-coil links and also reduces the number of power amplifiers (PAs) to only one, which significantly reduces the system complexity, cost, and dissipated heat. A Microsoft Kinect installed 90 cm above the homecage localizes the animal position and orientation with 1.6 cm accuracy. An in vivo experiment was conducted on a freely behaving rat by continuously delivering 24 mW to the mobile unit for > 7 hours inside a standard homecage. PMID:25570379

  3. Programmable real-time applications with the 3D-Flow for input data rate systems of hundreds of MHz

    SciTech Connect

    Crosetto, D.

    1996-02-01

    The applicability of the 3D-Flow system to different experimental setups for real-time applications in the range of hundreds of nanoseconds is described. The results of simulating several real-time applications using the 3D-Flow demonstrate the advantages of a simple architecture that carries out operations in a balanced manner, using regular connections and exceptionally few replicated components compared to conventional microprocessors. Diverse applications can benefit from this approach: High Energy Physics (HEP), which typically requires discerning patterns from thousands of accelerator particle collision signals at input data rates of up to 40 MHz; medical imaging, which requires interactive tools for studying fast-occurring biological processes; processing output from high-rate CCD cameras in commercial applications, such as quality control in manufacturing; data compression; speech and character recognition; automatic automobile guidance; and other applications. The 3D-Flow system was conceived for experiments at the Superconducting Super Collider (SSC). It was adopted by the Gamma Electron and Muon (GEM) experiment, where it was to be used for particle identification. The target of the 3D-Flow system was real-time pattern recognition at 100 million frames/sec.

  4. Real-Time Feature Tracking Using Homography

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel S.; Cheng, Yang; Ansar, Adnan I.; Trotz, David C.; Padgett, Curtis W.

    2010-01-01

    This software finds feature point correspondences in sequences of images. It is designed for feature matching in aerial imagery. Feature matching is a fundamental step in a number of important image processing operations: calibrating the cameras in a camera array, stabilizing images in aerial movies, geo-registration of images, and generating high-fidelity surface maps from aerial movies. The method uses a Shi-Tomasi corner detector and normalized cross-correlation. This process is likely to produce some mismatches. The feature set is cleaned up using the assumption that there is a large planar patch visible in both images; at high altitude, this assumption is often reasonable. A mathematical transformation, called a homography, is developed that allows us to predict the position in image 2 of any point on the plane in image 1. Any feature pair that is inconsistent with the homography is thrown out. The output of the process is a set of feature pairs and the homography. The algorithms in this innovation are well known, but the new implementation improves the process in several ways. It runs in real time at 2 Hz on 64-megapixel imagery. The new Shi-Tomasi corner detector tries to produce the requested number of features by automatically adjusting the minimum distance between found features. The homography-finding code now uses an implementation of the RANSAC algorithm that adjusts the number of iterations automatically to achieve a pre-set probability of missing a set of inliers. The new interface allows the caller to pass in a set of predetermined points in one of the images, making it possible to track the same set of points through multiple frames.
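
    The pipeline described above, Shi-Tomasi corners, feature matching, and RANSAC-based homography fitting that rejects pairs inconsistent with a dominant plane, maps closely onto standard OpenCV calls. The sketch below substitutes pyramidal Lucas-Kanade matching for the article's normalized cross-correlation purely to keep it short, and it assumes 8-bit grayscale input images; it is not the NASA implementation.

```python
import cv2
import numpy as np

def match_with_homography(img1, img2, n_features=500):
    """Detect Shi-Tomasi corners in img1, match them into img2, and keep only the
    pairs consistent with a RANSAC-estimated planar homography."""
    p1 = cv2.goodFeaturesToTrack(img1, maxCorners=n_features,
                                 qualityLevel=0.01, minDistance=10)
    # pyramidal Lucas-Kanade used here instead of normalized cross-correlation
    p2, status, _ = cv2.calcOpticalFlowPyrLK(img1, img2, p1, None)
    good = status.ravel() == 1
    p1, p2 = p1[good], p2[good]
    H, mask = cv2.findHomography(p1, p2, cv2.RANSAC, ransacReprojThreshold=3.0)
    inliers = mask.ravel().astype(bool)
    return H, p1[inliers], p2[inliers]
```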

  5. Performance of ultrasound based measurement of 3D displacement using a curvilinear probe for organ motion tracking

    NASA Astrophysics Data System (ADS)

    Harris, Emma J.; Miller, Naomi R.; Bamber, Jeffrey C.; Evans, Phillip M.; Symonds-Tayler, J. Richard N.

    2007-09-01

    Three-dimensional (3D) soft tissue tracking is of interest for monitoring organ motion during therapy. Our goal is to assess the tracking performance of a curvilinear 3D ultrasound probe in terms of the accuracy and precision of measured displacements. The first aim was to examine the depth dependence of the tracking performance. This is of interest because the spatial resolution varies with distance from the elevational focus and because the curvilinear geometry of the transducer causes the spatial sampling frequency to decrease with depth. Our second aim was to assess tracking performance as a function of the spatial sampling setting (low, medium or high sampling). These settings are incorporated onto 3D ultrasound machines to allow the user to control the trade-off between spatial sampling and temporal resolution. Volume images of a speckle-producing phantom were acquired before and after the probe had been moved by a known displacement (1, 2 or 8 mm). This allowed us to assess the optimum performance of the tracking algorithm, in the absence of motion. 3D speckle tracking was performed using 3D cross-correlation and sub-voxel displacements were estimated. The tracking performance was found to be best for axial displacements and poorest for elevational displacements. In general, the performance decreased with depth, although the nature of the depth dependence was complex. Under certain conditions, the tracking performance was sufficient to be useful for monitoring organ motion. For example, at the highest sampling setting, for a 2 mm displacement, good accuracy and precision (an error and standard deviation of <0.4 mm) were observed at all depths and for all directions of displacement. The trade-off between spatial sampling, temporal resolution and size of the field of view (FOV) is discussed.
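
    The displacement estimate at the core of such 3D speckle tracking can be illustrated with a brute-force normalized cross-correlation search followed by a three-point parabolic sub-voxel fit. This is a minimal sketch of the generic technique, not the algorithm evaluated in the study; the template and search-window sizes are illustrative assumptions.

```python
# Minimal sketch of 3D block matching by normalized cross-correlation with
# parabolic sub-voxel refinement (brute-force; illustrative sizes, not the
# parameters evaluated in the study).
import numpy as np

def track_block_3d(vol_ref, vol_def, center, template=9, search=15):
    """Estimate the 3D displacement (dz, dy, dx) of the block around `center`."""
    t, s = template // 2, search // 2
    z, y, x = center
    tmpl = vol_ref[z - t:z + t + 1, y - t:y + t + 1, x - t:x + t + 1]
    win = vol_def[z - s:z + s + 1, y - s:y + s + 1, x - s:x + s + 1]
    n = search - template + 1                       # number of candidate shifts per axis
    ncc = np.empty((n, n, n))
    for iz in range(n):
        for iy in range(n):
            for ix in range(n):
                cand = win[iz:iz + template, iy:iy + template, ix:ix + template]
                ncc[iz, iy, ix] = np.corrcoef(tmpl.ravel(), cand.ravel())[0, 1]
    bz, by, bx = np.unravel_index(np.argmax(ncc), ncc.shape)

    def subvoxel(profile, i):
        # 3-point parabolic fit around the integer peak
        if 0 < i < len(profile) - 1:
            den = profile[i - 1] - 2 * profile[i] + profile[i + 1]
            if den != 0:
                return i + 0.5 * (profile[i - 1] - profile[i + 1]) / den
        return float(i)

    offset = s - t                                   # shift corresponding to zero displacement
    return (subvoxel(ncc[:, by, bx], bz) - offset,
            subvoxel(ncc[bz, :, bx], by) - offset,
            subvoxel(ncc[bz, by, :], bx) - offset)
```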

  6. Automated 3D Motion Tracking using Gabor Filter Bank, Robust Point Matching, and Deformable Models

    PubMed Central

    Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2013-01-01

    Tagged Magnetic Resonance Imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the Robust Point Matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of: 1) through-plane motion, and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the Moving Least Square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method
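
    The first step of the pipeline, locating tag intersections with a Gabor filter bank, can be sketched as below using scikit-image. The tag-spacing parameter and the two orientations are illustrative stand-ins, and the product of orientation responses is only a simple proxy for the local phase analysis described above, not the authors' implementation.

```python
# Illustrative sketch: a two-orientation Gabor filter bank whose joint response
# highlights tag-line intersections (tag spacing in pixels is an assumed input).
import numpy as np
from skimage.filters import gabor

def tag_intersection_map(image, tag_spacing_px=6.0):
    freq = 1.0 / tag_spacing_px                  # spatial frequency matched to the tag spacing
    responses = []
    for theta in (0.0, np.pi / 2):               # two (near-)orthogonal tag directions
        real, imag = gabor(image, frequency=freq, theta=theta)
        responses.append(np.hypot(real, imag))   # local magnitude of each tag pattern
    # intersections respond strongly to both orientations at once
    return responses[0] * responses[1]
```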

  7. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    PubMed

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

    The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable in spite of the considerably large deformations that occurred. There was a good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be effectively used in real-time applications. PMID:22356947

  8. Towards intraoperative monitoring of ablation using tracked 3D ultrasound elastography and internal palpation

    NASA Astrophysics Data System (ADS)

    Foroughi, Pezhman; Burgner, Jessica; Choti, Michael A.; Webster, Robert J., III; Hager, Gregory D.; Boctor, Emad M.

    2012-03-01

    B-mode ultrasound is widely used in liver ablation. However, the necrosis zone is typically not visible under B-mode ultrasound, since ablation does not necessarily change the acoustic properties of the tissue. In contrast, the change in tissue stiffness makes elastography ideal for monitoring ablation. Tissue palpation for elastography is typically applied at the imaging probe, by indenting it slightly into the tissue surface. However, in this paper we propose an alternate approach, where palpation is applied by a surgical instrument located inside the tissue. In our approach, the ablation needle is placed inside a steerable device called an active cannula and inserted into the tissue. A controlled motion is applied to the center of the ablation zone via the active cannula. Since the type and direction of motion is known, displacement can then be computed from two frames with the desired motion. The elastography results show the ablated region around the needle. While internal palpation provides excellent local contrast, freehand palpation from outside of the tissue via the transducer can provide a more global view of the region of interest. For this purpose, we used a tracked 3D transducer to generate volumetric elastography images covering the ablated region. The tracking information is employed to improve the elastography results by selecting volume pairs suitable for elastography. This is an extension of our 2D frame selection technique which can cope with uncertainties associated with intra-operative elastography. In our experiments with phantom and ex-vivo tissue, we were able to generate high-quality images depicting the boundaries of the hard lesions.

  9. Real-Time Climate Simulations in the Interactive 3D Game Universe Sandbox ²

    NASA Astrophysics Data System (ADS)

    Goldenson, N. L.

    2014-12-01

    Exploration in an open-ended computer game is an engaging way to explore climate and climate change. Everyone can explore physical models with real-time visualization in the educational simulator Universe Sandbox ² (universesandbox.com/2), which includes basic climate simulations on planets. I have implemented a time-dependent, one-dimensional meridional heat transport energy balance model to run and be adjustable in real time in the midst of a larger simulated system. Universe Sandbox ² is based on the original game - at its core a gravity simulator - with other new physically-based content for stellar evolution, and handling collisions between bodies. Existing users are mostly science enthusiasts in informal settings. We believe that this is the first climate simulation to be implemented in a professionally developed computer game with modern 3D graphical output in real time. The type of simple climate model we've adopted helps us depict the seasonal cycle and the more drastic changes that come from changing the orbit or other external forcings. Users can alter the climate as the simulation is running by altering the star(s) in the simulation, dragging to change orbits and obliquity, adjusting the climate simulation parameters directly or changing other properties like CO2 concentration that affect the model parameters in representative ways. Ongoing visuals of the expansion and contraction of sea ice and snow-cover respond to the temperature calculations, and make it accessible to explore a variety of scenarios and intuitive to understand the output. Variables like temperature can also be graphed in real time. We balance computational constraints with the ability to capture the physical phenomena we wish to visualize, giving everyone access to a simple open-ended meridional energy balance climate simulation to explore and experiment with. The software lends itself to labs at a variety of levels about climate concepts including seasons, the Greenhouse effect
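
    For readers unfamiliar with this class of model, the sketch below advances a diffusive one-dimensional meridional energy-balance model by one explicit time step. It is a minimal illustration of the kind of model described above; the constants are textbook-style values rather than those used in Universe Sandbox ², and the time step must respect the explicit-diffusion stability limit for the chosen grid.

```python
# Illustrative one-dimensional diffusive energy-balance model (Budyko/Sellers style):
# C dT/dt = S(x)(1 - albedo) - (A + B*T) + d/dx[ D (1 - x^2) dT/dx ],  x = sin(latitude).
# Constants are textbook-style values, not those used in Universe Sandbox 2.
import numpy as np

def step_ebm(T, dt=43200.0, D=0.55, A=210.0, B=2.0, S0=1361.0, albedo=0.3, C=4.0e7):
    """Advance zonal-mean temperature T (deg C) on an equally spaced x-grid by dt seconds."""
    n = T.size
    x = np.linspace(-1.0 + 1.0 / n, 1.0 - 1.0 / n, n)        # cell centres in x = sin(lat)
    dx = x[1] - x[0]
    # annual-mean insolation approximated with the second Legendre polynomial
    insolation = (S0 / 4.0) * (1.0 - 0.482 * 0.5 * (3.0 * x**2 - 1.0))
    # meridional heat transport with zero flux at the poles
    flux = np.zeros(n + 1)
    xf = 0.5 * (x[:-1] + x[1:])                               # cell faces
    flux[1:-1] = D * (1.0 - xf**2) * np.diff(T) / dx
    transport = np.diff(flux) / dx
    dTdt = (insolation * (1.0 - albedo) - (A + B * T) + transport) / C
    # explicit step; keep dt below the diffusive stability limit for the grid
    return T + dt * dTdt
```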

  10. Real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events

    NASA Astrophysics Data System (ADS)

    Javidi, Bahram; Yeom, Seokwon; Moon, Inkyu; Daneshpanah, Mehdi

    2006-05-01

    In this paper, we present an overview of three-dimensional (3D) optical imaging techniques for real-time automated sensing, visualization, and recognition of dynamic biological microorganisms. Real-time sensing and 3D reconstruction of dynamic biological microscopic objects can be performed by single-exposure on-line (SEOL) digital holographic microscopy. A coherent 3D microscope-based interferometer is constructed to record digital holograms of dynamic microbiological events. Complex amplitude 3D images of the biological microorganisms are computationally reconstructed at different depths by digital signal processing. Bayesian segmentation algorithms are applied to identify regions of interest for further processing. A number of pattern recognition approaches are addressed to identify and recognize the microorganisms. One approach uses the 3D morphology of the microorganisms by analyzing 3D geometrical shapes composed of magnitude and phase. Segmentation, feature extraction, graph matching, feature selection, and training and decision rules are used to recognize the biological microorganisms. A different approach uses a 3D technique that is tolerant to the varying shapes of the non-rigid biological microorganisms. After segmentation, a number of sampling patches are arbitrarily extracted from the complex amplitudes of the reconstructed 3D biological microorganism. These patches are processed using a number of cost functions and statistical inference theory for the equality of means and equality of variances between the sampling segments. Also, we discuss the possibility of employing computational integral imaging for 3D sensing, visualization, and recognition of biological microorganisms illuminated under incoherent light. Experimental results with several biological microorganisms are presented to illustrate detection, segmentation, and identification of microbiological events.

  11. Multithreaded real-time 3D image processing software architecture and implementation

    NASA Astrophysics Data System (ADS)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions, and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The difference in the positions between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting are performed in response to user zoom and pan. The player also consists of a CPU display thread, which uses OpenGL rendering (quad buffers). This also gathers user input for digital zoom and pan and sends them to the processing thread.

  12. 3D Joint Speaker Position and Orientation Tracking with Particle Filters

    PubMed Central

    Segura, Carlos; Hernando, Javier

    2014-01-01

    This paper addresses the problem of three-dimensional speaker orientation estimation in a smart-room environment equipped with microphone arrays. A Bayesian approach is proposed to jointly track the location and orientation of an active speaker. The main motivation is that knowledge of the speaker orientation may yield increased localization performance and vice versa. Assuming that the sound produced by the speaker originates from the mouth, the center of the head is deduced from the estimated head orientation. Moreover, the elevation angle of the speaker's head can be partly inferred from the fast vertical movements of the computed mouth location. In order to test the performance of the proposed algorithm, a new multimodal dataset has been recorded for this purpose, in which the corresponding 3D orientation angles are acquired by an inertial measurement unit (IMU) comprising accelerometers, magnetometers, and gyroscopes along three axes. The proposed joint algorithm outperforms a two-step approach in terms of localization and orientation angle precision, confirming the superiority of the joint approach. PMID:24481230
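
    A single sequential-importance-resampling step for jointly tracking 3D position and azimuth, in the spirit of the Bayesian approach described here, might look as follows. The likelihood callable is a stand-in that the caller would derive from the microphone-array measurements (e.g., SRP or TDOA scores); the noise levels and resampling threshold are illustrative assumptions, not the paper's settings.

```python
# Illustrative SIR particle-filter step for joint 3D position + azimuth tracking.
# The `likelihood` callable is a stand-in built from microphone-array measurements.
import numpy as np

def particle_filter_step(particles, weights, likelihood, pos_noise=0.05, ang_noise=0.1, rng=None):
    """particles: (N, 4) array of [x, y, z, azimuth]; weights: (N,); returns updated state."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(particles)
    particles = particles.copy()
    # 1) propagate with a random-walk motion model
    particles[:, :3] += rng.normal(0.0, pos_noise, size=(n, 3))
    particles[:, 3] = (particles[:, 3] + rng.normal(0.0, ang_noise, size=n)) % (2.0 * np.pi)
    # 2) weight by the acoustic likelihood of the current frame
    weights = weights * likelihood(particles)
    weights = weights / weights.sum()
    # 3) resample when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    # weighted mean as the point estimate (circular averaging of azimuth omitted for brevity)
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```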

  13. 3D joint speaker position and orientation tracking with particle filters.

    PubMed

    Segura, Carlos; Hernando, Javier

    2014-01-01

    This paper addresses the problem of three-dimensional speaker orientation estimation in a smart-room environment equipped with microphone arrays. A Bayesian approach is proposed to jointly track the location and orientation of an active speaker. The main motivation is that knowledge of the speaker orientation may yield increased localization performance and vice versa. Assuming that the sound produced by the speaker originates from the mouth, the center of the head is deduced from the estimated head orientation. Moreover, the elevation angle of the speaker's head can be partly inferred from the fast vertical movements of the computed mouth location. In order to test the performance of the proposed algorithm, a new multimodal dataset has been recorded for this purpose, in which the corresponding 3D orientation angles are acquired by an inertial measurement unit (IMU) comprising accelerometers, magnetometers, and gyroscopes along three axes. The proposed joint algorithm outperforms a two-step approach in terms of localization and orientation angle precision, confirming the superiority of the joint approach. PMID:24481230

  14. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography.

    PubMed

    Carrasco-Zevallos, Oscar M; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A

    2016-01-01

    Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 x 15 mm achieved diffraction-limited imaging over a lateral tracking range of +/- 2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800
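
    A possible building block for such a tracker is locating the pupil centroid in each oculography frame by thresholding the dark pupil and taking image moments, as sketched below with OpenCV. The threshold and minimum-area values are illustrative assumptions, not parameters reported in the paper.

```python
# Illustrative sketch: pupil centroid from one oculography frame via dark-blob
# thresholding and image moments (threshold and area values are assumptions).
import cv2
import numpy as np

def pupil_centroid(frame_gray, thresh=40, min_area=200):
    _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)               # largest dark blob
    if cv2.contourArea(pupil) < min_area:
        return None
    m = cv2.moments(pupil)
    return m["m10"] / m["m00"], m["m01"] / m["m00"]          # (x, y) centroid in pixels
```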

  15. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography

    PubMed Central

    Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.

    2016-01-01

    Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 x 15 mm achieved diffraction-limited imaging over a lateral tracking range of +/- 2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800

  16. Esophagogastric Junction pressure morphology: comparison between a station pull-through and real-time 3D-HRM representation

    PubMed Central

    Nicodème, Frédéric; Lin, Zhiyue; Pandolfino, John E.; Kahrilas, Peter J.

    2013-01-01

    BACKGROUND Esophagogastric junction (EGJ) competence is the fundamental defense against reflux making it of great clinical significance. However, characterizing EGJ competence with conventional manometric methodologies has been confounded by its anatomic and physiological complexity. Recent technological advances in miniaturization and electronics have led to the development of a novel device that may overcome these challenges. METHODS Nine volunteer subjects were studied with a novel 3D-HRM device providing 7.5 mm axial and 45° radial pressure resolution within the EGJ. Real-time measurements were made at rest and compared to simulations of a conventional pull-through made with the same device. Moreover, 3D-HRM recordings were analyzed to differentiate contributing pressure signals within the EGJ attributable to lower esophageal sphincter (LES), diaphragm, and vasculature. RESULTS 3D-HRM recordings suggested that sphincter length assessed by a pull-through method greatly exaggerated the estimate of LES length by failing to discriminate among circumferential contractile pressure and asymmetric extrinsic pressure signals attributable to diaphragmatic and vascular structures. Real-time 3D EGJ recordings found that the dominant constituents of EGJ pressure at rest were attributable to the diaphragm. CONCLUSIONS 3D-HRM permits real-time recording of EGJ pressure morphology facilitating analysis of the EGJ constituents responsible for its function as a reflux barrier making it a promising tool in the study of GERD pathophysiology. The enhanced axial and radial recording resolution of the device should facilitate further studies to explore perturbations in the physiological constituents of EGJ pressure in health and disease. PMID:23734788

  17. 3D real-time visualization of blood flow in cerebral aneurysms by light field particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Carlsohn, Matthias F.; Kemmling, André; Petersen, Arne; Wietzke, Lennart

    2016-04-01

    Cerebral aneurysms require endovascular treatment to eliminate potentially lethal hemorrhagic rupture by hemostasis of blood flow within the aneurysm. Devices (e.g., coils and flow diverters) promote hemostasis; however, measurement of blood flow within an aneurysm or cerebral vessel before and after device placement on a microscopic level has not been possible so far. This would allow better individualized treatment planning and improve the manufacturing design of devices. For experimental analysis, direct measurement of real-time microscopic cerebrovascular flow in micro-structures may be an alternative to computed flow simulations. Applying microscopic aneurysm flow measurement on a regular basis, to empirically assess a high number of different anatomic shapes and the corresponding effect of different devices, would require a fast and reliable method at low cost with high-throughput assessment. Transparent three-dimensional (3D) models of brain vessels and aneurysms may be used for microscopic flow measurements by particle image velocimetry (PIV); however, up to now the size of structures has set the limits for conventional 3D-imaging camera set-ups. Online flow assessment requires additional computational power to cope with processing the large amounts of data generated by sequences of multi-view stereo images, e.g., generated by a light field camera capturing the 3D information by plenoptic imaging of complex flow processes. Recently, a fast and low-cost workflow for producing patient-specific three-dimensional models of cerebral arteries has been established by stereo-lithographic (SLA) 3D printing. These 3D arterial models are transparent and exhibit a replication precision within the submillimeter range required for accurate flow measurements under physiological conditions. We therefore test the feasibility of microscopic flow measurements by PIV analysis using a plenoptic camera system capturing light field image sequences. Averaging across a sequence of

  18. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery

    SciTech Connect

    Li, Ruijiang; Fahimian, Benjamin P.; Xing, Lei

    2011-07-15

    Purpose: Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of the imaging x-ray. By effectively taking into consideration projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time and full 3D tumor localization with a single x-ray imager during treatment delivery. Methods: First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during the treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion when all the projection images during the treatment delivery are used for 3D localization purposes. The algorithm does not involve complex optimization of any model parameter and therefore can be used in a "plug-and-play" fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. Results: For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error for different scenarios is small and is not
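
    The key step, resolving the dimension that monoscopic imaging cannot observe, can be illustrated by maximizing a Gaussian prior along the imaging ray, which has a closed-form solution. This is a simplified stand-in for the Bayesian formalism described above, not the authors' algorithm; the ray geometry and prior statistics are assumed inputs.

```python
# Illustrative closed-form MAP estimate: the point on the imaging ray that maximizes
# a Gaussian prior over the 3D tumor position (a simplified stand-in; the ray geometry
# and prior statistics are assumed inputs).
import numpy as np

def map_3d_position(ray_origin, ray_direction, prior_mean, prior_cov):
    """Return the point p = origin + t*d on the ray that maximizes N(p; prior_mean, prior_cov)."""
    d = ray_direction / np.linalg.norm(ray_direction)
    P = np.linalg.inv(prior_cov)                         # prior precision matrix
    r = ray_origin - prior_mean
    # minimize (r + t*d)^T P (r + t*d) over t -> closed-form solution
    t = -(d @ P @ r) / (d @ P @ d)
    return ray_origin + t * d
```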

  19. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    PubMed Central

    2014-01-01

    Purpose: The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. Methods: To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. Results: In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Conclusion: Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs. PMID:25038809

  20. Learning Dictionaries of Sparse Codes of 3D Movements of Body Joints for Real-Time Human Activity Understanding

    PubMed Central

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area has been performed on traditional two-dimensional (2D) videos, and both global and local methods have been used. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers began to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually do not perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications. PMID:25473850
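
    The learn-project-classify pipeline described here can be sketched with scikit-learn, using FastICA to build a per-activity dictionary from flattened space-time volumes of joint motion, histograms of projection coefficients as features, and a linear SVM. Array shapes, component counts, and the histogram range are illustrative assumptions, not the authors' settings.

```python
# Illustrative sketch of the learn-project-classify pipeline: FastICA dictionary per
# activity, histograms of projection coefficients as features, and a linear SVM.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

def learn_dictionary(volumes, n_atoms=32):
    """volumes: (n_samples, n_features) flattened space-time volumes of joint motion."""
    ica = FastICA(n_components=n_atoms, max_iter=1000)
    ica.fit(volumes)
    return ica.components_                               # (n_atoms, n_features) dictionary

def encode(volumes, dictionary, n_bins=20):
    """Project one sequence's volumes onto the dictionary and histogram the coefficients."""
    coeffs = volumes @ dictionary.T                      # (n_samples, n_atoms)
    hists = [np.histogram(c, bins=n_bins, range=(-3, 3), density=True)[0] for c in coeffs.T]
    return np.concatenate(hists)                         # one feature vector per sequence

# e.g.: X = np.stack([encode(v, D) for v in sequences]); clf = SVC(kernel="linear").fit(X, labels)
```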

  1. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    PubMed

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area has been performed on traditional two-dimensional (2D) videos, and both global and local methods have been used. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers began to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually do not perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications. PMID:25473850

  2. Mapping 3D Strains with Ultrasound Speckle Tracking: Method Validation and Initial Results in Porcine Scleral Inflation.

    PubMed

    Cruz Perez, Benjamin; Pavlatos, Elias; Morris, Hugh J; Chen, Hong; Pan, Xueliang; Hart, Richard T; Liu, Jun

    2016-07-01

    This study aimed to develop and validate a high frequency ultrasound method for measuring distributive, 3D strains in the sclera during elevations of intraocular pressure. A 3D cross-correlation based speckle-tracking algorithm was implemented to compute the 3D displacement vector and strain tensor at each tracking point. Simulated ultrasound radiofrequency data from a sclera-like structure at undeformed and deformed states with known strains were used to evaluate the accuracy and signal-to-noise ratio (SNR) of strain estimation. An experimental high frequency ultrasound (55 MHz) system was built to acquire 3D scans of porcine eyes inflated from 15 to 17 and then 19 mmHg. Simulations confirmed good strain estimation accuracy and SNR (e.g., the axial strains had less than 4.5% error with SNRs greater than 16.5 for strains from 0.005 to 0.05). Experimental data in porcine eyes showed increasing tensile, compressive, and shear strains in the posterior sclera during inflation, with a volume ratio close to one suggesting near-incompressibility. This study established the feasibility of using high frequency ultrasound speckle tracking for measuring 3D tissue strains and its potential to characterize physiological deformations in the posterior eye. PMID:26563101
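
    Once a dense 3D displacement field is available from speckle tracking, the infinitesimal strain tensor follows from the symmetric part of the displacement gradient. The sketch below shows that generic step; the voxel spacing is an assumed parameter and this is not the validated algorithm from the study.

```python
# Illustrative sketch: infinitesimal strain tensor from a dense 3D displacement field
# (generic post-processing step after speckle tracking; voxel spacing is assumed).
import numpy as np

def strain_tensor(u, spacing=(1.0, 1.0, 1.0)):
    """u: displacement field of shape (3, nz, ny, nx); returns (3, 3, nz, ny, nx) strains."""
    grad = np.empty((3, 3) + u.shape[1:])
    for i in range(3):                                    # du_i / dx_j on the voxel grid
        for j, g in enumerate(np.gradient(u[i], *spacing)):
            grad[i, j] = g
    # epsilon_ij = 0.5 * (du_i/dx_j + du_j/dx_i)
    return 0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))
```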

  3. A novel 3D micron-scale DPTV (Defocused Particle Tracking Velocimetry) and its applications in microfluidic devices

    NASA Astrophysics Data System (ADS)

    Roberts, John

    2005-11-01

    The rapid advancements in micro/nano biotechnology demand quantitative tools for characterizing microfluidic flows in lab-on-a-chip applications, validation of computational results for fully 3D flows in complex micro-devices, and efficient observation of cellular dynamics in 3D. We present a novel 3D micron-scale DPTV (defocused particle tracking velocimetry) that is capable of mapping out 3D Lagrangian, as well as 3D Eulerian, velocity flow fields at sub-micron resolution and with one camera. The main part of the imaging system is an epi-fluorescent microscope (Olympus IX 51), and the seeding particles are fluorescent particles with diameters ranging from 300 nm to 10 um. A software package has been developed for identifying (x,y,z,t) coordinates of the particles using the defocused images. Using the imaging system, we successfully mapped the pressure-driven flow fields in microfluidic channels. In particular, we measured the Lagrangian flow fields in a microfluidic channel with a herring-bone pattern at the bottom, the latter being used to enhance fluid mixing in lateral directions. The 3D particle tracks revealed flow structure that had previously been seen only in numerical computation. This work is supported by the National Science Foundation (CTS - 0514443), the Nanobiotechnology Center at Cornell, and The New York State Center for Life Science Enterprise.

  4. Tracking of cracks in bridges using GPR: a 3D approach

    NASA Astrophysics Data System (ADS)

    Benedetto, A.

    2012-04-01

    Corrosion associated with reinforcing bars is the most significant contributor to bridge deficiencies. The corrosion is usually caused by moisture and chloride ion exposure. In particular, corrosion produces FeO, Fe2O3, Fe3O4, and other oxides along the reinforcement bars. The reinforcing bars are attacked by corrosion and yield expansive corrosion products. These oxidation products occupy a larger volume than the original intact steel, and the internal expansive stresses lead to cracking and debonding. Conventional inspection methods exist for the detection of reinforcing bar corrosion, but they can be invasive and destructive, are often laborious, require lane closures, and make any quantification of corrosion difficult or unreliable. For these reasons, bridge engineers increasingly prefer to use the Ground Penetrating Radar (GPR) technique. In this work a novel numerical approach for three-dimensional tracking and mapping of cracks in bridges is proposed. The work starts from some interesting results based on the use of the 3D imaging technique in order to improve the potential of GPR to detect voids, cracks, or buried objects. The numerical approach has been tested on data acquired on some bridges using a pulse GPR system specifically designed for bridge deck and pavement inspection, called RIS Hi Bright. The equipment integrates two arrays of Ultra Wide Band ground-coupled antennas, having a main working frequency of 2 GHz. The two arrays within the RIS Hi Bright use antennas arranged with different polarizations. One array includes sensors with parallel polarization with respect to the scanning direction (VV array); the other has sensors in orthogonal polarization (HH array). Overall, the system collects 16 profiles within a single scan (8 HH + 8 VV). The cracks, often associated with increased moisture and higher values of the dielectric constant, produce a non-negligible increase in signal amplitude. Following this, the algorithm

  5. Real-Time Tracking of Knee Adduction Moment in Patients with Knee Osteoarthritis

    PubMed Central

    Kang, Sang Hoon; Lee, Song Joo; Zhang, Li-Qun

    2014-01-01

    Background The external knee adduction moment (EKAM) is closely associated with the presence, progression, and severity of knee osteoarthritis (OA). However, there is a lack of a convenient and practical method to estimate and track in real time the EKAM of patients with knee OA for clinical evaluation and gait training, especially outside of gait laboratories. New Method A real-time EKAM estimation method was developed and applied to track and investigate the EKAM and other knee moments during stepping on an elliptical trainer in both healthy subjects and a patient with knee OA. Results Substantial changes were observed in the EKAM and other knee moments during stepping in the patient with knee OA. Comparison with Existing Method(s) This is the first study to develop and test the feasibility of a real-time EKAM tracking method for patients with knee OA using 3-D inverse dynamics. Conclusions The study provides an accurate and practical method to evaluate in real time the critical EKAM associated with knee OA, which is expected to help diagnose and evaluate patients with knee OA and to provide patients with real-time EKAM feedback during rehabilitation training. PMID:24361759
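
    A simplified frontal-plane flavour of the quantity being tracked can be written as the moment of the ground reaction force about the knee joint centre, resolved along the knee's anterior-posterior axis. The sketch below ignores the segment inertial terms that a full 3-D inverse-dynamics model (as used in the study) would include; all vectors are assumed to be expressed in a common laboratory frame.

```python
# Illustrative, simplified estimate of the external knee adduction moment: the moment
# of the ground reaction force about the knee joint centre, resolved along the knee's
# anterior-posterior axis. Segment inertial terms are deliberately omitted.
import numpy as np

def external_knee_adduction_moment(grf, cop, knee_center, ap_axis):
    """grf (N), cop (m), knee_center (m): 3-vectors; ap_axis: unit anterior-posterior axis."""
    lever = cop - knee_center                 # moment arm from the knee joint centre
    moment = np.cross(lever, grf)             # external moment of the GRF about the knee
    return float(np.dot(moment, ap_axis))     # adduction component, in N*m
```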

  6. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often not able to run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a pace of 10 MVoxels/sec.

  7. Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system.

    PubMed

    Tao, Tianyang; Chen, Qian; Da, Jian; Feng, Shijie; Hu, Yan; Zuo, Chao

    2016-09-01

    In recent years, fringe projection has become an established and essential method for dynamic three-dimensional (3-D) shape measurement in different fields such as online inspection and real-time quality control. Numerous high-speed 3-D shape measurement methods have been developed by either employing high-speed hardware, minimizing the number of pattern projections, or both. However, dynamic 3-D shape measurement of arbitrarily-shaped objects with full sensor resolution without the necessity of additional pattern projections is still a big challenge. In this work, we introduce a high-speed 3-D shape measurement technique based on composite phase-shifting fringes and a multi-view system. The geometry constraint is adopted to search the corresponding points independently without additional images. Meanwhile, by analysing the 3-D position and the main wrapped phase of the corresponding point, pairs with an incorrect 3-D position or a considerable phase difference are effectively rejected. All of the qualified corresponding points are then corrected, and the unique one as well as the related period order is selected through the embedded triangular wave. Finally, considering that some points can only be captured by one of the cameras due to occlusions, these points may have different fringe orders in the two views, so a left-right consistency check is employed to eliminate erroneous period orders in this case. Several experiments on both static and dynamic scenes are performed, verifying that our method can achieve a speed of 120 frames per second (fps) with 25-period fringe patterns for fast, dense, and accurate 3-D measurement. PMID:27607632
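
    The generic building block behind phase-shifting fringe analysis is recovery of the wrapped phase from N equally shifted fringe images. The sketch below implements the standard N-step formula; it is not the authors' composite-fringe method, and it assumes the n-th frame carries a phase shift of 2*pi*n/N.

```python
# Illustrative N-step phase-shifting formula for the wrapped phase (not the authors'
# composite-fringe technique); assumes the n-th frame is shifted by 2*pi*n/N.
import numpy as np

def wrapped_phase(frames):
    """frames: (N, H, W) stack of phase-shifted fringe images; returns phase in (-pi, pi]."""
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    shifts = 2.0 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(shifts), frames, axes=1)   # sum_n I_n sin(delta_n)
    den = np.tensordot(np.cos(shifts), frames, axes=1)   # sum_n I_n cos(delta_n)
    return np.arctan2(-num, den)
```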

  8. Rapid, High-Throughput Tracking of Bacterial Motility in 3D via Phase-Contrast Holographic Video Microscopy

    PubMed Central

    Cheong, Fook Chiong; Wong, Chui Ching; Gao, YunFeng; Nai, Mui Hoon; Cui, Yidan; Park, Sungsu; Kenney, Linda J.; Lim, Chwee Teck

    2015-01-01

    Tracking fast-swimming bacteria in three dimensions can be extremely challenging with current optical techniques and a microscopic approach that can rapidly acquire volumetric information is required. Here, we introduce phase-contrast holographic video microscopy as a solution for the simultaneous tracking of multiple fast moving cells in three dimensions. This technique uses interference patterns formed between the scattered and the incident field to infer the three-dimensional (3D) position and size of bacteria. Using this optical approach, motility dynamics of multiple bacteria in three dimensions, such as speed and turn angles, can be obtained within minutes. We demonstrated the feasibility of this method by effectively tracking multiple bacteria species, including Escherichia coli, Agrobacterium tumefaciens, and Pseudomonas aeruginosa. In addition, we combined our fast 3D imaging technique with a microfluidic device to present an example of a drug/chemical assay to study effects on bacterial motility. PMID:25762336

  9. Real-time, high-accuracy 3D imaging and shape measurement.

    PubMed

    Nguyen, Hieu; Nguyen, Dung; Wang, Zhaoyang; Kieu, Hien; Le, Minh

    2015-01-01

    In spite of the recent advances in 3D shape measurement and geometry reconstruction, simultaneously achieving fast-speed and high-accuracy performance remains a big challenge in practice. In this paper, a 3D imaging and shape measurement system is presented to tackle such a challenge. The fringe-projection-profilometry-based system employs a number of advanced approaches, such as: composition of phase-shifted fringe patterns, externally triggered synchronization of system components, generalized system setup, ultrafast phase-unwrapping algorithm, flexible system calibration method, robust gamma correction scheme, multithread computation and processing, and graphics-processing-unit-based image display. Experiments have shown that the proposed system can acquire and display high-quality 3D reconstructed images and/or video stream at a speed of 45 frames per second with relative accuracy of 0.04% or at a reduced speed of 22.5 frames per second with enhanced accuracy of 0.01%. The 3D imaging and shape measurement system shows great promise of satisfying the ever-increasing demands of scientific and engineering applications. PMID:25967028

  10. Touring Mars Online, Real-time, in 3D for Math and Science Educators and Students

    ERIC Educational Resources Information Center

    Jones, Greg; Kalinowski, Kevin

    2007-01-01

    This article discusses a project that placed over 97% of Mars' topography, made available from NASA, into an interactive 3D multi-user online learning environment beginning in 2003. In 2005, curriculum materials were developed to support middle school math and science education. Research conducted at the University of North Texas…

  11. Real-time 3D image reconstruction guidance in liver resection surgery

    PubMed Central

    Nicolau, Stephane; Pessaux, Patrick; Mutter, Didier; Marescaux, Jacques

    2014-01-01

    Background Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. Methods From a patient’s medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that increases the surgeon’s intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient. We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools to combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. Results From January 2009 to June 2013, 769 clinical cases were modeled by the Visible Patient service. Moreover, three clinical validations were performed, demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures were performed, illustrating the potential clinical benefit of such assistance in terms of safety, but also current limits that automatic augmented reality will overcome. Conclusions Virtual patient modeling should be mandatory for certain interventions that have now to be defined, such as liver surgery. Augmented reality is clearly the next step of the new surgical instrumentation but remains currently limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this

  12. Using 3D Glyph Visualization to Explore Real-time Seismic Data on Immersive and High-resolution Display Systems

    NASA Astrophysics Data System (ADS)

    Nayak, A. M.; Lindquist, K.; Kilb, D.; Newman, R.; Vernon, F.; Leigh, J.; Johnson, A.; Renambot, L.

    2003-12-01

    The study of time-dependent, three-dimensional natural phenomena like earthquakes can be enhanced with innovative and pertinent 3D computer graphics. Here we display seismic data as 3D glyphs (graphics primitives or symbols with various geometric and color attributes), allowing us to visualize the measured, time-dependent, 3D wave field from an earthquake recorded by a certain seismic network. In addition to providing a powerful state-of-health diagnostic of the seismic network, the graphical result presents an intuitive understanding of the real-time wave field that is hard to achieve with traditional 2D visualization methods. We have named these 3D icons `seismoglyphs' to suggest visual objects built from three components of ground motion data (north-south, east-west, vertical) recorded by a seismic sensor. A seismoglyph changes color with time, spanning the spectrum, to indicate when the seismic amplitude is largest. The spatial extent of the glyph indicates the polarization of the wave field as it arrives at the recording station. We compose seismoglyphs using the real time ANZA broadband data (http://www.eqinfo.ucsd.edu) to understand the 3D behavior of a seismic wave field in Southern California. Fifteen seismoglyphs are drawn simultaneously with a 3D topography map of Southern California, as real time data is piped into the graphics software using the Antelope system. At each station location, the seismoglyph evolves with time and this graphical display allows a scientist to observe patterns and anomalies in the data. The display also provides visual clues to indicate wave arrivals and ~real-time earthquake detection. Future work will involve adding phase detections, network triggers and near real-time 2D surface shaking estimates. The visuals can be displayed in an immersive environment using the passive stereoscopic Geowall (http://www.geowall.org). The stereographic projection allows for a better understanding of attenuation due to distance and earth

  13. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    PubMed Central

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second. PMID:23202040
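
    The LUT idea can be illustrated in a few lines: quantize each RGB channel, mark the LUT cells occupied by labelled fruit pixels, and index incoming pixels into the table. The 5-bit quantization below is an illustrative choice that keeps the table small enough for a microcontroller; it is not necessarily the resolution used on the STM32.

```python
# Illustrative 3D RGB look-up table classifier: 5-bit quantization per channel keeps
# the table at 32*32*32 booleans (an assumption, not necessarily the paper's setting).
import numpy as np

BITS = 5

def build_lut(fruit_pixels):
    """fruit_pixels: (N, 3) uint8 RGB samples taken from labelled red-peach regions."""
    lut = np.zeros((1 << BITS,) * 3, dtype=bool)
    idx = (fruit_pixels >> (8 - BITS)).astype(int)
    lut[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return lut

def classify(image, lut):
    """image: (H, W, 3) uint8 RGB frame; returns a boolean fruit mask."""
    q = (image >> (8 - BITS)).astype(int)
    return lut[q[..., 0], q[..., 1], q[..., 2]]
```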

  14. Hybrid MV-kV 3D respiratory motion tracking during radiation therapy with low imaging dose

    NASA Astrophysics Data System (ADS)

    Yan, Huagang; Li, Haiyun; Liu, Zhixiang; Nath, Ravinder; Liu, Wu

    2012-12-01

    A novel real-time adaptive MV-kV imaging framework for image-guided radiation therapy is developed to reduce the thoracic and abdominal tumor targeting uncertainty caused by respiration-induced intrafraction motion with ultra-low patient imaging dose. In our method, continuous stereoscopic MV-kV imaging is used at the beginning of a radiation therapy delivery for several seconds to measure the implanted marker positions. After this stereoscopic imaging period, the kV imager is switched off except for the times when no fiducial marker is detected in the cine-MV images. The 3D time-varying marker positions are estimated by combining the MV 2D projection data with the motion correlations between directional components of marker motion established from the stereoscopic imaging period and updated afterwards; in particular, the most likely position is assumed to be the position on the projection line that has the shortest distance to the first principal component line segment constructed from previous trajectory points. An adaptive windowed auto-regressive prediction is utilized to predict the marker position a short time later (310 ms and 460 ms in this study) to allow for tracking system latency. To demonstrate the feasibility and evaluate the accuracy of the proposed method, computer simulations were performed for both arc and fixed-gantry deliveries using 66 h of retrospective tumor motion data from 42 patients treated for thoracic or abdominal cancers. The simulations reveal that using our hybrid approach, a root-mean-square tracking error smaller than 1.2 mm or 1.5 mm can be achieved at a system latency of 310 ms or 460 ms, respectively. Because the kV imaging is only used for a short period of time in our method, extra patient imaging dose can be reduced by an order of magnitude compared to continuous MV-kV imaging, while the clinical tumor targeting accuracy for thoracic or abdominal cancers is maintained. Furthermore, no additional hardware is required with the
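
    The geometric rule quoted above (take the point on the MV projection line closest to the first principal component of the prior trajectory) has a closed-form solution, sketched below with PCA via SVD. For brevity the sketch treats the principal component as an infinite line rather than a segment, and all coordinates are assumed to share one room frame.

```python
# Illustrative closed-form version of the stated rule: the point on the MV projection
# line closest to the first principal component of the prior trajectory (infinite-line
# approximation of the segment used in the paper).
import numpy as np

def most_likely_position(ray_origin, ray_dir, trajectory):
    """trajectory: (N, 3) prior 3D marker positions; the ray is origin + t*dir."""
    d = ray_dir / np.linalg.norm(ray_dir)
    mean = trajectory.mean(axis=0)
    _, _, vt = np.linalg.svd(trajectory - mean)
    e = vt[0]                                    # first principal direction
    w = ray_origin - mean
    a, b, c = d @ d, d @ e, e @ e
    denom = a * c - b * b
    if abs(denom) < 1e-12:                       # lines nearly parallel: project the mean onto the ray
        t = -(d @ w) / a
    else:                                        # closest point between the two lines
        t = (b * (e @ w) - c * (d @ w)) / denom
    return ray_origin + t * d
```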

  15. Continuous focus tracking for real-time optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Cobb, Michael J.; Liu, Xiumei; Li, Xingde

    2005-07-01

    We report an approach to achieving continuous focus tracking and a depth-independent transverse resolution for real-time optical coherence tomography (OCT) imaging. Continuous real-time focus tracking is permitted by use of a lateral-priority image acquisition sequence in which the depth-scanning rate is equivalent to the imaging frame rate. Real-time OCT imaging with continuous focus tracking is performed at 1 frame/s by reciprocal translation of a rapid lateral-scanning miniature imaging probe (e.g., an endoscope). The optical path length in the reference arm is scanned synchronously to ensure that the coherence gate coincides with the imaging beam focus. The image quality improvement is experimentally demonstrated by imaging a tissue phantom embedded with polystyrene microspheres and rabbit esophageal tissues.

  16. Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a novel real-time algorithm for the purpose of extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component for robotic navigation and mapping applications.

  17. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    SciTech Connect

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-09-26

    The application of Digital Radiography in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years ago. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data, with object rotation, for CT image reconstruction. In this paper the indigenous development of CT imaging for solid rocket motors, based on the real-time DR system, is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  18. Model-based lasso catheter tracking in monoplane fluoroscopy for 3D breathing motion compensation during EP procedures

    NASA Astrophysics Data System (ADS)

    Liao, Rui

    2010-02-01

    Radio-frequency catheter ablation (RFCA) of the pulmonary veins (PVs) attached to the left atrium (LA) is usually carried out under fluoroscopy guidance. Overlay of detailed anatomical structures via 3-D CT and/or MR volumes onto the fluoroscopy helps visualization and navigation in electrophysiology (EP) procedures. Unfortunately, respiratory motion may impair the utility of static overlay of the volume with fluoroscopy for catheter navigation. In this paper, we propose a B-spline based method for tracking the circumferential catheter (lasso catheter) in monoplane fluoroscopy. The tracked motion can be used for the estimation of the 3-D trajectory of breathing motion and for subsequent motion compensation. A lasso catheter is typically used during EP procedures and is pushed against the ostia of the PVs to be ablated. Hence this method does not require additional instruments, and achieves motion estimation right at the site of ablation. The performance of the proposed tracking algorithm was evaluated on 340 monoplane frames with an average error of 0.68 +/- 0.36 mm. Our contributions in this work are twofold. First and foremost, we show how to design an effective, practical, and workflow-friendly 3-D motion compensation scheme for EP procedures in a monoplane setup. In addition, we develop an efficient and accurate method for model-based tracking of the circumferential lasso catheter in low-dose EP fluoroscopy.
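
    The model-based representation of the lasso catheter can be illustrated by fitting a closed B-spline to candidate catheter points detected in one frame and resampling it densely, as sketched below with SciPy. The smoothing factor and sample count are illustrative, and detection of the candidate points (the harder part of the authors' method) is assumed to have been done elsewhere.

```python
# Illustrative sketch: fit a closed B-spline to candidate lasso-catheter points from one
# frame and resample it densely (smoothing and sample count are illustrative choices).
import numpy as np
from scipy.interpolate import splprep, splev

def fit_lasso_spline(points, n_samples=100, smoothing=5.0):
    """points: (N, 2) roughly ordered 2D catheter points; returns (n_samples, 2) curve."""
    tck, _ = splprep([points[:, 0], points[:, 1]], s=smoothing, per=True)   # periodic = closed loop
    u = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    x, y = splev(u, tck)
    return np.column_stack([x, y])
```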

  19. A spheroid toxicity assay using magnetic 3D bioprinting and real-time mobile device-based imaging.

    PubMed

    Tseng, Hubert; Gage, Jacob A; Shen, Tsaiwei; Haisler, William L; Neeley, Shane K; Shiao, Sue; Chen, Jianbo; Desai, Pujan K; Liao, Angela; Hebel, Chris; Raphael, Robert M; Becker, Jeanne L; Souza, Glauco R

    2015-01-01

    An ongoing challenge in biomedical research is the search for simple, yet robust assays using 3D cell cultures for toxicity screening. This study addresses that challenge with a novel spheroid assay, wherein spheroids, formed by magnetic 3D bioprinting, contract immediately as cells rearrange and compact the spheroid in relation to viability and cytoskeletal organization. Thus, spheroid size can be used as a simple metric for toxicity. The goal of this study was to validate spheroid contraction as a cytotoxic endpoint using 3T3 fibroblasts in response to 5 toxic compounds (all-trans retinoic acid, dexamethasone, doxorubicin, 5'-fluorouracil, forskolin), sodium dodecyl sulfate (+control), and penicillin-G (-control). Real-time imaging was performed with a mobile device to increase throughput and efficiency. All compounds but penicillin-G significantly slowed contraction in a dose-dependent manner (Z' = 0.88). Cells in 3D were more resistant to toxicity than cells in 2D, whose toxicity was measured by the MTT assay. Fluorescent staining and gene expression profiling of spheroids confirmed these findings. The results of this study validate spheroid contraction within this assay as an easy, biologically relevant endpoint for high-throughput compound screening in representative 3D environments. PMID:26365200
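
    The assay's readout is simply spheroid size over time. A sketch of how a spheroid's projected area could be measured from a single well image with scikit-image is shown below; the threshold choice, pixel size and synthetic image are illustrative, and the commercial assay's own image analysis may differ.

```python
import numpy as np
from skimage import filters, measure

def spheroid_area_mm2(gray, pixel_size_mm=0.01):
    """Projected area (mm^2) of the largest dark object, assumed to be the spheroid."""
    # Spheroids appear darker than the surrounding medium; Otsu picks the cutoff.
    mask = gray < filters.threshold_otsu(gray)
    regions = measure.regionprops(measure.label(mask))
    largest = max(regions, key=lambda r: r.area)
    return largest.area * pixel_size_mm ** 2

# Synthetic well image: a dark circular "spheroid" on a bright background
yy, xx = np.mgrid[0:400, 0:400]
image = np.where((xx - 200) ** 2 + (yy - 200) ** 2 < 80 ** 2, 0.2, 0.9)
image += 0.02 * np.random.default_rng(0).standard_normal(image.shape)

print(f"spheroid area: {spheroid_area_mm2(image):.2f} mm^2")
# Tracking this area frame to frame yields the contraction curve used as the endpoint.
```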

  20. Low-cost real-time 3D PC distributed-interactive-simulation (DIS) application for C4I

    NASA Astrophysics Data System (ADS)

    Gonthier, David L.; Veron, Harry

    1998-04-01

    A 3D Distributed Interactive Simulation (DIS) application was developed and demonstrated in a PC environment. The application is capable of running in stealth mode or as a player alongside battlefield simulations such as ModSAF. PCs can be clustered together, but not necessarily collocated, to run a simulation or training exercise on their own. A 3D perspective view of the battlefield is displayed that includes terrain, trees, buildings and other objects supported by the DIS application. Screen update rates of 15 to 20 frames per second have been achieved with fully lit and textured scenes, thus providing high quality and fast graphics. A complete PC system can be configured for under $2,500. The software runs under Windows95 and WindowsNT. It is written in C++ and uses a commercial API called RenderWare for 3D rendering. The software uses Microsoft Foundation Classes and Microsoft DirectPlay for joystick input. The RenderWare libraries enhance performance through optimization for MMX and the Pentium Pro processor. RenderWare was paired with the Righteous 3D graphics board from Orchid Technologies, which has an advertised rendering rate of up to 2 million texture-mapped triangles per second. A low-cost PC DIS simulator that can partake in a real-time collaborative simulation with other platforms is thus achieved.

  2. Real-time forecasting of Hong Kong beach water quality by 3D deterministic model.

    PubMed

    Chan, S N; Thoe, W; Lee, J H W

    2013-03-15

    Bacterial level (e.g. Escherichia coli) is generally adopted as the key indicator of beach water quality due to its high correlation with swimming-associated illnesses. A 3D deterministic hydrodynamic model is developed to provide daily water quality forecasting for eight marine beaches in Tsuen Wan, which are only about 8 km from the Harbour Area Treatment Scheme (HATS) outfall discharging 1.4 million m³/d of partially treated sewage. The fate and transport of the HATS effluent and its impact on the E. coli level at nearby beaches are studied. The model features the seamless coupling of near field jet mixing and the far field transport and dispersion of wastewater discharge from submarine outfalls, and a spatial-temporal dependent E. coli decay rate formulation specifically developed for sub-tropical Hong Kong waters. The model prediction of beach water quality has been extensively validated against field data both before and after disinfection of the HATS effluent. Compared with daily beach E. coli data during August-November 2011, the model achieves an overall accuracy of 81-91% in forecasting compliance/exceedance of the beach water quality standard. The 3D deterministic model has been most valuable in the interpretation of the complex variation of beach water quality, which depends on tidal level, solar radiation and other hydro-meteorological factors. The model can also be used in optimization of disinfection dosage and in emergency response situations. PMID:23337883

  3. Handheld portable real-time tracking and communications device

    DOEpatents

    Wiseman, James M.; Riblett, Jr., Loren E.; Green, Karl L.; Hunter, John A.; Cook, III, Robert N.; Stevens, James R.

    2012-05-22

    Portable handheld real-time tracking and communications devices include a controller module, a communications module with global positioning and mesh network radio, a data transfer and storage module, and a user interface module, enclosed in a water-resistant enclosure. Real-time tracking and communications devices can be used by protective force, security and first responder personnel to provide situational awareness, allowing for enhanced coordination and effectiveness in rapid response situations. Such devices communicate with other authorized devices via mobile ad-hoc wireless networks, and do not require fixed infrastructure for their operation.

  4. Real-time optical holographic tracking of multiple objects

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Liu, Hua-Kuang

    1989-01-01

    A coherent optical correlation technique for real-time simultaneous tracking of several different objects making independent movements is described, and experimental results are presented. An evaluation of this system compared with digital computing systems is made. The real-time processing capability is obtained through the use of a liquid crystal television spatial light modulator and a dichromated gelatin multifocus hololens. A coded reference beam is utilized in the separation of the output correlation plane associated with each input target so that independent tracking can be achieved.

  5. Real-time seam tracking for rocket thrust chamber manufacturing

    SciTech Connect

    Schmitt, D.J.; Novak, J.L.; Starr, G.P.; Maslakowski, J.E.

    1993-11-01

    A sensor-based control approach for real-time seam tracking of rocket thrust chamber assemblies has been developed to enable automation of a braze paste dispensing process. This approach utilizes a non-contact Multi-Axis Seam Tracking (MAST) sensor to track the seams. The MAST sensor measures capacitance variations between the sensor and the workpiece and produces four varying voltages which are read directly into the robot controller. A PID control algorithm which runs at the application program level has been designed based upon a simple dynamic model of the combined robot and sensor plant. The control algorithm acts on the incoming sensor signals in real time to guide the robot motion along the seam path. Experiments demonstrate that seams can be tracked at 100 mm/sec within the accuracy required for braze paste dispensing.
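
    The abstract gives the structure of the controller (a PID loop acting on the MAST voltages) but not the gains or plant model. A generic discrete PID correction loop, with invented gains and a crude stand-in plant, might look like this:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy closed loop: the "plant" is the lateral offset of the dispensing tip from
# the seam, nudged each cycle by the commanded correction (gains are illustrative).
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)   # 100 Hz control loop
offset_mm = 3.0                                # initial lateral error
for _ in range(200):
    correction = pid.update(offset_mm)         # in a real cell this would drive the robot
    offset_mm -= correction * pid.dt           # crude first-order response
print(f"residual offset after 2 s: {offset_mm:.3f} mm")
```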

  6. Real-time tracking of moving objects by optical correlation.

    PubMed

    Gara, A D

    1979-01-15

    A low-contrast diffusely scattering object was identified and tracked in real-time by coherent optical correlation. The coherent input image is generated with a liquid crystal incoherent-to-coherent image transducer. A cast iron connecting rod (the test object) was tracked with an accuracy of 1 part in 130 over a 0.6-m distance while traveling at speeds up to 0.25 m/sec. PMID:20208682

  7. Registration of clinical volumes to beams-eye-view images for real-time tracking

    SciTech Connect

    Bryant, Jonathan H.; Rottmann, Joerg; Lewis, John H.; Mishra, Pankaj; Berbeco, Ross I.; Keall, Paul J.

    2014-12-15

    Purpose: The authors combine the registration of 2D beam’s eye view (BEV) images and 3D planning computed tomography (CT) images, with relative, markerless tumor tracking to provide automatic absolute tracking of physician defined volumes such as the gross tumor volume (GTV). Methods: During treatment of lung SBRT cases, BEV images were continuously acquired with an electronic portal imaging device (EPID) operating in cine mode. For absolute registration of physician-defined volumes, an intensity based 2D/3D registration to the planning CT was performed using the end-of-exhale (EoE) phase of the four dimensional computed tomography (4DCT). The volume was converted from Hounsfield units into electron density by a calibration curve and digitally reconstructed radiographs (DRRs) were generated for each beam geometry. Using normalized cross correlation between the DRR and an EoE BEV image, the best in-plane rigid transformation was found. The transformation was applied to physician-defined contours in the planning CT, mapping them into the EPID image domain. A robust multiregion method of relative markerless lung tumor tracking quantified deviations from the EoE position. Results: The success of 2D/3D registration was demonstrated at the EoE breathing phase. By registering at this phase and then employing a separate technique for relative tracking, the authors are able to successfully track target volumes in the BEV images throughout the entire treatment delivery. Conclusions: Through the combination of EPID/4DCT registration and relative tracking, a necessary step toward the clinical implementation of BEV tracking has been completed. The knowledge of tumor volumes relative to the treatment field is important for future applications like real-time motion management, adaptive radiotherapy, and delivered dose calculations.
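
    The absolute registration step searches for the in-plane shift that maximizes normalized cross correlation between the DRR and the end-of-exhale BEV image. A minimal sketch of that search with scikit-image, using synthetic arrays in place of the DRR and EPID frame:

```python
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(1)
epid_frame = rng.normal(size=(512, 512))       # stand-in for the EoE BEV image
drr = epid_frame[200:328, 150:278].copy()      # stand-in DRR with known offset (200, 150)

# Normalized cross-correlation map; its peak gives the best in-plane translation
ncc = match_template(epid_frame, drr)
row, col = np.unravel_index(np.argmax(ncc), ncc.shape)
print("estimated shift:", (row, col), "peak NCC:", float(ncc.max()))
# The recovered shift would then be applied to the physician-defined contours
# to map them into the EPID image domain.
```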

  8. MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.

    PubMed

    Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram

    2015-11-01

    We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models. PMID:26439826

  9. Capturing reading patterns through a real-time smart camera iris tracking system

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Ortlieb, Evan; McLauchlan, Lifford; Pham, Linh M.

    2012-06-01

    A real-time iris detection and tracking algorithm has been implemented on a smart camera using LabVIEW graphical programming tools. The program detects the eye and finds the center of the iris, which is recorded and stored in Cartesian coordinates. In subsequent video frames, the location of the center of the iris corresponding to the previously detected eye is computed and recorded for a desired period of time, creating a list of coordinates representing the moving iris center location across image frames. We present an application for the developed smart camera iris tracking system that involves the assessment of reading patterns. The purpose of the study is to identify differences in reading patterns of readers at various levels to eventually determine successful reading strategies for improvement. The readers are positioned in front of a computer screen with a fixed camera directed at the reader's eyes. The readers are then asked to read preselected content on the computer screen, one comprising a traditional newspaper text and one a Web page. The iris path is captured and stored in real-time. The reading patterns are examined by analyzing the path of the iris movement. In this paper, the iris tracking system and algorithms, application of the system to real-time capture of reading patterns, and representation of 2D/3D iris track are presented with results and recommendations.
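
    The published implementation is in LabVIEW and is not reproduced here; a rough equivalent of the per-frame step (locate the iris as the dominant dark circle and record its centre) can be sketched with OpenCV. All Hough parameters and the synthetic frame are placeholders.

```python
import cv2
import numpy as np

def iris_center(gray):
    """Return (x, y) of the most prominent circle (assumed to be the iris), or None."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=80,
                               param1=100, param2=30, minRadius=10, maxRadius=60)
    if circles is None:
        return None
    x, y, _r = circles[0, 0]                   # strongest circle first
    return float(x), float(y)

# Synthetic frame: a dark iris-like disc on a bright "sclera" background
frame = np.full((240, 320), 220, np.uint8)
cv2.circle(frame, (180, 120), 35, 60, -1)
print(iris_center(frame))                      # roughly (180.0, 120.0)
# In the reading study, these centres are stored per frame to build the gaze path.
```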

  10. Atmospheric Motion Vectors from INSAT-3D: Initial quality assessment and its impact on track forecast of cyclonic storm NANAUK

    NASA Astrophysics Data System (ADS)

    Deb, S. K.; Kishtawal, C. M.; Kumar, Prashant; Kiran Kumar, A. S.; Pal, P. K.; Kaushik, Nitesh; Sangar, Ghansham

    2016-03-01

    The advanced Indian meteorological geostationary satellite INSAT-3D was launched on 26 July 2013 with an improved imager and an infrared sounder and is placed at 82°E over the Indian Ocean region. Advances in the retrieval techniques for different atmospheric parameters, together with the improved imager data, have enhanced the scope for better understanding of the different tropical atmospheric processes over this region. The retrieval technique and accuracy of one such parameter, Atmospheric Motion Vectors (AMV), have improved significantly with the availability of higher spatial resolution data along with more options of spectral channels in the INSAT-3D imager. The present work is mainly focused on providing brief descriptions of INSAT-3D data and the AMV derivation processes using these data. It also discusses the initial quality assessment of INSAT-3D AMVs for a period of six months from 01 February 2014 to 31 July 2014 against other independent observations: i) Meteosat-7 AMVs available over this region, ii) in-situ radiosonde wind measurements, iii) cloud-tracked winds from the Multi-angle Imaging Spectro-Radiometer (MISR) and iv) numerical model analysis. It is observed from this study that the quality of the newly derived INSAT-3D AMVs is comparable with the two existing versions of Meteosat-7 AMVs over this region. To demonstrate an initial application, INSAT-3D AMVs are assimilated in the Weather Research and Forecasting (WRF) model, and it is found that the assimilation of the newly derived AMVs helped reduce track forecast errors for the recent cyclonic storm NANAUK over the Arabian Sea. Though the present study is limited to one case, it provides guidance to operational agencies for implementation of this new AMV dataset in future Numerical Weather Prediction (NWP) applications over the south Asia region.
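
    At its core, AMV retrieval tracks cloud targets between successive images and converts the displacement to a wind speed. A much-simplified sketch of that step with OpenCV; the pixel size, time step, target box and synthetic images are illustrative rather than INSAT-3D values.

```python
import cv2
import numpy as np

def cloud_motion_vector(img_t0, img_t1, box, pixel_km=4.0, dt_min=30.0):
    """Track the cloud patch in `box` (x, y, w, h) from img_t0 into img_t1.

    Returns the wind vector (u, v) in m/s implied by the patch displacement.
    """
    x, y, w, h = box
    template = img_t0[y:y + h, x:x + w]
    score = cv2.matchTemplate(img_t1, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (bx, by) = cv2.minMaxLoc(score)       # location of the best match
    to_ms = pixel_km * 1000.0 / (dt_min * 60.0)    # pixels -> metres per second
    return (bx - x) * to_ms, (by - y) * to_ms

# Synthetic image pair: a "cloud" blob shifted by (+6, +3) pixels between frames
rng = np.random.default_rng(0)
img0 = np.zeros((200, 200), np.float32)
img0[80:100, 60:80] = 1.0
img0 += 0.01 * rng.standard_normal(img0.shape).astype(np.float32)
img1 = np.roll(np.roll(img0, 3, axis=0), 6, axis=1)
print(cloud_motion_vector(img0, img1, box=(55, 75, 30, 30)))
```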

  11. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interfacing devices: mouse, joysticks, MIDI sliders, and so on. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the data to a 3-D game character in real time. The prototype experimental setup is successfully applied to a boxing game which requires very fast movement of the human character.

  12. Shape measurement by a multi-view methodology based on the remote tracking of a 3D optical scanner

    NASA Astrophysics Data System (ADS)

    Barone, Sandro; Paoli, Alessandro; Viviano Razionale, Armando

    2012-03-01

    Full field optical techniques can be reliably used for 3D measurements of complex shapes by multi-view processes, which require the computation of transformation parameters relating different views into a common reference system. Although several multi-view approaches have been proposed, the alignment process is still the crucial step of a shape reconstruction. In this paper, a methodology to automatically align 3D views has been developed by integrating a stereo vision system and a full field optical scanner. In particular, the stereo vision system is used to remotely track the optical scanner within a working volume. The tracking system uses stereo images to detect the 3D coordinates of retro-reflective infrared markers rigidly connected to the scanner. Stereo correspondences are established by a robust methodology based on combining the epipolar geometry with an image spatial transformation constraint. The proposed methodology has been validated by experimental tests regarding both the evaluation of the measurement accuracy and the 3D reconstruction of an industrial shape.
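
    Once the retro-reflective markers attached to the scanner are located in the tracker's frame, each 3D view is mapped into the common reference system by a rigid transformation. The sketch below shows the classical SVD (Kabsch) estimate of that rotation and translation from corresponding 3D points; the marker coordinates are synthetic, and this is a generic solution rather than the authors' specific pipeline.

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t with dst ≈ R @ src + t (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

# Synthetic markers in the scanner frame and the same markers seen by the tracker
rng = np.random.default_rng(2)
markers_scanner = rng.uniform(-100.0, 100.0, size=(5, 3))
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
markers_tracker = markers_scanner @ R_true.T + np.array([10.0, -5.0, 250.0])

R, t = rigid_transform(markers_scanner, markers_tracker)
print(np.allclose(R, R_true), np.round(t, 3))
```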

  13. MRI - 3D Ultrasound - X-ray Image Fusion with Electromagnetic Tracking for Transendocardial Therapeutic Injections: In-vitro Validation and In-vivo Feasibility

    PubMed Central

    Hatt, Charles R.; Jain, Ameet K.; Parthasarathy, Vijay; Lang, Andrew; Raval, Amish N.

    2014-01-01

    Myocardial infarction (MI) is one of the leading causes of death in the world. Small animal studies have shown that stem-cell therapy offers dramatic functional improvement post-MI. An endomyocardial catheter injection approach to therapeutic agent delivery has been proposed to improve efficacy through increased cell retention. Accurate targeting is critical for reaching areas of greatest therapeutic potential while avoiding a life-threatening myocardial perforation. Multimodal image fusion has been proposed as a way to improve these procedures by augmenting traditional intra-operative imaging modalities with high resolution pre-procedural images. Previous approaches have suffered from a lack of real-time tissue imaging and dependence on X-ray imaging to track devices, leading to increased ionizing radiation dose. In this paper, we present a new image fusion system for catheter-based targeted delivery of therapeutic agents. The system registers real-time 3D echocardiography, magnetic resonance, X-ray, and electromagnetic sensor tracking within a single flexible framework. All system calibrations and registrations were validated and found to have target registration errors less than 5 mm in the worst case. Injection accuracy was validated in a motion enabled cardiac injection phantom, where targeting accuracy ranged from 0.57 to 3.81 mm. Clinical feasibility was demonstrated with in-vivo swine experiments, where injections were successfully made into targeted regions of the heart. PMID:23561056

  14. 3D tracking and phase-contrast imaging by twin-beams digital holographic microscope in microfluidics

    NASA Astrophysics Data System (ADS)

    Miccio, L.; Memmolo, P.; Finizio, A.; Paturzo, M.; Merola, F.; Grilli, S.; Ferraro, P.

    2012-06-01

    A compact twin-beam interferometer that can be adopted as a flexible diagnostic tool in microfluidic platforms is presented. The device has two functionalities, as explained in the following, and can be easily integrated in a microfluidic chip. The configuration allows 3D tracking of micro-particles and, at the same time, furnishes quantitative phase-contrast maps of the tracked micro-objects by interference microscopy. Experimental demonstration of its effectiveness and compatibility with the biological field is given for in vitro cells in a microfluidic environment. Nowadays, several microfluidic configurations exist and many of them are commercially available; their development is due to the possibility of manipulating droplets, handling micro- and nano-objects, visualizing and quantifying processes occurring in small volumes and, clearly, direct application in lab-on-a-chip devices. In the microfluidic research field, optical/photonic approaches are the most suitable ones because they have various advantages, such as being non-contact, full-field and non-invasive, and they can be packaged thanks to the development of integrable optics. Moreover, phase-contrast approaches, adapted to lab-on-a-chip configurations, give the possibility to obtain quantitative information with remarkable lateral and vertical resolution directly in situ, without the need to dye and/or kill cells. Furthermore, numerical techniques for tracking of micro-objects need to be developed for measuring velocity fields, trajectory patterns, motility of cancer cells and so on. Here, we present a compact holographic microscope that can ensure, with the same configuration and simultaneously, accurate 3D tracking and quantitative phase-contrast analysis. The system, simple and solid, is based on twin laser beams coming from a single laser source. Through a simple conceptual design, we show how these two different functionalities can be accomplished by the same optical setup. The working principle, the optical setup and the mathematical

  15. A new 3D tracking method for cell mechanics investigation exploiting the capabilities of digital holography in microscopy

    NASA Astrophysics Data System (ADS)

    Miccio, L.; Memmolo, P.; Merola, F.; Fusco, S.; Netti, P. A.; Ferraro, P.

    2014-03-01

    A method for 3D tracking has been developed exploiting the features of Digital Holography in Microscopy (DHM). In the framework of a self-consistent platform for manipulation and measurement of biological specimens, we use DHM for quantitative and completely label-free analysis of samples with low amplitude contrast. The tracking capability extends the potential of DHM, allowing the motion of appropriate probes to be monitored and correlated with sample properties. Complete 3D tracking has been obtained for the probes, avoiding the amplitude refocusing of traditional tracking processes. Moreover, in the biology and biomedical research fields one of the main topics is the understanding of the morphology and mechanics of cells and microorganisms. Biological samples present low amplitude contrast, which limits the information that can be retrieved through optical bright-field microscope measurements. The main effect on light propagating through such objects is in phase, known as phase retardation or phase shift. DHM is an innovative and alternative approach in microscopy and a good candidate for non-invasive and complete specimen analysis, because its main characteristic is the possibility to discern between intensity and phase information, performing quantitative mapping of the Optical Path Length. In this paper, the flexibility of DH is employed to analyze the cell mechanics of unstained cells subjected to appropriate stimuli. DHM is used to measure all the parameters useful to understand the deformations induced by external and controlled stresses on in-vitro cells. Our configuration allows 3D tracking of micro-particles and, simultaneously, furnishes quantitative phase-contrast maps. Experimental results are presented and discussed for in vitro cells.

  16. Microscopic type of real-time uniaxial 3D profilometry by polarization camera

    NASA Astrophysics Data System (ADS)

    Shibata, Shuhei; Kobayashi, Fumio; Barada, Daisuke; Otani, Yukitoshi

    2014-07-01

    This paper introduces a novel polarization structured-light pattern projector, designed by taking into account the unique characteristics of the pixelated camera and the spatial light modulator (SLM) used. Height variations of reflective samples are retrieved by using fringe contrast modulation in a uniaxial configuration. By placing a special retardance pattern on the SLM, the pixelated camera detects a phase-shifted sinusoidal pattern whose contrast change is later used to retrieve the height information of the sample under study. The presented system takes into account the defocus change caused by the height variation of the sample by encoding the information in the fringe contrast of the structured light pattern projected by the SLM. The final purpose of this work is to present a single-shot 3D profilometry system based on fringe contrast analysis. Experimental results for a moving glass slide are presented.

  17. Feasibility of modulation-encoded TOBE CMUTS for real-time 3-D imaging.

    PubMed

    Chee, Ryan K W; Zemp, Roger J

    2015-04-01

    Modulation-encoded top orthogonal to bottom electrode (TOBE) capacitive micromachined ultrasound transducers (CMUTs) are proposed 2-D ultrasound transducer arrays that could allow 3-D images to be acquired in a single acquisition using only N channels for an N × N array. In the proposed modulation-encoding scheme, columns are not only biased, but also modulated with a different frequency for each column. The modulation frequencies are higher than the passband of the CMUT membranes and mix nonlinearly in CMUT cells with acoustic signals to produce acoustic signal sidebands around the modulation carriers in the frequency domain. Thus, signals from elements along a row may be read out simultaneously via frequency-domain multiplexing. We present the theory and feasibility data behind modulation-encoded TOBE CMUTs. We also present experiments showing necessary modifications to the current TOBE design that would allow for crosstalk-mitigated modulation-encoding. PMID:25881354

  18. Real-time 3D vectorcardiography: an application for didactic use

    NASA Astrophysics Data System (ADS)

    Daniel, G.; Lissa, G.; Medina Redondo, D.; Vásquez, L.; Zapata, D.

    2007-11-01

    The traditional approach to teach the physiological basis of electrocardiography, based only on textbooks, turns out to be insufficient or confusing for students of biomedical sciences. The addition of laboratory practice to the curriculum enables students to approach theoretical aspects from a hands-on experience, resulting in a more efficient and deeper knowledge of the phenomena of interest. Here, we present the development of a PC-based application meant to facilitate the understanding of cardiac bioelectrical phenomena by visualizing in real time the instantaneous 3D cardiac vector. The system uses 8 standard leads from a 12-channel electrocardiograph. The application interface has pedagogic objectives, and facilitates the observation of cardiac depolarization and repolarization and its temporal relationship with the ECG, making it simpler to interpret.

  19. Close to real-time robust pedestrian detection and tracking

    NASA Astrophysics Data System (ADS)

    Lipetski, Y.; Loibner, G.; Sidla, O.

    2015-03-01

    Fully automated video-based pedestrian detection and tracking is a challenging task with many practical and important applications. We present our work aimed at robust and, at the same time, close to real-time tracking of pedestrians. The presented approach is robust to occlusions and lighting conditions and is general enough to be applied to arbitrary video data. The core tracking approach is built upon the tracking-by-detection principle. We describe our cascaded HOG detector with successive CNN verification in detail. For the tracking and re-identification task, we did an extensive analysis of appearance-based features as well as their combinations. The tracker was tested on many hours of video data for different scenarios; the results are presented and discussed.
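
    The paper's cascaded HOG-plus-CNN detector is not publicly specified; its first stage can be approximated with OpenCV's stock HOG person detector, shown below. The stride, padding and scale values are arbitrary placeholders.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame_bgr):
    """Return candidate pedestrian bounding boxes (x, y, w, h) for one frame."""
    rects, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    # In a tracking-by-detection pipeline these boxes would next be verified
    # (e.g. by a CNN) and associated with existing tracks.
    return [tuple(r) for r in rects]

# cap = cv2.VideoCapture("scene.mp4")   # hypothetical video source
# ok, frame = cap.read()
# print(detect_pedestrians(frame))
```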

  20. Real-time Awake Animal Motion Tracking System for SPECT Imaging

    SciTech Connect

    Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon; Weisenberger, A G; Stolin, A; McKisson, J; Smith, M F

    2008-01-01

    Enhancements have been made in the development of a real-time optical pose measurement and tracking system that provides 3D position and orientation data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained small animals. Three optical cameras with infrared (IR) illumination view the head movements of an animal enclosed in a transparent burrow. Markers placed on the head provide landmark points for image segmentation. Strobed IR LEDs are synchronized to the cameras and illuminate the markers to prevent motion blur in each set of images. The system using the three cameras automatically segments the markers, detects missing data, rejects false reflections, performs trinocular marker correspondence, and calculates the 3D pose of the animal's head. Improvements have been made in the methods for segmentation, tracking, and 3D calculation to give higher speed and more accurate measurements during a scan. The optical hardware has been installed within a Siemens MicroCAT II small animal scanner at Johns Hopkins without requiring functional changes to the scanner operation. The system has undergone testing using both phantoms and live mice and has been characterized in terms of speed, accuracy, robustness, and reliability. Experimental data showing these motion tracking results are given.
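
    After marker correspondence across the camera views, 3D marker positions follow from standard multi-view triangulation. A two-camera sketch with OpenCV is given below (the actual system uses three calibrated cameras); the intrinsics, camera offset and marker position are synthetic.

```python
import cv2
import numpy as np

# Simple synthetic calibration: identical intrinsics, second camera shifted 100 mm in x
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

# A marker 500 mm in front of camera 1, projected into both images
X_true = np.array([20.0, -10.0, 500.0, 1.0])
x1 = P1 @ X_true; x1 = (x1[:2] / x1[2]).reshape(2, 1)
x2 = P2 @ X_true; x2 = (x2[:2] / x2[2]).reshape(2, 1)

X_h = cv2.triangulatePoints(P1, P2, x1, x2)    # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()
print(np.round(X, 2))                           # ≈ [ 20. -10. 500.]
```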

  1. Digital In-Line Holography System for 3D-3C Particle Tracking Velocimetry

    NASA Astrophysics Data System (ADS)

    Malek, Mokrane; Lebrun, Denis; Allano, Daniel

    Digital in-line holography is a suitable method for measuring three dimensional (3D) velocity fields. Such a system records directly on a charge-coupled device (CCD) camera a couple of diffraction patterns produced by small particles illuminated by a modulated laser diode. The numerical reconstruction is based on the wavelet transformation method. A 3D particle field is reconstructed by computing the wavelet components for different scale parameters. The scale parameter is directly related to the axial distance between a given particle and the CCD camera. The particle images are identified and localized by analyzing the maximum of the wavelet transform modulus (WTMM) and the equivalent diameter of the particle image (Deq). Afterwards, a 3D point-matching (PM) algorithm is applied to the pair of sets containing the 3D particle locations. In the PM algorithm, the displacement of the particles is modeled by an affine transformation. This affine transformation is based on the use of the dual number quaternions. Afterwards, the velocity-field extraction is performed. This system is tested with simulated particle field displacements and the feasibility is checked with an experimental displacement.

  2. 3D positional tracking of ellipsoidal particles in a microtube flow using holographic microscopy

    NASA Astrophysics Data System (ADS)

    Byeon, Hyeok Jun; Seo, Kyung Won; Lee, Sang Joon

    2014-11-01

    Understanding of micro-scale flow phenomena is attracting growing attention with advances in micro-scale measurement technologies. In particular, the dynamics of particles suspended in a fluid is essential in both scientific and industrial fields. Moreover, most particles handled in research and industrial fields have non-spherical shapes rather than a simple spherical shape. Under various flow conditions, these non-spherical particles exhibit unique dynamic behaviors. To analyze these dynamic behaviors in a fluid flow, 3D positional information of the particles should be measured accurately. In this study, digital holographic microscopy (DHM) is employed to measure the 3D positional information of non-spherical particles, which are fabricated by stretching spherical polystyrene particles. 3D motions of those particles are obtained by interpreting the holograms captured from the particles. Ellipsoidal particles with known size and shape are observed to verify the performance of the DHM technique. In addition, 3D positions of particles in a microtube flow are traced. This DHM technique exhibits promising potential in the analysis of dynamic behaviors of non-spherical particles suspended in micro-scale fluid flows.

  3. Rapid 3D Track Reconstruction with the BaBar Trigger Upgrade

    SciTech Connect

    Bailey, S

    2004-05-24

    A new hardware trigger system based on tracks detected by a stereo drift chamber has been developed for the BABAR experiment at the Stanford Linear Accelerator Center. The z₀-pT Discriminator (ZPD) is capable of fast, 3-dimensional reconstruction of charged particle tracks and provides rejection of background events due to beam particles interacting with the beam pipe at the first-level trigger. Over 1 gigabyte of data is processed per second by each ZPD module. Rapid track reconstruction has been realized using Xilinx Virtex-II FPGAs.

  4. Real-time 3D imaging of Haines jumps in porous media flow

    PubMed Central

    Berg, Steffen; Ott, Holger; Klapp, Stephan A.; Schwing, Alex; Neiteler, Rob; Brussee, Niels; Makurat, Axel; Leu, Leon; Enzmann, Frieder; Schwarz, Jens-Oliver; Kersten, Michael; Irvine, Sarah; Stampanoni, Marco

    2013-01-01

    Newly developed high-speed, synchrotron-based X-ray computed microtomography enabled us to directly image pore-scale displacement events in porous rock in real time. Common approaches to modeling macroscopic fluid behavior are phenomenological, have many shortcomings, and lack consistent links to elementary pore-scale displacement processes, such as Haines jumps and snap-off. Unlike the common singular pore jump paradigm based on observations of restricted artificial capillaries, we found that Haines jumps typically cascade through 10–20 geometrically defined pores per event, accounting for 64% of the energy dissipation. Real-time imaging provided a more detailed fundamental understanding of the elementary processes in porous media, such as hysteresis, snap-off, and nonwetting phase entrapment, and it opens the way for a rigorous process for upscaling based on thermodynamic models. PMID:23431151

  5. The Complete (3-D) Co-Seismic Displacements Using Point-Like Targets Tracking With Ascending And Descending SAR Data

    NASA Astrophysics Data System (ADS)

    Hu, Xie; Wang, Teng; Liao, Mingsheng

    2013-12-01

    SAR Interferometry (InSAR) has unique advantages, e.g., all-weather/all-time accessibility, cm-level accuracy and large spatial coverage; however, it can only obtain a one-dimensional measurement along the line-of-sight (LOS) direction. Offset tracking is an important complement for measuring large and rapid displacements in both the azimuth and range directions. Here we perform offset tracking on detected point-like targets (PT) by calculating the cross-correlation with a sinc-like template, and a complete 3-D displacement field is derived using PT offset tracking from a pair of ascending and descending acquisitions. The presented case study on the 2010 M7.2 El Mayor-Cucapah earthquake helps us better understand the rupture details.
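
    Combining ascending and descending offsets into a full 3-D displacement is a small least-squares problem: each range or azimuth offset is the projection of the unknown displacement onto a known unit vector. A generic sketch follows; the geometry vectors and offsets are placeholders, as the real values come from the heading and incidence angles of each acquisition.

```python
import numpy as np

# Unit vectors (east, north, up) along which each offset observation "sees" the
# ground displacement. Illustrative values only; real ones come from the orbits.
geometry = np.array([
    [-0.61, 0.11, 0.78],   # ascending, range (LOS)
    [ 0.17, 0.98, 0.00],   # ascending, azimuth (along-track)
    [ 0.61, 0.11, 0.78],   # descending, range (LOS)
    [-0.17, 0.98, 0.00],   # descending, azimuth
])

offsets = np.array([-1.20, 0.35, 0.95, 0.40])   # measured offsets in metres

# Solve geometry @ d = offsets for the 3-D displacement d = (east, north, up)
d, *_ = np.linalg.lstsq(geometry, offsets, rcond=None)
print("E, N, U displacement [m]:", np.round(d, 3))
```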

  6. Validation of 3D motion tracking of pulmonary lesions using CT fluoroscopy images for robotically assisted lung biopsy

    NASA Astrophysics Data System (ADS)

    Xu, Sheng; Fichtinger, Gabor; Taylor, Russell H.; Cleary, Kevin R.

    2005-04-01

    As recently proposed in our previous work, the two-dimensional CT fluoroscopy image series can be used to track the three-dimensional motion of a pulmonary lesion. The assumption is that the lung tissue is locally rigid, so that the real-time CT fluoroscopy image can be combined with a preoperative CT volume to infer the position of the lesion when the lesion is not in the CT fluoroscopy imaging plane. In this paper, we validate the basic properties of our tracking algorithm using a synthetic four-dimensional lung dataset. The motion tracking result is compared to the ground truth of the four-dimensional dataset. The optimal parameter configurations of the algorithm are discussed. The robustness and accuracy of the tracking algorithm are presented. The error analysis shows that the local rigidity error is the principal component of the tracking error. The error increases as the lesion moves away from the image region being registered. Using the synthetic four-dimensional lung data, the average tracking error over a complete respiratory cycle is 0.8 mm for target lesions inside the lung. As a result, the motion tracking algorithm can potentially alleviate the effect of respiratory motion in CT fluoroscopy-guided lung biopsy.

  7. 3D Markov Process for Traffic Flow Prediction in Real-Time.

    PubMed

    Ko, Eunjeong; Ahn, Jinyoung; Kim, Eun Yi

    2016-01-01

    Recently, the correct estimation of traffic flow has begun to be considered an essential component in intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially- and temporally-adjacent traffic states; and (2) the relationship between the adjacent roads on the spatiotemporal domain is represented by cliques in MRF and the clique parameters are obtained by example-based learning. In order to assess the validity of the proposed method, it is tested using data from expressway traffic that are provided by the Korean Expressway Corporation, and the performance of the proposed method is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and this accuracy can be improved further. PMID:26821025

  9. Real-time sensing of mouth 3-D position and orientation

    NASA Astrophysics Data System (ADS)

    Burdea, Grigore C.; Dunn, Stanley M.; Mallik, Matsumita; Jun, Heesung

    1990-07-01

    A key problem in using digital subtraction radiography in dentistry is the ability to reposition the X-ray source and patient so as to reproduce an identical imaging geometry. In this paper we describe an approach to solving this problem based on real time sensing of the 3-D position and orientation of the patient's mouth. The research described here is part of a program which has a long term goal to develop an automated digital subtraction radiography system. This will allow the patient and X-ray source to be accurately repositioned without the mechanical fixtures that are presently used to preserve the imaging geometry. If we can measure the position and orientation of the mouth, then the desired position of the source can be computed as the product of the transformation matrices describing the desired imaging geometry and the position vector of the targeted tooth. Position and orientation of the mouth is measured by a real time sensing device using low-frequency magnetic field technology. We first present the problem of repositioning the patient and source and then outline our analytic solution. Then we describe an experimental setup to measure the accuracy, reproducibility and resolution of the sensor and present results of preliminary experiments.
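
    The repositioning computation described above, obtaining the desired source position by multiplying the transformation matrices of the imaging geometry with the position vector of the targeted tooth, is a composition of homogeneous transforms. A loose, purely illustrative sketch (all poses and numbers are invented):

```python
import numpy as np

def transform(rot_z_deg=0.0, translation=(0.0, 0.0, 0.0)):
    """4x4 homogeneous transform: rotation about z followed by a translation."""
    a = np.deg2rad(rot_z_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0, 0.0, 1.0]]
    T[:3, 3] = translation
    return T

# Pose of the mouth as reported by the magnetic sensor (illustrative numbers, mm)
T_world_mouth = transform(rot_z_deg=15.0, translation=(120.0, 40.0, 300.0))
# Desired imaging geometry: source pose relative to the targeted tooth (illustrative)
T_mouth_source = transform(rot_z_deg=0.0, translation=(0.0, -180.0, 25.0))

tooth_in_mouth = np.array([12.0, 8.0, 40.0, 1.0])        # homogeneous position vector
source_in_world = T_world_mouth @ T_mouth_source @ tooth_in_mouth
print(np.round(source_in_world[:3], 1))
```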

  10. Real-Time Estimation of 3-D Needle Shape and Deflection for MRI-Guided Interventions

    PubMed Central

    Park, Yong-Lae; Elayaperumal, Santhi; Daniel, Bruce; Ryu, Seok Chang; Shin, Mihye; Savall, Joan; Black, Richard J.; Moslehi, Behzad; Cutkosky, Mark R.

    2015-01-01

    We describe an MRI-compatible biopsy needle instrumented with optical fiber Bragg gratings for measuring bending deflections of the needle as it is inserted into tissues. During procedures such as diagnostic biopsies and localized treatments, it is useful to track any tool deviation from the planned trajectory to minimize positioning errors and procedural complications. The goal is to display tool deflections in real time, with greater bandwidth and accuracy than when viewing the tool in MR images. A standard 18 ga × 15 cm inner needle is prepared using a fixture, and 350-μm-deep grooves are created along its length. Optical fibers are embedded in the grooves. Two sets of sensors, located at different points along the needle, provide an estimate of the bent profile, as well as temperature compensation. Tests of the needle in a water bath showed that it produced no adverse imaging artifacts when used with the MR scanner. PMID:26405428
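
    How the bent profile is reconstructed from the two sensor stations is not detailed in the abstract; a generic small-deflection approach is to interpolate curvature along the shaft and integrate it twice (slope, then deflection). All lengths and curvature values below are hypothetical.

```python
import numpy as np

L = 0.15                                    # needle length [m]
s = np.linspace(0.0, L, 151)                # positions along the shaft
sensor_s = np.array([0.03, 0.09])           # sensor station locations [m] (hypothetical)
sensor_kappa = np.array([0.6, 0.3])         # curvatures inferred from FBG shifts [1/m]
kappa = np.interp(s, sensor_s, sensor_kappa)

# Small-deflection beam model: slope = integral of curvature, deflection = integral of slope
ds = np.diff(s)
slope = np.concatenate([[0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * ds)])
deflection = np.concatenate([[0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * ds)])
print(f"estimated tip deflection: {deflection[-1] * 1000:.2f} mm")
```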

  11. Accuracy of Real-time Couch Tracking During 3-dimensional Conformal Radiation Therapy, Intensity Modulated Radiation Therapy, and Volumetric Modulated Arc Therapy for Prostate Cancer

    SciTech Connect

    Wilbert, Juergen; Baier, Kurt; Hermann, Christian; Flentje, Michael; Guckenberger, Matthias

    2013-01-01

    Purpose: To evaluate the accuracy of real-time couch tracking for prostate cancer. Methods and Materials: Intrafractional motion trajectories of 15 prostate cancer patients were the basis for this phantom study; prostate motion had been monitored with the Calypso System. An industrial robot moved a phantom along these trajectories, motion was detected via an infrared camera system, and the robotic HexaPOD couch was used for real-time counter-steering. Residual phantom motion during real-time tracking was measured with the infrared camera system. Film dosimetry was performed during delivery of 3-dimensional conformal radiation therapy (3D-CRT), step-and-shoot intensity modulated radiation therapy (IMRT), and volumetric modulated arc therapy (VMAT). Results: Motion of the prostate was largest in the anterior-posterior direction, with systematic (Σ) and random (σ) errors of 2.3 mm and 2.9 mm, respectively; the prostate was outside a threshold of 5 mm (3D vector) for 25.0% ± 19.8% of treatment time. Real-time tracking reduced prostate motion to Σ = 0.01 mm and σ = 0.55 mm in the anterior-posterior direction; the prostate remained within a 1-mm and 5-mm threshold for 93.9% ± 4.6% and 99.7% ± 0.4% of the time, respectively. Without real-time tracking, pass rates based on a γ index of 2%/2 mm in film dosimetry ranged between 66% and 72% for 3D-CRT, IMRT, and VMAT, on average. Real-time tracking increased pass rates to a minimum of 98% on average for 3D-CRT, IMRT, and VMAT. Conclusions: Real-time couch tracking resulted in submillimeter accuracy for prostate cancer, which transferred into high dosimetric accuracy independently of whether 3D-CRT, IMRT, or VMAT was used.

  12. Left ventricular endocardial surface detection based on real-time 3D echocardiographic data

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.

    2001-01-01

    OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) imaged by 2D echocardiography with off-line reconstruction and by RT3DE. Volume estimates obtained by this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). The dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows navigation into the reconstructed volume and display of any section of the volume.
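
    The paper's level-set PDE solver is not reproduced here; as an off-the-shelf stand-in for the boundary-extraction idea on a single 2-D slice, the morphological Chan-Vese level set in scikit-image can be used. The synthetic image and iteration count below are placeholders.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Synthetic 2-D "slice": a bright blood pool (cavity) on a darker background
yy, xx = np.mgrid[0:128, 0:128]
slice_img = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 20.0 ** 2))
slice_img += 0.05 * np.random.default_rng(0).standard_normal(slice_img.shape)

# Evolve the level set for a fixed number of iterations; the zero level set
# settles on the boundary between cavity and background.
mask = morphological_chan_vese(slice_img, 100, smoothing=2)
print("segmented cavity pixels:", int(np.sum(mask)))
```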

  13. Real-time 3-D SAFT-UT system evaluation and validation

    SciTech Connect

    Doctor, S.R.; Schuster, G.J.; Reid, L.D.; Hall, T.E.

    1996-09-01

    SAFT-UT technology is shown to provide significant enhancements to the inspection of materials used in US nuclear power plants. This report provides guidelines for the implementation of SAFT-UT technology and shows the results from its application. An overview of the development of SAFT-UT is provided so that the reader may become familiar with the technology. Then the basic fundamentals are presented with an extensive list of references. A comprehensive operating procedure, which is used in conjunction with the SAFT-UT field system developed by Pacific Northwest Laboratory (PNL), provides the recipe for both SAFT data acquisition and analysis. The specification for the hardware implementation is provided for the SAFT-UT system along with a description of the subsequent developments and improvements. One development of technical interest is the SAFT real time processor. Performance of the real-time processor is impressive and comparison is made of this dedicated parallel processor to a conventional computer and to the newer high-speed computer architectures designed for image processing. Descriptions of other improvements, including a robotic scanner, are provided. Laboratory parametric and application studies, performed by PNL and not previously reported, are discussed followed by a section on field application work in which SAFT was used during inservice inspections of operating reactors.

  14. Targets For Three-Dimensional (3-D) Tracking Of Human Impact Test Subjects

    NASA Astrophysics Data System (ADS)

    Muzzy, William H.; Prell, Arthur M.

    1982-02-01

    Lightweight targets mounted on the head and neck of human volunteers are photographed by high-speed cameras during impact acceleration tests. The targets must be capable of being tracked through a wide angular motion by at least two cameras to obtain three-dimensional displacement and orientation. Because the targets are tracked and digitized by a computerized photodigitizer, their pattern must be selected to maximize recognition and minimize crossover confusion. This paper discusses the target construction, orientation on the accelerometer mount, pattern selection, and paint scheme.

  15. Incorporation of 3-D Scanning Lidar Data into Google Earth for Real-time Air Pollution Observation

    NASA Astrophysics Data System (ADS)

    Chiang, C.; Nee, J.; Das, S.; Sun, S.; Hsu, Y.; Chiang, H.; Chen, S.; Lin, P.; Chu, J.; Su, C.; Lee, W.; Su, L.; Chen, C.

    2011-12-01

    A 3-D Differential Absorption Scanning Lidar (DIASL) system has been designed to be small and lightweight, suitable for installation in various vehicles and places for monitoring air pollutants, and it displays detailed real-time temporal and spatial variability of trace gases via Google Earth. The fast scanning techniques and visual information can rapidly identify the locations and sources of the polluting gases and assess the most affected areas. This helps the Environmental Protection Agency (EPA) protect people's health and abate air pollution as quickly as possible. The distributions of the atmospheric pollutants and their relationship with local meteorological parameters measured with ground-based instruments will also be discussed. Details will be presented in the upcoming symposium.

  16. A simple fuzzy logic real-time camera tracking system

    NASA Technical Reports Server (NTRS)

    Magee, Kevin N.; Cheatham, John B., Jr.

    1993-01-01

    A fuzzy logic control of camera pan and tilt has been implemented to provide real-time camera tracking of a moving object. The user clicks a mouse button to identify the object that is to be tracked. A rapid centroid estimation algorithm is used to estimate the location of the moving object, and based on simple fuzzy membership functions, fuzzy x and y values are input into a six-rule fuzzy logic rule base. The output of this system is de-fuzzified to provide pan and tilt velocities required to keep the image of the object approximately centered in the camera field of view.
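
    The abstract outlines the whole controller: fuzzify the tracked object's offset from the image centre, apply a small rule base, and defuzzify into pan and tilt velocities. A toy one-axis version of that idea is sketched below; the membership breakpoints, rule outputs and velocities are invented for illustration.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b and zero outside [a, c]."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

def pan_velocity(x_offset):
    """Map the normalized horizontal offset (-1..1) of the target centroid to a pan rate."""
    # Fuzzify: degrees of membership in "left", "centered" and "right"
    memberships = np.array([
        tri(x_offset, -1.5, -1.0, 0.0),    # left
        tri(x_offset, -0.5,  0.0, 0.5),    # centered
        tri(x_offset,  0.0,  1.0, 1.5),    # right
    ])
    # Rule base: left -> pan negative, centered -> hold, right -> pan positive.
    rule_outputs = np.array([-20.0, 0.0, 20.0])      # deg/s
    # Defuzzify with a weighted average of the rule outputs.
    return float(memberships @ rule_outputs / max(memberships.sum(), 1e-9))

for offset in (-0.8, -0.2, 0.0, 0.5):
    print(offset, "->", round(pan_velocity(offset), 2), "deg/s")
```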

  17. Combined kV and MV imaging for real-time tracking of implanted fiducial markers

    SciTech Connect

    Wiersma, R. D.; Mao Weihua; Xing, L.

    2008-04-15

    In the presence of intrafraction organ motion, target localization uncertainty can greatly hamper the advantage of highly conformal dose techniques such as intensity modulated radiation therapy (IMRT). To minimize the adverse dosimetric effect caused by tumor motion, a real-time knowledge of the tumor position is required throughout the beam delivery process. The recent integration of onboard kV diagnostic imaging together with MV electronic portal imaging devices on linear accelerators can allow for real-time three-dimensional (3D) tumor position monitoring during a treatment delivery. The aim of this study is to demonstrate a near real-time 3D internal fiducial tracking system based on the combined use of kV and MV imaging. A commercially available radiotherapy system equipped with both kV and MV imaging systems was used in this work. A hardware video frame grabber was used to capture both kV and MV video streams simultaneously through independent video channels at 30 frames per second. The fiducial locations were extracted from the kV and MV images using a software tool. The geometric tracking capabilities of the system were evaluated using a pelvic phantom with embedded fiducials placed on a moveable stage. The maximum tracking speed of the kV/MV system is approximately 9 Hz, which is primarily limited by the frame rate of the MV imager. The geometric accuracy of the system is found to be on the order of less than 1 mm in all three spatial dimensions. The technique requires minimal hardware modification and is potentially useful for image-guided radiation therapy systems.

  18. A maximum likelihood approach to diffeomorphic speckle tracking for 3D strain estimation in echocardiography.

    PubMed

    Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago

    2015-08-01

    The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures. This constraint makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and the noisy appearance of speckle still make the strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in a better performance of speckle tracking. However, those models do not consider common transformations to achieve the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: First, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data. Second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking. Third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. The accuracy and agreement assessment was evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most

  19. Dynamic shape modeling of the mitral valve from real-time 3D ultrasound images using continuous medial representation

    NASA Astrophysics Data System (ADS)

    Pouch, Alison M.; Yushkevich, Paul A.; Jackson, Benjamin M.; Gorman, Joseph H., III; Gorman, Robert C.; Sehgal, Chandra M.

    2012-03-01

    Purpose: Patient-specific shape analysis of the mitral valve from real-time 3D ultrasound (rt-3DUS) has broad application to the assessment and surgical treatment of mitral valve disease. Our goal is to demonstrate that continuous medial representation (cm-rep) is an accurate valve shape representation that can be used for statistical shape modeling over the cardiac cycle from rt-3DUS images. Methods: Transesophageal rt-3DUS data acquired from 15 subjects with a range of mitral valve pathology were analyzed. User-initialized segmentation with level sets and symmetric diffeomorphic normalization delineated the mitral leaflets at each time point in the rt-3DUS data series. A deformable cm-rep was fitted to each segmented image of the mitral leaflets in the time series, producing a 4D parametric representation of valve shape in a single cardiac cycle. Model fitting accuracy was evaluated by the Dice overlap, and shape interpolation and principal component analysis (PCA) of 4D valve shape were performed. Results: Of the 289 3D images analyzed, the average Dice overlap between each fitted cm-rep and its target segmentation was 0.880+/-0.018 (max=0.912, min=0.819). The results of PCA represented variability in valve morphology and localized leaflet thickness across subjects. Conclusion: Deformable medial modeling accurately captures valve geometry in rt-3DUS images over the entire cardiac cycle and enables statistical shape analysis of the mitral valve.
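
    Model-fitting accuracy in the study is reported as the Dice overlap between each fitted cm-rep and its target segmentation; for reference, the metric itself is a few lines of NumPy:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two boolean volumes: 2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy 3-D example: two partially overlapping boxes
vol_a = np.zeros((50, 50, 50), bool); vol_a[10:30, 10:30, 10:30] = True
vol_b = np.zeros((50, 50, 50), bool); vol_b[15:35, 10:30, 10:30] = True
print(round(dice(vol_a, vol_b), 3))   # 0.75
```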

  20. SIMULTANEOUS BILATERAL REAL-TIME 3-D TRANSCRANIAL ULTRASOUND IMAGING AT 1 MHZ THROUGH POOR ACOUSTIC WINDOWS

    PubMed Central

    Lindsey, Brooks D.; Nicoletto, Heather A.; Bennett, Ellen R.; Laskowitz, Daniel T.; Smith, Stephen W.

    2013-01-01

    Ultrasound imaging has been proposed as a rapid, portable alternative imaging modality to examine stroke patients in pre-hospital or emergency room settings. However, in performing transcranial ultrasound examinations, 8%–29% of patients in a general population may present with window failure, in which case it is not possible to acquire clinically useful sonographic information through the temporal bone acoustic window. In this work, we describe the technical considerations, design and fabrication of low-frequency (1.2 MHz), large aperture (25.3 mm) sparse matrix array transducers for 3-D imaging in the event of window failure. These transducers are integrated into a system for real-time 3-D bilateral transcranial imaging—the ultrasound brain helmet—and color flow imaging capabilities at 1.2 MHz are directly compared with arrays operating at 1.8 MHz in a flow phantom with attenuation comparable to the in vivo case. Contrast-enhanced imaging allowed visualization of arteries of the Circle of Willis in 5 of 5 subjects and 8 of 10 sides of the head despite probe placement outside of the acoustic window. Results suggest that this type of transducer may allow acquisition of useful images either in individuals with poor windows or outside of the temporal acoustic window in the field. PMID:23415287

  1. Laetoli's lost tracks: 3D generated mean shape and missing footprints.

    PubMed

    Bennett, M R; Reynolds, S C; Morse, S A; Budka, M

    2016-01-01

    The Laetoli site (Tanzania) contains the oldest known hominin footprints, and their interpretation remains open to debate, despite over 35 years of research. The two hominin trackways present are parallel to one another, one of which is a composite formed by at least two individuals walking in single file. Most researchers have focused on the single, clearly discernible G1 trackway while the G2/3 trackway has been largely dismissed due to its composite nature. Here we report the use of a new technique that allows us to decouple the G2 and G3 tracks for the first time. In so doing we are able to quantify the mean footprint topology of the G3 trackway and render it useable for subsequent data analyses. By restoring the effectively 'lost' G3 track, we have doubled the available data on some of the rarest traces directly associated with our Pliocene ancestors. PMID:26902912

  2. Laetoli’s lost tracks: 3D generated mean shape and missing footprints

    PubMed Central

    Bennett, M. R.; Reynolds, S. C.; Morse, S. A.; Budka, M.

    2016-01-01

    The Laetoli site (Tanzania) contains the oldest known hominin footprints, and their interpretation remains open to debate, despite over 35 years of research. The two hominin trackways present are parallel to one another, one of which is a composite formed by at least two individuals walking in single file. Most researchers have focused on the single, clearly discernible G1 trackway while the G2/3 trackway has been largely dismissed due to its composite nature. Here we report the use of a new technique that allows us to decouple the G2 and G3 tracks for the first time. In so doing we are able to quantify the mean footprint topology of the G3 trackway and render it useable for subsequent data analyses. By restoring the effectively ‘lost’ G3 track, we have doubled the available data on some of the rarest traces directly associated with our Pliocene ancestors. PMID:26902912

  3. 3D environment modeling and location tracking using off-the-shelf components

    NASA Astrophysics Data System (ADS)

    Luke, Robert H.

    2016-05-01

    The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low power and high performance. These sensors include accelerometers, gyroscopes, barometers and, most importantly, cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high-performance platforms at low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on stereo vision and inertial navigation to determine its own movement as well as to create a model of the environment it senses.

  4. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    SciTech Connect

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistence model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.
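
    For orientation, the persistence baseline against which the reported 26% improvement is measured can be written down directly. The RMSE-based skill score and the placeholder forecast in the sketch below are illustrative assumptions, not the authors' pipeline.

    ```python
    import numpy as np

    def persistence_forecast(irradiance: np.ndarray, horizon: int) -> np.ndarray:
        """Persistence model: the forecast for time t + horizon is the value observed at t."""
        return irradiance[:-horizon]

    def rmse(pred: np.ndarray, truth: np.ndarray) -> float:
        return float(np.sqrt(np.mean((pred - truth) ** 2)))

    def skill_vs_persistence(forecast: np.ndarray, irradiance: np.ndarray, horizon: int) -> float:
        """Relative RMSE improvement (in %) of a forecast over the persistence baseline."""
        truth = irradiance[horizon:]
        baseline = persistence_forecast(irradiance, horizon)
        return 100.0 * (1.0 - rmse(forecast, truth) / rmse(baseline, truth))

    # Toy example: a smoothed trace as a placeholder forecast of synthetic irradiance (5-step horizon).
    rng = np.random.default_rng(0)
    irr = 800 + 150 * np.sin(np.arange(1000) / 50.0) + 40 * rng.standard_normal(1000)
    placeholder = np.convolve(irr, np.ones(5) / 5, mode="same")[:-5]
    print(f"skill vs persistence: {skill_vs_persistence(placeholder, irr, 5):.1f}%")
    ```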

  5. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    DOE PAGESBeta

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output a cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. This information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistence model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  6. Bi-planar 2D-to-3D registration in Fourier domain for stereoscopic x-ray motion tracking

    NASA Astrophysics Data System (ADS)

    Zosso, Dominique; Le Callennec, Benoît; Bach Cuadra, Meritxell; Aminian, Kamiar; Jolles, Brigitte M.; Thiran, Jean-Philippe

    2008-03-01

    In this paper we present a new method to track bone movements in stereoscopic X-ray image series of the knee joint. The method is based on two different X-ray image sets: a rotational series of acquisitions of the still subject knee that allows the tomographic reconstruction of the three-dimensional volume (model), and a stereoscopic image series of orthogonal projections as the subject performs movements. Tracking the movements of bones throughout the stereoscopic image series means to determine, for each frame, the best pose of every moving element (bone) previously identified in the 3D reconstructed model. The quality of a pose is reflected in the similarity between its theoretical projections and the actual radiographs. We use direct Fourier reconstruction to approximate the three-dimensional volume of the knee joint. Then, to avoid the expensive computation of digitally rendered radiographs (DRR) for pose recovery, we develop a corollary to the 3-dimensional central-slice theorem and reformulate the tracking problem in the Fourier domain. Under the hypothesis of parallel X-ray beams, the heavy 2D-to-3D registration of projections in the signal domain is replaced by efficient slice-to-volume registration in the Fourier domain. Focusing on rotational movements, the translation-relevant phase information can be discarded and we only consider scalar Fourier amplitudes. The core of our motion tracking algorithm can be implemented as a classical frame-wise slice-to-volume registration task. Results on both synthetic and real images confirm the validity of our approach.
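
    Under the parallel-beam assumption invoked above, the projection-slice identity is what permits replacing 2D-to-3D registration in the signal domain with slice-to-volume comparison in the Fourier domain. The numpy sketch below (synthetic volume, axis conventions assumed) verifies the identity and the amplitude-only comparison used for rotation tracking.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    vol = rng.random((32, 32, 32))              # stand-in for the reconstructed knee volume

    proj = vol.sum(axis=2)                      # parallel-beam projection along z
    central_slice = np.fft.fftn(vol)[:, :, 0]   # slice k_z = 0 of the volume's 3D spectrum
    proj_spectrum = np.fft.fft2(proj)           # 2D spectrum of the projection

    # Projection-slice theorem: the two spectra coincide (up to numerical precision).
    print(np.allclose(central_slice, proj_spectrum))                    # True

    # For rotation-only tracking, phase can be discarded and amplitudes compared directly.
    print(np.abs(np.abs(central_slice) - np.abs(proj_spectrum)).max())  # ~0
    ```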

  7. Pulmonary CT image registration and warping for tracking tissue deformation during the respiratory cycle through 3D consistent image registration

    PubMed Central

    Li, Baojun; Christensen, Gary E.; Hoffman, Eric A.; McLennan, Geoffrey; Reinhardt, Joseph M.

    2008-01-01

    Tracking lung tissues during the respiratory cycle has been a challenging task for diagnostic CT and CT-guided radiotherapy. We propose an intensity- and landmark-based image registration algorithm to perform image registration and warping of 3D pulmonary CT image data sets, based on consistency constraints and matching corresponding airway branchpoints. In this paper, we demonstrate the effectiveness and accuracy of this algorithm in tracking lung tissues using both animal and human data sets. In the animal study, the results showed a tracking accuracy of 1.9 mm between 50% functional residual capacity (FRC) and 85% total lung capacity (TLC) for 12 metal seeds implanted in the lungs of a breathing sheep under precise volume control using a pulmonary ventilator. Visual inspection of the human subject results revealed the algorithm’s potential not only in matching the global shapes, but also in registering the internal structures (e.g., oblique lobe fissures, pulmonary artery branches, etc.). These results suggest that our algorithm has significant potential for warping and tracking lung tissue deformation with applications in diagnostic CT, CT-guided radiotherapy treatment planning, and therapeutic effect evaluation. PMID:19175115

  8. Real-time cardiac surface tracking from sparse samples using subspace clustering and maximum-likelihood linear regressors

    NASA Astrophysics Data System (ADS)

    Singh, Vimal; Tewfik, Ahmed H.

    2011-03-01

    Cardiac minimal invasive surgeries such as catheter based radio frequency ablation of atrial fibrillation requires high-precision tracking of inner cardiac surfaces in order to ascertain constant electrode-surface contact. Majority of cardiac motion tracking systems are either limited to outer surface or track limited slices/sectors of inner surface in echocardiography data which are unrealizable in MIS due to the varying resolution of ultrasound with depth and speckle effect. In this paper, a system for high accuracy real-time 3D tracking of both cardiac surfaces using sparse samples of outer-surface only is presented. This paper presents a novel approach to model cardiac inner surface deformations as simple functions of outer surface deformations in the spherical harmonic domain using multiple maximal-likelihood linear regressors. Tracking system uses subspace clustering to identify potential deformation spaces for outer surfaces and trains ML linear regressors using pre-operative MRI/CT scan based training set. During tracking, sparse-samples from outer surfaces are used to identify the active outer surface deformation space and reconstruct outer surfaces in real-time under least squares formulation. Inner surface is reconstructed using tracked outer surface with trained ML linear regressors. High-precision tracking and robustness of the proposed system are demonstrated through results obtained on a real patient dataset with tracking root mean square error <= (0.23 +/- 0.04)mm and <= (0.30 +/- 0.07)mm for outer & inner surfaces respectively.
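
    Under a Gaussian noise model, a maximum-likelihood linear regressor of the kind described reduces to ordinary least squares. The sketch below uses synthetic coefficients as stand-ins for the spherical-harmonic data: it trains a regressor on a pre-operative set and predicts inner-surface coefficients from a tracked outer surface.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Training set: outer/inner surface coefficients (e.g. spherical harmonics) from pre-operative scans.
    n_train, n_outer, n_inner = 200, 30, 30
    X = rng.standard_normal((n_train, n_outer))                        # outer-surface coefficients
    W_true = rng.standard_normal((n_outer, n_inner))
    Y = X @ W_true + 0.01 * rng.standard_normal((n_train, n_inner))    # inner-surface coefficients

    # Maximum-likelihood estimate under Gaussian noise == ordinary least squares.
    W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

    # Tracking step: predict inner-surface coefficients from a newly tracked outer surface.
    x_tracked = rng.standard_normal(n_outer)
    y_inner = x_tracked @ W_hat
    print(y_inner.shape)                                               # (30,)
    ```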

  9. Visualizing and Tracking Evolving Features in 3D Unstructured and Adaptive Datasets

    SciTech Connect

    Silver, D.; Zabusky, N.

    2002-08-01

    The massive amounts of time-varying data being generated demand new visualization and quantification techniques. Visualization alone is not sufficient; without proper measurement information and computations, real science cannot be done. Our focus in this work was to combine visualization with quantification of the data to allow for advanced querying and searching. As part of this proposal, we have developed a feature extraction and tracking methodology which allows researchers to identify features of interest and follow their evolution over time. The implementation is distributed and operates on the data in situ: where it is stored and when it was computed.

  10. The birth of a dinosaur footprint: Subsurface 3D motion reconstruction and discrete element simulation reveal track ontogeny

    PubMed Central

    2014-01-01

    Locomotion over deformable substrates is a common occurrence in nature. Footprints represent sedimentary distortions that provide anatomical, functional, and behavioral insights into trackmaker biology. The interpretation of such evidence can be challenging, however, particularly for fossil tracks recovered at bedding planes below the originally exposed surface. Even in living animals, the complex dynamics that give rise to footprint morphology are obscured by both foot and sediment opacity, which conceals animal–substrate and substrate–substrate interactions. We used X-ray reconstruction of moving morphology (XROMM) to image and animate the hind limb skeleton of a chicken-like bird traversing a dry, granular material. Foot movement differed significantly from walking on solid ground; the longest toe penetrated to a depth of ∼5 cm, reaching an angle of 30° below horizontal before slipping backward on withdrawal. The 3D kinematic data were integrated into a validated substrate simulation using the discrete element method (DEM) to create a quantitative model of limb-induced substrate deformation. Simulation revealed that despite sediment collapse yielding poor quality tracks at the air–substrate interface, subsurface displacements maintain a high level of organization owing to grain–grain support. Splitting the substrate volume along “virtual bedding planes” exposed prints that more closely resembled the foot and could easily be mistaken for shallow tracks. DEM data elucidate how highly localized deformations associated with foot entry and exit generate specific features in the final tracks, a temporal sequence that we term “track ontogeny.” This combination of methodologies fosters a synthesis between the surface/layer-based perspective prevalent in paleontology and the particle/volume-based perspective essential for a mechanistic understanding of sediment redistribution during track formation. PMID:25489092

  11. Lagrangian 3D particle tracking in high-speed flows: Shake-The-Box for multi-pulse systems

    NASA Astrophysics Data System (ADS)

    Novara, Matteo; Schanz, Daniel; Reuther, Nico; Kähler, Christian J.; Schröder, Andreas

    2016-08-01

    The Shake-The-Box (STB) particle tracking technique, recently introduced for time-resolved 3D particle image velocimetry (PIV) images, is applied here to data from a multi-pulse investigation of a turbulent boundary layer flow with adverse pressure gradient in air at 36 m/s (Re_τ = 10,650). The multi-pulse acquisition strategy allows for the recording of four-pulse long time-resolved sequences with a time separation of a few microseconds. The experimental setup consists of a dual-imaging system and a dual-double-cavity laser emitting orthogonal polarization directions to separate the four pulses. The STB particle triangulation and tracking strategy is adapted here to cope with the limited amount of realizations available along the time sequence and to take advantage of the ghost track reduction offered by the use of two independent imaging systems. Furthermore, a correction scheme to compensate for camera vibrations is discussed, together with a method to accurately identify the position of the wall within the measurement domain. Results show that approximately 80,000 tracks can be instantaneously reconstructed within the measurement volume, enabling the evaluation of both dense velocity fields, suitable for spatial gradients evaluation, and highly spatially resolved boundary layer profiles. Turbulent boundary layer profiles obtained from ensemble averaging of the STB tracks are compared to results from 2D-PIV and long-range micro particle tracking velocimetry; the comparison shows the capability of the STB approach in delivering accurate results across a wide range of scales.

  12. Towards a magnetic localization system for 3-D tracking of tongue movements in speech-language therapy.

    PubMed

    Cheng, Chihwen; Huo, Xueliang; Ghovanloo, Maysam

    2009-01-01

    This paper presents a new magnetic localization system based on a compact triangular sensor setup and three different optimization algorithms, intended for tracking tongue motion in the 3-D oral space. A small permanent magnet, secured on the tongue by tissue adhesives, will be used as a tracer. The magnetic field variations due to tongue motion are detected by a 3-D magneto-inductive sensor array outside the mouth and wirelessly transmitted to a computer. The position and rotation angles of the tracer are reconstructed from the sensor outputs and the magnetic dipole equation using the DIRECT, Powell, and Nelder-Mead optimization algorithms. Localization accuracy and processing time of the three algorithms are compared using a data set in which the source-sensor distance was varied from 40 to 150 mm. The Powell algorithm showed the best performance, with 0.92 mm accuracy in position and 0.7° in orientation. The average processing time was 43.9 ms/sample, which can satisfy real-time tracking up to approximately 20 Hz. PMID:19964478
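
    The localization step described, fitting the magnetic dipole equation to the sensor readings with a derivative-free optimizer, can be sketched as follows with Nelder-Mead (one of the three algorithms compared). The sensor layout, units and the assumption of a known, fixed dipole moment are simplifications for illustration only.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    MU0_4PI = 1e-7  # mu_0 / (4*pi) in SI units

    def dipole_field(p, m, sensors):
        """Field of a point dipole with moment m at position p, evaluated at each sensor (SI units)."""
        r = sensors - p
        rn = np.linalg.norm(r, axis=1, keepdims=True)
        r_hat = r / rn
        return MU0_4PI * (3.0 * r_hat * (r_hat @ m)[:, None] - m) / rn**3

    # Illustrative sensor positions (m) outside the mouth and a ground-truth tracer position.
    sensors = np.array([[0.05, 0.0, 0.0], [0.0, 0.05, 0.0], [-0.05, 0.0, 0.0],
                        [0.0, -0.05, 0.0], [0.0, 0.0, 0.05], [0.03, 0.03, 0.05]])
    m = np.array([0.0, 0.0, 0.1])                 # assumed known dipole moment (A*m^2)
    p_true = np.array([0.01, -0.005, 0.02])
    measured = dipole_field(p_true, m, sensors)

    # Derivative-free fit of the tracer position to the measured fields.
    cost = lambda p: np.sum((dipole_field(p, m, sensors) - measured) ** 2)
    fit = minimize(cost, x0=np.array([0.02, 0.02, 0.03]), method="Nelder-Mead")
    print(fit.x, np.linalg.norm(fit.x - p_true))
    ```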

  13. A Detailed Study of FDIRC Prototype with Waveform Digitizing Electronics in Cosmic Ray Telescope Using 3D Tracks

    SciTech Connect

    Nishimura, K.; Dey, B.; Aston, D.; Leith, D.W.G.S.; Ratcliff, B.; Roberts, D.; Ruckman, L.; Shtol, D.; Varner, G.S.; Va'vra, J.; Vavra, Jerry; /SLAC

    2012-07-30

    We present a detailed study of a novel Cherenkov imaging detector called the Focusing DIRC (FDIRC) with waveform digitizing electronics. In this test study, the FDIRC prototype has been instrumented with seven Hamamatsu H-8500 MaPMTs. Waveforms from ~450 pixels are digitized with waveform sampling electronics based on the BLAB2 ASIC, operating at a sampling speed of ~2.5 GSa/s. The FDIRC prototype was tested in a large cosmic ray telescope (CRT) providing 3D muon tracks with ~1.5 mrad angular resolution and muon energy of E_muon > 1.6 GeV. In this study we provide a detailed analysis of the tails in the Cherenkov angle distribution as a function of various variables, compare experimental results with simulation, and identify the major contributions to the tails. We demonstrate that to see the full impact of these tails on the Cherenkov angle resolution, it is crucial to use 3D tracks, and have a full understanding of the role of ambiguities. These issues could not be fully explored in previous FDIRC studies where the beam was perpendicular to the quartz radiator bars. This work is relevant for the final FDIRC prototype of the PID detector at SuperB, which will be tested this year in the CRT setup.

  14. A Detailed Study of FDIRC Prototype with Waveform Digitizing Electronics in Cosmic Ray Telescope Using 3D Tracks.

    SciTech Connect

    Nishimura, K

    2012-07-01

    We present a detailed study of a novel Cherenkov imaging detector called the Focusing DIRC (FDIRC) with waveform digitizing electronics. In this test study, the FDIRC prototype has been instrumented with seven Hamamatsu H-8500 MaPMTs. Waveforms from ~450 pixels are digitized with waveform sampling electronics based on the BLAB2 ASIC, operating at a sampling speed of ~2.5 GSa/s. The FDIRC prototype was tested in a large cosmic ray telescope (CRT) providing 3D muon tracks with ~1.5 mrad angular resolution and muon energy of Emuon greater than 1.6 GeV. In this study we provide a detailed analysis of the tails in the Cherenkov angle distribution as a function of various variables, compare experimental results with simulation, and identify the major contributions to the tails. We demonstrate that to see the full impact of these tails on the Cherenkov angle resolution, it is crucial to use 3D tracks, and have a full understanding of the role of ambiguities. These issues could not be fully explored in previous FDIRC studies where the beam was perpendicular to the quartz radiator bars. This work is relevant for the final FDIRC prototype of the PID detector at SuperB, which will be tested this year in the CRT setup.

  15. Multisensor 3D tracking for counter small unmanned air vehicles (CSUAV)

    NASA Astrophysics Data System (ADS)

    Vasquez, Juan R.; Tarplee, Kyle M.; Case, Ellen E.; Zelnio, Anne M.; Rigling, Brian D.

    2008-04-01

    A variety of unmanned air vehicles (UAVs) have been developed for both military and civilian use. Large UAVs are typically state owned, whereas small UAVs (SUAVs) may be in the form of remote-controlled aircraft that are widely available. The potential threat of these SUAVs to both the military and the civilian populace has led to research efforts to counter these assets via track, ID, and attack. Difficulties arise from the small size and low radar cross section when attempting to detect and track these targets with a single sensor such as radar or video cameras. In addition, clutter objects make accurate ID difficult without very high resolution data, leading to the use of an acoustic array to support this function. This paper presents a multi-sensor architecture that exploits sensor modes including EO/IR cameras, an acoustic array, and future inclusion of a radar. A sensor resource management concept is presented along with preliminary results from three of the sensors.

  16. Ultra-high-speed 3D astigmatic particle tracking velocimetry: application to particle-laden supersonic impinging jets

    NASA Astrophysics Data System (ADS)

    Buchmann, N. A.; Cierpka, C.; Kähler, C. J.; Soria, J.

    2014-11-01

    The paper demonstrates ultra-high-speed three-component, three-dimensional (3C3D) velocity measurements of micron-sized particles suspended in a supersonic impinging jet flow. Understanding the dynamics of individual particles in such flows is important for the design of particle impactors for drug delivery or cold gas dynamic spray processing. The underexpanded jet flow is produced via a converging nozzle, and micron-sized particles (d_p = 110 μm) are introduced into the gas flow. The supersonic jet impinges onto a flat surface, and the particle impact velocity and particle impact angle are studied for a range of flow conditions and impingement distances. The imaging system consists of an ultra-high-speed digital camera (Shimadzu HPV-1) capable of recording rates of up to 1 Mfps. Astigmatism particle tracking velocimetry (APTV) is used to measure the 3D particle position (Cierpka et al., Meas Sci Technol 21(045401):13, 2010) by coding the particle depth location in the 2D images by adding a cylindrical lens to the high-speed imaging system. Based on the reconstructed 3D particle positions, the particle trajectories are obtained via a higher-order tracking scheme that takes advantage of the high temporal resolution to increase robustness and accuracy of the measurement. It is shown that the particle velocity and impingement angle are affected by the gas flow in a manner depending on the nozzle pressure ratio and stand-off distance where higher pressure ratios and stand-off distances lead to higher impact velocities and larger impact angles.
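
    Depth recovery in APTV is typically done through a calibration curve relating the particle-image axis ratio (introduced by the cylindrical lens) to the out-of-plane position. The sketch below, with a synthetic cubic calibration and assumed names, shows one way to build and invert such a curve.

    ```python
    import numpy as np

    # Calibration: known depths (mm) and the measured ratio of particle-image axes a_x / a_y.
    z_cal = np.linspace(-2.0, 2.0, 21)
    ratio_cal = 1.0 + 0.4 * z_cal + 0.05 * z_cal**3        # synthetic, monotonic calibration data
    coeff = np.polyfit(z_cal, ratio_cal, 3)                 # cubic calibration curve r(z)

    def depth_from_axis_ratio(r: float, z_min: float = -2.0, z_max: float = 2.0) -> float:
        """Invert the monotonic calibration curve by lookup on a fine grid."""
        z_grid = np.linspace(z_min, z_max, 4001)
        return float(z_grid[np.argmin(np.abs(np.polyval(coeff, z_grid) - r))])

    print(depth_from_axis_ratio(1.5))   # depth of a particle whose image has axis ratio 1.5
    ```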

  17. Aref's chaotic orbits tracked by a general ellipsoid using 3D numerical simulations

    NASA Astrophysics Data System (ADS)

    Shui, Pei; Popinet, Stéphane; Govindarajan, Rama; Valluri, Prashant

    2015-11-01

    The motion of an ellipsoidal solid in an ideal fluid has been shown to be chaotic (Aref, 1993) under the limit of non-integrability of Kirchhoff's equations (Kozlov & Oniscenko, 1982). On the other hand, the particle could stop moving when the damping viscous force is strong enough. We present numerical evidence using our in-house immersed solid solver for 3D chaotic motion of a general ellipsoidal solid and suggest criteria for triggering such motion. Our immersed solid solver functions under the framework of the Gerris flow package of Popinet et al. (2003). This solver, the Gerris Immersed Solid Solver (GISS), resolves 6 degree-of-freedom motion of immersed solids with arbitrary geometry and number. We validate our results against the solution of Kirchhoff's equations. The study also shows that the translational/ rotational energy ratio plays the key role on the motion pattern, while the particle geometry and density ratio between the solid and fluid also have some influence on the chaotic behaviour. Along with several other benchmark cases for viscous flows, we propose prediction of chaotic Aref's orbits as a key benchmark test case for immersed boundary/solid solvers.

  18. Readily Accessible Multiplane Microscopy: 3D Tracking the HIV-1 Genome in Living Cells.

    PubMed

    Itano, Michelle S; Bleck, Marina; Johnson, Daniel S; Simon, Sanford M

    2016-02-01

    Human immunodeficiency virus (HIV)-1 infection and the associated disease AIDS are a major cause of human death worldwide with no vaccine or cure available. The trafficking of HIV-1 RNAs from sites of synthesis in the nucleus, through the cytoplasm, to sites of assembly at the plasma membrane are critical steps in HIV-1 viral replication, but are not well characterized. Here we present a broadly accessible microscopy method that captures multiple focal planes simultaneously, which allows us to image the trafficking of HIV-1 genomic RNAs with high precision. This method utilizes a customization of a commercial multichannel emission splitter that enables high-resolution 3D imaging with single-macromolecule sensitivity. We show with high temporal and spatial resolution that HIV-1 genomic RNAs are most mobile in the cytosol, and undergo confined mobility at sites along the nuclear envelope and in the nucleus and nucleolus. These provide important insights regarding the mechanism by which the HIV-1 RNA genome is transported to the sites of assembly of nascent virions. PMID:26567131

  19. Particle tracking approach for transport in three-dimensional discrete fracture networks: Particle tracking in 3-D DFNs

    DOE PAGESBeta

    Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.; Gable, Carl W.; Karra, Satish

    2015-09-16

    The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.

  20. Particle tracking approach for transport in three-dimensional discrete fracture networks: Particle tracking in 3-D DFNs

    SciTech Connect

    Makedonska, Nataliia; Painter, Scott L.; Bui, Quan M.; Gable, Carl W.; Karra, Satish

    2015-09-16

    The discrete fracture network (DFN) model is a method to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. We present a new particle tracking capability, which is adapted to control volume (Voronoi polygons) flow solutions on unstructured grids (Delaunay triangulations) on three-dimensional DFNs. The locally mass-conserving finite-volume approach eliminates mass balance-related problems during particle tracking. The scalar fluxes calculated for each control volume face by the flow solver are used to reconstruct a Darcy velocity at each control volume centroid. The groundwater velocities can then be continuously interpolated to any point in the domain of interest. The control volumes at fracture intersections are split into four pieces, and the velocity is reconstructed independently on each piece, which results in multiple groundwater velocities at the intersection, one for each fracture on each side of the intersection line. This technique enables detailed particle transport representation through a complex DFN structure. Verified for small DFNs, the new simulation capability enables numerical experiments on advective transport in large DFNs to be performed. As a result, we demonstrate this particle transport approach on a DFN model using parameters similar to those of crystalline rock at a proposed geologic repository for spent nuclear fuel in Forsmark, Sweden.
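
    The velocity-reconstruction step described above reduces, for each control volume, to a small least-squares problem: the scalar face fluxes q_f = (v . n_f) A_f constrain a constant cell velocity v. The sketch below uses a toy planar cell; the variable names are assumptions.

    ```python
    import numpy as np

    def reconstruct_darcy_velocity(face_normals, face_areas, face_fluxes):
        """Recover a constant Darcy velocity v from face fluxes q_f = (v . n_f) * A_f (least squares)."""
        A = face_normals * face_areas[:, None]
        v, *_ = np.linalg.lstsq(A, face_fluxes, rcond=None)
        return v

    # Toy Voronoi-like cell with four faces in the fracture plane.
    normals = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
    areas = np.array([1.0, 1.0, 2.0, 2.0])
    v_true = np.array([0.3, -0.1])
    fluxes = (normals @ v_true) * areas

    print(reconstruct_darcy_velocity(normals, areas, fluxes))   # ~[ 0.3 -0.1]
    ```

    At fracture intersections, as described above, the same reconstruction is simply applied independently to each of the four split pieces of the control volume.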

  1. DLP technology application: 3D head tracking and motion correction in medical brain imaging

    NASA Astrophysics Data System (ADS)

    Olesen, Oline V.; Wilm, Jakob; Paulsen, Rasmus R.; Højgaard, Liselotte; Larsen, Rasmus

    2014-03-01

    In this paper we present a novel sensing system, robust Near-infrared Structured Light Scanning (NIRSL), for three-dimensional human model scanning applications. Human model scanning has long been a challenging task because of the varied appearance of hair and clothing and because of body motion. Previous structured light scanning methods typically emitted visible coded light patterns onto static, opaque objects to establish correspondence between a projector and a camera for triangulation. The success of these methods relies on scanning objects whose surfaces reflect visible light well, such as plaster or light-colored cloth. For human model scanning, however, conventional methods suffer from a low signal-to-noise ratio caused by the low contrast of visible light over the human body. The proposed robust NIRSL, implemented with near-infrared light, is capable of recovering dark surfaces, such as hair, dark jeans and black shoes, under visible illumination. Moreover, a successful structured light scan relies on the assumption that the subject is static during scanning; because of body motion, this assumption is difficult to maintain when scanning human models. The proposed sensing system, which utilizes the new near-infrared-capable high-speed LightCrafter DLP projector, is robust to motion and provides an accurate, high-resolution three-dimensional point cloud, making our system more efficient and robust for human model reconstruction. Experimental results demonstrate that our system scans real human models with dark hair, jeans and shoes effectively and efficiently, is robust to body motion, and produces accurate, high-resolution 3D point clouds.

  2. Helicopter Flight Test of a Compact, Real-Time 3-D Flash Lidar for Imaging Hazardous Terrain During Planetary Landing

    NASA Technical Reports Server (NTRS)

    Roback, VIncent E.; Amzajerdian, Farzin; Brewster, Paul F.; Barnes, Bruce W.; Kempton, Kevin S.; Reisse, Robert A.; Bulyshev, Alexander E.

    2013-01-01

    A second generation, compact, real-time, air-cooled 3-D imaging Flash Lidar sensor system, developed from a number of cutting-edge components from industry and NASA, is lab characterized and helicopter flight tested under the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project. The ALHAT project is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar incorporates a 3-D imaging video camera based on Indium-Gallium-Arsenide Avalanche Photo Diode and novel micro-electronic technology for a 128 x 128 pixel array operating at a video rate of 20 Hz, a high pulse-energy 1.06 µm Neodymium-doped: Yttrium Aluminum Garnet (Nd:YAG) laser, a remote laser safety termination system, high performance transmitter and receiver optics with one and five degrees field-of-view (FOV), enhanced onboard thermal control, as well as a compact and self-contained suite of support electronics housed in a single box and built around a PC-104 architecture to enable autonomous operations. The Flash Lidar was developed and then characterized at two NASA-Langley Research Center (LaRC) outdoor laser test range facilities both statically and dynamically, integrated with other ALHAT GN&C subsystems from partner organizations, and installed onto a Bell UH-1H Iroquois "Huey" helicopter at LaRC. The integrated system was flight tested at the NASA-Kennedy Space Center (KSC) on simulated lunar approach to a custom hazard field consisting of rocks, craters, hazardous slopes, and safe-sites near the Shuttle Landing Facility runway starting at slant ranges of 750 m. In order to evaluate different methods of achieving hazard detection, the lidar, in conjunction with the ALHAT hazard detection and GN&C system, operates in both a narrow 1deg FOV raster

  3. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    NASA Astrophysics Data System (ADS)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical to the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensory system allow data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high-resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. Through a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is through a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time
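
    As a minimal illustration of the downhill-simplex/template combination mentioned above, the sketch below fits a simple geometric template to a measured profile with scipy's Nelder-Mead implementation. The Gaussian-shaped indent template and its parameters are assumptions for illustration, not the authors' wire-geometry templates.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def indent_template(x, center, depth, width):
        """Simple geometric template: a Gaussian-shaped indentation in a surface profile."""
        return -depth * np.exp(-((x - center) / width) ** 2)

    # Synthetic measured profile, standing in for one lane of the laser-scanned point cloud.
    rng = np.random.default_rng(3)
    x = np.linspace(0.0, 10.0, 2000)
    profile = indent_template(x, 4.2, 0.35, 0.6) + 0.01 * rng.standard_normal(x.size)

    # Downhill simplex (Nelder-Mead) fit of the template parameters to the profile.
    cost = lambda p: np.mean((indent_template(x, *p) - profile) ** 2)
    fit = minimize(cost, x0=[5.0, 0.2, 1.0], method="Nelder-Mead")
    center, depth, width = fit.x
    print(f"center={center:.3f}  depth={depth:.3f}  width={width:.3f}")
    ```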

  4. Tracking magmatic intrusions in real-time by means of free-shaped volcanic source modelling

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Camacho, Antonio G.; Scandura, Danila; González, Pablo J.; Mattia, Mario; Fernández, José

    2014-05-01

    Nowadays, continuous measurements of geophysical parameters provide a general real-time view of the current state of the volcano. Nonetheless, a current challenge is to localize and track in real time the evolution of the magma source beneath the volcano. Here we present a new methodology to rapidly estimate magmatic sources from surface geodetic data and track their evolution in time without any a priori assumption about source geometry. Indeed, the proposed approach takes advantage of the fast calculation of analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressurized and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregations of elemental point sources for pressure, density and slip, and they fit the whole data set (keeping some 3D regularity conditions). In this work we show an application of the methodology to model the real-time evolution of the volcanic source for the 2008 eruption of Mount Etna (Italy). To this aim, the high-rate GPS data coming from the continuous GPS network are processed in real time to obtain sub-daily solutions for tracking the fast dynamics of the magma migration. In our test case we reproduced the real-time scenario of the eruption. Though the data were processed after collection, real-time operation was emulated. From the results, it is possible to extrapolate the dynamics of a deep and a shallow magma source and of the dyke intrusion. In particular, results show at 5 am UTC a magma batch likely migrating towards the surface, leaving behind a deflating volume at about 2 km bsl, and a deep elongated body from 2 km bsl to 10 km bsl which runs along the High Vp Body and likely represents the deep conduit from where the magma rises. We demonstrate that the proposed methodology is

  5. Compact real-time image processor for moving object tracking

    NASA Astrophysics Data System (ADS)

    Kinoshita, Noboru

    1996-03-01

    Latency time and hardware compactness are two important problems of real-time image processors for moving object tracking. We have developed a compact self-contained real-time image processor that is implemented on a single double-height VME board. The processor can execute the major processing steps for moving object tracking during a single video field time. These steps are preprocessing, binarizing, labeling, feature extraction, and feature evaluation. We can obtain sorted feature vectors simultaneously as image data is read out from a sensor. Here a feature vector represents the area, centroid, and maximum intensity of each connected region in a binarized image. Some conventional image processors can execute the above steps individually in real time and thread some steps in a pixel-pipeline manner. However, it is difficult to integrate feature extraction and feature evaluation in a pixel pipeline path. For real-time execution of all steps we focused on a new architecture, particularly for the latter three steps. To minimize the hardware we have developed three ASICs: a labeler, a feature accumulator, and a sorter. To make our processor self-contained and scalable, it has an on-board microprocessor, a digital video bus interface, and an RS232C port, and it is VME compatible in bus interface and mechanical dimensions.
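
    The latter three steps implemented here in hardware (labeling, feature extraction, feature evaluation) have direct software analogues; the sketch below uses scipy.ndimage as a stand-in, with an assumed intensity threshold and an area-based sort as the evaluation step.

    ```python
    import numpy as np
    from scipy import ndimage

    def extract_features(frame: np.ndarray, threshold: float):
        """Binarize, label connected regions, and return (area, centroid, max intensity) per region."""
        binary = frame > threshold
        labels, n = ndimage.label(binary)
        ids = np.arange(1, n + 1)
        areas = ndimage.sum(binary, labels, ids)
        centroids = ndimage.center_of_mass(frame, labels, ids)
        maxima = ndimage.maximum(frame, labels, ids)
        # "Feature evaluation": sort the feature vectors, here simply by region area.
        return sorted(zip(areas, centroids, maxima), key=lambda f: f[0], reverse=True)

    frame = np.zeros((120, 160)); frame[30:50, 40:70] = 1.0; frame[80:90, 100:110] = 0.6
    for area, (cy, cx), peak in extract_features(frame, threshold=0.3):
        print(f"area={area:.0f}  centroid=({cy:.1f}, {cx:.1f})  max={peak:.2f}")
    ```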

  6. Method for dose-reduced 3D catheter tracking on a scanning-beam digital x-ray system using dynamic electronic collimation

    NASA Astrophysics Data System (ADS)

    Dunkerley, David A. P.; Funk, Tobias; Speidel, Michael A.

    2016-03-01

    Scanning-beam digital x-ray (SBDX) is an inverse geometry x-ray fluoroscopy system capable of tomosynthesis-based 3D catheter tracking. This work proposes a method of dose-reduced 3D tracking using dynamic electronic collimation (DEC) of the SBDX scanning x-ray tube. Positions in the 2D focal spot array are selectively activated to create a region-of-interest (ROI) x-ray field around the tracked catheter. The ROI position is updated for each frame based on a motion vector calculated from the two most recent 3D tracking results. The technique was evaluated with SBDX data acquired as a catheter tip inside a chest phantom was pulled along a 3D trajectory. DEC scans were retrospectively generated from the detector images stored for each focal spot position. DEC imaging of a catheter tip in a volume measuring 11.4 cm across at isocenter required 340 active focal spots per frame, versus 4473 spots in full-FOV mode. The dose-area-product (DAP) and peak skin dose (PSD) for DEC versus full field-of-view (FOV) scanning were calculated using an SBDX Monte Carlo simulation code. DAP was reduced to 7.4% to 8.4% of the full-FOV value, consistent with the relative number of active focal spots (7.6%). For image sequences with a moving catheter, PSD was 33.6% to 34.8% of the full-FOV value. The root-mean-squared-deviation between DEC-based 3D tracking coordinates and full-FOV 3D tracking coordinates was less than 0.1 mm. The 3D distance between the tracked tip and the sheath centerline averaged 0.75 mm. Dynamic electronic collimation can reduce dose with minimal change in tracking performance.
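
    The motion-vector ROI update is simple to state in code: predict the next catheter position from the two most recent tracking results and activate only the focal spots inside the predicted ROI. In the sketch below the focal-spot grid, coordinate conventions and ROI size are illustrative assumptions.

    ```python
    import numpy as np

    def predict_position(p_prev: np.ndarray, p_last: np.ndarray) -> np.ndarray:
        """Constant-velocity prediction: next position = last + (last - previous)."""
        return p_last + (p_last - p_prev)

    def active_focal_spots(spot_xy: np.ndarray, roi_center_xy: np.ndarray, half_width: float) -> np.ndarray:
        """Indices of focal-spot positions whose x/y coordinates fall inside the ROI."""
        inside = np.all(np.abs(spot_xy - roi_center_xy) <= half_width, axis=1)
        return np.flatnonzero(inside)

    # Toy 2D focal-spot grid (mm) and two consecutive tracked catheter-tip positions (x, y, z in mm).
    xs, ys = np.meshgrid(np.arange(-50.0, 50.5, 1.5), np.arange(-40.0, 40.5, 1.5))
    spots = np.column_stack([xs.ravel(), ys.ravel()])
    p_prev, p_last = np.array([2.0, -1.0, 80.0]), np.array([3.0, -0.5, 80.5])

    center_xy = predict_position(p_prev, p_last)[:2]     # project the prediction onto the spot plane
    print(active_focal_spots(spots, center_xy, half_width=10.0).size, "active focal spots")
    ```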

  7. Computational hologram synthesis and representation on spatial light modulators for real-time 3D holographic imaging

    NASA Astrophysics Data System (ADS)

    Reichelt, Stephan; Leister, Norbert

    2013-02-01

    In dynamic computer-generated holography that utilizes spatial light modulators, both hologram synthesis and hologram representation are essential in terms of fast computation and high reconstruction quality. For hologram synthesis, i.e. the computation step, Fresnel-transform-based or point-source-based raytracing methods can be applied. In the encoding step, the complex wave-field has to be optimally represented by the SLM with its given modulation capability. Proper hologram reconstruction implies simultaneous and independent amplitude and phase modulation of the input wave-field by the SLM. In this paper, we discuss full complex hologram representation methods on SLMs by considering the effect of inherent SLM parameters, such as modulation type and bit depth, on reconstruction performance measures such as diffraction efficiency and SNR. We review the three implementation schemes of Burckhardt amplitude-only representation, phase-only macro-pixel representation, and two-phase interference representation. Besides the optical performance, we address their hardware complexity and required computational load. Finally, we experimentally demonstrate holographic reconstructions of different representation schemes as obtained by functional prototypes utilizing SeeReal's viewing-window holographic display technology. The proposed hardware implementations enable fast encoding of complex-valued hologram data and thus will pave the way for commercial real-time holographic 3D imaging in the near future.
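
    As a small illustration of the third scheme reviewed above, the two-phase interference representation writes each complex hologram sample as the sum of two unit-magnitude phasors. The numpy sketch below (with an assumed normalization so that |c| <= 2) shows the decomposition and checks it.

    ```python
    import numpy as np

    def two_phase_encode(c: np.ndarray):
        """Decompose complex values (|c| <= 2) as c = exp(i*phi1) + exp(i*phi2),
        with phi1,2 = arg(c) +/- arccos(|c| / 2)."""
        theta = np.angle(c)
        delta = np.arccos(np.clip(np.abs(c) / 2.0, 0.0, 1.0))
        return theta + delta, theta - delta

    # Toy complex hologram, normalized so that every sample satisfies |c| <= 2.
    rng = np.random.default_rng(2)
    holo = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    holo *= 2.0 / np.abs(holo).max()

    phi1, phi2 = two_phase_encode(holo)
    print(np.allclose(np.exp(1j * phi1) + np.exp(1j * phi2), holo))   # True
    ```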

  8. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    NASA Astrophysics Data System (ADS)

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-06-01

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, heat affected zone, and dendrites in a laser assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during experiment is crucial (for instance time-resolved non-reversible experiments).

  9. A Real-Time Magnetoencephalography Brain-Computer Interface Using Interactive 3D Visualization and the Hadoop Ecosystem

    PubMed Central

    McClay, Wilbert A.; Yadav, Nancy; Ozbek, Yusuf; Haas, Andy; Attias, Hagaii T.; Nagarajan, Srikantan S.

    2015-01-01

    Ecumenically, the fastest growing segment of Big Data is human biology-related data and the annual data creation is on the order of zettabytes. The implications are global across industries, of which the treatment of brain-related illnesses and trauma could see the most significant and immediate effects. The next generation of health care IT and sensory devices are acquiring and storing massive amounts of patient-related data. An innovative Brain-Computer Interface (BCI) for interactive 3D visualization is presented utilizing the Hadoop Ecosystem for data analysis and storage. The BCI is an implementation of Bayesian factor analysis algorithms that can distinguish distinct thought actions using magnetoencephalographic (MEG) brain signals. We have collected data on five subjects yielding 90% positive performance in MEG mid- and post-movement activity. We describe a driver that substitutes the actions of the BCI as mouse button presses for real-time use in visual simulations. This process has been added into a flight visualization demonstration. By thinking left or right, the user experiences the aircraft turning in the chosen direction. The driver components of the BCI can be compiled into any software and substitute a user’s intent for specific keyboard strikes or mouse button presses. The BCI’s data analytics of a subject’s MEG brainwaves and flight visualization performance are stored and analyzed using the Hadoop Ecosystem as a quick retrieval data warehouse. PMID:26437432

  10. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys

    PubMed Central

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-01-01

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, heat affected zone, and dendrites in a laser assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during experiment is crucial (for instance time-resolved non-reversible experiments). PMID:27302087

  11. Real-time microstructure imaging by Laue microdiffraction: A sample application in laser 3D printed Ni-based superalloys.

    PubMed

    Zhou, Guangni; Zhu, Wenxin; Shen, Hao; Li, Yao; Zhang, Anfeng; Tamura, Nobumichi; Chen, Kai

    2016-01-01

    Synchrotron-based Laue microdiffraction has been widely applied to characterize the local crystal structure, orientation, and defects of inhomogeneous polycrystalline solids by raster scanning them under a micro/nano focused polychromatic X-ray probe. In a typical experiment, a large number of Laue diffraction patterns are collected, requiring novel data reduction and analysis approaches, especially for researchers who do not have access to fast parallel computing capabilities. In this article, a novel approach is developed by plotting the distributions of the average recorded intensity and the average filtered intensity of the Laue patterns. Visualization of the characteristic microstructural features is realized in real time during data collection. As an example, this method is applied to image key features such as microcracks, carbides, heat affected zone, and dendrites in a laser assisted 3D printed Ni-based superalloy, at a speed much faster than data collection. Such analytical approach remains valid for a wide range of crystalline solids, and therefore extends the application range of the Laue microdiffraction technique to problems where real-time decision-making during experiment is crucial (for instance time-resolved non-reversible experiments). PMID:27302087

  12. Progress in Ring Array Transducers for Real-Time 3D Ultrasound Guidance of Cardiac Interventional Devices

    PubMed Central

    Light, Edward D.; Lieu, Victor; Suhocki, Paul; Wolf, Patrick D.; Smith, Stephen W.

    2012-01-01

    As a treatment for aortic stenosis, several companies have recently introduced prosthetic heart valves designed to be deployed through a catheter using an intravenous or trans-apical approach. This procedure can either take the place of open heart surgery with some of the devices, or delay it with others. Real-time 3D ultrasound could enable continuous monitoring of these structures before, during and after deployment. We have developed a 2D ring array integrated with a 30 French catheter that is used for trans-apical prosthetic heart valve implantation. The transducer array was built using three 46 cm long flex circuits from MicroConnex (Snoqualmie, WA) which terminate in an interconnect that plugs directly into our system cable, so no cable soldering is required. This transducer consists of 210 elements at 0.157 mm inter-element spacing and operates at 5 MHz. Average measured element bandwidth was 26% and average round-trip 50 Ohm insertion loss was -81.1 dB. The transducer was wrapped around the 1 cm diameter lumen of a heart valve deployment catheter. Prosthetic heart valve images were obtained in water tank studies. PMID:21842583

  13. Quantitative real-time detection of carcinoembryonic antigen (CEA) from pancreatic cyst fluid using 3-D surface molecular imprinting.

    PubMed

    Yu, Yingjie; Zhang, Qi; Buscaglia, Jonathan; Chang, Chung-Chueh; Liu, Ying; Yang, Zhenhua; Guo, Yichen; Wang, Yantian; Levon, Kalle; Rafailovich, Miriam

    2016-07-21

    In this study, a sensitive, yet robust, biosensing system with real-time electrochemical readout was developed. The biosensor system was applied to the detection of carcinoembryonic antigen (CEA), which is a common marker for many cancers such as pancreatic, breast, and colon cancer. Real time detection of CEA during a medical procedure can be used to make critical decisions regarding further surgical intervention. CEA was templated on gold surface (RMS roughness ∼3-4 nm) coated with a hydrophilic self-assembled monolayer (SAM) on the working electrode of an open circuit potentiometric network. The subsequent removal of template CEA makes the biosensor capable of CEA detection based on its specific structure and conformation. The molecular imprinting (MI) biosensor was further calibrated using the potentiometric responses in solutions with known CEA concentrations and a detection limit of 0.5 ng ml(-1) was achieved. Potentiometric sensing was then applied to pancreatic cyst fluid samples obtained from 18 patients when the cyst fluid was also evaluated using ELISA in a certified pathology laboratory. Excellent agreement was obtained between the quantitation of CEA obtained by both the ELISA and MI biosensor detection for CEA. A 3-D MI model, using the natural rms roughness of PVD gold layers, is presented to explain the high degree of sensitivity and linearity observed in those experiments. PMID:27193921

  14. A Real-Time Magnetoencephalography Brain-Computer Interface Using Interactive 3D Visualization and the Hadoop Ecosystem.

    PubMed

    McClay, Wilbert A; Yadav, Nancy; Ozbek, Yusuf; Haas, Andy; Attias, Hagaii T; Nagarajan, Srikantan S

    2015-01-01

    Ecumenically, the fastest growing segment of Big Data is human biology-related data and the annual data creation is on the order of zettabytes. The implications are global across industries, of which the treatment of brain-related illnesses and trauma could see the most significant and immediate effects. The next generation of health care IT and sensory devices are acquiring and storing massive amounts of patient-related data. An innovative Brain-Computer Interface (BCI) for interactive 3D visualization is presented utilizing the Hadoop Ecosystem for data analysis and storage. The BCI is an implementation of Bayesian factor analysis algorithms that can distinguish distinct thought actions using magnetoencephalographic (MEG) brain signals. We have collected data on five subjects yielding 90% positive performance in MEG mid- and post-movement activity. We describe a driver that substitutes the actions of the BCI as mouse button presses for real-time use in visual simulations. This process has been added into a flight visualization demonstration. By thinking left or right, the user experiences the aircraft turning in the chosen direction. The driver components of the BCI can be compiled into any software and substitute a user's intent for specific keyboard strikes or mouse button presses. The BCI's data analytics of a subject's MEG brainwaves and flight visualization performance are stored and analyzed using the Hadoop Ecosystem as a quick retrieval data warehouse. PMID:26437432

  15. Real-time and post-frac' 3-D analysis of hydraulic fracture treatments in geothermal reservoirs

    SciTech Connect

    Wright, C.A.; Tanigawa, J.J.; Hyodo, Masami; Takasugi, Shinji

    1994-01-20

    Economic power production from Hot Dry Rock (HDR) requires the establishment of an efficient circulation system between wellbores in reservoir rock with extremely low matrix permeability. Hydraulic fracturing is employed to establish the necessary circulation system. Hydraulic fracturing has also been performed to increase production from hydrothermal reservoirs by enhancing the communication with the reservoir's natural fracture system. Optimal implementation of these hydraulic fracturing applications, as with any engineering application, requires the use of credible physical models and the reconciliation of the physical models with treatment data gathered in the field. Analysis of the collected data has shown that 2-D models and 'conventional' 3-D models of the hydraulic fracturing process apply very poorly to hydraulic fracturing in geothermal reservoirs. Engineering decisions based on these more 'conventional' fracture modeling techniques lead to serious errors in predicting the performance of hydraulic fracture treatments. These errors can lead to inappropriate fracture treatment design as well as grave errors in well placement for hydrothermal reservoirs or HDR reservoirs. This paper outlines the reasons why conventional modeling approaches fall short, and what types of physical models are needed to credibly estimate created hydraulic fracture geometry. The methodology of analyzing actual measured fracture treatment data and matching the observed net fracturing pressure (in real time as well as after the treatment) is demonstrated at two separate field sites. Results from an extensive Acoustic Emission (AE) fracture diagnostic survey are also presented for the first case study as an independent measure of the actual created hydraulic fracture geometry.

  16. Distributed collaborative environment with real-time tracking of 3D body postures

    NASA Astrophysics Data System (ADS)

    Alisi, Thomas M.; Del Bimbo, Alberto; Pucci, Fabio; Valli, Alessandro

    2003-12-01

    In this paper a multi-user motion capture system is presented, in which users work from separate locations and interact in a common virtual environment. The system functions well on low-end personal computers; it implements natural human/machine interaction thanks to the complete absence of markers and only weak constraints on users' clothing and environment lighting. It is suitable for everyday use, where the high precision reached by complex commercial systems is not the principal requirement.

  17. Facial image tracking system architecture utilizing real-time labeling

    NASA Astrophysics Data System (ADS)

    Fujino, Yuichi; Ogura, Takeshi; Tsuchiya, Toshiaki

    1993-10-01

    This paper proposes a new moving-object tracking method based on local spiral labeling with CAM (Content Addressable Memory). The local spiral labeling method was proposed in order to overcome one of the shortcomings of TV telephones. Conventional labeling, however, requires a large amount of processing time and memory capacity in order to compute the connectivity relations between label numbers. CAM can search and write multiple memory contents at the same time, which makes it well suited to real-time labeling. This paper presents the new labeling algorithm, called local spiral labeling, a real-time labeling scheme utilizing CAM, and a prototype human head tracking system using 0.5-micrometer BiCMOS gate-array technology.

  18. Fast and accurate global multiphase arrival tracking: the irregular shortest-path method in a 3-D spherical earth model

    NASA Astrophysics Data System (ADS)

    Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart

    2013-09-01

    The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and a practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.

  19. A quantitative study of 3D-scanning frequency and Δd of tracking points on the tooth surface

    PubMed Central

    Li, Hong; Lyu, Peijun; Sun, Yuchun; Wang, Yong; Liang, Xiaoyue

    2015-01-01

    Micro-movement of human jaws in the resting state might influence the accuracy of direct three-dimensional (3D) measurement. Providing a reference for sampling frequency settings of intraoral scanning systems to overcome this influence is important. In this study, we measured micro-movement, or change in distance (∆d), as the change in position of a single tracking point from one sampling time point to another in five human subjects. The ∆d of tracking points on incisors at 7 sampling frequencies was judged against the clinical accuracy requirement to select proper sampling frequency settings. A curve was then fitted between the median ∆d and the sampling frequency to predict the trend of ∆d with increasing frequency. The differences in ∆d among the subjects and between upper and lower incisor feature points of the same subject were analyzed by a non-parametric test (α = 0.05). Significant differences in incisor feature points were noted among different subjects and between upper and lower jaws of the same subject (P < 0.01). Overall, ∆d decreased with increasing frequency. When the frequency was 60 Hz, ∆d nearly reached the clinical accuracy requirement. Frequencies higher than 60 Hz did not significantly decrease ∆d further. PMID:26400112
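
    The curve fitting mentioned above can be reproduced in outline with a standard least-squares routine. A minimal sketch, assuming a power-law form ∆d = a·f^b and using placeholder values rather than the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical median ∆d values (mm) at the seven sampling frequencies (Hz);
# these numbers are placeholders, not the paper's data.
f_hz = np.array([5, 10, 15, 20, 30, 60, 120], dtype=float)
dd_median = np.array([0.40, 0.25, 0.19, 0.15, 0.11, 0.06, 0.05])

def power_law(f, a, b):
    return a * f ** b

(a, b), _ = curve_fit(power_law, f_hz, dd_median)
print(f"predicted median ∆d at 60 Hz: {power_law(60.0, a, b):.3f} mm")
```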

  20. Real-time 3D image reconstruction of a 24×24 row-column addressing array: from raw data to image

    NASA Astrophysics Data System (ADS)

    Li, Chunyu; Yang, Jiali; Li, Xu; Zhong, Xiaoli; Song, Junjie; Ding, Mingyue; Yuchi, Ming

    2016-04-01

    This paper presents real-time 3-D image reconstruction for a 7.5-MHz, 24×24 row-column addressing array transducer. The transducer works with a predesigned transmit/receive module. After the raw data are captured by the NI PXIe data acquisition (DAQ) module, the following processing procedures are performed: delay and sum (DAS), baseline calibration, envelope detection, logarithm compression, down-sampling, gray-scale mapping and 3-D display. These procedures are optimized for obtaining real-time 3-D images. A fixed-point focusing scheme is applied in the delay and sum (DAS) stage to obtain line data from channel data. A zero-phase high-pass filter is used to correct the baseline shift of the echo. The classical Hilbert transformation is adopted to detect the echo envelopes. Logarithm compression is implemented to amplify weak signals and narrow the gap to strong ones. Down-sampling reduces the amount of data to improve processing speed. Linear gray-scale mapping is introduced so that the weakest signal is mapped to 0 and the strongest to 255. The real-time 3-D images are displayed in multi-planar mode, which shows three orthogonal sections (vertical, coronal and transverse). A trigger signal is sent from the transmit/receive module to the DAQ module at the start of each volume data generation to ensure synchronization between the two modules. All procedures, including data acquisition (DAQ), signal processing and image display, are programmed on the LabVIEW platform. 675 MB of raw echo data are acquired in one minute to generate 24×24×48, 27-fps 3-D images. An experiment on a strongly reflecting object (an aluminum slice) shows the feasibility of the whole process from raw data to real-time 3-D images.
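
    For readers unfamiliar with the post-beamforming steps listed above, the following sketch shows envelope detection, logarithm compression and linear gray-scale mapping for a single RF scan line. It is a generic illustration, not the authors' LabVIEW implementation, and it omits DAS beamforming and down-sampling:

```python
import numpy as np
from scipy.signal import hilbert

def scanline_to_gray(rf_line, dynamic_range_db=40.0):
    """Illustrative post-beamforming chain for one RF scan line:
    envelope detection -> log compression -> linear 8-bit gray mapping."""
    envelope = np.abs(hilbert(rf_line))               # classical Hilbert envelope
    env_db = 20.0 * np.log10(envelope / envelope.max() + 1e-12)
    env_db = np.clip(env_db, -dynamic_range_db, 0.0)  # logarithm compression
    # linear gray-scale mapping: weakest signal -> 0, strongest -> 255
    return np.uint8(255.0 * (env_db + dynamic_range_db) / dynamic_range_db)
```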

  1. Real-Time Tumor Tracking in the Lung Using an Electromagnetic Tracking System

    SciTech Connect

    Shah, Amish P.; Kupelian, Patrick A.; Waghorn, Benjamin J.; Willoughby, Twyla R.; Rineer, Justin M.; Mañon, Rafael R.; Vollenweider, Mark A.; Meeks, Sanford L.

    2013-07-01

    Purpose: To describe the first use of the commercially available Calypso 4D Localization System in the lung. Methods and Materials: Under an institutional review board-approved protocol and an investigational device exemption from the US Food and Drug Administration, the Calypso system was used with nonclinical methods to acquire real-time 4-dimensional lung tumor tracks for 7 lung cancer patients. The aims of the study were to investigate (1) the potential for bronchoscopic implantation; (2) the stability of smooth-surface beacon transponders (transponders) after implantation; and (3) the ability to acquire tracking information within the lung. Electromagnetic tracking was not used for any clinical decision making and could only be performed before any radiation delivery in a research setting. All motion tracks for each patient were reviewed, and values of the average displacement, amplitude of motion, period, and associated correlation to a sinusoidal model (R{sup 2}) were tabulated for all 42 tracks. Results: For all 7 patients at least 1 transponder was successfully implanted. To assist in securing the transponder at the tumor site, it was necessary to implant a secondary fiducial for most transponders owing to the transponder's smooth surface. For 3 patients, insertion into the lung proved difficult, with only 1 transponder remaining fixed during implantation. One patient developed a pneumothorax after implantation of the secondary fiducial. Once implanted, 13 of 14 transponders remained stable within the lung and were successfully tracked with the tracking system. Conclusions: Our initial experience with electromagnetic guidance within the lung demonstrates that transponder implantation and tracking is achievable though not clinically available. This research investigation proved that lung tumor motion exhibits large variations from fraction to fraction within a single patient and that improvements to both transponder and tracking system are still necessary

  2. Real-time tracking using trust-region methods.

    PubMed

    Liu, Tyng-Luh; Chen, Hwann-Tzong

    2004-03-01

    Optimization methods based on iterative schemes can be divided into two classes: line-search methods and trust-region methods. While line-search techniques are commonly found in various vision applications, not much attention is paid to trust-region ones. Motivated by the fact that line-search methods can be considered as special cases of trust-region methods, we propose to establish a trust-region framework for real-time tracking. Our approach is characterized by three key contributions. First, a trust-region tracking system is more effective and often yields better performance than other trackers that rely on iterative optimization, e.g., a line-search-based mean-shift tracker. Second, we have formulated a representation model that uses two coupled weighting schemes derived from the covariance ellipse to integrate an object's color probability distribution and edge density information. As a result, the system can address rotation and nonuniform scaling in a continuous space, rather than working on a presumed set of discrete rotation angles and scales. Third, the framework is very flexible in that a variety of distance functions can be adopted easily. Experimental results and comparative studies are provided to demonstrate the efficiency of the proposed method. PMID:15376885
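
    The trust-region logic the abstract refers to follows the standard accept/shrink/expand rule. Below is a generic sketch of one iteration; model.minimize_within_radius and model.value are hypothetical placeholders for a local model of the tracking objective, not an API from the paper:

```python
def trust_region_step(f, model, x, delta, eta=0.1):
    """One generic trust-region iteration: accept the model minimizer inside
    the current region only if the actual reduction of f agrees well enough
    with the reduction predicted by the model."""
    step = model.minimize_within_radius(x, delta)      # hypothetical model API
    predicted = model.value(x) - model.value(x + step)
    actual = f(x) - f(x + step)
    rho = actual / predicted if predicted > 0 else 0.0
    if rho < 0.25:
        delta *= 0.5          # poor agreement: shrink the trust region
    elif rho > 0.75:
        delta *= 2.0          # good agreement: expand the trust region
    if rho > eta:
        x = x + step          # accept the step, otherwise stay put
    return x, delta
```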

  3. Robust real-time instrument tracking in ultrasound images

    NASA Astrophysics Data System (ADS)

    Ortmaier, Tobias; Vitrani, Marie-Aude; Morel, Guillaume; Pinault, Samuel

    2005-04-01

    Minimally invasive surgery in combination with ultrasound (US) imaging imposes high demands on the surgeon's hand-eye coordination. A possible solution to reduce these requirements is minimally invasive robotic surgery, in which the instrument is guided by visual servoing towards the goal defined by the surgeon in the US image. This approach requires robust tracking of the instrument in the US image sequences, which is known to be difficult due to poor image quality. This paper presents algorithms and results of first tracking experiments. Adaptive thresholding based on Otsu's method copes with large intensity variations of the instrument echo. Median filtering of the binary image and subsequently applied morphological operations suppress noise and echo artefacts. A fast run-length-code-based labelling algorithm allows for real-time labelling of the regions. A heuristic exploiting region size and region velocity helps to overcome ambiguities. The overall computation time is less than 20 ms per frame on a standard PC. The tracking algorithm requires no information about texture and shape, which are known to be very unreliable in US image sequences. Experimental results for two different instrument materials (polyvinyl chloride and polyurethane) are given, showing the performance of the proposed approach. With the appropriate material, trajectories are smooth and only a few outliers occur.
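
    The processing chain described above (Otsu thresholding, median filtering, morphological operations, region labelling) can be sketched with standard image-processing primitives. The following is an illustrative outline using OpenCV, not the authors' implementation; the simple area threshold stands in for their size/velocity heuristic:

```python
import cv2

def detect_instrument_regions(us_frame_gray, min_area=50):
    """Illustrative pipeline: Otsu thresholding, median filtering,
    morphological clean-up, then connected-component labelling."""
    _, binary = cv2.threshold(us_frame_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = cv2.medianBlur(binary, 5)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # keep candidate regions above a size threshold (label 0 is background)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```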

  4. Breakup of Finite-Size Colloidal Aggregates in Turbulent Flow Investigated by Three-Dimensional (3D) Particle Tracking Velocimetry.

    PubMed

    Saha, Debashish; Babler, Matthaus U; Holzner, Markus; Soos, Miroslav; Lüthi, Beat; Liberzon, Alex; Kinzelbach, Wolfgang

    2016-01-12

    Aggregates grown in mild shear flow are released, one at a time, into homogeneous isotropic turbulence, where their motion and intermittent breakup are recorded by three-dimensional particle tracking velocimetry (3D-PTV). The aggregates have an open structure with a fractal dimension of ∼2.2, and their size is 1.4 ± 0.4 mm, which is large compared to the Kolmogorov length scale (η = 0.15 mm). 3D-PTV of flow tracers allows for the simultaneous measurement of aggregate trajectories and the full velocity gradient tensor along their pathlines, which enables us to access the Lagrangian stress history of individual breakup events. From these data, we found no consistent pattern that relates breakup to the local flow properties at the point of breakup. Also, the correlation between the aggregate size and both shear stress and normal stress at the location of breakage is found to be weaker than the correlation between size and drag stress. The analysis suggests that the aggregates are mostly broken due to the accumulation of drag stress over a time lag on the order of the Kolmogorov time scale. This finding is explained by the fact that the aggregates are large, which gives their motion inertia and increases the time for stress propagation inside the aggregate. Furthermore, it is found that the scaling of the largest fragment with the accumulated stress at breakup follows an earlier established power law, i.e., d_frag ∼ σ^(-0.6), obtained from laminar nozzle experiments. This indicates that, despite the large size and the different type of hydrodynamic stress, the microscopic mechanism causing breakup is consistent over a wide range of aggregate size and stress magnitude. PMID:26646289

  5. Exploring single-molecule interactions through 3D optical trapping and tracking: From thermal noise to protein refolding

    NASA Astrophysics Data System (ADS)

    Wong, Wesley Philip

    The focus of this thesis is the development and application of a novel technique for investigating the structure and dynamics of weak interactions between and within single molecules. This approach is designed to explore unusual features in bi-directional transitions near equilibrium. The basic idea is to infer molecular events by observing changes in the three-dimensional Brownian fluctuations of a functionalized microsphere held weakly near a reactive substrate. Experimentally, I have developed a unique optical tweezers system that combines an interference technique for accurate 3D tracking (˜1 nm vertically, and ˜2-3 nm laterally) with a continuous autofocus system which stabilizes the trap height to within 1-2 nm over hours. A number of different physical and biological systems were investigated with this instrument. Data interpretation was assisted by a multi-scale Brownian Dynamics simulation that I have developed. I have explored the 3D signatures of different molecular tethers, distinguishing between single and multiple attachments, as well as between stiff and soft linkages. As well, I have developed a technique for measuring the force-dependent compliance of molecular tethers from thermal noise fluctuations and demonstrated this with a short ssDNA oligomer. Another practical approach that I have developed for extracting information from fluctuation measurements is Inverse Brownian Dynamics, which yields the underlying potential of mean force and position-dependent diffusion coefficient from the Brownian motion of a particle. I have also developed a new force calibration method that takes into account video motion blur, and that uses this information to measure bead dynamics. Perhaps most significantly, I have made the first direct observations of the refolding of spectrin repeats under mechanical force, and investigated the force-dependent kinetics of this transition.

  6. Quantification of Shunt Volume Through Ventricular Septal Defect by Real-Time 3-D Color Doppler Echocardiography: An in Vitro Study.

    PubMed

    Zhu, Meihua; Ashraf, Muhammad; Tam, Lydia; Streiff, Cole; Kimura, Sumito; Shimada, Eriko; Sahn, David J

    2016-05-01

    Quantification of shunt volume is important for ventricular septal defects (VSDs). The aim of the in vitro study described here was to test the feasibility of using real-time 3-D color Doppler echocardiography (RT3-D-CDE) to quantify shunt volume through a modeled VSD. Eight porcine heart phantoms with VSDs ranging in diameter from 3 to 25 mm were studied. Each phantom was passively driven at five different stroke volumes from 30 to 70 mL and two stroke rates, 60 and 120 strokes/min. RT3-D-CDE full volumes were obtained at color Doppler volume rates of 15, 20 and 27 volumes/s. Shunt flow derived from RT3-D-CDE was linearly correlated with pump-driven stroke volume (R = 0.982). RT3-D-CDE-derived shunt volumes from three color Doppler flow rate settings and two stroke rate acquisitions did not differ (p > 0.05). The use of RT3-D-CDE to determine shunt volume through VSDs is feasible. Different color volume rates and heart rates within the clinically/physiologically relevant range have no effect on VSD 3-D shunt volume determination. PMID:26850842

  7. Real-Time 3D Fluoroscopy-Guided Large Core Needle Biopsy of Renal Masses: A Critical Early Evaluation According to the IDEAL Recommendations

    SciTech Connect

    Kroeze, Stephanie G. C.; Huisman, Merel; Verkooijen, Helena M.; Diest, Paul J. van; Ruud Bosch, J. L. H.; Bosch, Maurice A. A. J. van den

    2012-06-15

    Introduction: Three-dimensional (3D) real-time fluoroscopy cone beam CT is a promising new technique for image-guided biopsy of solid tumors. We evaluated the technical feasibility, diagnostic accuracy, and complications of this technique for guidance of large-core needle biopsy in patients with suspicious renal masses. Methods: Thirteen patients with 13 suspicious renal masses underwent large-core needle biopsy under 3D real-time fluoroscopy cone beam CT guidance. Imaging acquisition and subsequent 3D reconstruction was done by a mobile flat-panel detector (FD) C-arm system to plan the needle path. Large-core needle biopsies were taken by the interventional radiologist. Technical success, accuracy, and safety were evaluated according to the Innovation, Development, Exploration, Assessment, Long-term study (IDEAL) recommendations. Results: Median tumor size was 2.6 (range, 1.0-14.0) cm. In ten (77%) patients, the histological diagnosis corresponded to the imaging findings: five were malignancies, five benign lesions. Technical feasibility was 77% (10/13); in three patients biopsy results were inconclusive. The lesion size of these three patients was <2.5 cm. One patient developed a minor complication. Median follow-up was 16.0 (range, 6.4-19.8) months. Conclusions: 3D real-time fluoroscopy cone beam CT-guided biopsy of renal masses is feasible and safe. However, these first results suggest that diagnostic accuracy may be limited in patients with renal masses <2.5 cm.

  8. Prospective motion correction of 3D echo-planar imaging data for functional MRI using optical tracking

    PubMed Central

    Todd, Nick; Josephs, Oliver; Callaghan, Martina F.; Lutti, Antoine; Weiskopf, Nikolaus

    2015-01-01

    We evaluated the performance of an optical camera based prospective motion correction (PMC) system in improving the quality of 3D echo-planar imaging functional MRI data. An optical camera and external marker were used to dynamically track the head movement of subjects during fMRI scanning. PMC was performed by using the motion information to dynamically update the sequence's RF excitation and gradient waveforms such that the field-of-view was realigned to match the subject's head movement. Task-free fMRI experiments on five healthy volunteers followed a 2 × 2 × 3 factorial design with the following factors: PMC on or off; 3.0 mm or 1.5 mm isotropic resolution; and no, slow, or fast head movements. Visual and motor fMRI experiments were additionally performed on one of the volunteers at 1.5 mm resolution comparing PMC on vs PMC off for no and slow head movements. Metrics were developed to quantify the amount of motion as it occurred relative to k-space data acquisition. The motion quantification metric collapsed the very rich camera tracking data into one scalar value for each image volume that was strongly predictive of motion-induced artifacts. The PMC system did not introduce extraneous artifacts for the no motion conditions and improved the time series temporal signal-to-noise by 30% to 40% for all combinations of low/high resolution and slow/fast head movement relative to the standard acquisition with no prospective correction. The numbers of activated voxels (p < 0.001, uncorrected) in both task-based experiments were comparable for the no motion cases and increased by 78% and 330%, respectively, for PMC on versus PMC off in the slow motion cases. The PMC system is a robust solution to decrease the motion sensitivity of multi-shot 3D EPI sequences and thereby overcome one of the main roadblocks to their widespread use in fMRI studies. PMID:25783205
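
    The 30% to 40% improvement quoted above refers to the usual voxel-wise temporal signal-to-noise metric. A minimal sketch under the standard definition (temporal mean divided by temporal standard deviation), assuming a 4-D array ordered (x, y, z, time):

```python
import numpy as np

def temporal_snr(fmri_4d):
    """Voxel-wise temporal SNR of a 4-D fMRI series (x, y, z, time)."""
    mean_t = fmri_4d.mean(axis=-1)
    std_t = fmri_4d.std(axis=-1)
    return np.where(std_t > 0, mean_t / std_t, 0.0)
```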

  9. Real-time skeleton tracking for embedded systems

    NASA Astrophysics Data System (ADS)

    Coleca, Foti; Klement, Sascha; Martinetz, Thomas; Barth, Erhardt

    2013-03-01

    Touch-free gesture technology is beginning to become more popular with consumers and may have a significant future impact on interfaces for digital photography. However, almost every commercial software framework for gesture and pose detection is aimed at either desktop PCs or high-powered GPUs, making mobile implementations for gesture recognition an attractive area for research and development. In this paper we present an algorithm for hand skeleton tracking and gesture recognition that runs on an ARM-based platform (Pandaboard ES, OMAP 4460 architecture). The algorithm uses self-organizing maps to fit a given topology (skeleton) into a 3D point cloud. This is a novel way of approaching the problem of pose recognition as it does not employ complex optimization techniques or data-based learning. After an initial background segmentation step, the algorithm is run in parallel with heuristics, which detect and correct artifacts arising from insufficient or erroneous input data. We then optimize the algorithm for the ARM platform using fixed-point computation and the NEON SIMD architecture that the OMAP4460 provides. We tested the algorithm with two different depth-sensing devices (Microsoft Kinect, PMD Camboard). For both input devices we were able to accurately track the skeleton at the native frame rate of the cameras.
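
    The idea of fitting a skeleton topology to a 3D point cloud with a self-organizing map can be illustrated for the simplest case of a one-dimensional chain of joints. This is a generic SOM sketch under assumed parameter values, not the authors' fixed-point ARM implementation:

```python
import numpy as np

def fit_chain_som(points, n_joints=15, epochs=5, lr=0.1, sigma=2.0):
    """Pull a 1-D chain of joint positions toward a 3-D point cloud;
    each data point attracts its nearest joint and, more weakly, that
    joint's neighbours along the chain."""
    joints = np.linspace(points.min(0), points.max(0), n_joints)  # initial chain
    idx = np.arange(n_joints)
    for _ in range(epochs):
        for p in points:
            winner = np.argmin(np.linalg.norm(joints - p, axis=1))
            h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))  # neighbourhood
            joints += lr * h[:, None] * (p - joints)
    return joints
```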

  10. Real-Time Bioluminescent Tracking of Cellular Population Dynamics

    SciTech Connect

    Close, Dan; Sayler, Gary Steven; Xu, Tingting; Ripp, Steven Anthony

    2014-01-01

    Cellular population dynamics are routinely monitored across many diverse fields for a variety of purposes. In general, these dynamics are assayed either through the direct counting of cellular aliquots followed by extrapolation to the total population size, or through the monitoring of signal intensity from any number of externally stimulated reporter proteins. While both are viable methods, here we describe a novel technique that allows for the automated, non-destructive tracking of cellular population dynamics in real time. This method, which relies on the detection of a continuous bioluminescent signal produced through expression of the bacterial luciferase gene cassette, provides a low-cost, less time-intensive means for generating additional data compared to alternative methods.

  11. Optical eye tracking system for real-time noninvasive tumor localization in external beam radiotherapy

    SciTech Connect

    Via, Riccardo; Fassi, Aurora; Fattori, Giovanni; Fontana, Giulia; Pella, Andrea; Tagaste, Barbara; Ciocca, Mario; Riboldi, Marco; Baroni, Guido; Orecchia, Roberto

    2015-05-15

    Purpose: External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed to capture three-dimensional (3D) ocular motion and provides an on-line estimate of the intraocular lesion position based on a priori knowledge coming from volumetric imaging. Methods: Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Results: Measurements on the phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring up to 50 Hz. Conclusions: A noninvasive ETS prototype was designed to perform real-time target localization and eye movement monitoring

  12. Simultaneous tracking of 3D actin and microtubule strains in individual MLO-Y4 osteocytes under oscillatory flow.

    PubMed

    Baik, Andrew D; Qiu, Jun; Hillman, Elizabeth M C; Dong, Cheng; Guo, X Edward

    2013-02-22

    Osteocytes in vivo experience complex fluid shear flow patterns to activate mechanotransduction pathways. The actin and microtubule (MT) cytoskeletons have been shown to play an important role in the osteocyte's biochemical response to fluid shear loading. The dynamic nature of physiologically relevant fluid flow profiles (i.e., 1Hz oscillatory flow) impedes the ability to image and study both actin and MT cytoskeletons simultaneously in the same cell with high spatiotemporal resolution. To overcome these limitations, a multi-channel quasi-3D microscopy technique was developed to track the actin and MT networks simultaneously under steady and oscillatory flow. Cells displayed high intercellular variability and intracellular cytoskeletal variability in strain profiles. Shear Exz was the predominant strain in both steady and oscillatory flows in the form of viscoelastic creep and elastic oscillations, respectively. Dramatic differences were seen in oscillatory flow, however. The actin strains displayed an oscillatory strain profile more often than the MT networks in all the strains tested and had a higher peak-to-trough strain magnitude. Taken together, the actin networks are the more responsive cytoskeletal networks in osteocytes under oscillatory flow and may play a bigger role in mechanotransduction pathway activation and regulation. PMID:23352617

  13. Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction

    NASA Astrophysics Data System (ADS)

    Verrier, N.; Grosjean, N.; Dib, E.; Méès, L.; Fournier, C.; Marié, J.-L.

    2016-04-01

    Digital holography is a valuable tool for three-dimensional information extraction. Among existing configurations, the originally proposed set-up (i.e. Gabor, or in-line, holography) is reasonably immune to variations in the experimental environment, making it a method of choice for studies of fluid dynamics. Nevertheless, standard hologram reconstruction techniques based on numerical light back-propagation are prone to artifacts such as twin images or aliases that limit both the quality and quantity of information extracted from the acquired holograms. To get around this issue, formulating the hologram reconstruction as a parametric inverse problem has been shown to accurately estimate the 3D positions and sizes of seeding particles directly from the hologram. To push the bounds of accuracy on size estimation still further, we propose to fully exploit the information redundancy of a hologram video sequence using joint estimation reconstruction. Applying this approach in a bench-top experiment, we show that it led to a relative precision of 0.13% (for a 60 μm diameter droplet) for droplet size estimation, and a tracking precision of σ_x × σ_y × σ_z = 0.15 × 0.15 × 1 pixels.

  14. The real-time acquisition and tracking program for the USNS Vanguard

    NASA Technical Reports Server (NTRS)

    Brammer, R. F.

    1974-01-01

    The real-time acquisition and tracking program uses a variety of filtering algorithms, including an extended Kalman filter, to derive real-time orbit determinations (position-velocity state vectors) from shipboard tracking and navigation data. Results from Apollo missions are given to show that orbital parameters can be estimated quickly and accurately using these methods.
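
    The extended Kalman filter mentioned above follows the textbook predict/update cycle. A generic sketch is given below, with f, h and their Jacobians F, H standing in for the dynamics and measurement models (the Vanguard program's actual formulation is not reproduced here):

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One generic extended-Kalman-filter step (predict + update)."""
    # predict
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    # update with the shipboard tracking/navigation measurement z
    y = z - h(x_pred)                          # innovation
    S = H(x_pred) @ P_pred @ H(x_pred).T + R   # innovation covariance
    K = P_pred @ H(x_pred).T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
    return x_new, P_new
```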

  15. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced 3D virtual Graphics Processing Unit (GPU) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  16. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    Real-time visual feedback of a 3D scatter diagram of pulsatile tissue motion has been used to assist bedside freehand ultrasonic diagnosis of neonatal ischemic diseases. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes fitted with a compact tilt sensor to measure the probe orientation. The real-time 3D visualization was realized by developing an extended version of the PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissues is continuously generated and visualized as a projection image, together with the ultrasonic movie of the current section, at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the Willis ring and the cerebellar arteries, in whose blood flow pediatricians have great interest because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since the pulsatile tissue motion is due to local blood flow, it can be concluded that the system developed in this work is very useful for assisting freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  17. Capturing geometry in real-time using a tracked Microsoft Kinect

    NASA Astrophysics Data System (ADS)

    Tenedorio, Daniel; Fecho, Marlena; Schwartzhaupt, Jorge; Pardridge, Robert; Lue, James; Schulze, Jürgen P.

    2012-03-01

    We investigate the suitability of the Microsoft Kinect device for capturing real-world objects and places. Our new geometry scanning system permits the user to obtain detailed triangle models of non-moving objects with a tracked Kinect. The system generates a texture map for the triangle mesh using video frames from the Kinect's color camera and displays a continually-updated preview of the textured model in real-time, allowing the user to re-scan the scene from any direction to fill holes or increase the texture resolution. We also present filtering methods to maintain a high-quality model of reasonable size by removing overlapping or low-precision range scans. Our approach works well in the presence of degenerate geometry or when closing loops about the scanned subject. We demonstrate the ability of our system to acquire 3D models at human scale with a prototype implementation in the StarCAVE, a virtual reality environment at the University of California, San Diego. We designed the capturing algorithm to support the scanning of large areas, provided that accurate tracking is available.

  18. Direct measurement of particle size and 3D velocity of a gas-solid pipe flow with digital holographic particle tracking velocimetry.

    PubMed

    Wu, Yingchun; Wu, Xuecheng; Yao, Longchao; Gréhan, Gérard; Cen, Kefa

    2015-03-20

    The 3D measurement of the particles in a gas-solid pipe flow is of great interest, but remains challenging due to curved pipe walls in various engineering applications. Because of the astigmatism induced by the pipe, concentric ellipse fringes in the hologram of spherical particles are observed in the experiments. With a theoretical analysis of the particle holography by an ABCD matrix, the in-focus particle image can be reconstructed by the modified convolution method and fractional Fourier transform. Thereafter, the particle size, 3D position, and velocity are simultaneously measured by digital holographic particle tracking velocimetry (DHPTV). The successful application of DHPTV to the particle size and 3D velocity measurement in a glass pipe's flow can facilitate its 3D diagnostics. PMID:25968543

  19. High-resolution real-time x-ray and 3D imaging for physical contamination detection in deboned poultry meat

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Jing, Hansong; Tao, Yang; Cheng, Xuemei

    2004-03-01

    This paper describes a novel approach for detection of foreign materials in deboned poultry patties based on real-time imaging technologies. Uneven thickness of poultry patties could lead to a significant classification error in a typical X-ray imaging system, and we addressed this issue successfully by fusing laser range imaging (3D imaging) into the x-ray inspection system. In order for this synergic technology to work effectively for on-line industrial applications, the vision system should be able to identify various physical contaminations automatically and have viable real-time capabilities. To meet these challenges, a rule-based approach was formulated under a unified framework for detection of diversified subjects, and a multithread scheme was developed for real-time image processing. Algorithms of data fusion, feature extraction and pattern classification of this approach are described in this paper. Detection performance and overall throughput of the system are also discussed.

  20. Real-time 3D shape measurement with digital stripe projection by Texas Instruments Micro Mirror Devices DMD

    NASA Astrophysics Data System (ADS)

    Frankowski, Gottfried; Chen, Mai; Huth, Torsten

    2000-03-01

    The fast, contact-free and highly precise shape measurement of technical objects is of key importance in the scientific-technological area as well as the area of practical measurement technology. The application areas of contact-free surface measurement extend across widely different fields, e.g., the automation of production processes, the measurement and inspection of components in microsystem technology or the fast 3D in-vivo measurement of human skin surfaces in cosmetics and medical technology. This paper describes methodological and technological possibilities as well as measurement technology applications for fast optical 3D shape measurements using micromirror-based high-velocity stripe projection. Depending on the available projector and camera facilities, it will be possible to shoot and evaluate complete 3D surface profiles within only a few milliseconds.

  1. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  2. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The change of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this progress technically, but the interaction of the third dimension with humans is not yet clear. Previously, it was found that any increased load on the visual system can create visual fatigue, as with prolonged TV watching, computer work or video gaming. But watching S3D can cause a different kind of visual fatigue, since all S3D technologies create the illusion of the third dimension based on characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content. This work shows the difference in the accumulation of visual fatigue and its assessment for the two types of content. In order to perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and gave their answers in the subjective evaluation. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated regarding their relation to visual fatigue.

  3. HSA: integrating multi-track Hi-C data for genome-scale reconstruction of 3D chromatin structure.

    PubMed

    Zou, Chenchen; Zhang, Yuping; Ouyang, Zhengqing

    2016-01-01

    Genome-wide 3C technologies (Hi-C) are being increasingly employed to study three-dimensional (3D) genome conformations. Existing computational approaches are unable to integrate accumulating data to facilitate studying 3D chromatin structure and function. We present HSA ( http://ouyanglab.jax.org/hsa/ ), a flexible tool that jointly analyzes multiple contact maps to infer 3D chromatin structure at the genome scale. HSA globally searches the latent structure underlying different cleavage footprints. Its robustness and accuracy outperform or rival existing tools on extensive simulations and orthogonal experiment validations. Applying HSA to recent in situ Hi-C data, we found the 3D chromatin structures are highly conserved across various human cell types. PMID:26936376

  4. Flying triangulation - A motion-robust optical 3D sensor for the real-time shape acquisition of complex objects

    NASA Astrophysics Data System (ADS)

    Willomitzer, Florian; Ettl, Svenja; Arold, Oliver; Häusler, Gerd

    2013-05-01

    The three-dimensional shape acquisition of objects has become more and more important in recent years. Up to now, there are several well-established methods which already yield impressive results. However, even under quite common conditions such as object movement or complex shapes, most methods become unsatisfactory. Thus, 3D shape acquisition is still a difficult and non-trivial task. We present our measurement principle "Flying Triangulation", which enables motion-robust 3D acquisition of complex-shaped object surfaces with a freely movable handheld sensor. Since "Flying Triangulation" is scalable, a whole sensor zoo for different object sizes is presented. Finally, an overview of current and future fields of investigation is given.

  5. Unstructured grids in 3D and 4D for a time-dependent interface in front tracking with improved accuracy

    SciTech Connect

    Glimm, J.; Grove, J. W.; Li, X. L.; Li, Y.; Xu, Z.

    2002-01-01

    Front tracking traces the dynamic evolution of an interface separating different materials or fluid components. In this paper, the authors describe three types of grid generation methods used in the front tracking method. The first is an unstructured surface grid. The second is a structured grid-based reconstruction method. The third is a time-space grid, also grid based, for a conservative tracking algorithm with improved accuracy.

  6. A real-time skin dose tracking system for biplane neuro-interventional procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay K.; Rudin, Stephen R.; Bednarek, Daniel R.

    2015-03-01

    A biplane dose-tracking system (Biplane-DTS) that provides a real-time display of the skin-dose distribution on a 3D patient graphic during neuro-interventional fluoroscopic procedures was developed. Biplane-DTS calculates patient skin dose using geometry and exposure information for the two gantries of the imaging system acquired from the digital system bus. The dose is calculated for individual points on the patient graphic surface for each exposure pulse, and the cumulative dose for both x-ray tubes is displayed as color maps on a split screen showing frontal and lateral projections of a 3D humanoid graphic. The overall peak skin dose (PSD), FOV-PSD and current dose rates for the two gantries are also displayed. Biplane-DTS uses calibration files of mR/mAs for the frontal and lateral tubes, measured with and without the table in the beam at the entrance surface of a 20 cm thick PMMA phantom placed 15 cm tube-side of the isocenter. For neuro-imaging, conversion factors are applied as a function of entrance field area to scale the calculated dose to that measured with a Phantom Laboratory head phantom, which contains a human skull, to account for differences in backscatter between PMMA and the human head. The software applies an inverse-square correction to each point on the skin and corrects for angulation of the beam through the table. Dose values calculated by Biplane-DTS and values measured by a 6-cc ionization chamber placed on the head phantom at multiple points agree within a range of -3% to +7%, with a standard deviation for all points of less than 3%.
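
    The per-point calculation described above combines the mR/mAs calibration with an inverse-square correction, a field-area-dependent conversion factor and a table attenuation factor. A hedged sketch of that combination, in which every argument name is an assumption made for illustration rather than the DTS interface:

```python
def skin_dose_per_pulse(mAs, mR_per_mAs_cal, d_cal_cm, d_point_cm,
                        field_area_cm2, head_factor_fn, table_factor=1.0):
    """Illustrative per-pulse dose to one skin point: calibrated mR/mAs
    scaled by an inverse-square correction from the calibration distance to
    the skin-point distance, a field-area-dependent head-phantom conversion
    factor, and an optional table attenuation factor."""
    inverse_square = (d_cal_cm / d_point_cm) ** 2
    return (mAs * mR_per_mAs_cal * inverse_square
            * head_factor_fn(field_area_cm2) * table_factor)
```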

  7. Real-time processor for 3-D information extraction from image sequences by a moving area sensor

    NASA Astrophysics Data System (ADS)

    Hattori, Tetsuo; Nakada, Makoto; Kubo, Katsumi

    1990-11-01

    This paper presents a real-time image processor for obtaining three-dimensional (3-D) distance information from an image sequence produced by a moving area sensor. The processor has been developed for an automated visual inspection robot system (pilot system) with an autonomous vehicle which moves around avoiding obstacles in a power plant and checks whether there are defects or abnormal phenomena such as steam leakage from valves. The processor determines the distance between objects in the input image and the area sensor by deciding corresponding points (pixels) between the first input image and the last one, tracing the loci of edges through the sequence of sixteen images. The key hardware consists of two kinds of boards: mapping boards which can transform the X-coordinate (horizontal direction) and Y-coordinate (vertical direction) for each horizontal row of images, and a regional labelling board which extracts the connected loci of edges through the image sequence. This paper also shows the whole processing flow of the distance detection algorithm. Since the processor can continuously process images (512x512x8 [pixels*bits per frame]) at the NTSC video rate, it takes about 0.7 [sec] to measure the 3D distance from sixteen input images. The error of the measurement is at most 10 percent when the area sensor moves laterally over a range of 20 [centimeters] and when the measured scene, including a complicated background, is at a distance of 4 [meters] from

  8. Real-time 3D millimeter wave imaging based FMCW using GGD focal plane array as detectors

    NASA Astrophysics Data System (ADS)

    Levanon, Assaf; Rozban, Daniel; Kopeika, Natan S.; Yitzhaky, Yitzhak; Abramovich, Amir

    2014-03-01

    Millimeter wave (MMW) imaging systems are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is relatively low. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. A 3D MMW imaging system based on chirp radar was previously studied using a scanning imaging system with a single detector. The system presented here proposes to employ the chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (FPA) of plasma-based detectors. Each point on the object corresponds to a point in the image and includes the distance information. This enables 3D MMW imaging. The radar system requires that the millimeter wave detector (GDD) be able to operate as a heterodyne detector. Since the source of radiation is a frequency-modulated continuous wave (FMCW), the signal obtained by heterodyne detection gives the object's depth information according to the value of the difference frequency, in addition to the reflectance of the image. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of GDD devices. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
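
    The depth information recovered from the difference frequency follows the standard FMCW range relation; a short sketch, assuming a linear chirp of bandwidth B swept over a time T_sweep (the system's actual chirp parameters are not given in the abstract):

```python
# Standard FMCW range relation: R = c * f_beat * T_sweep / (2 * B),
# where f_beat is the measured difference (beat) frequency.
C = 3.0e8  # speed of light, m/s

def fmcw_range(f_beat_hz, bandwidth_hz, sweep_time_s):
    return C * f_beat_hz * sweep_time_s / (2.0 * bandwidth_hz)

# e.g. a 1 GHz chirp swept in 1 ms with a 67 kHz beat gives roughly 10 m
print(fmcw_range(67e3, 1e9, 1e-3))
```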

  9. C-ME: A 3D Community-Based, Real-Time Collaboration Tool for Scientific Research and Training

    PubMed Central

    Kolatkar, Anand; Kennedy, Kevin; Halabuk, Dan; Kunken, Josh; Marrinucci, Dena; Bethel, Kelly; Guzman, Rodney; Huckaby, Tim; Kuhn, Peter

    2008-01-01

    The need for effective collaboration tools is growing as multidisciplinary proteome-wide projects and distributed research teams become more common. The resulting data is often quite disparate, stored in separate locations, and not contextually related. Collaborative Molecular Modeling Environment (C-ME) is an interactive community-based collaboration system that allows researchers to organize information, visualize data on a two-dimensional (2-D) or three-dimensional (3-D) basis, and share and manage that information with collaborators in real time. C-ME stores the information in industry-standard databases that are immediately accessible by appropriate permission within the computer network directory service or anonymously across the internet through the C-ME application or through a web browser. The system addresses two important aspects of collaboration: context and information management. C-ME allows a researcher to use a 3-D atomic structure model or a 2-D image as a contextual basis on which to attach and share annotations to specific atoms or molecules or to specific regions of a 2-D image. These annotations provide additional information about the atomic structure or image data that can then be evaluated, amended or added to by other project members. PMID:18286178

  10. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    NASA Astrophysics Data System (ADS)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H.; Meeks, Sanford L.; Kupelian, Patrick A.

    2010-09-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  11. Towards real-time 3D US-CT registration on the beating heart for guidance of minimally invasive cardiac interventions

    NASA Astrophysics Data System (ADS)

    Li, Feng; Lang, Pencilla; Rajchl, Martin; Chen, Elvis C. S.; Guiraudon, Gerard; Peters, Terry M.

    2012-02-01

    Compared to conventional open-heart surgeries, minimally invasive cardiac interventions cause less trauma and fewer side effects for patients. However, a direct view of surgical targets and tools is usually not available in minimally invasive procedures, which makes image-guided navigation systems essential. The choice of imaging modalities used in the navigation systems must consider the capability of imaging soft tissues, spatial and temporal resolution, compatibility and flexibility in the OR, and financial cost. In this paper, we propose a new means of guidance for minimally invasive cardiac interventions using 3D real-time ultrasound images to show the intra-operative heart motion together with preoperative CT image(s) employed to provide high-quality 3D anatomical context. We also develop a method to register intra-operative ultrasound and pre-operative CT images in close to real time. The registration method has two stages. In the first, anatomical features are segmented from the first frame of the ultrasound images and from the CT image(s), and a feature-based registration is used to align those features. The result is used as an initialization in the second stage, in which a mutual-information-based registration is used to register every ultrasound frame to the CT image(s). A GPU-based implementation is used to accelerate the registration.
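
    The second registration stage described above (mutual information, initialized by the feature-based alignment) can be outlined with a common registration toolkit. The sketch below uses SimpleITK with illustrative parameter values; it is a generic outline, not the authors' GPU implementation:

```python
import SimpleITK as sitk

def register_us_to_ct(fixed_ct, moving_us, initial_tx):
    """Mutual-information registration of one US frame to the CT volume,
    initialized by the stage-one feature-based transform."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0,
                                      numberOfIterations=100)
    reg.SetInitialTransform(initial_tx, inPlace=False)
    return reg.Execute(fixed_ct, moving_us)
```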

  12. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
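
    The accuracy (RMSE) and precision (SD) figures reported above correspond to the following standard computation on per-coordinate tracking errors; a minimal sketch:

```python
import numpy as np

def accuracy_precision(measured_mm, reference_mm):
    """Accuracy as RMSE of the errors against the reference positions and
    precision as the SD of those errors."""
    err = np.asarray(measured_mm) - np.asarray(reference_mm)
    rmse = np.sqrt(np.mean(err ** 2))
    precision = np.std(err)
    return rmse, precision
```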

  13. Accuracy and precision of a custom camera-based system for 2-d and 3-d motion tracking during speech and nonspeech motor tasks.

    PubMed

    Feng, Yongqiang; Max, Ludo

    2014-04-01

    PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  14. [Measurement of left atrial and ventricular volumes in real-time 3D echocardiography. Validation by nuclear magnetic resonance]

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Shiota, T.; Qin, J. X.; White, R. D.; Thomas, J. D.

    2001-01-01

    The measurement of the left ventricular ejection fraction is important for the evaluation of cardiomyopathy and depends on the measurement of left ventricular volumes. There are no existing conventional echocardiographic means of measuring the true left atrial and ventricular volumes without mathematical approximations. The aim of this study was to test a new real-time 3-dimensional echocardiographic system for calculating left atrial and ventricular volumes in 40 patients after in vitro validation. The volumes of the left atrium and ventricle acquired from real-time 3D echocardiography in the apical view were calculated in 7 sections parallel to the surface of the probe and compared with atrial (10 patients) and ventricular (30 patients) volumes calculated by nuclear magnetic resonance with the Simpson method and with volumes of water in balloons placed in a cistern. Linear regression analysis showed an excellent correlation between the real volume of water in the balloons and the volumes given by real-time 3-dimensional echocardiography (y = 0.94x + 5.5, r = 0.99, p < 0.001, D = -10 +/- 4.5 ml). A good correlation was observed between real-time 3-dimensional echocardiography and nuclear magnetic resonance for the measurement of left atrial and ventricular volumes (y = 0.95x - 10, r = 0.91, p < 0.001, D = -14.8 +/- 19.5 ml and y = 0.87x + 10, r = 0.98, p < 0.001, D = -8.3 +/- 18.7 ml, respectively). The authors conclude that real-time three-dimensional echocardiography allows accurate measurement of left heart volumes, underlining the clinical potential of this new 3D method.

  15. 3D gesture recognition from serial range image

    NASA Astrophysics Data System (ADS)

    Matsui, Yasuyuki; Miyasaka, Takeo; Hirose, Makoto; Araki, Kazuo

    2001-10-01

    In this research, the recognition of gestures in 3D space is examined using serial range images obtained by a real-time 3D measurement system developed in our laboratory. Using this system, it is possible to obtain time sequences of range, intensity and color data for a moving object in real time without attaching markers to the targets. At first, gestures are tracked in 2D space by calculating 2D flow vectors at each point using an ordinary optical flow estimation method based on time sequences of the intensity data. Then, the location of each point after the 2D movement is detected on the x-y plane using the obtained 2D flow vectors. Depth information for each point after movement is then obtained from the range data, and 3D flow vectors are assigned to each point. Time sequences of the obtained 3D flow vectors allow us to track the 3D movement of the target. Based on these time sequences of 3D flow vectors, it is possible to classify the movement of the targets using a continuous DP matching technique. This tracking of 3D movement using time sequences of 3D flow vectors may be applicable to a robust gesture recognition system.
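
    Lifting the 2D flow field into 3D, as described above, amounts to displacing each pixel by its 2D flow vector and reading the depth change from the two range images. Below is a minimal sketch under that assumption (nearest-pixel depth lookup, no sub-pixel interpolation); it is illustrative only, not the laboratory's implementation.

```python
import numpy as np

def flow_3d(flow_2d, range_prev, range_next):
    """Lift dense 2D optical flow to 3D using two consecutive range images.

    flow_2d:    (H, W, 2) pixel displacements (dx, dy) between frames.
    range_prev: (H, W) depth at time t; range_next: (H, W) depth at t+1.
    Returns (H, W, 3) flow vectors (dx, dy, dz).
    """
    h, w = range_prev.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x2 = np.clip(np.rint(xs + flow_2d[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.rint(ys + flow_2d[..., 1]).astype(int), 0, h - 1)
    dz = range_next[y2, x2] - range_prev           # depth change along the track
    return np.dstack([flow_2d[..., 0], flow_2d[..., 1], dz])
```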

  16. Real-time self-calibration of a tracked augmented reality display

    NASA Astrophysics Data System (ADS)

    Baum, Zachary; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor

    2016-03-01

    PURPOSE: Augmented reality systems have been proposed for image-guided needle interventions but they have not become widely used in clinical practice due to restrictions such as limited portability, low display refresh rates, and tedious calibration procedures. We propose a handheld tablet-based self-calibrating image overlay system. METHODS: A modular handheld augmented reality viewbox was constructed from a tablet computer and a semi-transparent mirror. A consistent and precise self-calibration method, without the use of any temporary markers, was designed to achieve an accurate calibration of the system. Markers attached to the viewbox and patient are simultaneously tracked using an optical pose tracker to report the position of the patient with respect to a displayed image plane that is visualized in real-time. The software was built using the open-source 3D Slicer application platform's SlicerIGT extension and the PLUS toolkit. RESULTS: The accuracy of the image overlay with image-guided needle interventions yielded a mean absolute position error of 0.99 mm (95th percentile 1.93 mm) in-plane of the overlay and a mean absolute position error of 0.61 mm (95th percentile 1.19 mm) out-of-plane. This accuracy is clinically acceptable for tool guidance during various procedures, such as musculoskeletal injections. CONCLUSION: A self-calibration method was developed and evaluated for a tracked augmented reality display. The results show potential for the use of handheld image overlays in clinical studies with image-guided needle interventions.

  17. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation

    PubMed Central

    Housden, R. James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C. Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'Neill, Mark D.; Razavi, Reza; Rhode, Kawal S.

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical to determine function and validate accuracy; and 2) in the clinical setting to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures. PMID:27170872

  18. Real-time target tracking for a 360-degree panoramic IR imager

    NASA Astrophysics Data System (ADS)

    Olson, C. C.; Waterman, J. R.; Nichols, J. M.

    2014-06-01

    We introduce a detection and tracking algorithm for panoramic imaging systems intended for operations in high-clutter environments. The algorithm combines correlation- and model-based tracking in a manner that is robust to occluding objects but without the need for a separate collision prediction module. Large data rates associated with the panoramic imager necessitate the use of parallel computation on graphics processing units. We discuss the queuing and tracking algorithms as well as practical considerations required for real-time implementation.

  19. Real-time tracking of objects for space applications using a laser range scanner

    NASA Technical Reports Server (NTRS)

    Blais, F.; Couvillon, R. A.; Rioux, M.; Maclean, S. G.

    1994-01-01

    Real-time tracking of multiple targets and three dimensional object features was demonstrated using a laser range scanner. The prototype was immune to ambient illumination and sun interference. Tracking error feedback was simultaneously obtained from individual targets, global predicted target position, and the human operator. A more complete study of calibration parameters and temperature variations on the scanner is needed to determine the exact performance of the sensor. Lissajous patterns used in three-dimensional real-time tracking prove helpful given their high resolution. The photogrammetry-based Advanced Space Vision System (ASVS) is discussed in combination with the laser range scanner.

  20. Creation of 3D digital anthropomorphic phantoms which model actual patient non-rigid body motion as determined from MRI and position tracking studies of volunteers

    NASA Astrophysics Data System (ADS)

    Connolly, C. M.; Konik, A.; Dasari, P. K. R.; Segars, P.; Zheng, S.; Johnson, K. L.; Dey, J.; King, M. A.

    2011-03-01

    Patient motion can cause artifacts, which can lead to difficulty in interpretation. The purpose of this study is to create 3D digital anthropomorphic phantoms which model the location of the structures of the chest and upper abdomen of human volunteers undergoing a series of clinically relevant motions. The 3D anatomy is modeled using the XCAT phantom and based on MRI studies. The NURBS surfaces of the XCAT are interactively adapted to fit the MRI studies. A detailed XCAT phantom is first developed from an EKG-triggered Navigator acquisition composed of sagittal slices with a 3 x 3 x 3 mm voxel dimension. Rigid body motion states are then acquired at breath-hold as sagittal slices partially covering the thorax, centered on the heart, with 9 mm gaps between them. For non-rigid body motion requiring greater sampling, modified Navigator sequences covering the entire thorax with 3 mm gaps between slices are obtained. The structures of the initial XCAT are then adapted to fit these different motion states. Simultaneous to MRI imaging, the positions of multiple reflective markers on stretchy bands about the volunteer's chest and abdomen are optically tracked in 3D via stereo imaging. These phantoms with combined position tracking will be used to investigate both imaging-data-driven and motion-tracking strategies to estimate and correct for patient motion. Our initial application will be to cardiac perfusion SPECT imaging, where the XCAT phantoms will be used to create patient activity and attenuation distributions for each volunteer with corresponding motion tracking data from the markers on the body surface. Monte Carlo methods will then be used to simulate SPECT acquisitions, which will be used to evaluate various motion estimation and correction strategies.

  1. Real-time object tracking for moving target auto-focus in digital camera

    NASA Astrophysics Data System (ADS)

    Guan, Haike; Niinami, Norikatsu; Liu, Tong

    2015-02-01

    Focusing on a moving object accurately is difficult but important for photographing the target successfully with a digital camera. Because the object often moves randomly and changes its shape frequently, the position and distance of the target should be estimated in real time so as to focus on the object precisely. We propose a new method of real-time object tracking for auto-focus on moving targets in a digital camera. The video stream in the camera is used for moving-target tracking. A particle filter is used to deal with the problem of the target object's random movement and shape change. Color and edge features are used as measurements of the object's state. A parallel processing algorithm is developed to realize real-time particle-filter object tracking easily in the hardware environment of the digital camera. A movement prediction algorithm is also proposed to remove the focus error caused by the difference between the tracking result and the target object's real position when the photo is taken. Simulation and experimental results in a digital camera demonstrate the effectiveness of the proposed method. We embedded the real-time object tracking algorithm in the digital camera. The position and distance of the moving target are obtained accurately by object tracking from the video stream. A SIMD processor is applied to perform parallel real-time processing. A processing time of less than 60 ms per frame is achieved in the digital camera, whose CPU runs at only 162 MHz.
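
    The heart of the tracker described here is a particle filter: propagate candidate positions with a loose motion model, re-weight them by a color/edge likelihood, and resample when the weights degenerate. The sketch below shows one such cycle with hypothetical names and a plug-in likelihood; it is a generic illustration, not the camera firmware.

```python
import numpy as np

def particle_filter_step(particles, weights, likelihood, motion_std=5.0, rng=None):
    """One predict/update/resample cycle for 2D object tracking.

    particles:  (N, 2) candidate object positions in pixels.
    likelihood: callable mapping an (N, 2) array to per-particle scores,
                e.g. similarity of local color/edge histograms to the target.
    """
    rng = rng or np.random.default_rng()
    # Predict: a random-walk motion model copes with erratic target movement.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: re-weight particles by how well they match the target model.
    weights = weights * likelihood(particles)
    weights = weights / max(weights.sum(), 1e-12)
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```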

  2. Experimental evaluations of the accuracy of 3D and 4D planning in robotic tracking stereotactic body radiotherapy for lung cancers

    SciTech Connect

    Chan, Mark K. H.; Kwong, Dora L. W.; Ng, Sherry C. Y.; Tong, Anthony S. M.; Tam, Eric K. W.

    2013-04-15

    Purpose: Due to the complexity of 4D target tracking radiotherapy, the accuracy of this treatment strategy should be experimentally validated against the established standard 3D technique. This work compared the accuracy of 3D and 4D dose calculations in respiration tracking stereotactic body radiotherapy (SBRT). Methods: Using the 4D planning module of the CyberKnife treatment planning system, treatment plans for a moving target and a static off-target cord structure were created on different four-dimensional computed tomography (4D-CT) datasets of a thorax phantom moving in different ranges. The 4D planning system used B-spline deformable image registration (DIR) to accumulate dose distributions calculated on different breathing geometries, each corresponding to a static 3D-CT image of the 4D-CT dataset, onto a reference image to compose a 4D dose distribution. For each motion, 4D optimization was performed to generate a 4D treatment plan of the moving target. For comparison with standard 3D planning, each 4D plan was copied to the reference end-exhale images and a standard 3D dose calculation followed. Treatment plans of the off-target structure were first obtained by standard 3D optimization on the end-exhale images. Subsequently, they were applied to recalculate the 4D dose distributions using DIRs. All dose distributions that were initially obtained using the ray-tracing algorithm with equivalent path-length heterogeneity correction (3D-EPL and 4D-EPL) were recalculated by a Monte Carlo algorithm (3D-MC and 4D-MC) to further investigate the effects of dose calculation algorithms. The calculated 3D-EPL, 3D-MC, 4D-EPL, and 4D-MC dose distributions were compared to measurements by Gafchromic EBT2 films in the axial and coronal planes of the moving target object, and in the coronal plane for the static off-target object, based on the γ metric at 5%/3 mm criteria (γ5%/3mm). Treatment plans were considered
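
    As a reminder of the evaluation criterion: the γ index combines a dose-difference tolerance and a distance-to-agreement tolerance, and a point passes when the minimum combined deviation is at most 1. The sketch below computes a global 1D γ pass rate as an illustration of the 5%/3 mm criterion; clinical film analysis works on 2D dose planes and is not reproduced here.

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, coords_mm, dose_tol=0.05, dist_tol_mm=3.0):
    """Simplified 1D gamma analysis with global dose normalization."""
    d_norm = dose_tol * dose_ref.max()
    gammas = []
    for x_ref, d_ref in zip(coords_mm, dose_ref):
        dd = (dose_eval - d_ref) / d_norm            # dose-difference term
        dx = (coords_mm - x_ref) / dist_tol_mm       # distance-to-agreement term
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return float(np.mean(np.array(gammas) <= 1.0))   # fraction of points passing
```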

  3. Using virtual reality technology and hand tracking technology to create software for training surgical skills in 3D game

    NASA Astrophysics Data System (ADS)

    Zakirova, A. A.; Ganiev, B. A.; Mullin, R. I.

    2015-11-01

    The lack of visible and accessible ways of training surgical skills is one of the main problems in medical education. Existing simulation training devices are not designed to teach students and are not widely available due to the high cost of the equipment. Using modern technologies such as virtual reality and hand-movement tracking, we want to create an innovative method for learning the techniques of conducting operations in a 3D game format, which can make the education process interesting and effective. Creating a 3D virtual simulator will solve several conceptual problems at once: the opportunity to improve practical skills without time limits and without risk to the patient; high realism of the operating-room environment and anatomical body structures; the use of game mechanics to ease information perception and accelerate the memorization of methods; and the accessibility of the program.

  4. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of moving tumors during dose delivery has become higher due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion, such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image intensity based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom designed software platforms which take too much time and effort to develop. To address this challenge we have developed an open software platform especially focusing on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90 ms (11 Hz). This is suitable to track tumors moving due to respiration (~0.3 Hz) or heartbeat (~1 Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases range from tracking feasibility studies to patient positioning and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.
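
    In intensity-based 2D/3D registration, each candidate pose is scored by comparing the rendered DRR against the planar reference X-ray, and the pose is refined by maximizing that score. The sketch below shows one commonly used similarity measure (normalized cross-correlation) purely as an illustration; it is not taken from the platform's source code.

```python
import numpy as np

def normalized_cross_correlation(drr, xray):
    """Similarity between a rendered DRR and the reference radiograph.

    A 2D/3D registration loop re-renders the DRR for each candidate pose
    of the CT volume and maximizes this score."""
    a = drr.astype(float) - drr.mean()
    b = xray.astype(float) - xray.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```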

  5. Longitudinal, label-free, quantitative tracking of cell death and viability in a 3D tumor model with OCT.

    PubMed

    Jung, Yookyung; Klein, Oliver J; Wang, Hequn; Evans, Conor L

    2016-01-01

    Three-dimensional in vitro tumor models are highly useful tools for studying tumor growth and treatment response of malignancies such as ovarian cancer. Existing viability and treatment assessment assays, however, face shortcomings when applied to these large, complex, and heterogeneous culture systems. Optical coherence tomography (OCT) is a noninvasive, label-free, optical imaging technique that can visualize live cells and tissues over time with subcellular resolution and millimeters of optical penetration depth. Here, we show that OCT is capable of carrying out high-content, longitudinal assays of 3D culture treatment response. We demonstrate the usage and capability of OCT for the dynamic monitoring of individual and combination therapeutic regimens in vitro, including both chemotherapy drugs and photodynamic therapy (PDT) for ovarian cancer. OCT was validated against the standard LIVE/DEAD Viability/Cytotoxicity Assay in small tumor spheroid cultures, showing excellent correlation with existing standards. Importantly, OCT was shown to be capable of evaluating 3D spheroid treatment response even when traditional viability assays failed. OCT 3D viability imaging revealed synergy between PDT and the standard-of-care chemotherapeutic carboplatin that evolved over time. We believe the efficacy and accuracy of OCT in vitro drug screening will greatly contribute to the field of cancer treatment and therapy evaluation. PMID:27248849

  6. Longitudinal, label-free, quantitative tracking of cell death and viability in a 3D tumor model with OCT

    PubMed Central

    Jung, Yookyung; Klein, Oliver J.; Wang, Hequn; Evans, Conor L.

    2016-01-01

    Three-dimensional in vitro tumor models are highly useful tools for studying tumor growth and treatment response of malignancies such as ovarian cancer. Existing viability and treatment assessment assays, however, face shortcomings when applied to these large, complex, and heterogeneous culture systems. Optical coherence tomography (OCT) is a noninvasive, label-free, optical imaging technique that can visualize live cells and tissues over time with subcellular resolution and millimeters of optical penetration depth. Here, we show that OCT is capable of carrying out high-content, longitudinal assays of 3D culture treatment response. We demonstrate the usage and capability of OCT for the dynamic monitoring of individual and combination therapeutic regimens in vitro, including both chemotherapy drugs and photodynamic therapy (PDT) for ovarian cancer. OCT was validated against the standard LIVE/DEAD Viability/Cytotoxicity Assay in small tumor spheroid cultures, showing excellent correlation with existing standards. Importantly, OCT was shown to be capable of evaluating 3D spheroid treatment response even when traditional viability assays failed. OCT 3D viability imaging revealed synergy between PDT and the standard-of-care chemotherapeutic carboplatin that evolved over time. We believe the efficacy and accuracy of OCT in vitro drug screening will greatly contribute to the field of cancer treatment and therapy evaluation. PMID:27248849

  7. Longitudinal, label-free, quantitative tracking of cell death and viability in a 3D tumor model with OCT

    NASA Astrophysics Data System (ADS)

    Jung, Yookyung; Klein, Oliver J.; Wang, Hequn; Evans, Conor L.

    2016-06-01

    Three-dimensional in vitro tumor models are highly useful tools for studying tumor growth and treatment response of malignancies such as ovarian cancer. Existing viability and treatment assessment assays, however, face shortcomings when applied to these large, complex, and heterogeneous culture systems. Optical coherence tomography (OCT) is a noninvasive, label-free, optical imaging technique that can visualize live cells and tissues over time with subcellular resolution and millimeters of optical penetration depth. Here, we show that OCT is capable of carrying out high-content, longitudinal assays of 3D culture treatment response. We demonstrate the usage and capability of OCT for the dynamic monitoring of individual and combination therapeutic regimens in vitro, including both chemotherapy drugs and photodynamic therapy (PDT) for ovarian cancer. OCT was validated against the standard LIVE/DEAD Viability/Cytotoxicity Assay in small tumor spheroid cultures, showing excellent correlation with existing standards. Importantly, OCT was shown to be capable of evaluating 3D spheroid treatment response even when traditional viability assays failed. OCT 3D viability imaging revealed synergy between PDT and the standard-of-care chemotherapeutic carboplatin that evolved over time. We believe the efficacy and accuracy of OCT in vitro drug screening will greatly contribute to the field of cancer treatment and therapy evaluation.

  8. Exploration of the potential of liquid scintillators for real-time 3D dosimetry of intensity modulated proton beams

    PubMed Central

    Beddar, Sam; Archambault, Louis; Sahoo, Narayan; Poenisch, Falk; Chen, George T.; Gillin, Michael T.; Mohan, Radhe

    2009-01-01

    In this study, the authors investigated the feasibility of using a 3D liquid scintillator (LS) detector system for the verification and characterization of proton beams in real time for intensity- and energy-modulated proton therapy. A plastic tank filled with liquid scintillator was irradiated with pristine proton Bragg peaks. Scintillation light produced during the irradiation was measured with a CCD camera. Acquisition rates of 20 and 10 frames per second (fps) were used to image consecutive frame sequences. These measurements were then compared to ion chamber measurements and Monte Carlo simulations. The light distributions measured from the images acquired at rates of 20 and 10 fps have standard deviations of 1.1% and 0.7%, respectively, in the plateau region of the Bragg curve. Differences were seen between the raw LS signal and the ion chamber due to the quenching effects of the LS and the optical properties of the imaging system. The authors showed that this effect can be accounted for and corrected by Monte Carlo simulations. The liquid scintillator detector system has good potential for performing fast proton beam verification and characterization. PMID:19544791

  9. VR-Planets : a 3D immersive application for real-time flythrough images of planetary surfaces

    NASA Astrophysics Data System (ADS)

    Civet, François; Le Mouélic, Stéphane

    2015-04-01

    During the last two decades, a fleet of planetary probes has acquired several hundred gigabytes of images of planetary surfaces. Mars has been particularly well covered thanks to the Mars Global Surveyor, Mars Express and Mars Reconnaissance Orbiter spacecraft. The HRSC, CTX and HiRISE instruments allowed the computation of Digital Elevation Models with a resolution from hundreds of meters up to 1 meter per pixel, and corresponding orthoimages with a resolution from a few hundred meters up to 25 centimeters per pixel. The integration of such huge data sets into a system allowing user-friendly manipulation, either for scientific investigation or for public outreach, can represent a real challenge. We are investigating how innovative tools can be used to freely fly over reconstructed landscapes in real time, using technologies derived from the game industry and virtual reality. We have developed an application based on a game engine, using planetary data, to immerse users in real Martian landscapes. The user can freely navigate in each scene at full spatial resolution using a game controller. The actual rendering is compatible with several visualization devices such as 3D active screens, virtual reality headsets (Oculus Rift), and Android devices.

  10. Accuracy of a Mitral Valve Segmentation Method Using J-Splines for Real-Time 3D Echocardiography Data

    PubMed Central

    Siefert, Andrew W.; Icenogle, David A.; Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Rossignac, Jarek; Lerakis, Stamatios; Yoganathan, Ajit P.

    2013-01-01

    Patient-specific models of the heart’s mitral valve (MV) exhibit potential for surgical planning. While advances in 3D echocardiography (3DE) have provided adequate resolution to extract MV leaflet geometry, no study has quantitatively assessed the accuracy of their modeled leaflets versus a ground-truth standard for temporal frames beyond systolic closure or for differing valvular dysfunctions. The accuracy of a 3DE-based segmentation methodology based on J-splines was assessed for porcine MVs with known 4D leaflet coordinates within a pulsatile simulator during closure, peak closure, and opening for a control, prolapsed, and billowing MV model. For all time points, the mean distance errors between the segmented models and ground-truth data were 0.40±0.32 mm, 0.52±0.51 mm, and 0.74±0.69 mm for the control, flail, and billowing models. For all models and temporal frames, 95% of the distance errors were below 1.64 mm. When applied to a patient data set, segmentation was able to confirm a regurgitant orifice and post-operative improvements in coaptation. This study provides an experimental platform for assessing the accuracy of an MV segmentation methodology at phases beyond systolic closure and for differing MV dysfunctions. Results demonstrate the accuracy of an MV segmentation methodology for the development of future surgical planning tools. PMID:23460042

  11. Tracking Accuracy of a Real-Time Fiducial Tracking System for Patient Positioning and Monitoring in Radiation Therapy

    SciTech Connect

    Shchory, Tal; Schifter, Dan; Lichtman, Rinat; Neustadter, David; Corn, Benjamin W.

    2010-11-15

    Purpose: In radiation therapy there is a need to accurately know the location of the target in real time. A novel radioactive tracking technology has been developed to answer this need. The technology consists of a radioactive implanted fiducial marker designed to minimize migration and a linac mounted tracking device. This study measured the static and dynamic accuracy of the new tracking technology in a clinical radiation therapy environment. Methods and Materials: The tracking device was installed on the linac gantry. The radioactive marker was located in a tissue equivalent phantom. Marker location was measured simultaneously by the radioactive tracking system and by a Microscribe G2 coordinate measuring machine (certified spatial accuracy of 0.38 mm). Localization consistency throughout a volume and absolute accuracy in the Fixed coordinate system were measured at multiple gantry angles over volumes of at least 10 cm in diameter centered at isocenter. Dynamic accuracy was measured with the marker located inside a breathing phantom. Results: The mean consistency for the static source was 0.58 mm throughout the tested region at all measured gantry angles. The mean absolute position error in the Fixed coordinate system for all gantry angles was 0.97 mm. The mean real-time tracking error for the dynamic source within the breathing phantom was less than 1 mm. Conclusions: This novel radioactive tracking technology has the potential to be useful in accurate target localization and real-time monitoring for radiation therapy.

  12. Real-time x-ray fluoroscopy-based catheter detection and tracking for cardiac electrophysiology interventions

    SciTech Connect

    Ma Yingliang; Housden, R. James; Razavi, Reza; Rhode, Kawal S.; Gogin, Nicolas; Cathier, Pascal; Gijsbers, Geert; Cooklin, Michael; O'Neill, Mark; Gill, Jaswinder; Rinaldi, C. Aldo

    2013-07-15

    Purpose: X-ray fluoroscopically guided cardiac electrophysiology (EP) procedures are commonly carried out to treat patients with arrhythmias. X-ray images have poor soft tissue contrast and, for this reason, overlay of a three-dimensional (3D) roadmap derived from preprocedural volumetric images can be used to add anatomical information. It is useful to know the position of the catheter electrodes relative to the cardiac anatomy, for example, to record ablation therapy locations during atrial fibrillation therapy. Also, the electrode positions of the coronary sinus (CS) catheter or lasso catheter can be used for road map motion correction. Methods: In this paper, the authors present a novel unified computational framework for image-based catheter detection and tracking without any user interaction. The proposed framework includes fast blob detection, shape-constrained searching and model-based detection. In addition, catheter tracking methods were designed based on the customized catheter models input from the detection method. Three real-time detection and tracking methods are derived from the computational framework to detect or track the three most common types of catheters in EP procedures: the ablation catheter, the CS catheter, and the lasso catheter. Since the proposed methods use the same blob detection method to extract key information from x-ray images, the ablation, CS, and lasso catheters can be detected and tracked simultaneously in real time. Results: The catheter detection methods were tested on 105 different clinical fluoroscopy sequences taken from 31 clinical procedures. Two-dimensional (2D) detection errors of 0.50 ± 0.29, 0.92 ± 0.61, and 0.63 ± 0.45 mm as well as success rates of 99.4%, 97.2%, and 88.9% were achieved for the CS catheter, ablation catheter, and lasso catheter, respectively. With the tracking method, accuracies were increased to 0.45 ± 0.28, 0.64 ± 0.37, and 0.53 ± 0.38 mm and success rates increased to 100%, 99
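
    The shared first step, fast blob detection of the dark, roughly circular electrodes, can be pictured as a Laplacian-of-Gaussian response followed by local-maximum selection. The sketch below is a generic stand-in with hypothetical parameter values (sigma matched to the electrode radius, an arbitrary response threshold); the paper's actual detector additionally applies shape-constrained searching and catheter models on top of this.

```python
import numpy as np
from scipy import ndimage

def detect_electrode_blobs(frame, sigma=3.0, threshold=0.1):
    """Find dark, electrode-sized blobs in a fluoroscopy frame.

    Electrodes appear as dark spots, so the Laplacian-of-Gaussian response
    is positive at blob centres of radius ~ sigma * sqrt(2) pixels."""
    response = ndimage.gaussian_laplace(frame.astype(float), sigma=sigma)
    local_max = ndimage.maximum_filter(response, size=9)
    peaks = (response == local_max) & (response > threshold)
    ys, xs = np.nonzero(peaks)
    return np.column_stack([xs, ys])          # (N, 2) blob centres in pixels
```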

  13. Three different strategies for real-time prostate capsule volume computation from 3-D end-fire transrectal ultrasound.

    PubMed

    Barqawi, Albaha B; Lu, Li; Crawford, E David; Fenster, Aaron; Werahera, Priya N; Kumar, Dinesh; Miller, Steve; Suri, Jasjit S

    2007-01-01

    Estimation of prostate capsule volume via segmentation of the prostate from 3-D volumetric ultrasound images is a valuable clinical tool, especially during biopsy. Normally, a physician traces the boundaries of the prostate manually, but this process is tedious, laborious, and subject to errors. The prostate capsule edge is computed using three different strategies: (a) a least square approach, (b) a level set approach, and (c) a Discrete Dynamic Contour approach. (a) In the least square method, edge points are defined by searching for the optimal edge based on the average signal characteristics. These edge points constitute an initial curve which is later refined. (b) In the level set approach, the images are modeled as piece-wise constant, and an energy functional is defined and minimized. This method is also automated. (c) In the Discrete Dynamic Contour (DDC) method, a trained user selects several points in the first image and an initial contour is obtained by a model-based initialization. Based on this initialization, the contour is deformed automatically to better fit the image. This method is semi-automatic. The three methods were tested on a database consisting of 15 prostate phantom volumes acquired using a Philips ultrasound machine with an end-fire TRUS probe. The ground truth (GT) is developed by tracing the boundary of the prostate on a slice-by-slice basis. The mean volumes using the least square, level set and DDC techniques were 15.84 cc, 15.55 cc and 16.33 cc, respectively. We validated the methods by calculating the volume with GT and we got an average volume of 15. PMID:18002081
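
    Regardless of which segmentation strategy produces the per-slice boundaries, the capsule volume follows from summing the enclosed cross-sectional areas times the slice spacing. A minimal sketch of that step (shoelace areas of traced contours, with hypothetical inputs) is given below; it is not the paper's code.

```python
import numpy as np

def capsule_volume_cc(contours, slice_spacing_mm):
    """Volume from per-slice boundary contours (each an (N, 2) array in mm)."""
    def polygon_area(pts):
        x, y = pts[:, 0], pts[:, 1]
        # Shoelace formula for the area of a closed polygon.
        return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

    total_mm3 = slice_spacing_mm * sum(polygon_area(c) for c in contours)
    return total_mm3 / 1000.0                 # mm^3 -> cc
```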

  14. A Distributed GPU-Based Framework for Real-Time 3D Volume Rendering of Large Astronomical Data Cubes

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.

    2012-05-01

    We present a framework to volume-render three-dimensional data cubes interactively using distributed ray-casting and volume-bricking over a cluster of workstations powered by one or more graphics processing units (GPUs) and a multi-core central processing unit (CPU). The main design target for this framework is to provide an in-core visualization solution able to provide three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 nodes with a total of 128 GPUs. The framework proved to be scalable to render a 204 GB data cube with an average of 30 frames per second. Our performance analyses also compare the use of NVIDIA Tesla 1060 and 2050 GPU architectures and the effect of increasing the visualization output resolution on the rendering performance. Although our initial focus, as shown in the examples presented in this work, is volume rendering of spectral data cubes from radio astronomy, we contend that our approach has applicability to other disciplines where close to real-time volume rendering of terabyte-order three-dimensional data sets is a requirement.

  15. Tracking time interval changes of pulmonary nodules on follow-up 3D CT images via image-based risk score of lung cancer

    NASA Astrophysics Data System (ADS)

    Kawata, Y.; Niki, N.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.; Moriyama, N.

    2013-03-01

    In this paper, we present a computer-aided follow-up (CAF) scheme to support physicians in tracking interval changes of pulmonary nodules on three-dimensional (3D) CT images and in deciding treatment strategies without any under- or over-treatment. Our scheme involves analyzing CT histograms to evaluate the volumetric distribution of CT values within pulmonary nodules. A variational Bayesian mixture modeling framework translates the image-derived features into an image-based risk score for predicting the patient's recurrence-free survival. By applying our scheme to follow-up 3D CT images of pulmonary nodules, we demonstrate the potential usefulness of the CAF scheme, which can provide trajectories that characterize time interval changes of pulmonary nodules.
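
    One way to picture the histogram-modeling step is to fit a variational Bayesian Gaussian mixture to the CT values sampled inside a segmented nodule and treat the fitted weights, means and variances as image-derived features; how those features map to the risk score is specific to the paper and is not reproduced here. The sketch below relies on scikit-learn and uses hypothetical names.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def nodule_ct_mixture(ct_values_hu, max_components=5, seed=0):
    """Fit a variational Bayesian Gaussian mixture to the CT values (in HU)
    sampled inside a segmented pulmonary nodule and return summary features."""
    x = np.asarray(ct_values_hu, dtype=float).reshape(-1, 1)
    model = BayesianGaussianMixture(n_components=max_components, random_state=seed)
    model.fit(x)
    # Mixture weights, component means and variances act as histogram features.
    return model.weights_, model.means_.ravel(), model.covariances_.ravel()
```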

  16. Real-time edge tracking using a tactile sensor

    NASA Technical Reports Server (NTRS)

    Berger, Alan D.; Volpe, Richard; Khosla, Pradeep K.

    1989-01-01

    Object recognition through the use of input from multiple sensors is an important aspect of an autonomous manipulation system. In tactile object recognition, it is necessary to determine the location and orientation of object edges and surfaces. A controller is proposed that utilizes a tactile sensor in the feedback loop of a manipulator to track along edges. In the control system, the data from the tactile sensor is first processed to find edges. The parameters of these edges are then used to generate a control signal to a hybrid controller. Theory is presented for tactile edge detection and an edge tracking controller. In addition, experimental verification of the edge tracking controller is presented.

  17. Tracking naturally occurring indoor features in 2-D and 3-D with lidar range/amplitude data

    SciTech Connect

    Adams, M.D.; Kerstens, A.

    1998-09-01

    Sensor-data processing for the interpretation of a mobile robot's indoor environment, and the manipulation of this data for reliable localization, are still some of the most important issues in robotics. This article presents algorithms that determine the true position of a mobile robot, based on real 2-D and 3-D optical range and intensity data. The authors start with the physics of the particular type of sensor used, so that the extraction of reliable and repeatable information (namely, edge coordinates) can be determined, taking into account the noise associated with each range sample and the possibility of optical multiple-path effects. Again, applying the physical model of the sensor, the estimated positions of the mobile robot and the uncertainty in these positions are determined. They demonstrate real experiments using 2-D and 3-D scan data taken in indoor environments. To update the robot's position reliably, the authors address the problem of matching the information recorded in a scan to, first, an a priori map, and second, to information recorded in previous scans, eliminating the need for an a priori map.

  18. Real-time implementation of zoom tracking on TI DM processor

    NASA Astrophysics Data System (ADS)

    Peddigari, Venkat R.; Kehtarnavaz, Nasser; Lee, Sang-Yong; Cook, G.

    2005-02-01

    Zoom tracking involves the automatic adjustment of the in-focus motor position of a digital camera to maintain focus when the zoom lens is moved. This paper discusses and compares the real-time implementation of two widely used zoom tracking algorithms, namely geometric zoom tracking (GZT) and adaptive zoom tracking (AZT), on the Texas Instruments (TI) digital media (DM) processor. DM is a highly integrated, programmable dual-core processor manufactured by TI specifically for the digital still camera market. Extensive testing was carried out on the DM platform to obtain the performance of these algorithms in terms of zoom tracking accuracy and speed. The results show that AZT generates a better tracking accuracy while GZT provides a faster tracking speed.

  19. Nanoelectronic three-dimensional (3D) nanotip sensing array for real-time, sensitive, label-free sequence specific detection of nucleic acids.

    PubMed

    Esfandyarpour, Rahim; Yang, Lu; Koochak, Zahra; Harris, James S; Davis, Ronald W

    2016-02-01

    The improvements in our ability to sequence and genotype DNA have opened up numerous avenues in the understanding of human biology and medicine, with various applications, especially in medical diagnostics. But the realization of label-free, real-time, high-throughput and low-cost biosensing platforms to detect molecular interactions with a high level of sensitivity has so far been hindered by two factors: one, slow binding kinetics caused by the lack of probe molecules on the sensors, and two, limited mass transport due to the planar (two-dimensional) structure of current biosensors. Here we present a novel three-dimensional (3D), highly sensitive, real-time, inexpensive and label-free nanotip array as a rapid and direct platform for sequence-specific DNA screening. Our nanotip sensors are designed to have a nano-sized thin film as their sensing area (~20 nm), sandwiched between two sensing electrodes. The tip is then conjugated to a DNA oligonucleotide complementary to the sequence of interest, which is electrochemically detected in real time via impedance changes upon the formation of a double-stranded helix at the sensor interface. This 3D configuration is specifically designed to improve the biomolecular hit rate and the detection speed. We demonstrate that our nanotip array effectively detects oligonucleotides in a sequence-specific and highly sensitive manner, yielding concentration-dependent impedance change measurements with a target concentration as low as 10 pM and discrimination against even a single mismatch. Notably, our nanotip sensors achieve this accurate, sensitive detection without relying on signal indicators or enhancing molecules like fluorophores. The array can also easily be scaled for highly multiplexed detection with up to 5000 sensors per square centimeter, and integrated into microfluidic devices. The versatile, rapid, and sensitive performance of the nanotip array makes it an excellent candidate for point-of-care diagnostics, and high

  20. Nanoelectronic three-dimensional (3D) nanotip sensing array for real-time, sensitive, label-free sequence specific detection of nucleic acids

    PubMed Central

    Yang, Lu; Koochak, Zahra; Harris, James S.; Davis, Ronald W.

    2016-01-01

    The improvements in our ability to sequence and genotype DNA have opened up numerous avenues in the understanding of human biology and medicine, with various applications, especially in medical diagnostics. But the realization of label-free, real-time, high-throughput and low-cost biosensing platforms to detect molecular interactions with a high level of sensitivity has so far been hindered by two factors: one, slow binding kinetics caused by the lack of probe molecules on the sensors, and two, limited mass transport due to the planar (two-dimensional) structure of current biosensors. Here we present a novel three-dimensional (3D), highly sensitive, real-time, inexpensive and label-free nanotip array as a rapid and direct platform for sequence-specific DNA screening. Our nanotip sensors are designed to have a nano-sized thin film as their sensing area (~20 nm), sandwiched between two sensing electrodes. The tip is then conjugated to a DNA oligonucleotide complementary to the sequence of interest, which is electrochemically detected in real time via impedance changes upon the formation of a double-stranded helix at the sensor interface. This 3D configuration is specifically designed to improve the biomolecular hit rate and the detection speed. We demonstrate that our nanotip array effectively detects oligonucleotides in a sequence-specific and highly sensitive manner, yielding concentration-dependent impedance change measurements with a target concentration as low as 10 pM and discrimination against even a single mismatch. Notably, our nanotip sensors achieve this accurate, sensitive detection without relying on signal indicators or enhancing molecules like fluorophores. The array can also easily be scaled for highly multiplexed detection with up to 5000 sensors per square centimeter, and integrated into microfluidic devices. The versatile, rapid, and sensitive performance of the nanotip array makes it an excellent candidate for point-of-care diagnostics, and high

  1. Applications of 3D hydrodynamic and particle tracking models in the San Francisco bay-delta estuary

    USGS Publications Warehouse

    Smith, P.E.; Donovan, J.M.; Wong, H.F.N.

    2005-01-01

    Three applications of three-dimensional hydrodynamic and particle-tracking models are currently underway by the United States Geological Survey in the San Francisco Bay-Delta Estuary. The first application is to the San Francisco Bay and a portion of the coastal ocean. The second application is to an important, gated control channel called the Delta Cross Channel, located within the northern portion of the Sacramento-San Joaquin River Delta. The third application is to a reach of the San Joaquin River near Stockton, California where a significant dissolved oxygen problem exists due, in part, to conditions associated with the deep-water ship channel for the Port of Stockton, California. This paper briefly discusses the hydrodynamic and particle tracking models being used and the three applications. Copyright ASCE 2005.

  2. NASA's "Eyes On The Solar System:" A Real-time, 3D-Interactive Tool to Teach the Wonder of Planetary Science

    NASA Astrophysics Data System (ADS)

    Hussey, K.

    2014-12-01

    NASA's Jet Propulsion Laboratory is using video game technology to immerse students, the general public and mission personnel in our solar system and beyond. "Eyes on the Solar System," a cross-platform, real-time, 3D-interactive application that can run on-line or as a stand-alone "video game," is of particular interest to educators looking for inviting tools to capture students' interest in a format they like and understand (eyes.nasa.gov). It gives users an extraordinary view of our solar system by virtually transporting them across space and time to make first-person observations of spacecraft, planetary bodies and NASA/ESA missions in action. Key scientific results illustrated with video presentations, supporting imagery and web links are embedded contextually into the solar system. Educators who want an interactive, game-based approach to engage students in learning Planetary Science will see how "Eyes" can be effectively used to teach its principles to grades 3 through 14. The presentation will include a detailed demonstration of the software along with a description/demonstration of how this technology is being adapted for education. There will also be a preview of coming attractions. This work is being conducted by the Visualization Technology Applications and Development Group at NASA's Jet Propulsion Laboratory, the same team responsible for "Eyes on the Earth 3D" and "Eyes on Exoplanets," which can be viewed at eyes.nasa.gov/earth and eyes.nasa.gov/exoplanets.

  3. Real-Time High Resolution 3D Imaging of the Lyme Disease Spirochete Adhering to and Escaping from the Vasculature of a Living Host

    PubMed Central

    Colarusso, Pina; Bankhead, Troy; Kubes, Paul; Chaconas, George

    2008-01-01

    Pathogenic spirochetes are bacteria that cause a number of emerging and re-emerging diseases worldwide, including syphilis, leptospirosis, relapsing fever, and Lyme borreliosis. They navigate efficiently through dense extracellular matrix and cross the blood–brain barrier by unknown mechanisms. Due to their slender morphology, spirochetes are difficult to visualize by standard light microscopy, impeding studies of their behavior in situ. We engineered a fluorescent infectious strain of Borrelia burgdorferi, the Lyme disease pathogen, which expressed green fluorescent protein (GFP). Real-time 3D and 4D quantitative analysis of fluorescent spirochete dissemination from the microvasculature of living mice at high resolution revealed that dissemination was a multi-stage process that included transient tethering-type associations, short-term dragging interactions, and stationary adhesion. Stationary adhesions and extravasating spirochetes were most commonly observed at endothelial junctions, and translational motility of spirochetes appeared to play an integral role in transendothelial migration. To our knowledge, this is the first report of high resolution 3D and 4D visualization of dissemination of a bacterial pathogen in a living mammalian host, and provides the first direct insight into spirochete dissemination in vivo. PMID:18566656

  4. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    NASA Astrophysics Data System (ADS)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining an active shape model (ASM) with optimal graph search, the latter being used to achieve landmark refinement within the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various types of information (gradient, intensity distributions, and regional properties) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  5. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    PubMed Central

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-01-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches – namely so-called wobbled splatting – to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. PMID:21782399
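
    For readers unfamiliar with DRRs: a DRR approximates the line integral of X-ray attenuation through the CT volume along each ray. The sketch below is a deliberately simplified parallel-projection version (summing attenuation along one volume axis) with hypothetical parameter names; the renderers described here use perspective ray casting and wobbled splatting on the GPU, which are not reproduced.

```python
import numpy as np

def parallel_drr(ct_volume_hu, axis=0, voxel_mm=1.0, mu_water=0.02):
    """Toy digitally rendered radiograph by parallel projection.

    Converts HU to linear attenuation, integrates along `axis`, and maps
    the line integrals to a detector-like intensity in [0, 1]."""
    mu = mu_water * (1.0 + ct_volume_hu / 1000.0)   # HU -> attenuation per mm
    mu = np.clip(mu, 0.0, None)
    line_integral = mu.sum(axis=axis) * voxel_mm    # parallel rays along `axis`
    return 1.0 - np.exp(-line_integral)
```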

  6. Incorporating system latency associated with real-time target tracking radiotherapy in the dose prediction step

    NASA Astrophysics Data System (ADS)

    Roland, Teboh; Mavroidis, Panayiotis; Shi, Chengyu; Papanikolaou, Nikos

    2010-05-01

    System latency introduces geometric errors in the course of real-time target tracking radiotherapy. This effect can be minimized, for example by the use of predictive filters, but cannot be completely avoided. In this work, we present a convolution technique that can incorporate the effect as part of the treatment planning process. The method can be applied independently or in conjunction with the predictive filters to compensate for residual latency effects. The implementation was performed on TrackBeam (Initia Ltd, Israel), a prototype real-time target tracking system assembled and evaluated at our Cancer Institute. For the experimental system settings examined, a Gaussian distribution attributable to the TrackBeam latency was derived with σ = 3.7 mm. The TrackBeam latency, expressed as an average response time, was deduced to be 172 ms. Phantom investigations were further performed to verify the convolution technique. In addition, patient studies involving 4DCT volumes of previously treated lung cancer patients were performed to incorporate the latency effect in the dose prediction step. This also enabled us to effectively quantify the dosimetric and radiobiological impact of the TrackBeam and other higher latency effects on the clinical outcome of a real-time target tracking delivery.
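
    The convolution idea can be made concrete with a one-dimensional sketch: the planned dose profile is blurred with a Gaussian positional-error kernel whose width corresponds to the reported latency-induced spread (sigma = 3.7 mm for the settings examined). The function below uses SciPy and hypothetical argument names; it only illustrates the principle, not the TrackBeam planning integration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def latency_blurred_dose(dose_profile, voxel_mm, sigma_mm=3.7):
    """Convolve a planned 1D dose profile with the Gaussian positional-error
    kernel attributed to tracking-system latency (sigma given in mm)."""
    return gaussian_filter1d(np.asarray(dose_profile, float),
                             sigma=sigma_mm / voxel_mm, mode="nearest")
```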

  7. 3D tracking of single nanoparticles and quantum dots in living cells by out-of-focus imaging with diffraction pattern recognition

    NASA Astrophysics Data System (ADS)

    Gardini, Lucia; Capitanio, Marco; Pavone, Francesco S.

    2015-11-01

    Live cells are three-dimensional environments where biological molecules move to find their targets and accomplish their functions. However, up to now, most single molecule investigations have been limited to bi-dimensional studies owing to the complexity of 3D tracking techniques. Here, we present a novel method for three-dimensional localization of single nano-emitters based on automatic recognition of out-of-focus diffraction patterns. Our technique can be applied to track the movements of single molecules in living cells using a conventional epifluorescence microscope. We first demonstrate three-dimensional localization of fluorescent nanobeads over 4 microns depth with accuracy below 2 nm in vitro. Remarkably, we also establish three-dimensional tracking of Quantum Dots, overcoming their anisotropic emission, by adopting a ligation strategy that allows rotational freedom of the emitter combined with proper pattern recognition. We localize commercially available Quantum Dots in living cells with accuracy better than 7 nm over 2 microns depth. We validate our technique by tracking the three-dimensional movements of single protein-conjugated Quantum Dots in living cells. Moreover, we find that important localization errors can occur in off-focus imaging when improperly calibrated and we give indications to avoid them. Finally, we share a Matlab script that allows ready application of our technique by other laboratories.
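
    The essential idea, recognizing the out-of-focus diffraction pattern to recover the axial coordinate, can be illustrated by correlating the observed pattern against a pre-acquired calibration stack of templates taken at known depths. The sketch below is only a schematic of that idea with hypothetical inputs; the authors' method and their shared Matlab script implement a more elaborate pattern recognition.

```python
import numpy as np

def estimate_z(pattern, calibration_stack, z_positions_nm):
    """Pick the axial position whose calibration template best matches the
    observed out-of-focus diffraction pattern (normalized correlation)."""
    p = (pattern - pattern.mean()) / (pattern.std() + 1e-12)
    scores = []
    for template in calibration_stack:        # (n_z, H, W) templates at known z
        t = (template - template.mean()) / (template.std() + 1e-12)
        scores.append(float(np.mean(p * t)))
    return z_positions_nm[int(np.argmax(scores))]
```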

  8. 3D tracking of single nanoparticles and quantum dots in living cells by out-of-focus imaging with diffraction pattern recognition

    PubMed Central

    Gardini, Lucia; Capitanio, Marco; Pavone, Francesco S.

    2015-01-01

    Live cells are three-dimensional environments where biological molecules move to find their targets and accomplish their functions. However, up to now, most single-molecule investigations have been limited to two-dimensional studies owing to the complexity of 3D-tracking techniques. Here, we present a novel method for three-dimensional localization of single nano-emitters based on automatic recognition of out-of-focus diffraction patterns. Our technique can be applied to track the movements of single molecules in living cells using a conventional epifluorescence microscope. We first demonstrate three-dimensional localization of fluorescent nanobeads over 4 microns depth with accuracy below 2 nm in vitro. Remarkably, we also establish three-dimensional tracking of Quantum Dots, overcoming their anisotropic emission, by adopting a ligation strategy that allows rotational freedom of the emitter combined with proper pattern recognition. We localize commercially available Quantum Dots in living cells with accuracy better than 7 nm over 2 microns depth. We validate our technique by tracking the three-dimensional movements of single protein-conjugated Quantum Dots in living cells. Moreover, we find that significant localization errors can occur in off-focus imaging when the system is improperly calibrated, and we give indications on how to avoid them. Finally, we share a Matlab script that allows other laboratories to readily apply our technique. PMID:26526410

  9. 3D tracking of single nanoparticles and quantum dots in living cells by out-of-focus imaging with diffraction pattern recognition.

    PubMed

    Gardini, Lucia; Capitanio, Marco; Pavone, Francesco S

    2015-01-01

    Live cells are three-dimensional environments where biological molecules move to find their targets and accomplish their functions. However, up to now, most single-molecule investigations have been limited to two-dimensional studies owing to the complexity of 3D-tracking techniques. Here, we present a novel method for three-dimensional localization of single nano-emitters based on automatic recognition of out-of-focus diffraction patterns. Our technique can be applied to track the movements of single molecules in living cells using a conventional epifluorescence microscope. We first demonstrate three-dimensional localization of fluorescent nanobeads over 4 microns depth with accuracy below 2 nm in vitro. Remarkably, we also establish three-dimensional tracking of Quantum Dots, overcoming their anisotropic emission, by adopting a ligation strategy that allows rotational freedom of the emitter combined with proper pattern recognition. We localize commercially available Quantum Dots in living cells with accuracy better than 7 nm over 2 microns depth. We validate our technique by tracking the three-dimensional movements of single protein-conjugated Quantum Dots in living cells. Moreover, we find that significant localization errors can occur in off-focus imaging when the system is improperly calibrated, and we give indications on how to avoid them. Finally, we share a Matlab script that allows other laboratories to readily apply our technique. PMID:26526410

  10. Innovative radar products for the 3D, high-resolution and real-time monitoring of the convective activity in the airspace around airports

    NASA Astrophysics Data System (ADS)

    Tabary, P.; Bousquet, O.; Sénési, S.; Josse, P.

    2009-09-01

    Airports are expected to become critical areas in the future, given the anticipated doubling of air traffic by 2020. The increased density of aircraft in airport airspaces calls for improved systems and products to monitor potential hazards in real time and thus meet airport objectives in terms of safety and throughput. Among all meteorological hazards, convection is certainly the one with the greatest impact. We describe here some innovative radar products that have recently been developed and tested at Météo France around the Paris airports. Those products rely on the French Doppler radar network, which today consists of 24 radars, some of them polarimetric. Reflectivity and Doppler volumetric data from all 24 radar sites are concentrated in real time at the central level (Toulouse), where 3D Cartesian mosaics covering the entire French territory (i.e. an area of roughly 1,000 km by 1,000 km) are generated. The innovation with respect to previous work is that the three components of the wind are retrieved by operational combination of the radial velocities. The final product, available in real time every 15 minutes with a spatial resolution of 2.5 km horizontally and 500 m vertically, is a 3D grid giving the interpolated reflectivity and wind field (u, v and w) values. The 2.5 km resolution, arising from the fact that the retrieval is carried out every 15 minutes from radars typically spaced 150 km apart, is not sufficient for airport airspace monitoring but is valuable for en-route monitoring. Its extension to the whole of Europe is foreseen. To address the specific needs of the airport areas, a downscaling technique has been proposed to merge the above-mentioned low-resolution 3D wind and reflectivity fields with the high-resolution (5 minutes and 1 km²) 2D imagery of the Trappes radar, which covers the Paris airports. The merging approach is based on the assumption that the Vertical Profile of Reflectivity (i.e. the
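
    The retrieval of the three wind components from overlapping radial-velocity measurements can be illustrated by a least-squares solve at a single grid point, where each radar contributes the projection of the wind onto its viewing direction (the operational scheme additionally uses mass-continuity constraints and interpolation; the geometry below is a toy example).

      import numpy as np

      def retrieve_wind(unit_dirs, radial_velocities):
          """Least-squares (u, v, w) at one grid point from radial velocities
          measured by several radars; unit_dirs[i] points from radar i to the point."""
          A = np.asarray(unit_dirs)            # shape (n_radars, 3)
          b = np.asarray(radial_velocities)    # shape (n_radars,)
          wind, *_ = np.linalg.lstsq(A, b, rcond=None)
          return wind                          # (u, v, w) in m/s

      # toy check: a true wind of (10, 5, 0) m/s seen by three radars
      true_wind = np.array([10.0, 5.0, 0.0])
      dirs = np.array([[1.0, 0.0, 0.05], [0.0, 1.0, 0.05], [0.7, 0.7, 0.1]])
      dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
      print(retrieve_wind(dirs, dirs @ true_wind))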

  11. A low cost real-time motion tracking approach using webcam technology

    PubMed Central

    Krishnan, Chandramouli; Washabaugh, Edward P.; Seetharaman, Yogesh

    2014-01-01

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject’s limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. PMID:25555306

  12. A low cost real-time motion tracking approach using webcam technology.

    PubMed

    Krishnan, Chandramouli; Washabaugh, Edward P; Seetharaman, Yogesh

    2015-02-01

    Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject's limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. PMID:25555306

  13. Live imaging and quantitative analysis of gastrulation in mouse embryos using light-sheet microscopy and 3D tracking tools.

    PubMed

    Ichikawa, Takehiko; Nakazato, Kenichi; Keller, Philipp J; Kajiura-Kobayashi, Hiroko; Stelzer, Ernst H K; Mochizuki, Atsushi; Nonaka, Shigenori

    2014-03-01

    This protocol describes how to observe gastrulation in living mouse embryos by using light-sheet microscopy and computational tools to analyze the resulting image data at the single-cell level. We describe a series of techniques needed to image the embryos under physiological conditions, including how to hold mouse embryos without agarose embedding, how to transfer embryos without air exposure and how to construct environmental chambers for live imaging by digital scanned light-sheet microscopy (DSLM). Computational tools include manual and semiautomatic tracking programs that are developed for analyzing the large 4D data sets acquired with this system. Note that this protocol does not include details of how to build the light-sheet microscope itself. Time-lapse imaging ends within 12 h, with subsequent tracking analysis requiring 3-6 d. Other than some mouse-handling skills, this protocol requires no advanced skills or knowledge. Light-sheet microscopes are becoming more widely available, and thus the techniques outlined in this paper should be helpful for investigating mouse embryogenesis. PMID:24525751

  14. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in some local places where the point cloud information is not sufficiently dense. Those limits can be overcome by using deformation analysis exploiting directly the original 3D point clouds assuming some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the

  15. Real-time acquisition and tracking system with multiple Kalman filters

    NASA Astrophysics Data System (ADS)

    Beard, Gary C.; McCarter, Timothy G.; Spodeck, Walter; Fletcher, James E.

    1994-07-01

    The design of a real-time, ground-based, infrared tracking system with proven field success in tracking boost vehicles through burnout is presented with emphasis on the software design. The system was originally developed to deliver relative angular positions during boost, and thrust termination time to a sensor fusion station in real-time. Autonomous target acquisition and angle-only tracking features were developed to ensure success under stressing conditions. A unique feature of the system is the incorporation of multiple copies of a Kalman filter tracking algorithm running in parallel in order to minimize run-time. The system is capable of updating the state vector for an object at measurement rates approaching 90 Hz. This paper will address the top-level software design, details of the algorithms employed, system performance history in the field, and possible future upgrades.

  16. 3D-localization microscopy and tracking of FoF1-ATP synthases in living bacteria

    NASA Astrophysics Data System (ADS)

    Renz, Anja; Renz, Marc; Klütsch, Diana; Deckers-Hebestreit, Gabriele; Börsch, Michael

    2015-03-01

    FoF1-ATP synthases are membrane-embedded protein machines that catalyze the synthesis of adenosine triphosphate. Using photoactivation-based localization microscopy (PALM) in TIR-illumination as well as structured illumination microscopy (SIM), we explore the spatial distribution and track single FoF1-ATP synthases in living E. coli cells under physiological conditions at different temperatures. For quantitative diffusion analysis by mean-squared-displacement measurements, the limited size of the observation area in the membrane with its significant membrane curvature has to be considered. Therefore, we applied a 'sliding observation window' approach (M. Renz et al., Proc. SPIE 8225, 2012) and obtained the one-dimensional diffusion coefficient of FoF1-ATP synthase diffusing on the long axis in living E. coli cells.
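
    The mean-squared-displacement analysis behind the diffusion estimate can be sketched as follows (the 'sliding observation window' correction for the curved membrane is not reproduced): for one-dimensional diffusion, MSD(τ) ≈ 2Dτ, so D follows from a linear fit of the MSD against lag time.

      import numpy as np

      def msd_1d(positions, max_lag):
          """Mean-squared displacement of a 1D trajectory for lags 1..max_lag."""
          positions = np.asarray(positions)
          return np.array([np.mean((positions[lag:] - positions[:-lag]) ** 2)
                           for lag in range(1, max_lag + 1)])

      def diffusion_coefficient(positions, dt, max_lag=10):
          """Estimate D from a linear fit of MSD(tau) = 2*D*tau (one dimension)."""
          lags = np.arange(1, max_lag + 1) * dt
          slope = np.polyfit(lags, msd_1d(positions, max_lag), 1)[0]
          return slope / 2.0

      # toy trajectory: 1D Brownian motion with D = 0.05 um^2/s sampled at 20 Hz
      dt, D_true = 0.05, 0.05
      steps = np.random.normal(0.0, np.sqrt(2 * D_true * dt), 2000)
      print(diffusion_coefficient(np.cumsum(steps), dt))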

  17. Development of the 3-D Track Imager for Medium and High-Energy Gamma-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley D.

    2006-01-01

    The Advanced Compton Telescope (ACT) and Advanced Pair Telescope (APT) are envisioned as the next medium-energy (0.3-50 MeV) and high-energy (30 MeV to greater than 100 GeV) gamma-ray missions. These missions will address many research focus areas of the Structure and Evolution of the Universe Roadmap. These areas include: element formation; matter, energy, and magnetic field interactions in galaxies; AGN and GRB emission; and the behavior of matter in the extreme environments of black holes and pulsars. Achieving these science goals requires substantial increases in telescope sensitivity and angular resolution. This talk will discuss how these goals can be met with the three-dimensional track imager (3-DTI), a large-volume, low-density time projection chamber with two-dimensional micro-well detector readout, and report on our development of a 10 cm x 10 cm x 30 cm prototype instrument.

  18. Real-Time Visual Tracking through Fusion Features.

    PubMed

    Ruan, Yang; Wei, Zhenzhong

    2016-01-01

    Due to their high speed, correlation filters for object tracking have begun to receive increasing attention. Traditional object trackers based on correlation filters typically use a single type of feature. In this paper, we attempt to integrate multiple feature types to improve the performance, and we propose a new DD-HOG fusion feature that consists of discriminative descriptors (DDs) and histograms of oriented gradients (HOG). However, fusion features as multi-vector descriptors cannot be directly used in prior correlation filters. To overcome this difficulty, we propose a multi-vector correlation filter (MVCF) that can directly convolve with a multi-vector descriptor to obtain a single-channel response that indicates the location of an object. Experiments on the CVPR2013 tracking benchmark with the evaluation of state-of-the-art trackers show the effectiveness and speed of the proposed method. Moreover, we show that our MVCF tracker, which uses the DD-HOG descriptor, outperforms the structure-preserving object tracker (SPOT) in multi-object tracking because of its high speed and ability to handle heavy occlusion. PMID:27347951
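
    The Fourier-domain core of a correlation-filter tracker, extended to several feature channels whose responses are summed into one single-channel map, can be sketched as below (a simplified MOSSE-style formulation, not the authors' MVCF; extraction of the DD and HOG feature channels is assumed to be done elsewhere).

      import numpy as np

      def train_filters(template_channels, sigma=2.0, lam=1e-2):
          """Learn one correlation filter per feature channel against a Gaussian target."""
          h, w = template_channels[0].shape
          ys, xs = np.mgrid[0:h, 0:w]
          g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
          G = np.fft.fft2(g)
          filters = []
          for ch in template_channels:
              F = np.fft.fft2(ch)
              filters.append(G * np.conj(F) / (F * np.conj(F) + lam))
          return filters

      def response(filters, search_channels):
          """Sum the per-channel correlation responses into one single-channel map;
          the argmax of the map gives the estimated object location."""
          resp = np.zeros(search_channels[0].shape, dtype=float)
          for H, ch in zip(filters, search_channels):
              resp += np.real(np.fft.ifft2(H * np.fft.fft2(ch)))
          return resp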

  19. Real-Time Visual Tracking through Fusion Features

    PubMed Central

    Ruan, Yang; Wei, Zhenzhong

    2016-01-01

    Due to their high speed, correlation filters for object tracking have begun to receive increasing attention. Traditional object trackers based on correlation filters typically use a single type of feature. In this paper, we attempt to integrate multiple feature types to improve the performance, and we propose a new DD-HOG fusion feature that consists of discriminative descriptors (DDs) and histograms of oriented gradients (HOG). However, fusion features as multi-vector descriptors cannot be directly used in prior correlation filters. To overcome this difficulty, we propose a multi-vector correlation filter (MVCF) that can directly convolve with a multi-vector descriptor to obtain a single-channel response that indicates the location of an object. Experiments on the CVPR2013 tracking benchmark with the evaluation of state-of-the-art trackers show the effectiveness and speed of the proposed method. Moreover, we show that our MVCF tracker, which uses the DD-HOG descriptor, outperforms the structure-preserving object tracker (SPOT) in multi-object tracking because of its high speed and ability to handle heavy occlusion. PMID:27347951

  20. Experimental investigation of a general real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring

    NASA Astrophysics Data System (ADS)

    Cho, Byungchul; Poulsen, Per; Ruan, Dan; Sawant, Amit; Keall, Paul J.

    2012-11-01

    The goal of this work was to experimentally quantify the geometric accuracy of a novel real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring for clinically realistic arc and static field treatment delivery and target motion conditions. A general method for real-time target localization using kV imaging and respiratory monitoring was developed. Each dimension of internal target motion T(x, y, z; t) was estimated from the external respiratory signal R(t) through the correlation between R(ti) and the projected marker positions p(xp, yp; ti) on kV images by a state-augmented linear model: T(x, y, z; t) = aR(t) + bR(t - τ) + c. The model parameters, a, b, c, were determined by minimizing the squared fitting error ∑‖p(xp, yp; ti) - P(θi) · (aR(ti) + bR(ti - τ) + c)‖2 with the projection operator P(θi). The model parameters were first initialized based on acquired kV arc images prior to MV beam delivery. This method was implemented on a trilogy linear accelerator consisting of an OBI x-ray imager (operating at 1 Hz) and real-time position monitoring (RPM) system (30 Hz). Arc and static field plans were delivered to a moving phantom programmed with measured lung tumour motion from ten patients. During delivery, the localization method determined the target position and the beam was adjusted in real time via dynamic multileaf collimator (DMLC) adaptation. The beam-target alignment error was quantified by segmenting the beam aperture and a phantom-embedded fiducial marker on MV images and analysing their relative position. With the localization method, the root-mean-squared errors of the ten lung tumour traces ranged from 0.7-1.3 mm and 0.8-1.4 mm during the single arc and five-field static beam delivery, respectively. Without the localization method, these errors ranged from 3.1-7.3 mm. In summary, a general method for real-time target localization using kV imaging and respiratory monitoring has been
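
    In the paper the model parameters are found by minimizing the reprojection error against the projected marker positions; the sketch below assumes instead that internal target positions have already been triangulated from the kV images, which reduces the parameter estimation to an ordinary linear least-squares fit per coordinate (a simplification, not the published optimization).

      import numpy as np

      def fit_correlation_model(R, R_delayed, T):
          """Least-squares fit of T(t) = a*R(t) + b*R(t - tau) + c for one coordinate.
          R, R_delayed and T are samples of the respiratory signal, its delayed copy,
          and the internal target position along that coordinate."""
          A = np.column_stack([R, R_delayed, np.ones_like(R)])
          (a, b, c), *_ = np.linalg.lstsq(A, T, rcond=None)
          return a, b, c

      def estimate_target(a, b, c, R_now, R_past):
          """Real-time estimate of the internal position from the external signal."""
          return a * R_now + b * R_past + c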

  1. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    PubMed

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-01

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays. PMID:25836082

  2. Three-dimensional particle tracking method using FPGA-based real-time image processing and four-view image splitter

    NASA Astrophysics Data System (ADS)

    Kreizer, Mark; Liberzon, Alex

    2011-03-01

    We present a cost-effective solution for a three-dimensional particle tracking velocimetry (3D-PTV) system based on a real-time image processing method (Kreizer et al. Exp Fluids 48:105-110, 2010) and a four-view image splitter. The image processing algorithm, based on an intensity threshold and intensity gradients estimated using a fixed-size Sobel kernel, is implemented on a field-programmable gate array integrated into the camera electronics. It enables extracting the positions of tracked objects, such as tracers or large particles, in real time. The second major component of this system is a four-view split-screen device that provides four views of the observation volume from different angles. An open-source ray-tracing software package allows for a customized optical setup for the given experimental settings of working distances and camera parameters. The specific design enables tracking in larger observation volumes than the designs published to date. The present cost-effective solution is complemented with open-source particle tracking software that receives raw data acquired by the real-time image processing system and returns trajectories of the identified particles. The combination of these components simplifies the 3D-PTV technique by reducing its size and increasing recording speed and storage capabilities. The system is capable of tracking a multitude of particles at high speed and streaming the data over the computer network. The system can provide a solution for remotely controlled tracking experiments, such as in microgravity, underwater, or in applications with harsh experimental conditions.
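
    The on-camera detection step, an intensity threshold combined with fixed-kernel Sobel gradients, can be mimicked offline in a few lines; the FPGA evaluates equivalent per-pixel operations as the frame is streamed out. The threshold values below are hypothetical.

      import numpy as np
      from scipy import ndimage

      def detect_particles(frame, intensity_thresh=120, gradient_thresh=50):
          """Label bright regions above the intensity threshold, reject regions whose
          Sobel gradient magnitude is weak, and return the remaining centroids."""
          frame = frame.astype(float)
          grad = np.hypot(ndimage.sobel(frame, axis=1), ndimage.sobel(frame, axis=0))
          labels, n = ndimage.label(frame > intensity_thresh)
          centroids = []
          for i in range(1, n + 1):
              region = labels == i
              if grad[region].max() > gradient_thresh:   # discard weak, defocused blobs
                  centroids.append(ndimage.center_of_mass(frame, labels, i))
          return centroids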

  3. Three‐Dimensional Echocardiography and 2D‐3D Speckle‐Tracking Imaging in Chronic Pulmonary Hypertension: Diagnostic Accuracy in Detecting Hemodynamic Signs of Right Ventricular (RV) Failure

    PubMed Central

    Vitarelli, Antonio; Mangieri, Enrico; Terzano, Claudio; Gaudio, Carlo; Salsano, Felice; Rosato, Edoardo; Capotosto, Lidia; D'Orazio, Simona; Azzano, Alessia; Truscelli, Giovanni; Cocco, Nino; Ashurov, Rasul

    2015-01-01

    Background Our aim was to compare three‐dimensional (3D) and 2D and 3D speckle‐tracking (2D‐STE, 3D‐STE) echocardiographic parameters with conventional right ventricular (RV) indexes in patients with chronic pulmonary hypertension (PH), and investigate whether these techniques could result in better correlation with hemodynamic variables indicative of heart failure. Methods and Results Seventy‐three adult patients (mean age, 53±13 years; 44% male) with chronic PH of different etiologies were studied by echocardiography and cardiac catheterization (25 precapillary PH from pulmonary arterial hypertension, 23 obstructive pulmonary heart disease, and 23 postcapillary PH from mitral regurgitation). Thirty healthy subjects (mean age, 54±15 years; 43% male) served as controls. Standard 2D measurements (RV–fractional area change–tricuspid annular plane systolic excursion) and mitral and tricuspid tissue Doppler annular velocities were obtained. RV 3D volumes and global and regional ejection fraction (3D‐RVEF) were determined. RV strains were calculated by 2D‐STE and 3D‐STE. RV 3D global‐free‐wall longitudinal strain (3DGFW‐RVLS), 2D global‐free‐wall longitudinal strain (GFW‐RVLS), apical‐free‐wall longitudinal strain, basal‐free‐wall longitudinal strain, and 3D‐RVEF were lower in patients with precapillary PH (P<0.0001) and postcapillary PH (P<0.01) compared to controls. 3DGFW‐RVLS (hazard ratio 4.6, 95% CI 2.79 to 8.38, P=0.004) and 3D‐RVEF (hazard ratio 5.3, 95% CI 2.85 to 9.89, P=0.002) were independent predictors of mortality. Receiver operating characteristic curves showed that the thresholds offering an adequate compromise between sensitivity and specificity for detecting hemodynamic signs of RV failure were 39% for 3D‐RVEF (AUC 0.89), −17% for 3DGFW‐RVLS (AUC 0.88), −18% for GFW‐RVLS (AUC 0.88), −16% for apical‐free‐wall longitudinal strain (AUC 0.85), 16 mm for tricuspid annular plane systolic

  4. Real-time object tracking via online discriminative feature selection.

    PubMed

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2013-12-01

    Most tracking-by-detection algorithms train discriminative classifiers to separate target objects from their surrounding background. In this setting, noisy samples are likely to be included when they are not properly sampled, thereby causing visual drift. The multiple instance learning (MIL) paradigm has been recently applied to alleviate this problem. However, important prior information of instance labels and the most correct positive instance (i.e., the tracking result in the current frame) can be exploited using a novel formulation much simpler than an MIL approach. In this paper, we show that integrating such prior information into a supervised learning algorithm can handle visual drift more effectively and efficiently than the existing MIL tracker. We present an online discriminative feature selection algorithm that optimizes the objective function in the steepest ascent direction with respect to the positive samples while in the steepest descent direction with respect to the negative ones. Therefore, the trained classifier directly couples its score with the importance of samples, leading to a more robust and efficient tracker. Numerous experimental evaluations with state-of-the-art algorithms on challenging sequences demonstrate the merits of the proposed algorithm. PMID:23955750

  5. Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach

    PubMed Central

    Tian, Yuan; Guan, Tao; Wang, Cheng

    2010-01-01

    To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278

  6. Real-time tracking of deformable objects based on combined matching-and-tracking

    NASA Astrophysics Data System (ADS)

    Yan, Junhua; Wang, Zhigang; Wang, Shunfei

    2016-03-01

    Visual tracking is very challenging due to several sources of variation, such as partial occlusion, deformation, scale variation, rotation, and background clutter. A model-free tracking method is presented that fuses accelerated features computed with fast explicit diffusion in nonlinear scale spaces (AKAZE) with KLT features. First, matching keypoints are generated by finding corresponding keypoints between consecutive frames and the object template; then tracking keypoints are generated using the forward-backward flow tracking method; finally, credible keypoints are obtained by the AKAZE-KLT tracking (AKT) algorithm. To avoid the instability of a statistical method, the median method is adopted to compute the object's location, scale, and rotation in each frame. The experimental results show that the AKT algorithm is highly robust and achieves accurate tracking, especially under partial occlusion, scale variation, rotation, and deformation. The tracker shows high robustness and accuracy on a variety of datasets, and the average frame rate reaches 78 fps, demonstrating good real-time performance.
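
    The forward-backward check used to retain only credible tracking keypoints can be sketched with OpenCV's pyramidal Lucas-Kanade tracker: points are tracked from one frame to the next and back again, and points whose round trip does not return close to the starting location are discarded (the AKAZE matching branch and the median-based location/scale/rotation update are omitted here).

      import cv2
      import numpy as np

      def track_keypoints_fb(prev_gray, next_gray, points, fb_thresh=1.0):
          """Forward-backward pyramidal LK tracking; keep points whose backward
          track returns within fb_thresh pixels of the original location."""
          pts = points.reshape(-1, 1, 2).astype(np.float32)
          fwd, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
          bwd, st2, _ = cv2.calcOpticalFlowPyrLK(next_gray, prev_gray, fwd, None)
          fb_err = np.linalg.norm(pts - bwd, axis=2).ravel()
          good = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_err < fb_thresh)
          return fwd.reshape(-1, 2)[good], good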

  7. Real-time tracking of respiratory-induced tumor motion by dose-rate regulation

    NASA Astrophysics Data System (ADS)

    Han-Oh, Yeonju Sarah

    We have developed a novel real-time tumor-tracking technology, called Dose-Rate-Regulated Tracking (DRRT), to compensate for tumor motion caused by breathing. Unlike other previously proposed tumor-tracking methods, this new method uses a preprogrammed dynamic multileaf collimator (MLC) sequence in combination with real-time dose-rate control. This new scheme circumvents the technical challenge in MLC-based tumor tracking, namely controlling the MLC motion in real time based on the tumor motion detected in real time. The preprogrammed MLC sequence describes the movement of the tumor as a function of breathing phase, amplitude, or tidal volume. The irregularity of tumor motion during treatment is handled by real-time regulation of the dose rate, which effectively speeds up or slows down the delivery of radiation as needed. This method is based on the fact that all of the parameters in dynamic radiation delivery, including MLC motion, are enslaved to the cumulative dose, which, in turn, can be accelerated or decelerated by varying the dose rate. Because commercially available MLC systems do not allow the MLC delivery sequence to be modified in real time based on the patient's breathing signal, previously proposed tumor-tracking techniques using an MLC cannot be readily implemented in the clinic today. By using a preprogrammed MLC sequence to handle the required motion, the real-time control task is greatly simplified. We have developed and tested the preprogrammed MLC sequence and the dose-rate regulation algorithm using lung-cancer patients' breathing signals. It has been shown that DRRT can track the tumor with an accuracy of less than 2 mm for a DRRT system latency of less than 0.35 s. We have also evaluated the usefulness of guided breathing for DRRT. Since DRRT by its very nature can compensate for breathing-period changes, guided breathing was shown to be unnecessary for real-time tracking when using DRRT. Finally, DRRT uses the existing dose-rate control
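
    A heavily simplified sketch of the regulation idea (not the authors' algorithm; the controller gain, rate limits and phase encoding are hypothetical): because the preprogrammed MLC sequence is indexed by cumulative dose, the dose rate can be raised or lowered so that the breathing phase encoded at the current cumulative monitor units follows the patient's measured phase.

      def drrt_dose_rate(planned_phase_at_mu, delivered_mu, measured_phase,
                         nominal_rate=400.0, gain=2.0, min_rate=0.0, max_rate=600.0):
          """Adjust the dose rate (MU/min) so that the breathing phase encoded in the
          preprogrammed MLC sequence at the delivered cumulative MU catches up with
          the patient's measured breathing phase (phases are cyclic in [0, 1))."""
          error = (measured_phase - planned_phase_at_mu(delivered_mu) + 0.5) % 1.0 - 0.5
          rate = nominal_rate * (1.0 + gain * error)
          return min(max(rate, min_rate), max_rate)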

  8. Target localization and real-time tracking using the Calypso 4D localization system in patients with localized prostate cancer

    SciTech Connect

    Willoughby, Twyla R.; Kupelian, Patrick A. . E-mail: patrick.kupelian@orhs.org; Pouliot, Jean; Shinohara, Katsuto; Aubin, Michelle; Roach, Mack; Skrumeda, Lisa L.; Balter, James M.; Litzenberg, Dale W.; Hadley, Scott W.; Wei, John T.; Sandler, Howard M.

    2006-06-01

    Purpose: The Calypso 4D Localization System is being developed to provide accurate, precise, objective, and continuous target localization during radiotherapy. This study involves the first human use of the system, to evaluate the localization accuracy of this technique compared with radiographic localization and to assess its ability to obtain real-time prostate-motion information. Methods and Materials: Three transponders were implanted in each of 20 patients. Eleven of the 20 patients participated in a study arm that compared radiographically triangulated transponder locations to electromagnetically recorded transponder locations. Transponders were tracked for 8-min periods. Results: The implantations were all successful, with no major complications. Intertransponder distances were largely stable. Comparison of patient localization based on transponder locations from the Calypso system with the radiographic transponder localization showed an average (±SD) 3D difference of 1.5 ± 0.9 mm. During the 8-min tracking periods, 2 of the 11 patients showed significant organ motion (>1 cm), with some motion lasting longer than 1 min. Conclusion: Calypso transponders can be used as magnetic intraprostatic fiducials. Clinical evaluation of this novel 4D nonionizing electromagnetic localization system with transponders indicates localization accuracy to isocenter (within 2 mm) comparable to that of X-ray localization.

  9. Real-time tracking of superparamagnetic nanoparticle self-assembly.

    PubMed

    Siffalovic, P; Majkova, E; Chitu, L; Jergel, M; Luby, S; Capek, I; Satka, A; Timmann, A; Roth, S V

    2008-12-01

    The spontaneous self-assembly process of superparamagnetic nanoparticles in a fast-drying colloidal drop is observed in real time. The grazing-incidence small-angle X-ray scattering (GISAXS) technique is employed for an in situ tracking of the reciprocal space, with a 3 ms delay time between subsequent frames delivered by a new generation of X-ray cameras. A focused synchrotron beam and sophisticated sample oscillations make it possible to relate the dynamic reciprocal to direct space features and to localize the self-assembly. In particular, no nanoparticle ordering is found inside the evaporating drop and near-surface region down to a drop thickness of 90 microm. Scanning through the shrinking drop-contact line indicates the start of self-assembly near the drop three-phase interface, in accord with theoretical predictions. The results obtained have direct implications for establishing the self-assembly process as a routine technological step in the preparation of new nanostructures. PMID:19003821

  10. Study on Sensor Design Technique for Real-Time Robotic Welding Tracking System

    NASA Astrophysics Data System (ADS)

    Liu, C. J.; Li, Y. B.; Zhu, J. G.; Ye, S. H.

    2006-10-01

    Based on visual measurement techniques, the real-time robotic welding tracking system achieves real-time adjustment of robotic welding according to the position and shape changes of a workpiece. In system design, the sensor design technique is so important that its performance directly affects the precision and stability of the tracking system. Using active visual measurement technology, a camera unit for real-time sampling is built with multiple-strip structured light and a high-performance CMOS image sensor with 1.3 million pixels; to realize real-time data processing and transmission, an image processing unit is built with an FPGA and a DSP. Experiments show that the precision of this sensor reaches 0.3 mm and the baud rate reaches 10 Mbps, which effectively improves robotic welding quality. With the development of advanced manufacturing technology, automatic, flexible and intelligent welding manufacture has become an inexorable trend. With the advantages of interchangeability and reliability, robotic welding can boost productivity, improve working conditions, stabilize and guarantee weld quality, and realize welding automation for short-run products [1]. At present, robotic welding has already become the application trend of automatic welding technology. Traditional welding robots are play-back robots, which cannot adapt to changes in the environment or weld distortion. Especially in increasingly widespread arc welding, the deficiencies and limitations of play-back welding become more prominent because of changing welding conditions. Eliminating or reducing uncertain influences on weld quality, such as changing welding conditions, has become one of the key issues influencing the development of modern robotic welding technology [2]. Based on visual measurement principles, this paper adopts active visual measurement technology, combined with high-speed image processing and transmission technology, to construct a tracking sensor, to realize

  11. Real-time tracking mitochondrial dynamic remodeling with two-photon phosphorescent iridium (III) complexes.

    PubMed

    Huang, Huaiyi; Yang, Liang; Zhang, Pingyu; Qiu, Kangqiang; Huang, Juanjuan; Chen, Yu; Diao, JiaJie; Liu, Jiankang; Ji, Liangnian; Long, Jiangang; Chao, Hui

    2016-03-01

    Mitochondrial fission and fusion control the shape, size, number, and function of mitochondria in the cells of organisms from yeast to mammals. The disruption of mitochondrial fission and fusion is involved in severe human diseases such as Parkinson's disease, Alzheimer's disease, metabolic diseases, and cancers. Agents that can track mitochondrial dynamics in real time are therefore of great importance. However, the short excitation wavelengths and rapid photo-bleaching of commercial mitochondrial dyes render them unsuitable for tracking mitochondrial dynamics. Thus, mitochondria-targeting agents that exhibit superior photo-stability under continual light irradiation, deep tissue penetration, and intrinsically high three-dimensional resolution are urgently needed. Two-photon-excited compounds employ low-energy near-infrared light and have emerged as a non-invasive tool for real-time cell imaging. Here, cyclometalated Ir(III) complexes (Ir1-Ir5) are demonstrated as one- and two-photon phosphorescent probes for the real-time imaging and tracking of mitochondrial fission and fusion. The results indicate that Ir2 is well suited for two-photon phosphorescent tracking of mitochondrial fission and fusion in living cells and in Caenorhabditis elegans (C. elegans). This study provides a practical application for mitochondria-targeting two-photon phosphorescent Ir(III) complexes. PMID:26796044

  12. Real-time robust target tracking in videos via graph-cuts

    NASA Astrophysics Data System (ADS)

    Fishbain, Barak; Hochbaum, Dorit S.; Yang, Yan T.

    2013-02-01

    Video tracking is a fundamental problem in computer vision with many applications. The goal of video tracking is to isolate a target object from its background across a sequence of frames. Tracking is inherently a three dimensional problem in that it incorporates the time dimension. As such, the computational efficiency of video segmentation is a major challenge. In this paper we present a generic and robust graph-theory-based tracking scheme in videos. Unlike previous graph-based tracking methods, the suggested approach treats motion as a pixel's property (like color or position) rather than as consistency constraints (i.e., the location of the object in the current frame is constrained to appear around its location in the previous frame shifted by the estimated motion) and solves the tracking problem optimally (i.e., neither heuristics nor approximations are applied). The suggested scheme is so robust that it allows for incorporating the computationally cheaper MPEG-4 motion estimation schemes. Although block matching techniques generate noisy and coarse motion fields, their use allows faster computation times as broad variety of off-the-shelf software and hardware components that specialize in performing this task are available. The evaluation of the method on standard and non-standard benchmark videos shows that the suggested tracking algorithm can support a fast and accurate video tracking, thus making it amenable to real-time applications.

  13. Verification of the performance accuracy of a real-time skin-dose tracking system for interventional fluoroscopic procedures

    PubMed Central

    Bednarek, Daniel R.; Barbarits, Jeffery; Rana, Vijay K.; Nagaraja, Srikanta P.; Josan, Madhur S.; Rudin, Stephen

    2011-01-01

    A tracking system has been developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. The dose tracking system (DTS) calculates the radiation dose rate to the patient’s skin using the exposure technique parameters and exposure geometry obtained from the x-ray imaging system digital network (Toshiba Infinix) and presents the cumulative results in a color mapping on a 3D graphic of the patient. We performed a number of tests to verify the accuracy of the dose representation of this system. These tests included comparison of system–calculated dose-rate values with ionization-chamber (6 cc PTW) measured values with change in kVp, beam filter, field size, source-to-skin distance and beam angulation. To simulate a cardiac catheterization procedure, the ionization chamber was also placed at various positions on an Alderson Rando torso phantom and the dose agreement compared for a range of projection angles with the heart at isocenter. To assess the accuracy of the dose distribution representation, Gafchromic film (XR-RV3, ISP) was exposed with the beam at different locations. The DTS and film distributions were compared and excellent visual agreement was obtained within the cm-sized surface elements used for the patient graphic. The dose (rate) values agreed within about 10% for the range of variables tested. Correction factors could be applied to obtain even closer agreement since the variable values are known in real-time. The DTS provides skin-dose values and dose mapping with sufficient accuracy for use in monitoring diagnostic and interventional x-ray procedures. PMID:21731400
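
    A rough sketch of how a per-pulse skin-dose estimate can be assembled from technique parameters and geometry (all factor names and numbers are hypothetical; the actual DTS uses calibrated output tables and additional corrections): a calibrated tube output is scaled by the pulse mAs, the inverse-square distance to the skin, and empirical field-size and table-attenuation factors.

      def skin_dose_per_pulse_mGy(output_mGy_per_mAs, mA, pulse_s, ssd_cm,
                                  ref_dist_cm=100.0, field_factor=1.0, table_factor=1.0):
          """Hypothetical per-pulse skin dose: calibrated tube output (mGy/mAs at the
          reference distance) scaled by the pulse mAs, the inverse-square law to the
          actual source-to-skin distance, and empirical field-size/table factors."""
          return (output_mGy_per_mAs * mA * pulse_s
                  * (ref_dist_cm / ssd_cm) ** 2 * field_factor * table_factor)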

  14. Verification of the performance accuracy of a real-time skin-dose tracking system for interventional fluoroscopic procedures

    NASA Astrophysics Data System (ADS)

    Bednarek, Daniel R.; Barbarits, Jeffery; Rana, Vijay K.; Nagaraja, Srikanta P.; Josan, Madhur S.; Rudin, Stephen

    2011-03-01

    A tracking system has been developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. The dose tracking system (DTS) calculates the radiation dose rate to the patient's skin using the exposure technique parameters and exposure geometry obtained from the x-ray imaging system digital network (Toshiba Infinix) and presents the cumulative results in a color mapping on a 3D graphic of the patient. We performed a number of tests to verify the accuracy of the dose representation of this system. These tests included comparison of system-calculated dose-rate values with ionization-chamber (6 cc PTW) measured values with change in kVp, beam filter, field size, source-to-skin distance and beam angulation. To simulate a cardiac catheterization procedure, the ionization chamber was also placed at various positions on an Alderson Rando torso phantom and the dose agreement compared for a range of projection angles with the heart at isocenter. To assess the accuracy of the dose distribution representation, Gafchromic film (XR-RV3, ISP) was exposed with the beam at different locations. The DTS and film distributions were compared and excellent visual agreement was obtained within the cm-sized surface elements used for the patient graphic. The dose (rate) values agreed within about 10% for the range of variables tested. Correction factors could be applied to obtain even closer agreement since the variable values are known in real-time. The DTS provides skin-dose values and dose mapping with sufficient accuracy for use in monitoring diagnostic and interventional x-ray procedures.

  15. Verification of the performance accuracy of a real-time skin-dose tracking system for interventional fluoroscopic procedures.

    PubMed

    Bednarek, Daniel R; Barbarits, Jeffery; Rana, Vijay K; Nagaraja, Srikanta P; Josan, Madhur S; Rudin, Stephen

    2011-02-13

    A tracking system has been developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. The dose tracking system (DTS) calculates the radiation dose rate to the patient's skin using the exposure technique parameters and exposure geometry obtained from the x-ray imaging system digital network (Toshiba Infinix) and presents the cumulative results in a color mapping on a 3D graphic of the patient. We performed a number of tests to verify the accuracy of the dose representation of this system. These tests included comparison of system-calculated dose-rate values with ionization-chamber (6 cc PTW) measured values with change in kVp, beam filter, field size, source-to-skin distance and beam angulation. To simulate a cardiac catheterization procedure, the ionization chamber was also placed at various positions on an Alderson Rando torso phantom and the dose agreement compared for a range of projection angles with the heart at isocenter. To assess the accuracy of the dose distribution representation, Gafchromic film (XR-RV3, ISP) was exposed with the beam at different locations. The DTS and film distributions were compared and excellent visual agreement was obtained within the cm-sized surface elements used for the patient graphic. The dose (rate) values agreed within about 10% for the range of variables tested. Correction factors could be applied to obtain even closer agreement since the variable values are known in real-time. The DTS provides skin-dose values and dose mapping with sufficient accuracy for use in monitoring diagnostic and interventional x-ray procedures. PMID:21731400

  16. A Target Model Construction Algorithm for Robust Real-Time Mean-Shift Tracking

    PubMed Central

    Choi, Yoo-Joo; Kim, Yong-Goo

    2014-01-01

    Mean-shift tracking has gained increasing interest, aided by the feasibility of real-time and reliable tracker implementations. In order to reduce background clutter interference with mean-shift object tracking, this paper proposes a novel indicator function generation method. The proposed method takes advantage of two elements of a priori knowledge that are inherent to the kernel support used for initializing a target model. Based on the assured background labels, a gradient-based label propagation is performed, resulting in a number of objects differentiated from the background. The proposed region growing scheme then picks the largest target object near the center of the kernel support. The grown object region constitutes the proposed indicator function, which allows an exact target model to be constructed for robust mean-shift tracking. Simulation results demonstrate that the proposed exact target model significantly enhances both the robustness and the accuracy of mean-shift object tracking. PMID:25372619
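
    For comparison, a plain OpenCV mean-shift loop is sketched below, with the target model built from a hue histogram of the initial window; the paper's contribution, the indicator function obtained from background-label propagation and region growing, would replace that simple histogram model.

      import cv2
      import numpy as np

      def run_meanshift(frames, init_window):
          """Track init_window = (x, y, w, h) through a list of BGR frames
          using a hue-histogram target model and cv2.meanShift."""
          x, y, w, h = init_window
          hsv_roi = cv2.cvtColor(frames[0][y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
          hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
          cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
          term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
          window, track = init_window, []
          for frame in frames[1:]:
              hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
              backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
              _, window = cv2.meanShift(backproj, window, term)
              track.append(window)
          return track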

  17. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network.

    PubMed

    Bukhari, W; Hong, S-M

    2016-03-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient's breathing cycle. The algorithm, named EKF-GPRN(+) , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN(+) prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN(+) implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN(+) . The experimental results show that the EKF-GPRN(+) algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN(+) algorithm can further reduce the prediction error by employing the gating
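
    The per-coordinate prediction step can be illustrated with a plain linear Kalman filter under a constant-velocity model, extrapolated by the lookahead length (a simplification: the published algorithm uses an extended Kalman filter with a different state model, plus the GPRN correction and gating, none of which are reproduced here; the sampling interval and noise levels are hypothetical).

      import numpy as np

      def kalman_predict_trace(z, dt=0.026, lookahead=0.192, q=1e-2, r=1e-2):
          """Filter one coordinate of a respiratory trace with a constant-velocity
          Kalman filter and extrapolate each state by `lookahead` seconds."""
          F = np.array([[1.0, dt], [0.0, 1.0]])
          H = np.array([[1.0, 0.0]])
          Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]])
          R = np.array([[r]])
          x, P = np.array([z[0], 0.0]), np.eye(2)
          preds = []
          for zi in z:
              # predict
              x, P = F @ x, F @ P @ F.T + Q
              # update with the new measurement
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)
              x = x + K @ (np.array([zi]) - H @ x)
              P = (np.eye(2) - K @ H) @ P
              # extrapolate the position by the lookahead length
              preds.append(x[0] + lookahead * x[1])
          return np.array(preds)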

  18. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    NASA Astrophysics Data System (ADS)

    Bukhari, W.; Hong, S.-M.

    2016-03-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+ , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+ . The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function, albeit

  19. SU-E-J-240: Development of a Novel 4D MRI Sequence for Real-Time Liver Tumor Tracking During Radiotherapy

    SciTech Connect

    Zhuang, L; Burmeister, J; Ye, Y

    2015-06-15

    Purpose: To develop a novel 4D MRI technique that is feasible for real-time liver tumor tracking during radiotherapy. Methods: A volunteer underwent an abdominal 2D fast EPI coronal scan on a 3.0T MRI scanner (Siemens Inc., Germany). An optimal set of parameters was determined based on image quality and scan time. A total of 23 slices were scanned to cover the whole liver in the test scan. For each scan position, the 2D images were retrospectively sorted into multiple phases based on the breathing signal extracted from the images. The 2D slices with the same phase number were then stacked to form one 3D image. Multiple phases of 3D images formed the 4D MRI sequence representing one breathing cycle. Results: The optimal set of scan parameters was: TR = 57 ms, TE = 19 ms, FOV read = 320 mm and flip angle = 30°, which resulted in a total scan time of 14 s for 200 frames (FMs) per slice and an image resolution of (2.5 mm, 2.5 mm, 5.0 mm) in the three directions. Ten phases of 3D images were generated, each of which had 23 slices. Based on our test scan, only 100 FMs were necessary for the phase sorting process, which may lower the scan time to 7 s/100 FMs/slice. For example, only 5 slices (35 s) are necessary for a 4D MRI scan to cover a liver tumor ≤ 2 cm in size, leading to the possibility of tumor trajectory tracking every 35 s during treatment. Conclusion: The novel 4D MRI technique we developed can reconstruct a 4D liver MRI sequence representing one breathing cycle (7 s/slice) without an external monitor. This technique can potentially be used for real-time liver tumor tracking during radiotherapy.
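
    The retrospective sorting step can be sketched as binning the repeated 2D frames of one slice position by their breathing phase and keeping one representative image per bin; stacking the per-slice results phase by phase yields the 4D sequence. Extraction of the breathing signal from the images themselves is not reproduced here.

      import numpy as np

      def sort_into_phases(frames, phases, n_phases=10):
          """Group the 2D frames of one slice position by respiratory phase (0..1)
          and return one representative (mean) image per phase bin."""
          frames = np.asarray(frames, dtype=float)
          bins = np.minimum((np.asarray(phases) * n_phases).astype(int), n_phases - 1)
          sorted_slices = []
          for p in range(n_phases):
              members = frames[bins == p]
              sorted_slices.append(members.mean(axis=0) if len(members) else None)
          return sorted_slices  # stacking these across slice positions gives the 4D volume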

  20. Real-time motion compensation for EM bronchoscope tracking with smooth output - ex-vivo validation

    NASA Astrophysics Data System (ADS)

    Reichl, Tobias; Gergel, Ingmar; Menzel, Manuela; Hautmann, Hubert; Wegner, Ingmar; Meinzer, Hans-Peter; Navab, Nassir

    2012-02-01

    Navigated bronchoscopy provides benefits for endoscopists and patients, but accurate tracking information is needed. We present a novel real-time approach for bronchoscope tracking combining electromagnetic (EM) tracking, airway segmentation, and a continuous model of output. We augment a previously published approach by including segmentation information in the tracking optimization instead of image similarity. Thus, the new approach is feasible in real-time. Since the true bronchoscope trajectory is continuous, the output is modeled using splines and the control points are optimized with respect to displacement from EM tracking measurements and spatial relation to segmented airways. Accuracy of the proposed method and its components is evaluated on a ventilated porcine ex-vivo lung with respect to ground truth data acquired from a human expert. We demonstrate the robustness of the output of the proposed method against added artificial noise in the input data. Smoothness in terms of inter-frame distance is shown to remain below 2 mm, even when up to 5 mm of Gaussian noise are added to the input. The approach is shown to be easily extensible to include other measures like image similarity.
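
    The continuous-output idea can be illustrated by fitting a smoothing spline to noisy EM position samples and evaluating it densely; the published method additionally optimizes the spline control points against the segmented airways, which is omitted in this sketch.

      import numpy as np
      from scipy.interpolate import splprep, splev

      def smooth_trajectory(positions, smoothing=5.0, n_out=500):
          """Fit a smoothing spline through noisy 3D EM tracking samples and
          return a densely sampled, continuous trajectory."""
          positions = np.asarray(positions)          # shape (n_samples, 3)
          tck, _ = splprep(positions.T, s=smoothing)
          u = np.linspace(0.0, 1.0, n_out)
          return np.array(splev(u, tck)).T           # shape (n_out, 3)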

  1. A low-cost test-bed for real-time landmark tracking

    NASA Astrophysics Data System (ADS)

    Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher

    2007-04-01

    A low-cost vehicle test-bed system was developed to iteratively test, refine and demonstrate navigation algorithms before attempting to transfer the algorithms to more advanced rover prototypes. The platform used here was a modified radio controlled (RC) car. A microcontroller board and onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars yard. The algorithms tracked rocks as waypoints. This generated coordinates for calculating relative motion and for visually servoing to science targets. A limitation of the current system is serial computing: each additional landmark is tracked in sequence. However, since each landmark is tracked independently, adding targets would not significantly diminish system speed if the tracking were transferred to appropriate parallel hardware.

  2. Real-time tracking of neuronal network structure using data assimilation

    NASA Astrophysics Data System (ADS)

    Hamilton, Franz; Berry, Tyrus; Peixoto, Nathalia; Sauer, Timothy

    2013-11-01

    A nonlinear data assimilation technique is applied to determine and track effective connections between ensembles of cultured spinal cord neurons measured with multielectrode arrays. The method is statistical, depending only on confidence intervals, and requiring no form of arbitrary thresholding. In addition, the method updates connection strengths sequentially, enabling real-time tracking of nonstationary networks. The ensemble Kalman filter is used with a generic spiking neuron model to estimate connection strengths as well as other system parameters to deal with model mismatch. The method is validated on noisy synthetic data from Hodgkin-Huxley model neurons before being used to find network connections in the neural culture recordings.
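
    The ensemble Kalman filter analysis step with parameter augmentation (connection strengths appended to the state) can be sketched as follows. This is a generic, hedged example rather than the paper's code; dimensions and names are illustrative.

      import numpy as np

      def enkf_update(ensemble, observation, obs_operator, obs_noise_std, rng):
          """ensemble: (n_members, n_state) augmented states (voltages + connections).
          observation: (n_obs,) measured values at the current time step."""
          n_members = ensemble.shape[0]
          predicted = np.array([obs_operator(m) for m in ensemble])  # (n_members, n_obs)
          X = ensemble - ensemble.mean(axis=0)
          Y = predicted - predicted.mean(axis=0)
          P_xy = X.T @ Y / (n_members - 1)
          P_yy = Y.T @ Y / (n_members - 1) + obs_noise_std ** 2 * np.eye(len(observation))
          K = P_xy @ np.linalg.inv(P_yy)                             # Kalman gain
          perturbed = observation + rng.normal(0.0, obs_noise_std,
                                               size=(n_members, len(observation)))
          return ensemble + (perturbed - predicted) @ K.T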

  3. A real-time multiple-cell tracking platform for dielectrophoresis (DEP)-based cellular analysis

    NASA Astrophysics Data System (ADS)

    Prasad, Brinda; Du, Shan; Badawy, Wael; Kaler, Karan V. I. S.

    2005-04-01

    There is an increasing demand from the biosciences to develop new and efficient techniques to assist in the preparation and analysis of biological samples such as cells in suspension. A dielectrophoresis (DEP)-based characterization and measurement technique for biological cells opens up a broader perspective for early diagnosis of diseases. An efficient real-time multiple-cell tracking platform coupled with DEP to capture and quantify the dynamics of cell motion and obtain cell viability information is presented. The procedure for tracking a single DEP-levitated Canola plant protoplast, using the motion-based segmentation algorithm known as the hierarchical adaptive merge split mesh-based technique (HAMSM) for cell identification, has been enhanced to identify and track multiple cells. The tracking technique relies on the deformation of a mesh topology that is generated according to the movement of the biological cells in a sequence of images, allowing simultaneous extraction of the biological cells from the images and of the associated motion characteristics. Preliminary tests were conducted with yeast cells and then applied to a cancerous cell line subjected to DEP fields. Characteristics such as cell count, velocity and size were individually extracted from the tracked results of the cell sample. Tests were limited to eight yeast cells and two cancer cells. A performance analysis to assess tracking accuracy, computational effort and processing time was also conducted. The tracking technique employed on model intact cells in DEP fields proved to be accurate, reliable and robust.
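
    Once cell centroids and areas have been tracked over a sequence of frames, the reported characteristics (count, velocity, size) can be extracted as in the following hedged sketch; this is not the HAMSM implementation, and the calibration constants are assumptions.

      import numpy as np

      def cell_statistics(centroids, areas, frame_interval_s, um_per_pixel):
          """centroids: (n_frames, n_cells, 2) tracked positions in pixels;
          areas: (n_frames, n_cells) segmented cell areas in pixels."""
          steps = np.diff(centroids, axis=0)  # displacement per frame, per cell
          speed = np.linalg.norm(steps, axis=2) * um_per_pixel / frame_interval_s
          return {
              "cell_count": centroids.shape[1],
              "mean_speed_um_per_s": speed.mean(axis=0),             # per cell
              "mean_area_um2": areas.mean(axis=0) * um_per_pixel ** 2,
          }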

  4. Vision-based real-time obstacle detection and tracking for autonomous vehicle guidance

    NASA Astrophysics Data System (ADS)

    Yang, Ming; Yu, Qian; Wang, Hong; Zhang, Bo

    2002-03-01

    The ability to detect and track obstacles is essential for safe visual guidance of autonomous vehicles, especially in urban environments. In this paper, we first review different plane projective transformation (PPT) based obstacle detection approaches under the planar ground assumption. Then, we give a simple proof of this approach with relative affine, a unified framework that includes the Euclidean, projective and affine frameworks by generalization and specialization. Next, we present a real-time hybrid obstacle detection method, which combines the PPT-based method with a region-segmentation-based method to provide more accurate locations of obstacles. Finally, with the vehicle's position information, a Kalman filter is applied to track obstacles from frame to frame. This method has been tested on THMR-V (Tsinghua Mobile Robot V). Through various experiments we successfully demonstrate its real-time performance, high accuracy, and high robustness.
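
    The planar-ground idea behind PPT-based detection can be sketched briefly: pixels on the road plane align when the previous frame is warped with the ground homography, so large residuals flag potential obstacles. The example below is an assumption about one common realization (the homography and thresholds are placeholders), not the paper's exact method.

      import cv2
      import numpy as np

      def ppt_obstacle_mask(prev_gray, curr_gray, H_ground, diff_thresh=30):
          h, w = curr_gray.shape
          warped_prev = cv2.warpPerspective(prev_gray, H_ground, (w, h))
          residual = cv2.absdiff(curr_gray, warped_prev)
          _, mask = cv2.threshold(residual, diff_thresh, 255, cv2.THRESH_BINARY)
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
          return mask  # nonzero pixels violate the planar-ground assumption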

  5. A Kinect-based real-time compressive tracking prototype system for amphibious spherical robots.

    PubMed

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-01-01

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system. PMID:25856331
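
    The second-order (constant-acceleration) Kalman prediction used to propose candidate patches can be written compactly; the sketch below is illustrative and uses an assumed state ordering and noise matrices rather than the authors' exact settings.

      import numpy as np

      def make_constant_acceleration_model(dt):
          # state: [x, y, vx, vy, ax, ay]; only the target centre (x, y) is measured
          F = np.eye(6)
          F[0, 2] = F[1, 3] = dt
          F[2, 4] = F[3, 5] = dt
          F[0, 4] = F[1, 5] = 0.5 * dt ** 2
          H = np.zeros((2, 6))
          H[0, 0] = H[1, 1] = 1.0
          return F, H

      def kalman_step(x, P, z, F, H, Q, R):
          x = F @ x                          # predict where the target will be
          P = F @ P @ F.T + Q
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)     # gain: how much to trust the detection
          x = x + K @ (z - H @ x)
          P = (np.eye(len(x)) - K @ H) @ P
          return x, P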

  7. Tracking the dynamic seroma cavity using fiducial markers in patients treated with accelerated partial breast irradiation using 3D conformal radiotherapy

    SciTech Connect

    Yue, Ning J.; Haffty, Bruce G.; Goyal, Sharad

    2013-02-15

    Purpose: The purpose of the present study was to perform an analysis of the changes in the dynamic seroma cavity based on fiducial markers in early-stage breast cancer patients treated with accelerated partial breast irradiation (APBI) using three-dimensional conformal external beam radiotherapy (3D-CRT). Methods: A prospective, single-arm trial was designed to investigate the utility of gold fiducial markers in image-guided APBI using 3D-CRT. At the time of lumpectomy, four to six suture-type gold fiducial markers were sutured to the walls of the cavity. Patients were treated with a fractionation scheme consisting of 15 fractions with a fractional dose of 333 cGy. Treatment design and planning followed NSABP/RTOG B-39 guidelines. During radiation treatment, daily kV imaging was performed and the markers were localized and tracked. The change in distance between fiducial markers was analyzed based on the planning CT and daily kV images. Results: Thirty-four patients were simulated at an average of 28 days after surgery, and started the treatment on an average of 39 days after surgery. The average intermarker distance (AiMD) between fiducial markers was strongly correlated with seroma volume. The average reduction in AiMD was 19.1% (range 0.0%-41.4%) and 10.8% (range 0.0%-35.6%) for all the patients between simulation and completion of radiotherapy, and between simulation and beginning of radiotherapy, respectively. The change in AiMD followed an exponential function characterized by a half-life of seroma shrinkage. The average half-life for seroma shrinkage was 15 days. After accounting for the reduction that began after surgery and continued through CT simulation and treatment, radiation was found to have minimal impact on the distance change over the treatment course. Conclusions: Using the marker distance change as a surrogate for seroma volume, it appears that the seroma cavity experiences an exponential reduction in size. The change in seroma size has implications in the size of
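
    The exponential shrinkage model can be illustrated with a short fitting sketch. This is an assumed analysis (not the study's code): the average inter-marker distance is modeled as AiMD(t) = d0 * 2^(-t / t_half), and the fitted t_half is the seroma-shrinkage half-life in days.

      import numpy as np
      from scipy.optimize import curve_fit

      def fit_seroma_halflife(days_since_surgery, aimd_mm):
          """Fit AiMD(t) = d0 * 2**(-t / t_half); returns the half-life in days."""
          def model(t, d0, t_half):
              return d0 * 2.0 ** (-t / t_half)
          (d0, t_half), _ = curve_fit(model, days_since_surgery, aimd_mm,
                                      p0=(np.max(aimd_mm), 15.0))
          return t_half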

  8. WE-A-17A-10: Fast, Automatic and Accurate Catheter Reconstruction in HDR Brachytherapy Using An Electromagnetic 3D Tracking System

    SciTech Connect

    Poulin, E; Racine, E; Beaulieu, L; Binnekamp, D

    2014-06-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are slow and error prone. The purpose of this study was to evaluate the accuracy and robustness of an electromagnetic (EM) tracking system for improved catheter reconstruction in HDR-B protocols. Methods: For this proof-of-principle study, a total of 10 catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using a Philips-design 18G biopsy needle (used as an EM stylet) and the second-generation Aurora Planar Field Generator from Northern Digital Inc. The Aurora EM system exploits alternating-current technology and generates 3D points at 40 Hz. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical CT system, with resolutions of 0.089 mm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, 5 catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 seconds or less. This implies that for a typical clinical implant of 17 catheters, the total reconstruction time would be less than 3 minutes. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.92 ± 0.37 mm and 1.74 ± 1.39 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be significantly more accurate (unpaired t-test, p < 0.05). A mean difference of less than 0.5 mm was found between successive EM reconstructions. Conclusion: The EM reconstruction was found to be faster, more accurate and more robust than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheters and applicators. We would like to disclose that the equipment used in this study comes from a collaboration with Philips Medical.
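
    The robustness comparison between successive EM reconstructions of the same catheter can be sketched as follows; this is an assumed analysis in which both point sets are resampled at equal arc length before computing the mean 3D point-to-point distance.

      import numpy as np

      def resample_by_arclength(points, n=100):
          seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
          s = np.concatenate(([0.0], np.cumsum(seg)))
          s_new = np.linspace(0.0, s[-1], n)
          return np.column_stack([np.interp(s_new, s, points[:, k]) for k in range(3)])

      def mean_reconstruction_difference(recon_a, recon_b, n=100):
          a = resample_by_arclength(np.asarray(recon_a, dtype=float), n)
          b = resample_by_arclength(np.asarray(recon_b, dtype=float), n)
          return np.linalg.norm(a - b, axis=1).mean()  # mean 3D distance, e.g. in mm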

  9. Visual real-time detection, recognition and tracking of ground and airborne targets

    NASA Astrophysics Data System (ADS)

    Kovács, Levente; Benedek, Csaba

    2011-03-01

    This paper presents methods and algorithms for real-time visual target detection, recognition and tracking, both in the case of ground-based objects (surveyed from a moving airborne imaging sensor) and flying targets (observed from a ground-based or vehicle-mounted sensor). The methods are highly parallelized and partially implemented on GPU, with the goal of real-time speeds even in the case of multiple target observations. Real-time applicability is in focus. The methods use single-camera observations, providing a passive and expendable alternative to expensive and/or active sensors. Use cases involve perimeter defense and surveillance situations, where passive detection and observation is a priority (e.g. aerial surveillance of a compound, detection of reconnais